Dataset schema (field: type, length range):

title: list, 0-18
author: list, 0-4.41k
authoraffiliation: list, 0-6.45k
venue: list, 0-9
abstract: string, 1-37.6k
doi: string, 10-114
pdfurls: list, 1-3
corpusid: int64, 158-259M
arxivid: string, 9-16
pdfsha: string, 40-40
text: string, 66-715k
github_urls: list, 0-36
[ "Creation and evolution of particle number asymmetry in an expanding universe", "Creation and evolution of particle number asymmetry in an expanding universe" ]
[ "Takuya Morozumi [email protected] ", "Keiko I Nagao ", "Apriadi Salim ", "Adam ", "Hiroyuki Takata ", "\nGraduate School of Science\nCore of Research for Energetic Universe\nHiroshima University\nHigashi-Hiroshima739-8526Japan\n", "\nGraduate School of Science\nCore of Research for Energetic Universe\nNational Institute of Technology\nNiihama College\n792-8580Japan\n", "\nHiroshima University\nHigashi-Hiroshima739-8526Japan\n", "\nTomsk State Pedagogical University\n634061TomskRussia\n" ]
[ "Graduate School of Science\nCore of Research for Energetic Universe\nHiroshima University\nHigashi-Hiroshima739-8526Japan", "Graduate School of Science\nCore of Research for Energetic Universe\nNational Institute of Technology\nNiihama College\n792-8580Japan", "Hiroshima University\nHigashi-Hiroshima739-8526Japan", "Tomsk State Pedagogical University\n634061TomskRussia" ]
We introduce a model which may generate particle number asymmetry in an expanding universe. The model includes CP-violating and particle-number-violating interactions, and consists of a real scalar field and a complex scalar field. Starting with an initial condition specified by a density matrix, we show how the asymmetry is created through the interactions and how it evolves at later times. We compute the asymmetry using non-equilibrium quantum field theory and, as a first test of the model, study how the asymmetry evolves in the flat limit.
DOI: 10.1007/s11182-017-0986-x
PDF: https://arxiv.org/pdf/1609.02990v1.pdf
Corpus ID: 119298414
arXiv: 1609.02990
PDF SHA: 389fe1e6c3a55bb669fd1f48038bdbe8a864e0b0
Creation and evolution of particle number asymmetry in an expanding universe (10 Sep 2016; dated July 31, 2018)

I. INTRODUCTION

The origin of the particle and anti-particle asymmetry of our universe has not yet been identified. We propose a model of a neutral scalar and a complex scalar. The U(1) charge carried by the complex scalar corresponds to the particle number. CP- and U(1)-violating interactions are introduced, and they generate a particle and anti-particle asymmetry. We study the time evolution of the particle number using the two-particle-irreducible (2PI) formalism combined with the density matrix formulation of quantum field theory. This enables us to study the time evolution of the particle number starting from an initial state specified by a density matrix. In previous work [1], the time evolution of the particle number was computed with a mass term that violates particle number.
In contrast to the previous work, where the initial asymmetry had to be non-zero, in this work we aim to generate a non-zero asymmetry starting from zero asymmetry at the beginning.

II. A MODEL WITH CP AND PARTICLE NUMBER VIOLATING INTERACTION

In this section, we present the Lagrangian for the model. We denote the neutral scalar by N and the complex scalar by φ.

S = ∫ d⁴x (L_free + L_int),   (1)

L_free = ∂_μφ* ∂^μφ + (B²/2)(φ² + φ*²) − m_φ² |φ|² + (1/2)(∂_μN ∂^μN − m_N² N²),   (2)

L_int = A φ² N + A* φ*² N + A₀ |φ|² N,   (3)

where A is a complex number and the corresponding interaction is CP violating; B and A₀ are real numbers. The particle number is related to the U(1) transformation

φ′(x) = e^{iα} φ(x).   (4)

The Noether current related to this transformation is

j^μ(x) = i φ† ∂↔^μ φ,   (5)

and the particle number is given by

Q(x⁰) = ∫ d³x j⁰(x).   (6)

The U(1) symmetry is explicitly broken by the terms with coefficients B and A. The particle number asymmetry per unit volume is given by j⁰(x), and its expectation value is written with a density matrix as

⟨j⁰(x)⟩ = Tr(j⁰(x) ρ(0)).   (7)

The current density j⁰(x) is written with Heisenberg operators, and ρ(0) is an initial density matrix which specifies the initial state statistically. In this work, we use the equilibrium statistical density matrix as the initial density matrix. Specifically, it is given by

ρ(0) = e^{−βH₀} / Tr(e^{−βH₀}),   (8)

where β denotes the inverse temperature 1/T and H₀ is the free Hamiltonian corresponding to the free part of the Lagrangian, L_free. If three-dimensional space is translationally invariant, the expectation value of the current depends only on time.
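The symmetry-breaking pattern above can be checked numerically: under φ → e^{iα}φ, the |φ|² term is invariant while the φ² + φ*² combination (which multiplies B² and A) is not. The following sketch verifies this with arbitrary illustrative field values that are not taken from the paper:

```python
import cmath, math

# Check that |phi|^2 is U(1) invariant while Re(phi^2), which enters the
# (B^2/2)(phi^2 + phi*^2) and A phi^2 N terms, is not.
# phi and alpha are arbitrary test values, not model parameters.
phi = 0.7 + 0.3j
alpha = 0.9
phi_rot = cmath.exp(1j * alpha) * phi   # U(1)-rotated field

inv_before = abs(phi) ** 2              # invariant combination
inv_after = abs(phi_rot) ** 2
brk_before = (phi ** 2).real            # U(1)-breaking combination
brk_after = (phi_rot ** 2).real

assert math.isclose(inv_before, inv_after)       # |phi|^2 unchanged
assert not math.isclose(brk_before, brk_after)   # phi^2 + c.c. changes
print(inv_before, brk_before, brk_after)
```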
It is convenient to write all the fields in terms of real scalar fields defined as

φ(x) = (φ₁ + iφ₂)/√2,   φ₃ = N.   (9)

With this definition, the free part of the Lagrangian is rewritten as

L_free = (1/2)(∂_μφ_i ∂^μφ_i − m_i² φ_i²),   m₁² = m_φ² − B²,   m₂² = m_φ² + B²,   m₃² = m_N².   (10)

Non-zero B² leads to a nondegenerate mass spectrum for φ₁ and φ₂. The interaction Lagrangian is written with a completely symmetric tensor A_ijk (i, j, k = 1, 2, 3):

L_int = Σ_{i,j,k=1}^{3} (A_ijk/3) φ_i φ_j φ_k.   (11)

The non-zero components of A_ijk are written with the couplings of the cubic interactions, A and A₀, as shown in Table I, which also summarizes the cubic interactions and their properties with respect to U(1) symmetry and CP symmetry.

TABLE I. The cubic interactions and their properties.
A₁₁₃ = A₀/2 + Re(A)
A₂₂₃ = A₀/2 − Re(A)
A₁₁₃ − A₂₂₃ = 2 Re(A)   U(1) violation
A₁₂₃ = −Im(A)           U(1) and CP violation

III. 2PI EFFECTIVE ACTION AND THE EXPECTATION VALUE FOR THE CURRENT

The expectation value consists of two parts:

⟨j⁰(x)⟩ = lim_{y→x} (∂/∂x⁰ − ∂/∂y⁰) Re[G₁₂^{12}(x, y)] + Re[φ̄₂* ∂↔⁰ φ̄₁],   (12)

where G₁₂^{12} is a Green function and φ̄ is an expectation value. G_ij(x, y) and φ̄_i are obtained from the 2PI effective action Γ[G, φ̄]:

Γ[G, φ̄] = S[φ̄] + (i/2) Tr Ln G⁻¹ + (1/2) ∫ d⁴x d⁴y [δ²S/δφ̄_i^a(x) δφ̄_j^b(y)] G_ij^{ab}(x, y)
  + (i/3) D^{abc} A_ijk ∫ d⁴x d⁴y [G_{ii′}^{aa′}(x, y) G_{jj′}^{bb′}(x, y) G_{kk′}^{cc′}(x, y)] D^{a′b′c′} A_{i′j′k′}.   (13)

The last term of Eq. (13) is obtained from the two-particle-irreducible diagram shown in Fig. 1.

FIG. 1. Two-particle-irreducible diagram: the Green functions G_{ii′}, G_{jj′}, and G_{kk′} connect the two cubic vertices A_ijk and A_{i′j′k′}.

IV. EXPECTATION VALUE FOR THE CURRENT

While there are several different contributions to the current up to first order in the coupling constant A, we focus on the contribution which comes from the Green function. This contribution is non-zero even if we start with a vanishing expectation value for the complex scalar, φ̄_i(0) = 0 (i = 1, 2).
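The structure of the coupling tensor A_ijk in Eq. (11) and Table I can be made concrete with a short sketch. The numerical values of A and A₀ below are illustrative placeholders, not values from the paper; the sketch builds the totally symmetric tensor from the Table I entries and checks the U(1)-violating combination:

```python
from itertools import permutations

# Build the totally symmetric coupling tensor A_ijk of Eq. (11) from the
# Table I entries. The values of A (complex) and A0 (real) are hypothetical.
A = 0.3 - 0.2j
A0 = 0.5

T = {}
def set_sym(i, j, k, val):
    # fill every index permutation so A_ijk is totally symmetric
    for p in set(permutations((i, j, k))):
        T[p] = val

set_sym(1, 1, 3, A0 / 2 + A.real)   # A_113
set_sym(2, 2, 3, A0 / 2 - A.real)   # A_223
set_sym(1, 2, 3, -A.imag)           # A_123: the CP-violating entry

# total symmetry: any index ordering gives the same component
assert T[(1, 3, 2)] == T[(3, 2, 1)] == -A.imag
# U(1)-violating combination from Table I: A_113 - A_223 = 2 Re(A)
assert abs(T[(1, 1, 3)] - T[(2, 2, 3)] - 2 * A.real) < 1e-12
```

Note that for real A (Im(A) = 0) the CP-violating component A₁₂₃ vanishes, which is the design choice that ties CP violation to the phase of A.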
⟨j(x⁰)⟩_{O(A)} = lim_{y→x} (∂/∂x⁰ − ∂/∂y⁰) Re[G₁₂^{12, O(A)}(x, y)],   (14)

where G₁₂^{12, O(A)} denotes the first-order correction to the Green function with respect to the cubic interaction. We call the individual contributions absorption/emission, decay/inverse decay, and vacuum:

⟨j⁰(x⁰)⟩_{O(A), φ̄₁(0)=φ̄₂(0)=0} = ⟨j⁰(x⁰)⟩_{emission/absorption} + ⟨j⁰(x⁰)⟩_{decay/inverse decay} + ⟨j⁰(x⁰)⟩_{vacuum}.   (15)

The contribution corresponding to absorption and emission is

⟨j⁰(x⁰)⟩_{emission/absorption} = (φ̄₃(0) A₁₂₃ / 2) ∫ d³k/(2π)³ (1/ω_{1k} + 1/ω_{2k})
  × { [ (coth(βω_{2k}/2) − coth(βω_{1k}/2))/2 + tanh(βω_{30}/2) ] [sin((ω_{2k} − ω_{1k})x⁰) + sin(ω_{30}x⁰)] / (ω_{2k} − ω_{1k} + ω_{30})
    − [ (coth(βω_{1k}/2) − coth(βω_{2k}/2))/2 + tanh(βω_{30}/2) ] [sin((ω_{1k} − ω_{2k})x⁰) + sin(ω_{30}x⁰)] / (ω_{1k} − ω_{2k} + ω_{30}) },   (16)

where ω_{ik} = √(k² + m_i²) (i = 1, 2, 3). The decay and inverse decay contribution is given by

⟨j⁰(x⁰)⟩_{decay/inverse decay} = (φ̄₃(0) A₁₂₃ / 2) ∫ d³k/(2π)³ (1/ω_{2k} − 1/ω_{1k})
  × [ (coth(βω_{2k}/2) + coth(βω_{1k}/2) − 2 tanh(βω_{30}/2))/2 ] [sin(ω_{30}x⁰) − sin((ω_{1k} + ω_{2k})x⁰)] / (ω_{30} − ω_{2k} − ω_{1k}).   (17)

The vacuum contribution is

⟨j⁰(x⁰)⟩_{vacuum} = (φ̄₃(0) A₁₂₃ / 2) ∫ d³k/(2π)³ (1/ω_{2k} − 1/ω_{1k})
  × [ (coth(βω_{2k}/2) + coth(βω_{1k}/2) + 2 tanh(βω_{30}/2))/2 ] [sin(ω_{30}x⁰) + sin((ω_{1k} + ω_{2k})x⁰)] / (ω_{30} + ω_{2k} + ω_{1k}).   (18)

V. CONCLUSION

We propose a model of scalars which may generate a particle number asymmetry. In the interacting model, the 2PI effective action Γ[G, φ̄] and the Schwinger-Dyson equations for the Green functions G and the expectation value φ̄ are obtained. They are solved iteratively by treating the interaction A_ijk as small. The current for the particle and anti-particle asymmetry is given up to first order in A; the contributions are classified into five important processes. As a future extension of this work, we will carry out the numerical calculation of the asymmetry.

[1] R. Hotta, T. Morozumi and H. Takata, Phys. Rev. D 90, no. 1, 016008 (2014) [arXiv:1403.0733 [hep-ph]].
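As a numerical illustration of the flat-limit formulas, the momentum integral in the emission/absorption contribution of Eq. (16) can be evaluated directly with a simple radial quadrature, ∫ d³k/(2π)³ → ∫ k² dk/(2π²). All parameter values below (masses, B, β, A₁₂₃, φ̄₃(0), x⁰) are hypothetical placeholders chosen only to exercise the formula, and ω₃₀ is taken as the zero-momentum frequency of the neutral scalar:

```python
import math

# Sketch: evaluate the emission/absorption contribution of Eq. (16).
# All parameters are illustrative placeholders, not values from the paper.
m_phi, B, m_N = 1.0, 0.3, 0.5
m1 = math.sqrt(m_phi**2 - B**2)     # Eq. (10) mass spectrum
m2 = math.sqrt(m_phi**2 + B**2)
beta, A123, phi3_0, x0 = 2.0, 0.1, 1.0, 5.0
w30 = m_N                            # zero-momentum frequency of phi_3

def coth(x):
    return math.cosh(x) / math.sinh(x)

def integrand(k):
    w1 = math.hypot(k, m1)
    w2 = math.hypot(k, m2)
    pref = k**2 / (2 * math.pi**2) * (1/w1 + 1/w2)
    def piece(wa, wb):
        stat = (coth(beta*wb/2) - coth(beta*wa/2)) / 2 + math.tanh(beta*w30/2)
        osc = (math.sin((wb - wa)*x0) + math.sin(w30*x0)) / (wb - wa + w30)
        return stat * osc
    return pref * (piece(w1, w2) - piece(w2, w1))

# crude midpoint quadrature over k in [0, k_max]
k_max, n = 20.0, 4000
h = k_max / n
total = sum(integrand((i + 0.5) * h) for i in range(n)) * h
j0 = phi3_0 * A123 / 2 * total
print("emission/absorption contribution ~", j0)
```

With these placeholder masses the denominators ω_{2k} − ω_{1k} + ω₃₀ and ω_{1k} − ω_{2k} + ω₃₀ never vanish, so the naive quadrature is safe; near a resonance a more careful treatment would be needed.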
[ "A portable atom gravimeter operating in noisy urban environments", "A portable atom gravimeter operating in noisy urban environments" ]
[ "Bin Chen \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Jinbao Long \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Hongtai Xie \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Chenyang Li \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Luokan Chen \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Bonan Jiang \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of 
Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n", "Shuai Chen \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n\nShanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina\n" ]
[ "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for 
Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina", "Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina", "Shanghai Branch\nCAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics\nUniversity of Science and Technology of China\n201315ShanghaiChina" ]
The gravimeter based on atom interferometry has potentially wide applications in building gravity networks, in geophysics, and in gravity-assisted navigation. Here, we experimentally demonstrate a portable atom gravimeter operating in a noisy urban environment. Despite the influence of noisy external vibrations, our portable atom gravimeter reaches a sensitivity as good as 65 µGal/√Hz and a resolution of 1.1 µGal after 4000 s of integration, comparable to state-of-the-art atom gravimeters. Our achievement paves the way for bringing the portable atom gravimeter to field applications, such as gravity surveys on a moving platform.
[ "https://arxiv.org/pdf/2203.10696v1.pdf" ]
247,593,727
2203.10696
f8fb88b558ad73ce42a9483800f347b6e011b6ea
A portable atom gravimeter operating in noisy urban environments (21 Mar 2022)

Atom gravimeters offer a new concept for both very sensitive and accurate absolute gravity measurement [1, 2]. This technology can reach a best short-term sensitivity of a few µGal/√Hz, outperforming state-of-the-art classical corner-cube sensors [3-16]. Nowadays, the development of portable atom gravimeters overcomes many limitations of existing sensors and opens wide areas of potential application in geophysics, resource exploration, gravity-field mapping, earthquake prediction, and gravity-aided navigation [17-26].
More and more efforts have been made recently toward field applications of atom gravimeters. Muquans has deployed a commercial portable atom gravimeter on Mount Etna to monitor the gravity anomaly induced by volcanic activity [27]. Mounted on an inertially stabilized platform, an atom gravimeter was used to measure gravity in the Atlantic as a marine gravimeter [28]. A vehicle-mounted atom gravimeter has also been operated for gravity surveys in the hills [29]. Despite the high performance obtained in very quiet and well-controlled laboratory conditions, such as cave laboratories [14], remote locations [1], and underground galleries [30, 31], precise measurements with portable atom gravimeters at high sensitivity and resolution are rarely reported in noisy urban environments, since the performance is largely limited by parasitic vibrations from the ground [11, 12, 15]. Rising to the challenge of operating a portable atom gravimeter in a noisy urban environment, we developed a miniaturized atom sensor mounted on a mobile active vibration isolation stage. The portable atom gravimeter was then transported to, and operated in, the urban environment for more than 10 days. With the external vibrations of the Raman mirror suppressed simultaneously in three dimensions, the vibration noise is reduced by a factor of up to 2000 from 0.01 Hz to 10 Hz in the vertical direction and by a factor of up to 30 horizontally, allowing our portable atom gravimeter to reach a sensitivity of 65 µGal/√Hz (74 µGal/√Hz) at night (during the daytime) and a resolution of 1.1 µGal after 4000 seconds of integration. The scheme of the portable atom gravimeter operating in the urban environment is shown in Fig. 1(a). It includes a cold atom sensor head (30 cm × 30 cm × 65 cm) that provides the gravity measurement and a mobile active vibration isolator (60 cm × 60 cm × 50 cm) that suppresses the parasitic vibrations from the ground.
An integrated controller package (56 cm × 68 cm × 72 cm) provides all the lasers for manipulating the cold atoms and performs data acquisition and processing. The total power consumption is less than 400 W. The miniaturized atom sensor consists of a titanium vacuum chamber, magnetic coils, optics for delivering laser beams, and collectors for fluorescence signals. The sensor is implemented in a compact two-layer magnetic shield, with a residual magnetic field below 50 nT near the center. The 87Rb atoms serving as test masses are loaded directly from the background vapor by a three-dimensional magneto-optical trap (3D MOT) in 120 ms and further cooled down to 3.7 µK by optical molasses. Two optically phase-locked Raman lasers are combined and aligned carefully along the vertical direction in a retro-reflected configuration (Fig. 1(a)). The initial state is prepared by a series of Raman π-pulses, during which 10⁶ atoms with a temperature of 300 nK in the vertical direction are selected and prepared in the magnetically insensitive state |F = 1, m_F = 0⟩. The Mach-Zehnder-type matter-wave interferometry is realized by applying a π/2-π-π/2 Raman pulse sequence with a time interval of T = 82 ms and a π-pulse length of τ = 20 µs. To suppress incoherent photon scattering, the Raman lasers are detuned 700 MHz to the red of the D2 transition lines. The Doppler effect during free fall is compensated by chirping the relative frequency of the Raman lasers at a rate of α ∼ 25.1 MHz/s. After the interferometry, the atoms in |1, 0⟩ and |2, 0⟩ are counted by collecting the fluorescence of each state. The interferometry fringe is obtained by scanning the chirp rate α, and the gravity value g is obtained via full-fringe fitting. A homemade, portable, three-dimensional active vibration isolator underneath the atom sensor isolates the Raman retro-reflector from ground vibrations. A schematic overview of the active vibration isolator is shown in Fig. 2(a).
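The quoted Doppler-compensation chirp rate can be checked in one line. Assuming the standard 87Rb D2 wavelength λ ≈ 780 nm (a textbook value, not quoted in the text) and a typical local gravity, the two-photon wave vector is k_eff = 2 · 2π/λ and the compensating chirp rate is α = k_eff g / (2π):

```python
import math

# Consistency check of the quoted chirp rate alpha ~ 25.1 MHz/s.
lam = 780e-9                          # m, assumed 87Rb D2 wavelength
g = 9.79                              # m/s^2, typical local gravity (placeholder)
k_eff = 2 * 2 * math.pi / lam         # rad/m, counter-propagating Raman pair
alpha = k_eff * g / (2 * math.pi)     # Hz/s; equals 2*g/lambda
print(f"alpha ~ {alpha / 1e6:.1f} MHz/s")   # close to the quoted 25.1 MHz/s
```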
It is based on a commercial passive vibration isolation platform (Minus K Technology 50BM-4). Eight voice-coil motors, driven by homemade voltage-controlled current sources (VCCS), are added inside to implement the active feedback units. A precision three-axis seismometer (Guralp CMG-3ESP) mounted on the isolation platform monitors the vibration noise of the retro-reflector. The residual vibration noise passes through an analog-to-digital converter and is transferred to a programmable digital feedback filter with a sampling rate of 1 kHz. After a digital-to-analog converter and the VCCS, the output signal of the filter is turned into current to drive the voice coils, which apply the feedback force in three dimensions [32]. The performance of the active vibration isolator is characterized by monitoring the residual vibrations with the seismometer. Fig. 2(b) shows the equivalent vertical acceleration noise power spectrum in different situations. The red curve indicates the ground vibration noise at the assembly site, located in one of the test labs at the Shanghai institute, University of Science and Technology of China. As the site suffers badly from heavy human activity and nearby highway traffic, the vibration noise below 10 Hz is much larger than 2 × 10⁻⁷ m/s²/√Hz, and the vibration eigenmode around 2.5 Hz [33] is as large as 10⁻⁴ m/s²/√Hz. Converted to noise in the gravity measurement, this is equivalent to noise higher than 15000 µGal/√Hz [34], which makes precision g measurement at the µGal level impossible. This means we are in a quite noisy urban environment compared with well-established gravity measurement sites [13, 35-37]. The blue curve indicates the residual vibration noise on the passive vibration isolation platform without feedback. The noise above 1 Hz is effectively suppressed, whereas around the natural resonance frequency of the isolation platform, 0.5 Hz, the noise is slightly amplified.
The black curve indicates the residual vibration noise of the Raman retro-reflector with the active feedback loop turned on. The residual vibration noise is reduced by a factor of 2000 from 0.01 Hz to 10 Hz, as the residual noise due to crosstalk between the horizontal and vertical directions is further suppressed by extra horizontal feedback channels. The vibration noise is also reduced horizontally by a factor of 30 from 0.01 Hz to 10 Hz (not shown here) [32]. The long-term performance of the three-dimensional active vibration isolator is presented in Fig. 2(c). We characterize it by monitoring the sensitivity limit imposed by the residual vibration noise for more than 10 days. It stays around 20 µGal/√Hz with a standard deviation of 5 µGal/√Hz. There are some outliers (mostly during the daytime) due to occasional human activity close to the setup. After such events, the active vibration isolator recovers very quickly and works well for the whole period. From the capture of the cold atoms to the end of the interferometry, a full cycle of the gravimeter takes about 300 ms, and the instrument works at a repetition rate of about 3 Hz. We perform the gravity measurements with the π/2-π-π/2 Raman pulses interacting with the cold atoms during free fall. The interferometry fringe is obtained by scanning the chirp rate α. We have

P = A(1 ± cos[(k_eff · g − 2πα)T²]),   (1)

where P is the normalized population of atoms in the |1, 0⟩ state, A is the contrast of the interference fringe, k_eff is the effective wave vector of the Raman lasers, and T is the interrogation time between Raman pulses. The gravity g can be obtained via full-fringe fitting. In order to get rid of the k_eff-independent systematic errors, which include the quadratic Zeeman shift, the one-photon light shift, and the radio-frequency phase shift [38], we flip the direction of k_eff by switching the sign of the chirp rate α every 48 drops (16 seconds).
The interferometry fringes with chirp up and chirp down during the gravity measurements are shown in Fig. 3. A cosine function is applied to fit the interference pattern. At the fitted extremum for chirp up and chirp down we obtain α_0u = 25.105403743 MHz/s and α_0d = 25.105450345 MHz/s, respectively, which mark the points of zero phase shift of the interferometer. From the least-squares fit of the pattern, we obtain the uncertainties of α_0u and α_0d as σ_{α0u} = 2.9 × 10⁻⁷ MHz/s and σ_{α0d} = 2.3 × 10⁻⁷ MHz/s for a single interference fringe every 16 seconds. To obtain the g value, we relate α_0u and α_0d through Eq. (1), including the systematic errors:

k_eff g T² − 2πα_0u T² + ΔΦ_dep + ΔΦ_ind = 0,
−k_eff g T² + 2πα_0d T² − ΔΦ_dep + ΔΦ_ind = 0,   (2)

where ΔΦ_dep(ind) represents the systematic phase shift that depends on (is independent of) the direction of k_eff. Solving these two equations for g, we obtain

g = π(α_0u + α_0d)/k_eff − ΔΦ_dep/(k_eff T²).   (3)

The systematic phase shifts that are independent of the sign of k_eff are eliminated; only the k_eff-dependent term ΔΦ_dep is left, which includes the effects of the two-photon light shift, self gravity, Coriolis forces, wave-front aberrations, etc. [38]. From Eq. (3), we further obtain the uncertainty of each g measurement:

σ_g = (π/k_eff) √(σ²_{α0u} + σ²_{α0d}) ≈ 7.2 µGal

within 30 seconds of integration time. After the completion of assembly and adjustment, the portable atom gravimeter performed continuous measurement of the local gravity for over 245 hours at the assembly site, from 29 October to 8 November 2019. As shown in Fig. 4, despite the influence of noisy external vibrations, the experimental results (black dots) agree well with Earth's tides (red) predicted theoretically with an inelastic non-hydrostatic Earth model [39]. The fluctuation of the residual signal may come from temperature variations in the lab.
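The g extraction of Eq. (3) and the quoted single-measurement uncertainty can be reproduced from the fitted chirp rates. The α_0u, α_0d values and their uncertainties are the ones quoted above; the 87Rb D2 wavelength λ ≈ 780 nm is an assumed standard value, and the k_eff-dependent shift ΔΦ_dep is set to zero in this sketch:

```python
import math

# Reproduce Eq. (3) with Delta_Phi_dep = 0 and the quoted fit results.
lam = 780e-9                                   # m, assumed 87Rb D2 wavelength
k_eff = 2 * 2 * math.pi / lam                  # rad/m
a_u, a_d = 25.105403743e6, 25.105450345e6      # Hz/s, fitted chirp rates
s_u, s_d = 2.9e-7 * 1e6, 2.3e-7 * 1e6          # Hz/s, fit uncertainties

g = math.pi * (a_u + a_d) / k_eff              # m/s^2, Eq. (3)
sigma_g = math.pi / k_eff * math.hypot(s_u, s_d)

print(f"g ~ {g:.5f} m/s^2")
print(f"sigma_g ~ {sigma_g / 1e-8:.1f} uGal")  # 1 uGal = 1e-8 m/s^2
```

The uncertainty comes out at about 7.2 µGal per fringe, matching the value quoted in the text.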
The Allan deviation of the residual signal is then calculated to characterize the sensitivity and long-term stability of our portable atom gravimeter. As shown in Fig. 5, the sensitivity of the portable gravimeter follows 74 µGal/√Hz for up to 1000 s during the daytime and 65 µGal/√Hz for up to 4000 s during the nighttime. The 10 µGal level is reached after about 60 s of measurement, and the Allan deviation continues to decrease down to 1.1 µGal (2 µGal) within an integration time of 4000 s (2000 s) during the nighttime (daytime). For nearby locations, the portable atom gravimeter is mobile enough to be deployed and to perform gravity measurements in the field, handled by only one or two people. Meanwhile, it fits inside a minivan and is robust enough to be transported over long distances. Even after over 1300 km of transportation from Shanghai to the comparison site at the National Institute of Metrology of China in Changping, Beijing, our gravimeter still exhibits a 2σ uncertainty of 15.6 µGal and reaches a degree of equivalence of -12.5 µGal compared with the reference value given by the FG5-type gravimeter NIM-3A [32, 40]. In conclusion, we demonstrate that a portable atom gravimeter operates well in a noisy urban environment with the help of a portable three-dimensional active vibration isolator. The portable atom gravimeter reaches a sensitivity as good as 65 µGal/√Hz and a resolution of 1.1 µGal within 4000 s of integration. Moreover, the setup is robust enough to be deployed over long distances and to perform gravity measurements handled by one or two people. The technique demonstrated here helps push the portable atom gravimeter toward field applications where gravity surveys must be performed in noisy environments.
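The white-noise scaling behind the reported Allan deviations is simple to check: for a sensitivity S in µGal/√Hz, the expected deviation after an integration time τ is S/√τ. Using the quoted nighttime sensitivity (and assuming purely white noise, which is what the τ^{−1/2} slope in Fig. 5 represents):

```python
import math

# White-noise integration of the quoted nighttime sensitivity.
S_night = 65.0   # uGal/sqrt(Hz), quoted value
for tau in (60, 2000, 4000):
    print(f"tau = {tau:5d} s -> {S_night / math.sqrt(tau):.2f} uGal")
# 65/sqrt(4000) ~ 1.0 uGal, consistent with the reported 1.1 uGal resolution
```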
Furthermore, with the flexibility of being mounted on a vehicle or a gyro-stabilized platform [28, 29], this demonstration is of interest for applications of the atom sensor with active vibration isolation to mobile gravity surveys or inertial navigation. This work was supported by the National Key R&D Program of China (2016YFA0301601), the National Natural Science Foundation of China (No. 11604321), and the Anhui Initiative in Quantum Information Technologies (AHY120000).

Figure 1. (a) Schematic diagram of the main science package, where the miniaturized atom sensor is mounted on the portable active vibration isolation platform. (b) Photo of the portable atom gravimeter running in a noisy lab.

Figure 2. (a) Schematic overview of the three-dimensional active vibration isolator. (b) Vibration noise in the vertical direction. Red: the noise spectrum measured directly on the lab floor. Blue: the residual vibration noise on the passive isolator. Black: the residual vibration noise on the isolator with active feedback. (c) Long-term performance of the isolator. Inset: vibrational transfer function of the atom sensor.

Figure 3. Interferometry fringe for T = 82 ms, obtained with 48 drops in 16 s for chirp up and chirp down, respectively. Each black dot is the probability of atoms in the |1, 0⟩ state averaged over 4 drops. The error bars represent the statistical error. The purple and red lines are the fits for chirp up and chirp down, respectively.

Figure 4. Top: the gravity acceleration g measured by the portable atom gravimeter between 29 October and 8 November 2019. The setup worked continuously for more than 10 days in the noisy lab. The two breaks (from 130 h to 143 h and from 192 h to 201 h) were caused by the lasers losing lock. Bottom: the residual obtained from the corresponding gravity signal after subtracting Earth's tides.

Figure 5. Allan deviation of the gravity signal corrected for Earth's tides, in the daytime (red) and at night (black).
The τ^{−1/2} slopes represent the corresponding averaging expected for white noise.

References

[1] C. Freier, M. Hauth, V. Schkolnik, B. Leykauf, M. Schilling, H. Wziontek, H.-G. Scherneck, J. Müller, and A. Peters, "Mobile quantum gravity sensor with unprecedented stability", J. Phys.: Conf. Ser. 723, 012050 (2016).
[2] S. Dickerson, J. M. Hogan, A. Sugarbaker, D. M. S. Johnson, and M. A. Kasevich, "Multiaxis inertial sensing with long-time point source atom interferometry", Phys. Rev. Lett. 111, 083001 (2013).
[3] S. Wang, Y. Zhao, W. Zhuang, T. Li, S. Wu, J. Feng, and C. Li, "Shift evaluation of the atomic gravimeter NIMAGRb-1 and its comparison with FG5X", Metrologia 55, 360 (2018).
[4] T. Mazzoni, X. Zhang, R. P. D. Aguila, L. Salvi, N. Poli, and G. M. Tino, "Large-momentum-transfer Bragg interferometer with strontium atoms", Phys. Rev. A 92, 053619 (2015).
[5] X. Zhang, J. Zhong, B. Tang, X. Chen, L. Zhu, P. Huang, J. Wang, and M. Zhan, "Compact portable laser system for mobile cold atom gravimeters", Appl. Opt. 57, 6545 (2018).
[6] Q. Bodart, S. Merlet, N. Malossi, F. P. D. Santos, P. Bouyer, and A. Landragin, "A cold atom pyramidal gravimeter with a single laser beam", Appl. Phys. Lett. 96, 134101 (2010).
[7] R. Charriere, M. Cadoret, N. Zahzam, Y. Bidel, and A. Bresson, "Local gravity measurement with the combination of atom interferometry and Bloch oscillations", Phys. Rev. A 85, 013639 (2012).
[8] N. Poli, F. Y. Wang, M. Tarallo, A. Alberti, M. Prevedelli, and G. M. Tino, "Precision measurement of gravity with cold atoms in an optical lattice and comparison with a classical gravimeter", Phys. Rev. Lett. 106, 038501 (2011).
[9] L. Zhou, Z. Xiong, W. Yang, B. Tang, W. Peng, Y. Wang, P. Xu, J. Wang, and M. Zhan, "Measurement of local gravity via a cold atom interferometer", Chin. Phys. Lett. 28, 013701 (2011).
[10] P. A. Altin, M. Johnsson, V. Negnevitsky, G. Dennis, R. P. Anderson, J. E. Debs, S. S. Szigeti, K. S. Hardman, S. Bennetts, and G. McDonald, "Precision atomic gravimeter based on Bragg diffraction", New J. Phys. 15, 023009 (2013).
[11] Y. Bidel, O. Carraz, R. Charriere, M. Cadoret, N. Zahzam, and A. Bresson, "Compact cold atom gravimeter for field applications", Appl. Phys. Lett. 102, 144107 (2013).
[12] L. Zhou, Z. Xiong, W. Yang, B. Tang, W. Peng, K. Hao, R. Li, M. Liu, J. Wang, and M. Zhan, "Development of an atom gravimeter and status of the 10-meter atom interferometer for precision gravity measurement", Gen. Relativ. Gravitation 43, 1931 (2011).
[13] M. Hauth, C. Freier, V. Schkolnik, A. Senger, M. Schmidt, and A. Peters, "First gravity measurements using the mobile atom interferometer GAIN", Appl. Phys. B 113, 49 (2013).
[14] Z. K. Hu, B. L. Sun, X. C. Duan, M. K. Zhou, L. L. Chen, S. Zhan, Q. Z. Zhang, and J. Luo, "Demonstration of an ultrahigh-sensitivity atom-interferometry absolute gravimeter", Phys. Rev. A 88, 043610 (2013).
[15] B. Wu, Z. Wang, B. Cheng, Q. Wang, A. Xu, and Q. Lin, "The investigation of a µGal-level cold atom gravimeter for field applications", Metrologia 51, 452 (2014).
[16] Z. J. Fu, Q. Y. Wang, Z. Y. Wang, B. Wu, B. Cheng, and Q. Lin, "Participation in the absolute gravity comparison with a compact cold atom gravimeter", Chin. Opt. Lett. 17, 011204 (2019).
[17] G. Dagostino, S. Desogus, A. Germak, C. Origlia, D. Quagliotti, G. Berrino, G. Corrado, V. Derrico, and G. Ricciardi, "The new IMGC-02 transportable absolute gravimeter: measurement apparatus and applications in geophysics and volcanology", Ann. Geophys. (Italy) 51, 39 (2008).
[18] F. Greco, G. Currenti, G. Dagostino, A. Germak, R. Napoli, A. Pistorio, and C. D. Negro, "Combining relative and absolute gravity measurements to enhance volcano monitoring", Bull. Volcanol. 74, 1745 (2012).
[19] U. Dogan, S. Ergintav, G. Arslan, D. O. Demir, B. Karaboce, E. Bilgic, E. Sadikoglu, and A. Direnc, "Establishment of a gravity calibration baseline with the constraint of absolute gravity measurements after the 17 August 1999 Izmit earthquake in the Marmara region, Turkey", Acta Geod. Geophys. Hung. 48, 377 (2013).
[20] J. Montagner, K. Juhel, M. Barsuglia, J. Ampuero, E. Chassande-Mottin, J. Harms, B. F. Whiting, P. Bernard, E. Clevede, and P. Lognonne, "Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake", Nat. Commun. 7, 13349 (2016).
[21] T. Jacob, J. Chery, R. Bayer, N. L. Moigne, J. P. Boy, P. Vernant, and F. Boudin, "Time-lapse surface-to-depth gravity measurements on a karst system reveal the dominant role of the epikarst as a water storage entity", Geophys. J. Int. 177, 347 (2009).
[22] J. T. Reager, B. F. Thomas, and J. S. Famiglietti, "River basin flood potential inferred using GRACE gravity observations at several months lead time", Nat. Geosci. 7, 588 (2014).
[23] M. N. Nabighian, M. E. Ander, V. J. S. Grauch, R. O. Hansen, T. R. Lafehr, Y. Li, W. C. Pearson, J. W. Peirce, J. D. Phillips, and M. E. Ruder, "Historical development of the gravity method in exploration", Geophysics 70, 63ND (2005).
[24] I. Velicogna and J. Wahr, "Measurements of time-variable gravity show mass loss in Antarctica", Science 311, 1754 (2006).
[25] N. Yu, J. M. Kohel, J. R. Kellogg, and L. Maleki, "Development of an atom-interferometer gravity gradiometer for gravity measurement from space", Appl. Phys. B 84, 647 (2006).
[26] A. B. Chatfield, "Fundamentals of High Accuracy Inertial Navigation", Prog. Aeronaut. Sci. 174, 291 (1997).
[27] B. Desruelle, "LASER 2019: quantum sensor set for trip to Mount Etna", https://optics.org/news/10/6/43 (26 June 2019).
[28] X. Wu, Z. Pagel, B. S. Malek, T. H. Nguyen, F. Zi, D. S. Scheirer, and H. Muller, "Gravity surveys using a mobile atom interferometer", Sci. Adv. 5, 9 (2019).
[29] Y. Bidel, N. Zahzam, C. Blanchard, A. Bonnin, M. Cadoret, A. Bresson, D. Rouxel, and M. F. Lequentrec-Lalancette, "Absolute marine gravimetry with matter-wave interferometry", Nat. Commun. 9, 627 (2018).
[30] P. Gillot, O. Francis, A. Landragin, F. P. D. Santos, and S. Merlet, "Stability comparison of two absolute gravimeters: optical versus atomic interferometers", Metrologia 51, 5 (2014).
[31] T. Farah, C. Guerlin, A. Landragin, P. Bouyer, S. Gaffet, F. P. D. Santos, and S. Merlet, "Underground operation at best sensitivity of the mobile LNE-SYRTE cold atom gravimeter", Gyroscopy and Navigation 5, 266 (2014).
[32] B. Chen, J. B. Long, H. T. Xie, L. K. Chen, and S. Chen, "A mobile three-dimensional active vibration isolator and its application to cold atom interferometry", Acta Phys. Sin. 68, 183301 (2019).
[33] C. Freier, "Measurement of Local Gravity using Atom Interferometry", Ph.D. thesis, Humboldt University, Berlin (2010).
[34] P. Cheinet, B. Canuel, F. P. D. Santos, A. Gauguet, F. Yver-Leduc, and A. Landragin, "Measurement of the sensitivity function in a time-domain atomic interferometer", IEEE Trans. Instrum. Meas. 57, 1141 (2008).
[35] J. L. L. Gouet, T. E. Mehlstaubler, J. Kim, S. Merlet, A. Clairon, A. Landragin, and F. P. D. Santos, "Limits to the sensitivity of a low noise compact atomic gravimeter", Appl. Phys. B 92, 133 (2008).
[36] S. Merlet, J. L. L. Gouet, Q. Bodart, A. Clairon, A. Landragin, F. P. D. Santos, and P. Rouchon, "Operating an atom interferometer beyond its linear range", Metrologia 46, 87 (2009).
[37] M. Zhou, Z. Hu, X. Duan, B. Sun, L. Chen, Q. Zhang, and J. Luo, "Performance of a cold-atom gravimeter with an active vibration isolator", Phys. Rev. A 86, 043630 (2012).
[38] A. Louchet-Chauvet, T. Farah, Q. Bodart, A. Clairon, A. Landragin, S. Merlet, and F. P. D. Santos, "The influence of transverse motion within an atomic gravimeter", New J. Phys. 13, 065025 (2011).
[39] V. Dehant, P. Defraigne, and J. M. Wahr, "Tides for a convective Earth", J. Geophys. Res.: Solid Earth 104, 1035 (1999).
[40] H. T. Xie et al., in preparation.
Everywhere Equivalent 3-Braids

Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) 10 (2014), 105; doi:10.3842/SIGMA.2014.105
Received July 08, 2014, in final form November 04, 2014

Key words: 3-braid group; Jones polynomial; Kauffman bracket; Burau representation; adequate diagram
2010 Mathematics Subject Classification: 57M25; 20F36; 20E45; 20C08

Abstract. A knot (or link) diagram is said to be everywhere equivalent if all the diagrams obtained by switching one crossing represent the same knot (or link). We classify such diagrams of a closed 3-braid.

Introduction

How does a diagram D of a knot (or link) L look which has the following property: all diagrams D′ obtained by changing exactly one crossing in D represent the same knot (or link) L′ (which we allow to be different from L)? This suggestive question was possibly first proposed in this form by K. Taniyama, who called such diagrams everywhere equivalent (see Definition 2.1 in Section 2.6). Through the connection between links and braid groups (see, e.g., [7]), this question turns out to be related to the following group-theoretic question: given a group with a set of conjugate generators, which words in these generators have the property that reversing each individual letter gives a set of conjugate elements?

When the crossing-changed diagrams D′ display the unknot, Taniyama's question was studied previously in [16], where D was called everywhere trivial. There some efforts were made to identify such diagrams, mostly by computationally verifying a number of low-crossing cases. The upshot was that, while there is a (hard-to-describe) abundance of such diagrams for D unknotted, only 6 simple diagrams seem to occur when D is not; see (2.9).

Motivated by Taniyama's more general concept, we made in [19] a further study of such phenomena. We conjectured, as an extension of the everywhere trivial case, a general description of everywhere equivalent diagrams for a knot, and proved some cases of low genus diagrams.
We also proposed some graph-theoretic constructions of everywhere equivalent diagrams for links. In this paper we give the answer for 3-braids. A 3-braid diagram corresponds to a particular braid word in Artin's generators. This word can be regarded up to inversion, cyclic permutations, the interchanges σ_i ↔ σ_i^{-1} (reflection) and σ_1 ↔ σ_2 (flip). However, beyond this it will be of importance to distinguish between braids and their words, i.e., how a given braid is written.

3) the words (σ_1^l σ_2^l)^k for k, l ≥ 1 (symmetric case), and 4) the words σ_1^k for k > 0 (split case).

Using the exponent sum (2.6), one easily sees that the answer to the related group-theoretic question for the 3-braid group consists of the last three families in the theorem. This outcome, and even more so its derivation, have turned out more complicated than expected. The first family mainly comes, except for σ_1σ_2σ_1^{-1}σ_2^{-1} and the trivial cases, from the diagrams of (2.9) (under the exclusion of the 5-crossing one, which is not a 3-braid diagram). The last two families are also quite suggestive, and given (in more general form for arbitrary diagrams) in [19]. However, we initially entirely overlooked the second family of diagrams.

We did not explicitly ask in [19] whether our link diagram constructions are exhaustive, but we certainly had them in mind when approaching Theorem 1.1. These previous examples come in some way from totally symmetric (edge-transitive) planar graphs. However, there is little symmetry in general here. One can easily construct examples of central (element) words lacking any symmetry. (One can also see that every positive word in the 3-braid group can be realized as a subword of a positive word representing a central element.) The second family does not yield (and thus answers negatively our original question for) knots, and it does not lead out of the positive case (whose special role was well recognized in [19]).
Still we take it as a caution that everywhere equivalence phenomena, although sporadic, may occur in far less predictable ways than believed. Our proof is almost entirely algebraic, and will consist in using the Jones polynomial to distinguish the links of the various D′ except in the desired cases. Mostly we will appeal to the description of the Jones polynomial in terms of the Burau representation, but at certain points it will be important to use information coming from the skein relation and the Kauffman bracket. The proof occupies Sections 3 and 4, divided by whether the braid word is positive or not. Both parts require somewhat different treatment. We will see that for 3-braids the non-positive case quickly reduces to the everywhere trivial one.

A final observation (Proposition 4.8) addresses the lack of interest of the situation opposite to everywhere equivalence: when all crossing-switched versions of a diagram are to represent different links. (This property was called everywhere different, and some constructions for such knot diagrams were given in [19].) In a parallel paper [21] we observe how to solve the classification of (orientedly) everywhere equivalent diagrams in another case, that of two components.

Preliminaries

It seems useful to collect various preliminaries, which will be used at different places later in the paper.

Link diagrams and Jones polynomial

All link diagrams are considered oriented, even if orientation is sometimes ignored. We also assume here that we actually regard the plane in which a link diagram lives as S², that is, we consider as equivalent diagrams which differ by the choice of the point at infinity. The Jones polynomial V can be defined as the polynomial taking the value 1 on the unknot, and satisfying the skein relation

t^{-1} V(L_+) − t V(L_−) = (t^{1/2} − t^{−1/2}) V(L_0).   (2.1)

In each triple (L_+, L_−, L_0) as in (2.1) the link diagrams are understood to be identical except at the designated spot.
The fragments are said to depict a positive crossing, a negative crossing, and a smoothed-out crossing, respectively. The skein smoothing is thus the replacement of a crossing by the third fragment. The writhe w(D) of a link diagram D is the sum of the signs of all crossings of D. If all crossings of D are positive, then D is called positive.

It is useful to recall here the alternative description of V via Kauffman's state model [8]. A state is a choice of splicings (or splittings) of type A or B (see Fig. 1) for every single crossing of a link diagram D. We call the A-state the state in which all crossings are A-spliced, and the B-state B(D) is defined analogously. When for a state S all splicings are performed, we obtain a splicing diagram, which consists of a collection of (disjoint) loops in the plane (solid lines) together with (crossing) traces (dashed lines). We call a loop separating if both its interior and exterior contain other loops (regardless of what traces). We will for convenience identify below a state S with its splicing diagram for fixed D. We will thus talk of the loops and traces of a state.

Figure 1. The A- and B-corners of a crossing, and both of its splittings. The corner A (resp. B) is the one passed by the overcrossing strand when rotated counterclockwise (resp. clockwise) towards the undercrossing strand. A type A (resp. B) splitting is obtained by connecting the A (resp. B) corners of the crossing. The dashed line indicates the trace of the crossing after the split.

Recall that the Kauffman bracket ⟨D⟩ [8] of a link diagram D is a Laurent polynomial in a variable A, obtained by a sum over all states S of D:

⟨D⟩ = Σ_S A^{#A(S) − #B(S)} (−A² − A^{−2})^{|S| − 1}.   (2.2)

Here #A(S) and #B(S) denote the number of type A (resp. type B) splittings and |S| the number of (solid line) loops in the splicing diagram of S.
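The state sum (2.2) can be evaluated mechanically for closed positive braid diagrams. The sketch below is our own illustration, not code from the paper; it assumes the common convention that the A-splice of a positive braid crossing is the identity smoothing, enumerates all states, and counts the loops of each splicing diagram with a union-find:

```python
from itertools import product

def bracket_of_positive_braid(word, strands):
    """Kauffman bracket (2.2) of the closure of a positive braid word,
    returned as a dict {exponent of A: coefficient}. word lists generator
    indices, e.g. [1, 1, 1] is sigma_1^3 in B_2 (a trefoil diagram)."""
    c = len(word)
    poly = {}
    for splices in product('AB', repeat=c):        # one state per choice vector
        parent = {}                                # union-find on (level, strand)
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)
        for lev, (j, s) in enumerate(zip(word, splices)):
            for k in range(1, strands + 1):
                if s == 'A' or k not in (j, j + 1):
                    union((lev, k), (lev + 1, k))  # strand passes straight through
            if s == 'B':                           # cup-cap smoothing
                union((lev, j), (lev, j + 1))
                union((lev + 1, j), (lev + 1, j + 1))
        for k in range(1, strands + 1):            # braid closure arcs
            union((c, k), (0, k))
        loops = len({find(x) for x in list(parent)})
        # contribute A^(#A - #B) * (-A^2 - A^-2)^(loops - 1)
        term = {splices.count('A') - splices.count('B'): 1}
        for _ in range(loops - 1):
            nxt = {}
            for e, co in term.items():
                nxt[e + 2] = nxt.get(e + 2, 0) - co
                nxt[e - 2] = nxt.get(e - 2, 0) - co
            term = nxt
        for e, co in term.items():
            poly[e] = poly.get(e, 0) + co
    return {e: co for e, co in poly.items() if co}

# Closure of sigma_1^3 in B_2 is a trefoil diagram:
print(bracket_of_positive_braid([1, 1, 1], 2))     # -A^5 - A^-3 + A^-7
```

Converting the printed bracket via (2.3) with w(D) = 3 gives the Jones polynomial of a trefoil, so the assumed splice convention is at least internally consistent with the formulas of this section.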
The formula (2.2) results from applying the first of the bracket relations

⟨D⟩ = A ⟨D_A⟩ + A^{−1} ⟨D_B⟩,   ⟨◯ ∪ X⟩ = (−A² − A^{−2}) ⟨X⟩

(where D_A and D_B are D with one crossing A- resp. B-spliced) to each crossing of D (here traces are ignored), and then deleting all but one loop using the second relation, at the cost of a factor −A² − A^{−2} per deleted loop. (The normalization is thus here that the diagram of one circle with no crossings has unit bracket.)

The Jones polynomial of a link L can be determined from the Kauffman bracket of some diagram D of L by

V_L(t) = (−t^{−3/4})^{−w(D)} ⟨D⟩ |_{A = t^{−1/4}},   (2.3)

with w(D) being the writhe of D. This is another way, different from (2.1), to specify the Jones polynomial. It is well known that V ∈ Z[t^{±1}] (i.e., only integral powers occur) for an odd number of link components (in particular, for knots), while V ∈ t^{1/2} · Z[t^{±1}] (i.e., only half-integral powers occur) for an even number of components. For V ∈ Z[t^{1/2}, t^{−1/2}], the minimal or maximal degree min deg V or max deg V is the minimal resp. maximal exponent of t with non-zero coefficient in V. Let span V = max deg V − min deg V.

Semiadequacy and adequacy

Let S be the A-state of a diagram D and S′ a state of D with exactly one B-splicing. If |S| > |S′| for all such S′, we say that D is A-adequate. Similarly one defines a B-adequate diagram D (see [11]); a diagram which is both A- and B-adequate is called adequate. Not every knot admits an adequate diagram; the Perko knot [14, Appendix] is an example. This property of the Perko knot follows from work of Thistlethwaite [22], and is explained, e.g., in Cromwell's book [3, p. 234], or (along with further examples) in [20].

A link diagram D is said to be split if its planar image (forgetting crossing information) is a disconnected set. A region of D is a connected component of the complement of this planar image. At every crossing of D four regions meet; if two coincide, we call the crossing nugatory. A diagram with no nugatory crossings is reduced.
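Under the same (assumed) splice conventions as in the state sum (2.2) — identity smoothing as the A-splice of a positive crossing — the definitions of A- and B-adequacy can be tested directly for closed positive braid diagrams. The following is a sketch of ours, not the paper's:

```python
def loops_of_state(word, strands, splices):
    """Number of loops of a spliced closed positive braid diagram; 'A' is
    the identity smoothing, 'B' the cup-cap. Counted via union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for lev, (j, s) in enumerate(zip(word, splices)):
        for k in range(1, strands + 1):
            if s == 'A' or k not in (j, j + 1):
                parent[find((lev, k))] = find((lev + 1, k))
        if s == 'B':
            parent[find((lev, j))] = find((lev, j + 1))
            parent[find((lev + 1, j))] = find((lev + 1, j + 1))
    for k in range(1, strands + 1):                # closure arcs
        parent[find((len(word), k))] = find((0, k))
    return len({find(x) for x in list(parent)})

def is_A_adequate(word, strands):
    """|A-state| must exceed |S'| for every state S' with one B-splice."""
    base = loops_of_state(word, strands, 'A' * len(word))
    return all(
        loops_of_state(word, strands, 'A'*i + 'B' + 'A'*(len(word)-i-1)) < base
        for i in range(len(word)))

def is_B_adequate(word, strands):
    base = loops_of_state(word, strands, 'B' * len(word))
    return all(
        loops_of_state(word, strands, 'B'*i + 'A' + 'B'*(len(word)-i-1)) < base
        for i in range(len(word)))

# The reduced alternating trefoil diagram (closure of sigma_1^3) is adequate;
# the one-crossing unknot diagram (closure of sigma_1) is not B-adequate:
print(is_A_adequate([1, 1, 1], 2), is_B_adequate([1, 1, 1], 2))   # True True
print(is_B_adequate([1], 2))                                      # False
```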
It is easily observed (as in [11]) that a reduced alternating diagram (and hence an alternating link) is adequate, and that the A- and B-state of an alternating non-split diagram have no separating loops.

The maximal degree of A among the summands in (2.2) is realized by the A-state S. However, its contribution may be cancelled by that of some other state. One situation when this does not happen is when D is A-adequate. Then the A-state gives in (2.2) the unique contribution to the maximal degree in A, and, via (2.3), the minimal degree of V. We call this the extreme A-term. Thus, for A-adequate diagrams, min deg V can be read off from the A-state directly, and the coefficient is ±1.

For not A-adequate diagrams, the situation is a bit more subtle and studied in [1]. We will use the following important special case of that study. When D is not A-adequate, the A-state has a trace connecting a loop to itself, which we call a self-trace. For a given loop, we call a pair of self-traces ending on it intertwined if they have the mutual position shown in (2.5). A self-trace is isolated if it is intertwined with no other self-trace. Bae and Morton show (among other things) that if in the A-state a self-trace is isolated, the contribution to the extreme A-term of V is zero. Similar remarks apply to B-adequate diagrams and max deg V, and then to adequate diagrams and span V.

Braid groups and words

The n-string braid, or shortly n-braid, group B_n is considered generated by the Artin standard generators σ_i for i = 1, . . . , n − 1. An Artin generator σ_i, respectively its inverse σ_i^{−1}, will be called a positive, respectively negative, letter, and i the index of this letter. We will almost entirely focus on n = 3. The Artin generators are subject to commutativity relations (for n ≥ 4; the bracket below denotes the commutator) and braid relations, which give B_n the presentation

B_n = ⟨ σ_1, . . . , σ_{n−1} | [σ_i, σ_j] = 1 for |i − j| > 1, σ_j σ_i σ_j = σ_i σ_j σ_i for |i − j| = 1 ⟩.
For the sake of legibility, we will commonly use a bracket notation for braid words. The meaning of this notation is the word obtained by replacing in the content of the brackets every integer ±i for i > 0, not occurring as an exponent, by σ_i^{±1}, and removing the enclosing brackets. Thus, e.g.,

[1 (2 − 1)^4 − 2 − 1] = σ_1 (σ_2 σ_1^{−1})^4 σ_2^{−1} σ_1^{−1}.

Although negative exponents will not be used much here, let us fix for clarity that for a letter they are understood as the inverse letter, and for a longer subword as the inverse letters written in reverse order. Thus

[−1 − 1 2] = [1^{−2} 2] = σ_1^{−2} σ_2   and   [(−1 2)^{−2} 1] = [(−2 1)^2 1] = [−2 1 − 2 1 1] = σ_2^{−1} σ_1 σ_2^{−1} σ_1^2.

Occasionally we will insert into the bracket notation vertical bars '|'. They have no influence on the value of the expression, but we use them to highlight special subwords.

A word which does not contain a letter followed or preceded by its inverse is called reduced. In a reduced braid word, a maximal subword σ_i^{±k} for k > 0, i.e., one followed and preceded by letters of different index, is called a syllable. The number i is called the index of the syllable, and the number ±k its exponent, which is composed of its sign '±' and its length k > 0. According to the sign, a syllable is positive or negative. A syllable of length 1 (of either sign) will be called trivial. Obviously every reduced braid word decomposes into syllables in a unique way:

β = ∏_{i=1}^{n} σ_{j_i}^{p_i},   with p_i ∈ Z \ {0},

where j_i is the index of the i-th syllable. The sequence (p_1, . . . , p_n) will be called the exponent vector. Thus an entry '±1' in the exponent vector corresponds to a trivial syllable. A word is positive if all entries in its exponent vector are positive (i.e., it has no negative syllable). Often braid words will be considered up to cyclic permutations. In this case so will be done with the exponent vector. The length of the exponent vector considered up to cyclic permutations will be called the weight ω(β) of β.
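As a concrete companion to these definitions, here is a small sketch of ours (not from the paper) that computes the syllable decomposition of a reduced word given as a list of signed integers, ±i standing for σ_i^{±1}, together with the exponent sum defined in the next paragraph:

```python
def exponent_vector(word):
    """Syllable decomposition of a reduced braid word given as signed
    integers (+i for sigma_i, -i for its inverse). Returns (index, exponent)
    pairs; the exponents alone form the exponent vector. In a reduced word
    consecutive letters of the same index have the same sign, so summing the
    signs of a run gives the syllable's exponent."""
    syllables = []
    for letter in word:
        idx, sgn = abs(letter), (1 if letter > 0 else -1)
        if syllables and syllables[-1][0] == idx:
            syllables[-1] = (idx, syllables[-1][1] + sgn)
        else:
            syllables.append((idx, sgn))
    return syllables

def exponent_sum(word):
    """[beta]: the sum of all letter signs, i.e. of all syllable exponents."""
    return sum(1 if letter > 0 else -1 for letter in word)

# sigma_1^2 sigma_2^3 sigma_1 has exponent vector (2, 3, 1):
print(exponent_vector([1, 1, 2, 2, 2, 1]))   # [(1, 2), (2, 3), (1, 1)]
print(exponent_sum([1, 2, -1, -2]))          # 0
```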
If β ∈ B_3, then ω(β) is even, since indices 1 and 2 can only alternate, except when ω(β) = 1 (and β is a single syllable). The quantity Σ_{i=1}^{n} |p_i| is the length of the word β (and is, of course, different from its weight, unless all syllables are trivial). The length-zero word will be called the trivial word. The quantity

[β] = Σ_{i=1}^{n} p_i   (2.6)

is called the exponent sum of β, and is an invariant of the braid (i.e., equal for different words of the same braid), and in fact of its conjugacy class.

The half-twist element ∆ ∈ B_n is given by

∆ = (σ_1 σ_2 · · · σ_{n−1})(σ_1 · · · σ_{n−2}) · · · (σ_1 σ_2) σ_1,

and its square ∆² = (σ_1 σ_2 · · · σ_{n−1})^n is the generator of the center of B_n. We will need mostly the group B_3, where ∆ has the two positive word representations [121] and [212]. We will use the bar notation ¯· for the involution σ_1 ↔ σ_2 of B_3 induced by conjugacy with ∆.

Braids and links

There is a well-known graphical representation for braids. Thus, a (positive/negative) letter of a braid word gives a (positive/negative) crossing, and smoothing a crossing corresponds to deleting a letter. In this sense we will feel free to speak of (switching) crossings and smoothings of a braid (word). For braid(word)s β there is a closure operation β̂. In this way, a braid closes to a knot or link and a braid word (which can be also regarded here up to cyclic permutations) closes to a knot or link diagram. Here the separation between the two levels of correspondence must be kept in mind. Thus β is a positive word if and only if β̂ is a positive link diagram. For a further analogy, we say that β is split if β̂ is a split diagram, which means that β contains neither σ_i nor σ_i^{−1} for some i. For every link L there is a braid β ∈ B_n with β̂ = L. The minimal such n for given L is called the braid index of L. (See, e.g., [5, 12].)

Burau representation

The (reduced) Burau representation ψ of B_3 in M_2(Z[t^{±1}]) is defined by

ψ(σ_1) = [[−t, 1], [0, 1]],   ψ(σ_2) = [[1, 0], [t, −t]]

(matrices written as lists of rows).
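The matrices just given can be multiplied out with elementary Laurent-polynomial arithmetic. The following sketch (ours, not the paper's) encodes a polynomial in t as a dict {exponent: coefficient}, checks the braid relation, and evaluates the trace that enters relation (2.8) below:

```python
def pmul(p, q):
    """Multiply two Laurent polynomials in t, stored as {exponent: coeff}."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c}

def padd(p, q):
    """Add two Laurent polynomials."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

def mmul(A, B):
    """Multiply 2x2 matrices with Laurent-polynomial entries."""
    return [[padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

# psi(sigma_1) and psi(sigma_2) as in the text; -t is {1: -1}, 1 is {0: 1}
S1 = [[{1: -1}, {0: 1}], [{}, {0: 1}]]
S2 = [[{0: 1}, {}], [{1: 1}, {1: -1}]]

def burau(word):
    """psi of a positive 3-braid word, e.g. [1, 2, 1, 2] for (s1 s2)^2."""
    M = [[{0: 1}, {}], [{}, {0: 1}]]     # identity matrix
    for letter in word:
        M = mmul(M, S1 if letter == 1 else S2)
    return M

print(burau([1, 2, 1]) == burau([2, 1, 2]))   # True: the braid relation

M = burau([1, 2, 1, 2])      # (sigma_1 sigma_2)^2 closes to a trefoil
tr = padd(M[0][0], M[1][1])
print(tr)                    # {2: -1}, i.e. tr psi(beta) = -t^2

# relation (2.8) below, with [beta] = 4 so that (-sqrt(t))^2 = t:
V = pmul({1: 1}, padd(pmul({1: 1}, tr), {0: 1, 2: 1}))
print(V)                     # {1: 1, 3: 1, 4: -1}: V = t + t^3 - t^4, a trefoil
```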
Then for k ∈ Z we have

ψ(σ_1^k) = [[(−t)^k, (1 − (−t)^k)/(1 + t)], [0, 1]],   ψ(σ_2^k) = [[1, 0], [t(1 − (−t)^k)/(1 + t), (−t)^k]].   (2.7)

For a closed 3-braid, there is a relation between the Burau representation and the Jones polynomial, which is known from the related Hecke algebra theory explained in [7]:

V_{β̂}(t) = (−√t)^{[β]−2} ( t · tr ψ(β) + 1 + t² ).   (2.8)

More often than the formula itself, we will use an important consequence of it: for a 3-braid, any two of the Burau trace, the exponent sum, and the Jones polynomial (of the closure) determine the third.

Note that ψ is faithful on B_3. One proof which uses directly this relation to the Jones polynomial is given in [18]. This property of ψ is not mandatory for our work, but we use it to save a bit of exposition overhead at some places. More importantly, there is a way to identify, for a given matrix, whether it is a Burau matrix, and if so, of which braid. The Burau matrix determines (for 3-braids), along with the Jones polynomial, also the skein and Alexander polynomials. These in turn determine (the skein polynomial precisely [15], the Alexander polynomial up to a twofold ambiguity [17]) the minimal length of a band representation of the braid. Thus one has only a finite list of band representations to check for a given Burau matrix. We in fact also used this method here to identify certain braids from their matrix. No tools of even remotely comparable convenience are available (or likely even possible) for higher braid groups.

Some properties of everywhere equivalent diagrams

We stipulate that in general D will be used for a link diagram and D′ for a diagram obtained from D by exactly one crossing change. If we want to indicate that we switch a crossing numbered as i, we also write D′_i. Similarly β will stand for a braid, usually a particular word of it, and β′ for a word obtained by inverting exactly one letter in β. The central attention of this study is the following type of diagrams.
A, potentially complete, list of knotted everywhere trivial diagrams was determined in [16]. These are given below and consist of two trefoil and four figure-8-knot diagrams:

(2.9)

We saw that this list is compatible with the 3-braid case in Theorem 1.1. The list draws evidence for its exhaustiveness from various sources. For minimal crossing diagrams (and in particular the fact that only the trefoil and figure-8 knot occur), the problem seems to have been noticed previously, and is now believed to be some kind of "knot-theory folklore". Yet, despite its very simple formulation, it is extremely hard to resolve. Apart from our own (previously quoted) efforts, we are not aware of any recent progress on it.

Another situation where everywhere equivalence can be resolved is for 2-bridge (rational) and Montesinos link diagrams. The classification of their underlying links gives a rather straightforward, albeit somewhat tedious, method to list such EE diagrams. In particular, these diagrams without trivial clasps are as follows, agreeing also with (2.9):

(2.10)

• the rational diagrams in (2.9) (all except the 8-crossing one),
• those with Conway notation C(p) and C(p, −p) for p ≥ 1, and
• the pretzel diagrams (p, . . . , p) (q times) = P(q, p) for q ≥ 3 and p ≥ 2 (see (2.10) for an example).

Note that in both the Montesinos and the 3-braid case, it is not necessary to exclude unknotted everywhere trivial diagrams to formulate a reasonable statement. However, we know that such diagrams occur in multitude outside these two classes, which is another hint as to why the classes are rather special.

The next lemma proposes a new family of EE diagrams for links, suggested by the 3-braid case. It identifies the second family in Theorem 1.1 in more general form. Beyond this point, we will from now on focus on 3-braids. The proof of Theorem 1.1 relies on several of their very special properties.
Despite the various input, the length of the argument shows that there was some effort in putting the pieces together. In that sense, our initial optimism in carrying out, for example, a similar investigation of 4-braids, seems little reasonable. In relation to our method of proof, we conclude the preliminaries with the following remark. The use of some algebraic technology might be occasionally (but not always) obsolete, for Birman-Menasco's work [2] has reduced the isotopy problem of closed 3-braids (mainly) to conjugacy. This fact provided a guideline for our proof. One place where the analogy surfaces is Lemma 4.2, which in some vague sense imitates, on the level of the Jones polynomial, a partial case of the combination of Birman-Menasco with the summit power in Garside's conjugacy normal form [6]. We will invoke Garside's algorithm at one point quite explicitly, in Lemma 3.2. On the other hand, the use of [2] would not make the proof much simpler, yet would build a heavy framework around it, which we sought to avoid. We will see that we can manageably work with the Burau representation and Jones polynomial (and that they remain essential to our argument). Proof of the non-positive case We start now with the proof of Theorem 1.1. Initial restrictions There is also here a dichotomy as to whether we are allowed to switch crossings of either sign, i.e., whether the diagram is (up to mirroring) positive or not. Let us throughout the following call a braid word everywhere equivalent (EE) if the diagramβ is such. We start by dealing with non-positive braids. The goal is to obtain the first family in Theorem 1.1. Notice here that for any (non-trivial) such word β, the closureβ is either the unknot or the figure-8-knot. For non-positive braids β, a strong restriction enters immediately, which will play a central role in the subsequent calculations. 
Since for a non-positive diagram D =β, one can get (3-string) braids of exponent sum differing by ±4 representing the same link, it follows from the Morton-Williams-Franks inequality [12,5] that the skein (HOMFLY-PT) polynomial P of such a link must have a single non-Alexander variable degree. Then a well-known identity [10,Proposition 21] implies that P = 1. By [15] we can conclude then thatβ = D is the unknot. In particular, the closure of β is a knot, and its exponent sum must be zero: [β] = 0. (3.1) We remark that if [β ] = ±2 (and the closure is unknotted), then by (2.8) its Burau trace is tr ψ(β ) = (−t) ±1 . (3.2) Note that not only does this trace determine a trivial Jones polynomial, but also that the Jones polynomial detects the unknot for 3-braids (see [18]). Thus this trace condition is in fact equivalent to the braid having unknotted closure. Since we know a priori that we expect a finite answer, it turns out helpful first to rule out certain subwords of β. Let us first exclude the cases when β contains up to cyclic permutations subwords of the form σ ±1 i σ ∓1 i , i.e., that it is not cyclically reduced. It is clear that if such β is everywhere equivalent, then so is the word obtained under the deletion of σ ±1 i σ ∓1 i . When we have proved the exhaustiveness of family 1 in Theorem 1.1, we see that it is enough to show that all words obtained by inserting σ ±1 i σ ∓1 i cyclically somewhere into any of these words β gives no everywhere equivalent word. Most cases can be ruled out right away. Note that in such a situation for some word β both βσ 2 i and βσ −2 i must have unknotted closure. In particular, [β] = 0, and the positive words β need not be treated. The other (non-positive) words β can be ruled out by noticing that when βσ 2 i gives (under closure) the unknot and β the unknot or figure-8-knot, then by the skein relation (2.1) of the Jones polynomial, βσ −2 i will have a closure with some non-trivial polynomial, and so it will not be unknotted. 
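The trace condition (3.2) can be tested mechanically. The sketch below (our own helper names, specialized to words of exponent sum zero as in (3.1), so that switching a crossing of sign ε gives [β′] = −2ε) checks every crossing switch of a word at t = 2; by the remark after (3.2), the trace test is equivalent to unknottedness of the switched closure.

```python
from fractions import Fraction
from functools import reduce

t = Fraction(2)
PSI = { 1: [[-t, 1], [0, 1]],            # psi(sigma_1)
        2: [[1, 0], [t, -t]],            # psi(sigma_2)
       -1: [[-1 / t, 1 / t], [0, 1]],    # psi(sigma_1^-1)
       -2: [[1, 0], [1, -1 / t]]}        # psi(sigma_2^-1)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace_of(word):
    M = reduce(mul, (PSI[x] for x in word), [[1, 0], [0, 1]])
    return M[0][0] + M[1][1]

def everywhere_trivial(word):
    # word encodes a 3-braid: +/-1 for sigma_1^{+-1}, +/-2 for sigma_2^{+-1};
    # assumes exponent sum 0, so switching a positive (negative) letter must
    # produce Burau trace (-t)^{-1} (resp. -t), cf. (3.1) and (3.2)
    assert sum(1 if x > 0 else -1 for x in word) == 0
    for i, x in enumerate(word):
        switched = word[:i] + [-x] + word[i + 1:]
        expected = (-t) ** (-1 if x > 0 else 1)
        if trace_of(switched) != expected:
            return False
    return True
```

For instance, `everywhere_trivial([1, -2, 1, -2])` confirms the family-1 word (σ₁σ₂⁻¹)², while (σ₁σ₂⁻¹)³, whose closure has three components, fails the test.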
Thus from now on we assume that β is a reduced word and has an exponent vector. The case of exponent vector of length (i.e., weight) 2 is rather easy, and leads only to σ 1 σ −1 2 , so let us exclude this in the following. Syllable types We regard thus now β as cyclically reduced, and start by examining what type of syllables can occur in the exponent vector. For a syllable, we are interested in the exponent (i.e., sign and length) of the syllable, and the sign of the preceding and following syllable. Up to mirroring and σ 1 ↔ σ 2 we may restrict ourselves to positive syllables of σ 2 , and up to inversion we have three types of signs of neighboring syllables (of σ 1 ). Case 1. Two positive neighboring syllables. Up to cyclic permutations we have for n ≥ 1, β = ασ 1 σ n 2 σ 1 . (3.3) Let us call the subword starting after α the visible subword of β, and its three syllables (at least two of which are trivial) the visible syllables. We cannot assume that α ends on σ ±1 2 , so that the first visible syllable of β may not be a genuine syllable. We try to find out now how M = ψ(α) should look like. First note that because of (3.1), we must have (Here (−t) −1 has to occur everywhere on the right, since we always switch a positive crossing in β with (3.1), and thus [β ] = −2.) These three equalities give affine conditions on M , which restrict it onto a line in the space of 2 × 2 matrices, regarded over the fraction field F of Z[t, t −1 ]. Then the quadratic determinant condition (3.4) will give two solutions. These live a priori only in a quadratic extension of F. For the existence of α we need, (1) that the entries of M lie outside a quadratic field over F (i.e., the discriminant is a square), that then (2) they lie in Z[t, t −1 ] (i.e., that denominators disappear up to powers of t), and that in the end (3) the entries build up a valid Burau matrix of a braid. 
Something that startled us is that we encountered redundant cases which survived until any of these three intermediate stages. Putting the equations into MATHEMATICA TM [23], using the formulas (2.7), gives the solutions for the lower right entry d of M . We will use from now on the extra variable u = −t to simplify the complicated expressions somewhat (even just a sign affects the presentation considerably!). The value for d determined by MATHEMATICA looks thus: d = u 7+n + 5u 3+2n − 7u 4+2n + 6u 5+2n − 5u 6+2n + 4u 7+2n − 2u 8+2n − 3u 2+3n + 8u 3+3n − 10u 4+3n + 6u 5+3n − 2u 6+3n + u 7+3n + u 3+4n + u 5+n t + u 1+2n t + u 3+4n t ± u 1+2n (1 + t) 2 4u n + 8u 2+n + 8u 4+n − 4u 5+n + 4u 6+n + 4u n t (3.5) + 3 + 10u n + 3u 2n t 3 u n + 3u 2+n + (2 + u n )u n t + u n t 3 + (−1 + u n )t 4 2 1/2 × 2u 2n t(1 + t) 2 u n + 3u 2+n + 3u 4+n + u 6+n + u n t + 1 + 4u n + u 2n t 3 + u n t 5 −1 . Our attention is dedicated first to the discriminant (occurring under the root). Removing obvious quadratic factors, we are left with the polynomial u −3u 3 + 4u n − 4u n+1 + 8u 2+n − 10u n+3 + 8u 4+n − 4u 5+n + 4u 6+n − 3u 2n+3 . (3.6) We need that this polynomial becomes the square of a polynomial in Z[t, t −1 ]. For n ≥ 4 the edge coefficients are −3, and thus the polynomial is not a square. Similarly for n = 2, where the minimal and maximal degrees are odd. For n = 1 the visible subwords we consider in (3.3) are just the half-center element ∆ (and, under various symmetries, its inverse). Their exclusion as subwords of β will be most important for the rest of the argument (see Lemma 3.2), but occurs here in the most peculiar way. We have in (3.6) the polynomial 4−4u+ 5u 2 −10u 3 + 5u 4 −4u 5 + 4u 6 . This becomes a square, and so we obtain values for d d = t 4 + 2t 5 − t 6 − 4t 7 − t 8 + 2t 9 + t 10 ± t 6 (1 + t) 8 (−2 + t − 4t 2 + t 3 − 2t 4 ) 2 2t 4 (1 + t) 4 (1 − t + 3t 2 − t 3 + t 4 ) in F (rather than some of its quadratic fields). 
For the choice of negative sign we get for d
\[
d = \frac{1 + t^2 + t^4}{t - t^2 + 3t^3 - t^4 + t^5}.
\]
The evidence that this is not a Laurent polynomial in t can be sealed, for example, by setting t = 1/2. The expression evaluates to 42/19, whose denominator is not a power of 2. For the '+' we get d = −1/t. This gives indeed a matrix in Z[t, t⁻¹]:
\[
M = \begin{pmatrix} t^{-2} & 0 \\ t^{-1} & -t^{-1} \end{pmatrix}.
\]
But there is no braid α with such a Burau matrix. This can be checked via the Alexander polynomial of the (prospective) closure, but there is a direct argument (which appeals, though, to the faithfulness of ψ). The matrix in question is M = t⁻² · ψ(σ₂), so ψ(ασ₂⁻¹) would have to be the scalar matrix t⁻² · Id; but scalar Burau matrices are only in the image of the center of B₃, and these are powers of t³.

For n = 3 the polynomial (3.6) is 1 − 4u + 8u² − 10u³ + 8u⁴ − 4u⁵ + u⁶, which is a square, and the rather complicated expressions (3.5) become
\[
d = \frac{t^8 + 4t^9 + 6t^{10} + 3t^{11} - 3t^{12} - 6t^{13} - 4t^{14} - t^{15}
\pm \sqrt{t^{16}(1+t)^{10}(1+t+t^2)^2}}{2t^{11}(1+t)^4(1+t+t^2)},
\]
giving d = t⁻³ and d = −t⁻². These lead to the matrices
\[
M = \begin{pmatrix} -t^{-2} & t^{-3} \\ 0 & t^{-3} \end{pmatrix}
\qquad\text{and}\qquad
M = \begin{pmatrix} t^{-3} & 0 \\ t^{-2} & -t^{-2} \end{pmatrix}.
\]
We used the Alexander polynomial to check that these indeed occur as Burau matrices, and to see what their braids are. (We remind that ψ is faithful on B₃.) The answer is α₁ = ∆⁻²σ₁ and α₂ = ∆⁻²σ₂. These solutions were unexpected, but can be easily justified: putting αᵢ for α in (3.3) for n = 3, one easily sees that switching any of the last 5 crossings gives a braid with unknotted closure.

Case 2. One positive and one negative neighboring syllable. In this case we have β = ασ₁⁻¹σ₂ⁿσ₁. (Now in the first case we switch a negative crossing, thus [β′] = 2, and we need the trace −t.) The solutions for the lower right entry d of M are now even less pleasant than (3.5), and thus we do not reproduce them here.
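The non-integrality argument used in Case 1 above (evaluation at t = 1/2) is easy to replay with exact rational arithmetic; the code below is our own check, not part of the paper's computations. A Laurent polynomial over Z evaluated at 1/2 has a power of 2 as denominator, so the value 42/19 rules out d ∈ Z[t, t⁻¹].

```python
from fractions import Fraction

def d(t):
    # candidate lower right entry from the '-' sign choice in Case 1, n = 1
    return (1 + t**2 + t**4) / (t - t**2 + 3 * t**3 - t**4 + t**5)

value = d(Fraction(1, 2))
assert value == Fraction(42, 19)
assert value.denominator % 2 != 0   # 19 is odd, in particular not a power of 2
```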
It is again important mainly to look at the discriminant, which was identified by MATHEMATICA as u 1+2n (1 + t) 2 1 − u n + t + t 2 2 −4u 2n + 4 1 + u n + u 2n u 5+n − 12u 2+2n − 12u 4+2n − 4u 6+2n − 4 1 + u n + u 2n u n t + 3 − 4u n − 6u 2n − 4u 3n + 3u 4n t 3 , and becomes a square times the following polynomial: u −3u 4n+3 + 4u 3n+5 + 4u 3n+3 + 4u 3n+1 − 4u 2n+6 + 4u 2n+5 − 12u 2n+4 + + 6u 2n+3 − 12u 2n+2 + 4u 2n+1 − 4u 2n + 4u n+5 + 4u n+3 + 4u n+1 − 3u 3 . This polynomial is not a square for n ≥ 3 by the leading coefficient argument. It is, though, a square for n = 1, 2, and gives solutions for d, M , and α. The braids α are more easily identified by direct observation. For n = 2 we have α 1 = σ −1 2 σ −1 1 and α 2 = σ −1 1 σ −1 2 . The solution α 1 can be seen from the word [−2 − 1 − 1221] in family 1. The other solution (was guessed but) is also easily confirmed. For n = 1 both solutions stem from words in family 1: α 1 = σ −1 2 and α 2 = σ −1 2 σ −1 1 σ 2 σ 1 σ −1 2 . Case 3. Two negative neighboring syllables. Here we have with The solutions for the lower right entry of M look thus: β = ασ −1 1 σ n 2 σ −1 1 ,(3.d = − 2u 2+n − 2u 3+n + u 4+n + 3u 2+2n − 4u 3+2n + u 4+2n + u 2+3n + u n t + u 3n t ± u 1+2n (1 + t) 2 (4u n + 4u 2+n + (3 + 2u n + 3u 2n ) t) (1 − u n + t + t 2 ) 2 × 1 2u 2n (1 + t) 2 (1 + t + u n t + t 2 ) . The non-square part of the discriminant is −3u 2 + 4u 1+n − 2u 2+n + 4u 3+n − 3u 2+2n . Here the edge coefficients become −3 for n ≥ 2. Thus n = 1. Now M again becomes a Burau matrix, and there are the solutions α 1 = σ 1 and α 2 = σ 2 . (Again the first comes from [1 − 21 − 2] in family 1.) With the discussion in the preceding three cases, we have thus now obtained restrictions on how syllables in β can look like. There are four 'local' syllable types (up to symmetries), which can be summarized thus, and we will call admissible. • No syllable has length at least 4. • A syllable of length 3 has both neighbored syllables of the same sign. 
• A syllable of length 2 has exactly one of its two neighbored syllables having the same sign.
• A syllable of length 1 has at most one of its two neighbored syllables having the same sign.

For each admissible syllable, we have also identified the two possible braids outside the syllable (although not their word presentations).

Words of small length

The next stage of the work consists in verifying a number of words of small length. One can easily argue (see [21]) that for more than one component, there are no crossings in an everywhere equivalent diagram between a component and itself. We notice that this observation specializes here to saying that (for non-split braids) the exponent sum (or word length) is even. This will not be essential in the proof, but helpful to avoid checking certain low crossing cases.

The test of small length words is done by an algorithm which goes as follows. We start building a word β of the form β = αγ, where γ is a word known to us, and we know the Burau matrix M = ψ(α) of α. The understanding is that switching any of the crossings in γ gives a braid with unknotted closure. We call γ an extendable word, in the sense that it can potentially be extended to a solution β. Whenever M is the identity matrix, we can take β = γ and have an everywhere equivalent braid, which we output.

Next we try to extend γ by one letter τ = σᵢ^{±1}, so that it is not the inverse of the preceding letter, and the admissible syllable shapes are not violated. Let M̃ = ψ(τ⁻¹) · M. We test whether tr(M · ψ(γ · τ⁻¹)) = (−t)^{∓1}, which is equivalent to whether a crossing change at the new crossing (also) gives the unknot. If this happens, we can continue the algorithm with γ replaced by γ · τ and M replaced by M̃.

This procedure can yield the solutions β up to a given number of crossings (word length), and also produce the list of extendable braids γ up to that crossing number.
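The incremental search just described is the paper's; as an illustration, here is a much cruder brute-force variant (entirely our own code, restricted to words of exponent sum zero and the single rational value t = 2) that recovers members of family 1 of Theorem 1.1 at small word length.

```python
from fractions import Fraction
from functools import reduce
from itertools import product

t = Fraction(2)
PSI = { 1: [[-t, 1], [0, 1]], 2: [[1, 0], [t, -t]],
       -1: [[-1 / t, 1 / t], [0, 1]], -2: [[1, 0], [1, -1 / t]]}

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace_of(word):
    M = reduce(mul, (PSI[x] for x in word), [[1, 0], [0, 1]])
    return M[0][0] + M[1][1]

def everywhere_trivial(word):
    # each crossing switch must give the Burau trace of an unknotted closure
    return all(trace_of(word[:i] + (-x,) + word[i + 1:]) ==
               (-t) ** (-1 if x > 0 else 1)
               for i, x in enumerate(word))

def search(length):
    sols = []
    for word in product((1, -1, 2, -2), repeat=length):
        if sum(1 if x > 0 else -1 for x in word) != 0:
            continue   # family 1 words have exponent sum zero, cf. (3.1)
        if everywhere_trivial(word):
            sols.append(word)
    return sols
```

For example, `search(4)` finds (σ₁σ₂⁻¹)² and σ₁σ₂σ₁⁻¹σ₂⁻¹ (and their symmetry variants), consistent with family 1; a single rational value of t can in principle give false positives, which is why candidates found this way still have to be verified directly.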
Note that, since potential EE solutions can be directly checked, it is often enough to work with particular values of t. (These can be a priori complex numbers, but in practice most helpfully should be chosen to be rational.) We did not feel confident about this in the three initial cases, because of the presence of variable exponents. Alternatively, we could have used a t with |t| < 1 and some convergence (and error estimation) argument, but whether that would have made the proof nicer is doubtful. For rational t, we implemented the above outlined procedure in C++, whose arithmetic is indefinitely faster than MATHEMATICA. It, however, has the disadvantage of thoughtlessly producing over-/underflows, and some effort was needed to take care of that and to work with rational numbers whose numerator and denominator are exceedingly large. With this problem in mind, it is recommended to use simple (but non-trivial) values of t. We often chose t = 2, but also t = 3, and a few more exotic ones like t = 4/5 whenever feasible. We were able to perform the test up to 15 crossings for t = 2 and up to 12 crossings (still well enough for what we need, as we will see below) for the other t. This yielded the desired family 1, but also still a long list of extendable words even for large crossing number. Such words were not entirely unexpected: one can see, for example, that when β is a solution, then any subword γ of a power β k of β is extendable (with α being, roughly, a subword of β 1−k ). There turned out to be, however, many more extendable words, which made extra treatment necessary. The following argument gives a mild further restriction. Assume β has a subword [12 − 1 − 2]. Switching the '2' will give [−2 − 1], while switching the '−1' will give [21]. Now, these two subwords can be realized also by switching either crossings in [−21], which is a word of the same braid as [12 − 1 − 2]. 
This means that if β is EE, then also a word is EE in which [12 − 1 − 2] was replaced by [−21]. Thus verifying words up to 10 crossings would allow us to inductively discard words containing [12 − 1 − 2] and its various equivalents. Global conditions All these 'local' conditions were still not enough to rule out all possibilities, and finally we had to invent a 'global' argument. For this we use the following fact: if β ∈ B 3 has unknotted closure, then β is conju- gate to σ ±1 1 σ ±1 2 . The first proof appears to be due to Murasugi [13]. It was recovered by Birman-Menasco [2]. A different proof, based on the Jones polynomial, follows from (though not explicitly stated in) [18]. Namely, one can use relations of the sort σ 1 σ k 2 σ −1 1 = σ −1 2 σ k 1 σ 2 , (3.9) which do not augment word length (together with their versions under the various involutions), and cyclic permutations to reduce β to a length-2 word. The non-conjugacy to σ ±1 1 σ ±1 2 of a crossing-switched version β of our braids β is detected by Garside's algorithm [6]. We adapt it in our situation as follows. Lemma 3.1. Assume a braid β ∈ B 3 is written as ∆ k α with α a positive word with cyclically no trivial syllables and k ≤ −2. Then β is not conjugate to σ ±1 1 σ ±1 2 . Proof . This is a consequence of Garside's summit power in the conjugacy normal form. There is again an alternative (but a bit longer) way using span t tr ψ(β ). We refer for comparison to the proof of Lemma 4.2, but for space reasons only briefly sketch the argument here. For α one uses the relation (2.8) and that the span of V (α) is determined by the adequacy of the diagramα. The center multiplies tr ψ only by powers of t 3 . • k is at least half of the number of negative letters in β . • Only the first and last syllable of α can be trivial. If β starts and ends with positive letters τ (not necessarily the same for start and end), which are not followed resp. preceded byτ , then α has no trivial syllable. 
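The length-preserving relations (3.9) used above are easy to confirm through the Burau representation: since ψ is faithful on B₃ (as recalled in Section 2), equality of the Burau matrices over Z[t, t⁻¹] implies equality of the braids, and the exact evaluation at t = 2 below (our own spot check) at least rules out typos for small k.

```python
from fractions import Fraction
from functools import reduce

t = Fraction(2)
PSI = { 1: [[-t, 1], [0, 1]], 2: [[1, 0], [t, -t]],
       -1: [[-1 / t, 1 / t], [0, 1]], -2: [[1, 0], [1, -1 / t]]}

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def psi(word):
    return reduce(mul, (PSI[x] for x in word), [[1, 0], [0, 1]])

# sigma_1 sigma_2^k sigma_1^{-1} == sigma_2^{-1} sigma_1^k sigma_2, cf. (3.9)
for k in range(1, 7):
    lhs = psi([1] + [2] * k + [-1])
    rhs = psi([-2] + [1] * k + [2])
    assert lhs == rhs
```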
(For the bar notation recall the end of Section 2.3.) Proof . This is the result of the application of Garside's procedure on β . We manage, starting with trivial α and k = 0, a word presentation of β ∆ −k α γ, with the following property: α is a positive word with only the first and last syllable possibly trivial, and γ is a terminal subword of β . We also demand that if γ starts with a positive letter, this is the same as the final letter of α (unless α is trivial). We call this the edge condition. We apply the following iteration. 1. Move as many positive initial letters from γ as possible into the end of α , so as γ to start with a negative letter τ . This will not produce internal trivial syllables in α because of the edge condition and because β has no ∆ subword. 2. If γ has no negative letters, then we move it out into α entirely, and are done with k = k . 3. Otherwise we apply two types of modifications: • If γ's second letter (exists and) isτ , delete ττ in γ, replace α byᾱ , add at its end τ −1 , and augment k by 1. • If γ's second letter (does not exist or) is notτ , delete τ in γ, replace α byᾱ , add at its end τ −1τ −1 , and augment k by 1. Then go back to step 1. In the end we obtain the desired form. Since there is no ∆ −1 in β , each copy of ∆ −1 added compensates for at most two negative letters of β . Now we consider an EE word β of, say, more than 10 crossings. We switch, in a way we specify below, a crossing properly and apply the above procedure starting at a cyclically well-chosen point of the resulting word β . We obtain the shape of Lemma 3.2. This gives the contradiction that the closureβ is not unknotted by Lemma 3.1. If β contains a syllable of length 2 or 3, then we have (up to symmetries) [12221] or [−1221]. Switch the final '1'. This does not create a ∆ −1 because there is no subword [21 − 2 − 1] in β. Then apply the procedure starting from the second last letter: β = [2 − 1 · · · − 12] or [2 − 1 · · · 122]. 
With this we can exclude non-trivial syllables in β. Next, if there is a trivial syllable between such of opposite sign, up to symmetries [−2−12], we have with its two further neighbored letters [1| − 2 − 12| − 1]. (3.10) The right neighbor cannot be '1', because we excluded [−2 − 121] as subword. The left neighbor cannot be '−1', because we excluded ∆ −1 . Now we switch the middle '−1' in the portion (3.10), and start the procedure with the following (second last in that presentation) letter '2'. The only words that remain now are the alternating words β = [(1 − 2) k ]. There are various ways to see that they no longer create problems. In our context, one can switch a positive letter, group out a ∆ −1 built together with the neighbored letters, and then start the procedure right after that ∆ −1 . This completes the proof of the non-positive braids. 4 Proof of the positive case Adequate words From now we start examining, and gradually excluding the undesired, positive braids in B 3 . The nature of this part is somewhat different. Here no electronic computations are necessary, but instead a delicate induction argument. The presence of the central braids and their realization of every positive word as subword explain that no 'local' argument can work as in the non-positive case. Thus from the beginning we must use the 'global' features of the braid words. Our attitude will be that except for the stated words β, we find two diagrams D =β we can distinguish by the Jones polynomial. Because of the skein relation (2.1) of V , one can either distinguish the Jones polynomial of (the closure of) two properly chosen crossing-switched versions β , or of two smoothings of β. (In the crossing-switched versions, a letter of β is turned into its inverse, and in the smoothings it is deleted.) Moreover, one can switch back and forth between the Jones polynomial and the Burau trace, because of the consequence of (2.8) stated below it. 
Notice that for a positive word, length and exponent sum are the same. Accordingly we call a word ψ-everywhere equivalent, if all crossing-switched versions (or equivalently, all smoothed versions) have the same Burau trace. Trivial syllables will require a lot of attention in the following, and thus, to simplify language, we set up the following terminology. Definition 4.1. We call a positive word adequate, resp. cyclically adequate, if it has no trivial syllable (the exponent vector has no '1'), resp. has no such syllable after cyclic permutations. Otherwise, the word is called (cyclically) non-adequate. This choice of language is suggested by observing that cyclically adequate words β give adequate diagramsβ. Contrarily, for cyclically non-adequate words β the diagramsβ are not adequate: a trivial syllable of a positive word β always gives a self-trace in the B-state ofβ, i.e.,β is not B-adequate. (However, being positive,β is always A-adequate, and thus it is not inadequate in the sense of (2.4).) A useful application of adequacy is the following key lemma, which will help us carry out the induction without unmanageable calculations. We call below a trivial syllable isolated if it is not cyclically followed or preceded by another trivial syllable. Recall also the weight ω(β) from Section 2.3. Lemma 4.2. Assume positive words β and γ have the same length, and γ is cyclically adequate (i.e., its cyclic exponent vector has no '1'). Further assume that either 1) ω(γ) < ω(β), or 2) ω(γ) = ω(β) and β has an isolated (trivial) syllable. Then tr ψ(β) = tr ψ(γ). Proof . It is enough to argue with the Jones polynomial. The closure diagramγ is adequate, and by counting loops in the A-and B-states, we see span V (γ) = [γ] − ω(γ) + 1. If ω(γ) = ω(β), the right hand-sides of (4.1) and (4.2) agree, so we argue that the inequality (4.2) is strict. When a trivial syllable in β is isolated, so is its (self-)trace in the B-state ofβ, as defined below (2.5). 
It follows then from the explained work of [1] that the extreme B-degree term is zero, making (4.2) strict.

We use the following lemma to first dispose of cyclically adequate braids. Let us from now on use the symbol '.=' for equality of braid words up to cyclic permutations.

By the skein relation (2.1), for 0 ≤ k ≤ m,
\[
V_{\widehat{\alpha\sigma_i^{m-k}}} = A_k(t)\, V_{\widehat{\alpha\sigma_i^{m}}} + B_k(t)\, V_{\widehat{\alpha\sigma_i^{m-1}}},
\]
with A_k, B_k ∈ Z[t^{1/2}, t^{−1/2}] independent of α and i. We apply this argument for k = m − 1 once on the syllable σᵢᵐ and once on σⱼᵐ. Then we see again two positive words of equal length and weight that must have the same Jones polynomial, one of which has a (single, and thus isolated) trivial syllable, and the other has none. As before, Lemma 4.2 gives a contradiction.

For the rest of the proof, we consider a cyclically non-adequate word β, and use induction over the word length. We assume that ψ-everywhere equivalent braids of smaller length are in families 2, 3 and 4. It will be helpful to make the families disjoint by excluding the (central) cases of l = 1 and 3 | k in family 2.

Lemma 4.4. When β is positive and ψ-everywhere equivalent and β has a 6-letter subword representing ∆², then deleting that subword gives a ψ-everywhere equivalent braid word.

Proof. All the crossing switched versions of β outside the copy of ∆² have the same Burau trace, and deleting that copy of ∆², the Burau trace multiplies by t⁻³.

In relation, let us remark that ∆² has the following 6-letter presentations up to σ₁ ↔ σ₂ and inversion (but not cyclic permutations):
\[
[121121],\ [121212],\ [212212],\ [221221],\ [211211].
\tag{4.3}
\]

Lemma 4.5. If β = α₁∆α₂∆ and β is ψ-everywhere equivalent, then so is ᾱ₁α₂ (where the over-bar again denotes σ₁ ↔ σ₂).

Proof. Note that the crossing changes outside the two copies of ∆ will commute with putting the two ∆ close using
\[
\alpha_2 \Delta = \Delta\bar\alpha_2
\tag{4.4}
\]
to form a ∆², and then apply Lemma 4.4.

The move (4.4) will be used extensively below, and will be called sliding. Obviously, one can slide any copy of ∆ through any subword αᵢ.

Induction for words with trivial syllables

Case 1.
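The mechanism behind Lemma 4.4 is that ∆² is central with scalar Burau matrix ψ(∆²) = t³ · Id (cf. the remark on scalar Burau matrices in Case 1 of Section 3), so inserting or deleting a copy of ∆² multiplies every Burau trace by t^{±3}. A quick check of ours at t = 2:

```python
from fractions import Fraction
from functools import reduce

t = Fraction(2)
PSI = {1: [[-t, 1], [0, 1]], 2: [[1, 0], [t, -t]]}  # psi(sigma_1), psi(sigma_2)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def psi(word):
    return reduce(mul, (PSI[x] for x in word), [[1, 0], [0, 1]])

def trace_of(word):
    M = psi(word)
    return M[0][0] + M[1][1]

full_twist = [1, 2, 1, 1, 2, 1]          # one 6-letter word for Delta^2
assert psi(full_twist) == [[t**3, 0], [0, t**3]]   # scalar matrix t^3 * Id

alpha = [1, 1, 2, 1, 2, 2, 1]            # an arbitrary positive word
assert trace_of(full_twist + alpha) == t**3 * trace_of(alpha)
```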
β has a ∆ 2 subword. We can apply Lemma 4.4, and use induction. We have a central ∆ 2 word inserted somewhere in a remainder , which is either (a) a central word, (b) a split word σ k 1 or (c) a symmetric word [1 l 2 l ] k . In (a) we have a central word β, and this case is clear. So consider (b) and (c). Case 1.1. The split remainders. Then β = ∆ 2 σ k 1 for k > 0. So we have the words 1 (up to reversal, cyclic permutations and σ 1 ↔ σ 2 ) |121121|1 k , |121212|1 k , |212212|1 k , |221221|1 k , |211211|1 k . We distinguish them directly by two smoothings: for the letters indicated in bold, we obtain a (2, n)-torus link, and smoothing a letter in 1 k gives a positive braid with ∆ 2 , and thus under closure a link of braid index 3. (For ' . =' recall above Lemma 4.3. Of course, δ represents ∆ 2 , but we separate both symbols as subwords of β.) If the whole word β is ψ-everywhere equivalent, then by Lemma 4.4 (since δ represents ∆ 2 ) so is ∆ 2 α, obtained after deleting δ in the remainder. Note for later, when we insert δ back, that this insertion must be done so that the last letter of δ and the first letter of α are not the same. Iterating this argument, we can start by testing α = [12] and α = [1212]. Thus it is enough to see that when one inserts a ∆ 2 word into (or before or after) [12] or [1212], the result β is not ψ-everywhere equivalent, unless it is β . = [12] k for k = 4 or 5. These are all knot diagrams of ≤ 10 crossings, and they can be checked directly. Then (iteratedly) inserting back δ must be done so as to yield only β . = [12] k for higher k. Case 1.2.2. Consider next l ≥ 2. Then β is up to cyclic permutations of the form ∆ 2 α with the exponent vector of α having a '1' possibly only at the start and/or end. We want to apply Lemma 4.2 to exclude these cases. Case 1.2.2.1. Let α be adequate (i.e., have no trivial syllable even at its start and end). We compare two crossing-switched versions of β = ∆ 2 α. 
First we switch a crossing in ∆ 2 , turning β into σ 2 1 σ 2 2 α or σ 2 2 σ 2 1 α, which is a cyclically adequate word. Another time we switch a crossing in (any non-trivial syllable of) α, yielding ∆ 2 α for a positive α , whereby the weight decreases by 2. Let us make this argument more precise. We look at the three words for ∆ 2 in ( In a similar way one checks for the other seven cases in (4.5) that one can apply Lemma 4.2. It is important to notice for later that in the first four cases, we do not need in fact that any of the neighboring syllables of ∆ 2 is trivial. Case 2. There is no ∆ 2 subword in β, but there are two ∆ subwords, i.e., we can apply Lemma 4.5. Thus β = α 1 ∆α 2 ∆ (4.6) withᾱ 1 α 2 a ψ-everywhere equivalent word. Ifᾱ 1 α 2 is central, the situation is clear. Case 2.1.ᾱ 1 α 2 is split. These are the words β (up to symmetries) |121|2 k |121|1 l , |121|2 k |212|1 l , |212|2 k |121|1 l , |212|2 k |212|1 l . They are distinguished by smoothings of the indicated boldfaced letter (giving as in Case 1.1 a (2, n)-torus link) and some letter outside the copies of ∆ (giving a link of braid index 3). Case 2.2. We have α =ᾱ 1 α 2 . = σ l 1 σ l 2 k . (4.7) We can obviously assume, by excluding ∆ 2 subwords, that none of α i is the trivial (empty) word. Recall the sliding (4.4) we used to bring two subwords ∆ together to form a ∆ 2 : β = α 1 ∆α 2 ∆ → ∆ᾱ 1 α 2 ∆ . = ∆ 2ᾱ 1 α 2 . (4.8) Case 2.2.1. If one of the ∆ in (4.6) has neighbored letters of different index, after the sliding the other ∆ close, will have neighbored letters to ∆ 2 of the same index. Look at the words in the first row of (4.5). These are built around the the four words for ∆ 2 factoring into two subwords of ∆. In all four cases, the indicated to-change crossings lie entirely in one of the copies of ∆ in ∆ 2 . By symmetry, it can be chosen in either of them, in particular in the one we did not slide. 
Thus one can undo in the same way bringing together the two copies of ∆ after either crossing changes (on the right of (4.8)), and realize the two crossing changes in the original braid word β (on the left of (4.8)). The crossing changes in (4.5) thus apply also in β to give positive words satisfying the assumptions of Lemma 4.2, and we are again done. Case 2.2.2. Now either of ∆ has neighboring syllables in α i of the same index. We assumed (by excluding ∆ 2 subwords in the present case) that none of α i is a trivial word. Case 2.2.2.1. Let us first exclude that an α i has length 1. We will return to this situation later. Since [1|121|12] = [1|212|12] is central, none of the neighboring syllables can be trivial. In particular, we can exclude the cases l = 1, i.e., that α =ᾱ 1 α 2 . = [12] k in (4.7). Sliding one copy of ∆ next to the other, we have a word for ∆ 2 factoring into two words for ∆. We can up to reversal assume that we slide the second ∆, while the first is fixed. As By permuting the last letter of α to the left, we have some of the following situations (up to reversal and σ 1 ↔ σ 2 ): [1|121|1], [1|121|2], [2|121|2]. In either the first and third case, none of the neighboring syllables to ∆ can be trivial, because otherwise we have a ∆ 2 subword up to cyclic permutations, and we dealt with this case. Thus permuting back all of α's letters to the right, we have the shape we wanted. In the second case, if the neighboring syllable '2' is trivial, then we have [1|121|21]. Now choosing a better ∆: [11|212|1], one arrives (up to σ 1 ↔ σ 2 ) in the third case above. Thus if we cannot obtain the desired shape of α, the syllable after ∆ must be non-trivial: [1|121|22]. Then compare the smoothings of the first and last '2': [1|11|22] and [1|121|2]. The first subword gives a cyclically adequate word of smaller weight, the second one a word of equal weight. This finishes the proof of Theorem 1.1. Remark 4.6. 
The use of the Jones polynomial means a priori that we distinguish the links of D i as links with orientation, up to simultaneously reversing orientation of all components. However, this restriction is not necessary, and in fact, we may see the links of D i non-isotopic as unoriented links. Namely, in the non-positive case (family 1 in Theorem 1.1), we have only knot diagrams D, where the issue is irrelevant. In the positive case (families 2 and 3), one can easily see that all β can be simplified to a positive braid word of two fewer crossings. It is a consequence of the minimal degree of the Jones polynomial (see, e.g., [4]) and its reversing property (see, e.g., [9]), that if the closures of two positive braids of the same exponent sum (and same number of strings) are isotopic as unoriented links, then they are isotopic (up to reversing all components simultaneously) with their positive orientations. Remark 4.7. Note also that, for links, our method does not restrict us to (excluding) isotopies between the links of D i mapping components in prescribed ways. For example, the diagram D gives a natural bijection between components of D i and D j , but this correspondence never played any role. On everywhere dif ferent diagrams As an epilogue, we make a useful remark that by modifying Lemmas 4.4 and 4.5, one can easily see that the construction of everywhere different diagrams (where all crossing-switched versions represent different links, and here even in the strict, oriented, sense) is meaningless for 3-braids. Proof . We show that for each 3-braid word β, there are two crossing-switched versions giving conjugate braids. We show this by induction over the word length. Assume β is an everywhere different 3-braid word. Obviously, by induction, we can restrict ourselves to braid words β with no σ ±1 i σ ∓1 i (whose deletion preserves the everywhere different property). Thus β has an exponent vector, and for evident reasons, all syllables must be trivial. 
(In particular the word length is even, and the closure is not a 2-component link.) If β were an alternating word, then β = (σ_1 σ_2^{-1})^k, which is obviously not everywhere different (at most two different links occur after a crossing change). Since β is thus not alternating, it must contain cyclically a word for ∆ = [121] or [212], or ∆^{-1}. Moreover, one easily sees that in a subword [1212] the edge crossing changes give the same braid, so that both syllables around a ∆ (resp. ∆^{-1}) must be negative (resp. positive). In particular, different subwords of β representing ∆ (or ∆^{-1}) are disjoint. Again by symmetry reasons, there must be more than one ∆^{±1} word. (The braids [121(−21)^k −2] are not a problem to exclude: switch the two crossings cyclically neighbored to [121]; if k = 0 there are two unknot diagrams.) Thus β = ∆^{±1} α_1 ∆^{±1} α_2. The word ᾱ_1 α_2 has now by induction two crossing changes (of the same sign) giving conjugate braids. Since ∆^{±1} ∆^{±1} is central (if not trivial), these crossing changes will remain valid in ∆^{±1} ∆^{±1} ᾱ_1 α_2, and then also, by the sliding argument, in β.

Theorem 1.1. A 3-braid word β gives an everywhere equivalent diagram if and only if it is in one of the following four families (up to the above equivalence):

1) (σ_1 σ_2^{-1})^k, or (σ_1 σ_2 σ_1^{-1} σ_2^{-1})^k for k = 1, 2, and σ_1 σ_2^{-1} σ_1^{-2} σ_2^2 (non-positive case; refers to all three possibilities),

2) any positive (or negative, including the trivial) word representing a central element ∆^{2k}, k ∈ Z (central case),

Definition 2.1. We call a link diagram D everywhere equivalent (EE) if all diagrams D_i depict the same link for all i. (This link may be different from the one represented by D.) If all D_i are unknot diagrams, we call D everywhere trivial.

Lemma 2.2. When β ∈ B_n is central, then every positive word of β is everywhere equivalent.

Proof.
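The parenthetical claim that the alternating word (σ_1 σ_2^{-1})^k yields at most two different links after a crossing change can be probed computationally. The following sketch is an illustration of the conjugacy argument, not code from the paper: for k = 2 it forms all four crossing-switched words and checks, via the reduced Burau representation of B_3 (one common matrix convention is assumed here), that the switched words pair up as cyclic permutations of each other, hence into at most two conjugacy classes.

```python
import sympy as sp

t = sp.symbols('t')
# Reduced Burau matrices for B_3 in one common convention, plus inverses.
B = {1: sp.Matrix([[-t, 1], [0, 1]]), 2: sp.Matrix([[1, 0], [t, -t]])}
B[-1], B[-2] = B[1].inv(), B[2].inv()

def burau(word):
    """Image of a braid word (letters +-1, +-2) under reduced Burau."""
    M = sp.eye(2)
    for g in word:
        M = M * B[g]
    return sp.simplify(M)

word = [1, -2, 1, -2]                  # the alternating word (s1 s2^-1)^2

switched = []
for i in range(len(word)):
    w = list(word)
    w[i] = -w[i]                       # a crossing change negates one letter
    switched.append(w)

traces = [sp.cancel(burau(w).trace()) for w in switched]
# Switching positions 1/3 (resp. 2/4) gives cyclically equal, hence
# conjugate, braid words, so their (conjugacy-invariant) traces agree:
assert sp.simplify(traces[0] - traces[2]) == 0
assert sp.simplify(traces[1] - traces[3]) == 0
print("at most two classes of switched braids")
```

Since the trace of a matrix product is invariant under cyclic permutation of its factors, the two assertions hold identically in t; they reflect that the four switched diagrams represent at most two links.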
The crossing-switched versions of β̂ are represented up to cyclic permutations by braids of the form σ_i^{-2} · α, where α are cyclically permuted words, one of which represents β. But if β is central, then all α represent this same central element, and thus all σ_i^{-2} · α are conjugate.

[α] = −2 − n, and so det M = (−t)^{−2−n}. [α] = −n, and we have to check the identity for M as in (3.4): det M = (−t)^{−n}, and the unknot trace conditions

tr(M · ψ([12^n 1])) = −t,  tr(M · ψ([−12^{n−2} 1])) = (−t)^{−1},  tr(M · ψ([−12^n −1])) = (−t)^{−1}.

det M = (−t)^{2−n}, and the trace conditions

tr(M · ψ([12^n −1])) = −t,  tr(M · ψ([−12^{n−2} −1])) = (−t)^{−1},  tr(M · ψ([−12^n 1])) = −t.

of α. (Thus essentially we know α itself, we do not know a word presentation of it.) We perform this with each of the shapes in (3.3), (3.7) and (3.8) with the two possible values of α we found in each case. (Thus there are in total 8 initial pairs of parameters (M, γ).)

Lemma 3.2. Let β be a reduced braid word (not up to cyclic permutations) with no ∆ and ∆^{-1} as subwords. Then under braid relations (without cyclic permutations) one can write β = ∆^{-k} α with k ≥ 0 and α a positive word.

Moreover, by comparing the extremal degrees in the bracket expansion (2.2) of the diagram β̂, we have

span V(β̂) ≤ [β] − ω(β) + 1,  (4.2)

and [γ] = [β]. The first case of the lemma is then clear from (4.1) and (4.2).

Lemma 4.3. Assume β is a positive adequate ψ-everywhere equivalent word. Then all cyclic exponent vector entries are equal, i.e., β .= [(1^l 2^l)^k].

Proof. Let m be the minimal exponent vector entry, and assume by contradiction that there is an exponent vector entry m′ > m. If m′ = 2, compare the smoothings in the syllables corresponding to m and m′. There is a contradiction by Lemma 4.2. For m′ = 3, consider the crossing-switched versions. Now, when m′ > 3, one can argue as follows.
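The determinant and trace conditions above involve the reduced Burau representation ψ of B_3. As a quick check of the surrounding algebra, the sketch below (using one common normalization of ψ, which may differ from the convention fixed in (3.4) of the text) verifies the braid relation and that ∆^2 = (σ_1 σ_2)^3 is mapped to a scalar matrix, i.e., acts centrally:

```python
import sympy as sp

t = sp.symbols('t')
# Reduced Burau images of sigma_1, sigma_2 in B_3 (assumed convention).
S1 = sp.Matrix([[-t, 1], [0, 1]])
S2 = sp.Matrix([[1, 0], [t, -t]])

# Braid relation: sigma1 sigma2 sigma1 = sigma2 sigma1 sigma2
assert sp.simplify(S1*S2*S1 - S2*S1*S2) == sp.zeros(2, 2)

# Delta = sigma1 sigma2 sigma1; Delta^2 generates the center of B_3
Delta = S1*S2*S1
print(sp.simplify(Delta*Delta))   # a scalar matrix, t^3 times the identity
```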
Because of the skein relation (2.1), the Jones polynomial of ασ_i^{m−k} for fixed k is determined by those of ασ_i^m and ασ_i^{m′}.

Lemma 4.5. If β contains up to cyclic permutations two disjoint subwords ∆ (given as [121] or [212]), β = α_1 ∆ α_2 ∆.

Case 1.2. The symmetric remainders [1^l 2^l]^k.

Case 1.2.1. Let first l = 1 (and 3 k). If k > 3, then however one inserts that ∆^2, one has outside (in the remainder) up to cyclic permutations the word δ = [121212] or [212121], i.e., β .= ∆^2 δα.

Proposition 4.8. There exist no everywhere different 3-braids (where we consider links up to oriented isotopy).

Then we set a diagram to be adequate if it is A-adequate and B-adequate. It is semiadequate if it is A- or B-adequate, and inadequate, if it is not semiadequate, that is, neither A- nor B-adequate:

semiadequate = A-adequate or B-adequate,  inadequate = neither A-adequate nor B-adequate.  (2.4)

(Note that inadequate is a stronger condition than not to be adequate.) A link is called A (or B)-adequate, if it has an A (or B)-adequate diagram. A link is adequate if it has an adequate diagram. This property is stronger than being both A- and B-adequate, since a link might have diagrams that enjoy either property, but none that does so simultaneously. [Figure: the Perko knot 10_161]

For [121212], switching the boldfaced '2' gives [1^2 2^2], and the weight decreases by 4. Thus we are done by the first (unequal weight) case of Lemma 4.2. For [212212], switching either one of the boldfaced letters gives [1^2 2^2] or [2^2 1^2]. At least for one of these two words the weight decreases by 4 (and we are through as above), unless both neighboring letters of ∆^2 = [212212] are '2': [2|212212|2]. In this case the weight decreases by 2, as it does for ∆^2 α. However, in ∆^2 α, at least one of the two '1' of ∆^2 gives an isolated trivial syllable. Thus we can apply the second case of Lemma 4.2.
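To make the role of the skein relation concrete, the following sketch derives Jones polynomials along a short twist sequence. It assumes the common normalization t^{-1} V(L_+) − t V(L_−) = (t^{1/2} − t^{-1/2}) V(L_0) (the exact form of (2.1) in the text may differ), starting from V(unknot) = 1 and V of the 2-component unlink:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.sqrt(t)

def v_plus(v_minus, v_zero):
    """Solve t^-1 V(L+) - t V(L-) = (s - 1/s) V(L0) for V(L+)."""
    return sp.expand(t * (t * v_minus + (s - 1/s) * v_zero))

V_unknot = sp.Integer(1)
V_unlink2 = -(s + 1/s)               # two-component unlink

# Positive Hopf link: L- is the 2-unlink, L0 the unknot
V_hopf = v_plus(V_unlink2, V_unknot)     # Jones: -t^(5/2) - t^(1/2)

# Right-handed trefoil: L- is the unknot, L0 the positive Hopf link
V_trefoil = v_plus(V_unknot, V_hopf)     # Jones: -t^4 + t^3 + t
print(V_hopf, V_trefoil)
```

Resolving one positive crossing at a time in this way is exactly how the polynomial of ασ_i^{m−k} is pinned down by those of two neighboring twist numbers.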
For [211211], we switch to [2^2 1^2], and the weight decreases by 2, but ∆^2 in ∆^2 α has an isolated trivial syllable '2'.

Case 1.2.2.2. Thus let α have a trivial initial or final syllable. Then, because of l ≥ 2, the initial and final syllable have the same index (1 or 2). Up to σ_1 ↔ σ_2, cyclic permutation and inversion, it is enough to look at the cases

[1|1 21121|1], [1|1 21212|1], [1|212 121|1], [1|212 212|1], [1|1121 12|12], [1|1 22122|12], [1|221221 |12], [1|21121 1|12].  (4.5)

In the following, boldfaced letters in braid words should be understood as such whose switch (resp. smoothing) yields a word that can take the function of γ in Lemma 4.2. Italic letters switched (resp. removed) should yield the corresponding β in that lemma. These words are again dealt with by Lemma 4.2. In all eight cases switching the boldfaced letter gives, after braid relations, a positive cyclically adequate word, and turning the italic one gives a positive cyclically non-adequate word. To see this, it is helpful to recall and use the identities (3.9). We exemplarily show the fourth word. Switching the italic letter gives [1|2112|1], with weight decreasing by 2. Switching the boldfaced letter gives [1−2122121] → [112−12121] → [11221−221] → [112211], and a cyclically adequate word with weight decreasing by 4.

Acknowledgment. I wish to thank K. Taniyama and R. Shinjo for proposing the problems to me, and the referees for their helpful comments.
As we switch crossings outside the second ∆ (i.e., in the first ∆, or outside ∆^2), we can compare such a word with one obtained from the crossing changes indicated in Case 1.2.2. In all of [121121], [121212], [212121] and [212212] (the words for ∆^2 factoring into two words for ∆), one can change a crossing among the first three letters to obtain [1^2 2^2] or [2^2 1^2], giving a cyclically adequate word β̂. Up to σ_1 ↔ σ_2 the above four options for ∆^2 words reduce to two. For ∆^2 = [121212], we use the switch [121212], giving [1^2 2^2], and a cyclically adequate word β̂ of weight by 4 less. In β the weight decreases by at most 2, and thus Lemma 4.2 applies. For ∆^2 = [212212], we obtain [2^2 1^2]. If it decreases the weight by 4, we are done. Otherwise it decreases the weight by 2, and the letter following ∆^2 is '2': [|212212|2].

If the letter preceding ∆^2 is also '2', i.e. [2|212212|2], the italic letter switched decreases the weight by at most 2, and leaves an isolated syllable '1' in ∆^2. Otherwise, the letter preceding ∆^2 is '1'. By remembering (4.7) the word surrounding ∆^2 and our assumption l ≥ 2, we see that we must have [11|212 212|2]. Then the italic letter switched decreases the weight by 2 and leaves an isolated '2'.

Case 2.2.2.2.
We must treat extra the case that one of the α_i has one letter. If both α_i have, these are the words [1|121|1|121], [1|121|1|212], [1|212|1|212]. These are knot diagrams of 8 crossings, and it is easily checked (as done in [19]) that only the third is ψ-everywhere equivalent, as desired. (Here we use that for ≤ 8 crossings positive braid knots have all different Jones polynomial.) When, say, α_2 has more than one letter, these are, up to interchanges and (cyclically) permuting the last letter of α to the left, the following situations: [1|121|1|121|1], [1|121|1|212|1], [1|212|1|212|1].

We write β as in (4.9), and we assume that α has no ∆ subword. Let us assume also that α has at least three letters. (The other low crossing cases are rather obvious.)
We know, by case assumption, that a trivial syllable in α in (4.9) can occur only with its first or last letter. We argue that, up to excluded cases, one can write β as in (4.9) so that even these two syllables are non-trivial, that is, so that α is adequate. If we can ascertain such α, we are done with Lemma 4.2 as follows. We compare two smoothings of β. On the one hand, smoothing any crossing in α will give a positive word of equal weight. On the other hand, smoothing the non-repeating letter in [121] decreases the weight. We argue thus how to find a presentation (4.9) of β with adequate (and not only cyclically adequate) α.

References

[1] Bae Y., Morton H.R., The spread and extreme terms of Jones polynomials, J. Knot Theory Ramifications 12 (2003), 359-373, doi:10.1142/S0218216503002512, math.GT/0012089.
[2] Birman J.S., Menasco W.W., Studying links via closed braids. III. Classifying links which are closed 3-braids, Pacific J. Math. 161 (1993), 25-113, doi:10.2140/pjm.1993.161.25.
[3] Cromwell P.R., Knots and links, Cambridge University Press, Cambridge, 2004, doi:10.1017/CBO9780511809767.
[4] Fiedler T., On the degree of the Jones polynomial, Topology 30 (1991), 1-8, doi:10.1016/0040-9383(91)90030-8.
[5] Franks J., Williams R.F., Braids and the Jones polynomial, Trans. Amer. Math. Soc. 303 (1987), 97-108, doi:10.2307/2000780.
[6] Garside F.A., The braid group and other groups, Quart. J. Math. Oxford 20 (1969), 235-254, doi:10.1093/qmath/20.1.235.
[7] Jones V.F.R., Hecke algebra representations of braid groups and link polynomials, Ann. of Math. 126 (1987), 335-388, doi:10.2307/1971403.
[8] Kauffman L.H., State models and the Jones polynomial, Topology 26 (1987), 395-407, doi:10.1016/0040-9383(87)90009-7.
[9] Lickorish W.B.R., Millett K.C., The reversing result for the Jones polynomial, Pacific J. Math. 124 (1986), 173-176, doi:10.2140/pjm.1986.124.173.
[10] Lickorish W.B.R., Millett K.C., A polynomial invariant of oriented links, Topology 26 (1987), 107-141, doi:10.1016/0040-9383(87)90025-5.
[11] Lickorish W.B.R., Thistlethwaite M.B., Some links with nontrivial polynomials and their crossing-numbers, Comment. Math. Helv. 63 (1988), 527-539, doi:10.1007/BF02566777.
[12] Morton H.R., Seifert circles and knot polynomials, Math. Proc. Cambridge Philos. Soc. 99 (1986), 107-109, doi:10.1017/S0305004100063982.
[13] Murasugi K., On closed 3-braids, Memoirs of the American Mathematical Society, Vol. 151, Amer. Math. Soc., Providence, R.I., 1974, doi:10.1090/memo/0151.
[14] Rolfsen D., Knots and links, Publish or Perish, Berkeley, Calif., 1976.
[15] Stoimenow A., The skein polynomial of closed 3-braids, J. Reine Angew. Math. 564 (2003), 167-180, doi:10.1515/crll.2003.088, math.GT/0103041.
[16] Stoimenow A., On unknotting numbers and knot trivadjacency, Math. Scand. 94 (2004), 227-248.
[17] Stoimenow A., Properties of closed 3-braids, math.GT/0606435.
[18] Stoimenow A., Coefficients and non-triviality of the Jones polynomial, J. Reine Angew. Math. 657 (2011), 1-55, doi:10.1515/CRELLE.2011.047.
[19] Stoimenow A., Everywhere equivalent and everywhere different knot diagrams, Asian J. Math. 17 (2013), 95-137, doi:10.4310/AJM.2013.v17.n1.a5.
[20] Stoimenow A., On the crossing number of semiadequate links, Forum Math. 26 (2014), 1187-1246, doi:10.1515/forum-2011-0121.
[21] Stoimenow A., Everywhere equivalent 2-component links, Preprint.
[22] Thistlethwaite M.B., On the Kauffman polynomial of an adequate link, Invent. Math. 93 (1988), 285-296, doi:10.1007/BF01394334.
[23] Wolfram S., Mathematica - a system for doing mathematics by computer, Addison-Wesley, Reading, MA, 1988.
Phonons, Phase Transitions and Thermal Expansion in LiAlO2: An ab-initio Density Functional Study

Baltej Singh (1,2), M. K. Gupta (1), R. Mittal (1,2), S. L. Chaplot (1,2)

(1) Solid State Physics Division, Bhabha Atomic Research Centre, Mumbai 400085, India
(2) Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094, India

Abstract: We have used ab-initio density functional theory technique to understand the phase transitions and structural changes in various high temperature/pressure phases of LiAlO2. The electronic band structure as well as phonon spectra are calculated for various phases as a function of pressure. The phonon entropy used for the calculations of Gibbs free energy is found to play an important role in the phase stability and phase transitions among various phases. A sudden increase in the polyhedral bond lengths (Li/Al-O) signifies the change from the tetrahedral to octahedral geometry at high-pressure phase transitions. The activation energy barrier for the high-pressure phase transitions is calculated. The phonon modes responsible for the phase transition (upon heating) from high pressure phases to ambient pressure phases are identified. Moreover, ab-initio lattice dynamics calculations in the framework of quasi-harmonic approximations are used to calculate the anisotropic thermal expansion behavior of γ-LiAlO2.

DOI: 10.1039/c8cp01474d; arXiv:1804.05627
Keywords: Phonons in crystal lattice; Thermal properties of crystalline solids; High pressure; Phase transition

PACS numbers: 63.20.-e, 65.40.-b
I. Introduction

The application of pressure or temperature to a material may give rise to interesting thermodynamic properties [1-3] and phase transitions [4-7]. High-pressure experimental techniques have led to the synthesis of many functional materials [8-11] like superconductors, super-hard materials and high-energy-density materials. Pressure compresses the material, which gives rise to an increased overlap of the electron clouds [7,12-14]. This leads to a rearrangement of the band structure, which is reflected in changes in the optical, electrical, dynamical and many other physical properties [15-17]. Changes in atomic coordination, symmetry and atomic arrangement [18] take place in pressure-induced phase transformations [19,20]. The pressure-volume curve of a material is important in simulating nuclear reactor accidents, in designing inertial confinement fusion schemes, and for understanding rock-mechanical effects of shock propagation in the earth due to nuclear explosions.

LiMO2 (M = B, Al, Ga, In) compounds are widely studied due to their rich phase diagrams. The polymorphism in these compounds arises from cation order-disorder as well as coordination changes [21-23]. The transitions in these systems occur with large lattice mismatch and are accompanied by enormous stresses in the crystals [21,22]. LiAlO2 finds several important applications. It is used as a coating material for Li-based electrodes [24-26] and as an additive in composite electrolytes [27]. This material is a lithium-ion conductor [28,29] at high temperature. A similar compound, LiCoO2, well known for battery cathode applications, is not cost effective [30]. Moreover, LiCoO2 decomposes at high temperature and is problematic due to the toxic nature of cobalt [30]. Therefore, many other materials are being investigated and engineered for the same application. γ-LiAlO2 is stable up to 1873 K and is cost effective.
It is even more stable towards intercalation/deintercalation of Li from the structure [28,29]. Li diffusion in this material occurs with a migration barrier of 0.72(5) eV [28,29]. LiAlO2 exhibits very small lattice changes during lithium diffusion. This property also makes it suitable for use as a substrate material for epitaxial growth of III-V semiconductors like GaN [31]. It is also used as a tritium-breeder material in the blanket of fusion reactors, due to its excellent performance under high neutron and electron radiation background [32,33]. LiAlO2 has been studied extensively in recent years using various experimental and computational techniques [28,29,34-39] for its interesting properties as a Li-ion battery material. The stability, safety and performance of battery materials are related to their behavior (thermal expansion and phase transitions) in the temperature and pressure range of interest [40].

The phase diagram of LiAlO2, as obtained from X-ray diffraction experiments, is reported over a range of temperature and pressure [41]. LiAlO2 is known to have six polymorphs, of which the structure of only four (α, β, γ and δ) is known [41]. The stable form under ambient conditions is γ-LiAlO2. The α and β polymorphs begin to convert to the γ-polymorph above 700 °C [42,43]. The occurrence of the α → γ LiAlO2 phase transformation is one of the crucial problems for application of this compound as a solid-phase matrix for the electrolyte in molten carbonate fuel cells. γ-LiAlO2 is strongly considered as a breeder material because of its thermophysical, chemical and mechanical stability at high temperatures. Dilatometer measurements [44] on γ-LiAlO2 yielded anisotropic thermal expansion coefficients of α_a = 7.1×10^-6 K^-1 and α_c = 15×10^-6 K^-1, while α_a = 10.8×10^-6 K^-1 and α_c = 17.97×10^-6 K^-1 are reported from recent neutron diffraction studies [29]. However, the energetic and atomistic picture of the phase transitions is not well understood.
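As a small worked example of what the quoted expansion coefficients imply (assuming, for simplicity, temperature-independent coefficients, which is only a rough approximation over such a wide range), the fractional lattice-parameter changes per 1000 K are:

```python
# Fractional change da/a = alpha * dT for the quoted coefficients.
def rel_change(alpha_per_K, dT_K):
    return alpha_per_K * dT_K

coeffs = [("a, dilatometry", 7.1e-6), ("c, dilatometry", 15e-6),
          ("a, neutron diffraction", 10.8e-6), ("c, neutron diffraction", 17.97e-6)]
for label, alpha in coeffs:
    print(f"{label}: {100 * rel_change(alpha, 1000.0):.2f} % per 1000 K")
# -> 0.71 %, 1.50 %, 1.08 % and 1.80 % respectively
```

The roughly twofold anisotropy between the a and c axes is what the quasi-harmonic calculation later in the paper aims to reproduce.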
Here we report such a picture through a study of the phase stability and phase transitions among the various forms of LiAlO2 as a function of pressure and temperature. Experimentally, isotropic static high pressures are obtained using a diamond anvil cell, while dynamic pressures are achieved by mechanical or laser shock. Ab-initio density functional calculations provide a very good alternative [8,11,16,45-47] to accurately study the energetics and properties of materials under extreme pressure conditions [48]. The ab-initio calculated enthalpy is widely used to obtain the pressure stability region of many crystalline solids [45,49-51]. To be more accurate, the entropy contribution can be added to the enthalpy in order to obtain the free energy of the system [47]. Ab-initio lattice dynamics calculations of phonon spectra [10,15,18,52,53] can provide the phonon entropy contribution to the free energy of a solid. The free energy provides the complete pressure-temperature phase diagram and stability range of crystalline solids [47,54,55]. The pressure and temperature dependence of the vibrational modes [16,46,56-59] can be used to understand the atomistic picture of the phase transitions.

We have calculated the pressure-dependent electronic band structure, phonon band structure and energetics of the various phases of LiAlO2. Phonon spectra are calculated at various pressures in order to understand the role of phonon entropy in the stability of the phases and their transformations. The pressure dependence of the elastic properties is calculated to understand the high-pressure behavior of the various phases. Moreover, the mechanism of the high-temperature phase transitions from the high-pressure phases to the ambient phase is studied in terms of phonon mode instabilities. The anisotropic thermal expansion of γ-LiAlO2 is calculated in the framework of the quasi-harmonic approximation.
II. Computational Details

The Vienna ab-initio simulation package (VASP) [60,61] was used for structure optimization and total energy calculations. All the calculations are performed using the projected augmented wave (PAW) formalism [62] of Kohn-Sham density functional theory within the generalized gradient approximation (GGA) [63,64] for exchange correlation, following the parameterization by Perdew, Burke and Ernzerhof. A kinetic energy cutoff of 820 eV is adopted for the plane-wave pseudopotential. A k-point sampling with a grid of 4×4×4, generated automatically using the Monkhorst-Pack method [65], is used for the structure optimizations. The above parameters were found to be sufficient to obtain a total energy convergence of less than 0.1 meV for the fully relaxed (lattice constants and atomic positions) geometries. The total energy is minimized with respect to the structural parameters. For the lattice dynamics calculations, the Hellmann-Feynman forces are calculated by the finite displacement method (displacement of 0.03 Å). Total energy and force calculations are performed for 18, 24, 14 and 14 distinct atomic configurations, resulting from symmetrical displacements of the inequivalent atoms along the three Cartesian directions (±x, ±y and ±z), for the α, β, γ and δ-phases respectively. The convergence criteria for the total energy and ionic forces were set to 10^-8 eV and 10^-5 eV Å^-1. The phonon energies were extracted from subsequent calculations using the PHONON software [66]. The Gibbs free energy is obtained as

G(P, T) = Φ(V) + E_vib(V, T) + PV,

where Φ(V) is the static lattice energy, E_vib is the phonon energy, and P, V and T are pressure, volume and temperature respectively.

III. Results and Discussion

LiAlO2 occurs in several phases [41], and the crystal structures are known for the α, β, γ and δ-phases. The γ-phase is the stable phase at ambient conditions and crystallizes in a tetragonal structure [41]. It consists of LiO4 and AlO4 tetrahedral units (Fig. 1(a)). The α, β and δ-phases are the high pressure-temperature phases [41].
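The volume minimization implicit in this Gibbs free energy can be sketched numerically. The toy example below uses invented parameter values (a parabolic Φ(V) and a single Einstein mode whose energy softens with volume; this is not the LiAlO2 data and merely stands in for the full phonon free energy) to evaluate G(P, T) = min over V of [Φ(V) + F_vib(V, T) + PV] on a volume grid:

```python
import numpy as np

kB = 8.617e-5                               # Boltzmann constant, eV/K

def phi(V, V0=33.0, B0=0.75):               # static lattice energy, eV (toy)
    return 0.5 * B0 * (V - V0)**2 / V0

def f_vib(V, T, E0=0.04, gamma=1.5, V0=33.0):
    """Vibrational free energy (eV) of one Einstein mode whose energy
    softens with volume through an assumed Grueneisen parameter gamma."""
    E = E0 * (V0 / V)**gamma
    return 0.5 * E + kB * T * np.log1p(-np.exp(-E / (kB * T)))

def gibbs(P, T):
    """Minimize Phi + F_vib + PV over a volume grid; P in eV/A^3."""
    V = np.linspace(28.0, 38.0, 2001)
    G = phi(V) + f_vib(V, T) + P * V
    i = np.argmin(G)
    return G[i], V[i]

G0, V0eq = gibbs(0.0, 300.0)
G5, V5eq = gibbs(0.03, 300.0)               # ~5 GPa (1 eV/A^3 ~ 160 GPa)
print(V0eq, V5eq)                           # compression shrinks the cell
```

Repeating such a minimization for each candidate phase and comparing the resulting G(P, T) values gives the pressure-temperature stability map discussed in the Results.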
β-LiAlO2, also called the low-temperature form [68], which is stable below 0 °C, crystallizes in an orthorhombic structure and also consists of LiO4 and AlO4 tetrahedral units (Fig. 1(b)). In high-pressure experiments [41], the β-phase can be obtained from the γ-phase at 0.8 GPa and 623 K. The stability region of the β-phase is quite narrow, so it possibly coexists with other phases [41]. The AlO4 and LiO4 polyhedral units share one edge in the γ-phase, while these polyhedra are corner-shared in the β-phase. α-LiAlO2 crystallizes in hexagonal symmetry [42] and consists of LiO6 and AlO6 octahedra (Fig. 1(c)). It can be experimentally obtained from the γ-phase in the high temperature and pressure range 0.5-3.5 GPa and 933-1123 K. The α-phase can reversibly transform [42,43] to the γ-phase when heated above 600 °C at zero pressure. The δ-phase is obtained experimentally at 4 GPa using static pressure [22], and at 9 GPa by dynamic shock compression [69] of the γ-phase. It crystallizes in a tetragonal structure and consists of LiO6 and AlO6 octahedral units (Fig. 1(d)). The structure of the δ-phase is known to possess a small amount of Li/Al anti-site disorder. δ-LiAlO2 is stable up to 773 K, and it transforms to α- and γ-LiAlO2 at higher temperatures. The high-pressure reconstructive transition from the γ- to the δ-phase is reported to be of displacive nature, accompanied by enormous stresses stemming from the lattice mismatch [22]. Table I gives the experimental and calculated structures of the various phases, which also provides some idea of the structural correlation among the various phases. The calculated structures are found to be in good agreement with the available experimental data.

A. Electronic Structure

The electronic band structure calculations show that the band gaps of all the phases are higher than 4 eV and signify insulating behavior. The γ-phase has the smallest band gap, 4.7 eV, among all the phases. Therefore, as compared to the other phases, the γ-phase may be most easily tuned for battery electrode applications by creating defects in the crystal.
As we apply pressure, the γ-phase transforms to the β-phase, with a band gap of 4.9 eV. The band gaps of the δ-phase (5.8 eV) and the α-phase (6.2 eV) are higher still than those of the γ- and β-phases, which may be related to the increased coordination (4 to 6) of the Al and Li atoms in these high-pressure phases. Cathode materials for Li-ion batteries are required to be good electronic and ionic conductors. Although the α-phase has a layered distribution of Li, in which Li ionic movement could be favorable, the high coordination and large band gap could limit the application of this phase as a cathode material in Li-ion batteries. The calculations reveal an increase in the electronic band gap of all the phases of LiAlO2 (Fig. 3) on application of hydrostatic pressure. Above 20 GPa, the band gaps of the γ- and β-phases show a sudden increase and reach the value found for the δ-phase. However, the corresponding phase transitions occur at much lower pressures. The band-gap crossing between the α- and δ-phases occurs at a very high pressure, above 70 GPa. DFT usually underestimates the band gap in insulators and semiconductors even if the exact Kohn-Sham potential corresponding to the exact density in a self-consistent field is used. The GW method [70,71] (where G is the Green's function and W is the dynamically screened interaction) is more pertinent and provides band gaps of insulators and semiconductors in good agreement with experiment. However, GW calculations are computationally very expensive. In this paper, our interest is to estimate the change in the band gap as a function of pressure and between the various phases. While the absolute value of the DFT band gap is systematically in error, we may expect that the change in the band gap, as a function of pressure and between the various phases, would be fairly well reproduced by DFT.
Enthalpy and Phase Transitions The total energy and enthalpy for various phases of LiAlO2 are calculated as a function of pressure to find the stability region of these phases at high pressures at zero temperature (while we ignore the zeropoint phonon energy). The difference in enthalpy is calculated with respect to that of the ambient pressure stable phase (γ-LiAlO2) is shown in Fig 4. It can be seen that the γ and β phase possess nearly same energy and enthalpy at ambient pressure and hence are likely to be found at ambient pressure conditions. However, with increase in pressure, β phase lowers its enthalpy as compared to that of γ phase and hence is favorably found in this (0.0<P<1.2 GPa) pressure range. As pressure is further increased, enthalpy of α-phase get lowered than that of γ and β-phases. Therefore, the α-phase is favored above 1.3GPa. At further higher pressure, enthalpy shows the lowest value for highest pressure δ-phase. The calculated total energy and enthalpy show several crossovers which may indicate possible phase transitions, among metastable states, namely, from γ-to α, γ-to δ, β-to α, β-to δ and α to δ-phase. The enthalpy crossover shows a phase transition from γ-to α at around 1.3 GPa and γ-to δ phase transition at around 3.2 GPa. The transitions from β-to α and β-to δ are observed from the enthalpy plot at around 1.6 and 3.7 GPa respectively. The critical pressures for the high pressure phase transition as calculated from the enthalpy crossover are only slightly different from the experimental values 41 . However, enthalpy curve implies that β-phase is more stable in comparison to γ-phase at zero pressure (at zero temperature).Experimentally, β-phase was reported to be stable at low temperature 68 (<273K). The α to δ-phase transition is found at about 30 GPa. 
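The zero-temperature transition pressures quoted here follow from an enthalpy crossover, H(P) = E + PV, between two phases. A minimal sketch of that construction is given below; the energies and volumes are illustrative placeholders, not the calculated LiAlO2 values.

```python
# Sketch of a T = 0 enthalpy crossover between two phases, H(P) = E + P*V.
# E_*, V_* below are illustrative placeholders, NOT the calculated LiAlO2 data.

EV_A3_TO_GPA = 160.21766  # 1 eV/Angstrom^3 expressed in GPa

def enthalpy(E, V, P):
    """Enthalpy per formula unit: E (eV), V (A^3), P (eV/A^3)."""
    return E + P * V

def crossover_pressure(E1, V1, E2, V2):
    """Pressure (eV/A^3) at which H1(P) = H2(P); requires V1 != V2."""
    return (E2 - E1) / (V1 - V2)

# An open, low-pressure phase (larger V) versus a denser phase:
E_lo, V_lo = -40.00, 43.0   # placeholder values
E_hi, V_hi = -39.85, 32.2   # placeholder values

Pt = crossover_pressure(E_lo, V_lo, E_hi, V_hi)
Pt_GPa = Pt * EV_A3_TO_GPA  # below Pt the open phase is favored, above it the dense one
```

With these placeholder numbers the crossover lies near 2.2 GPa; in the text the same construction is applied to the DFT energy-volume data of each phase pair.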
In order to obtain a more accurate picture of phase stability at finite temperature, we need to include the entropy and calculate the Gibbs free energy of the various phases (Section IIIE). Since these are crystalline solids, most of the entropy is contributed by phonon vibrations.

C. Phonon Spectra and Free Energy

The calculated total and partial (atom-wise) phonon densities of states for the various high-pressure/temperature phases of LiAlO2 are given in Fig 5. It is clear (Fig. 5) that different atoms contribute in different energy intervals to the total phonon density of states. Comparison of the total phonon densities of states of the various phases reveals a peak-like feature in the low-energy range for the α and δ phases as compared to the γ and β phases. This gives rise to a high entropy contribution to the free energy in the α and δ phases; it arises from the low-energy peaks in the partial phonon density of states associated with the Li atoms. However, comparison of the phonon spectra of the γ and β phases shows that the phonon spectrum of the ambient-pressure phase (γ) is more populated in the low-energy region than that of the first high-pressure phase (β). This gives rise to a higher entropy at ambient pressure for the γ-phase than for the β-phase. The mean squared displacement (MSD) of each atom, <u²>, at temperature T is calculated from its partial phonon density of states (Fig. 5) using the relation

<u_k²> = ∫ (n(E,T) + 1/2) (ħ²/(m_k E)) g_k(E) dE,

where n(E,T) = [exp(E/k_B T) − 1]⁻¹ is the Bose population factor.

D. Free Energy and Pressure-Temperature Phase Diagram

The free energy as a function of pressure at different temperatures is calculated for the various phases of LiAlO2. It can be seen (Fig 8) that as the temperature increases to 600 K, the phonon entropy starts playing a significant role in stabilizing the various phases. Due to this entropy contribution, the free energy of the γ-phase is slightly lower than that of the β-phase, unlike the enthalpy behavior discussed above.
Therefore, the free energy reflects the stability of the γ-phase at ambient pressure. The calculated transition pressure for the γ-to-β transition at 600 K is found to be 0.2 GPa; this transition was obtained experimentally at 0.8 GPa and 623 K 41 . Phonon entropy therefore plays an important role in the γ-to-β phase transition. This is a first-order phase transition with an increase in density from 2.61 to 2.68 g/cm³. However, as the free energies of the γ and β phases are very close, both phases could coexist at ambient pressure, as observed experimentally. If the γ-to-α phase transition is prevented, we find from the free-energy crossover that γ-LiAlO2 could transform to the δ-LiAlO2 phase at around 4 GPa and 600 K. The δ-phase is obtained experimentally from the γ-phase at a static pressure of 4.0 GPa, as well as at a dynamic shock pressure of 9.0 GPa. Earlier calculations 73 using different pseudopotentials underestimated this value as 2.3 GPa. This phase is also made up of AlO6 and LiO6 octahedra. Free-energy crossovers also suggest β-to-α and β-to-δ phase transitions, as discussed earlier for the enthalpy plots. The α-to-δ phase transition at very high pressure is found at about 40 GPa from the free-energy crossover; experimental data for this transition are not yet available. The complete pressure-temperature phase diagram, with the boundaries between the various phases calculated from the Gibbs free-energy differences, is shown in Fig 9.

E. Structural Changes During Phase Transition

We have performed high-pressure structure optimization for the various phases of LiAlO2. These calculations are used to extract the pressure dependence of the volume (Fig 10) and bond lengths (Fig 10(b-f)) in all the phases. The different phases of LiAlO2 have different numbers of atoms in the unit cell, so for the sake of comparison the volume per atom (Fig 10) is plotted.
At around 21 GPa the volume per atom of γ-LiAlO2, which would be metastable at this pressure, becomes equal to that of β-LiAlO2. Further, at around 25 GPa, the volume per atom of both γ-LiAlO2 and β-LiAlO2 becomes equal to that of the highest-pressure phase (δ-LiAlO2). The volumes per atom of the α-LiAlO2 and δ-LiAlO2 phases become equal at the very high pressure of 100 GPa.

F. Energy Barrier for γ to β Phase Transition

We have calculated the activation energy barrier for the γ-to-β phase transformation by following the atomic displacements in the unit cell along the transition path. This approach has been used to study continuous phase transitions 74-76 . We have calculated the activation energy barrier using the nudged elastic band method as implemented in the USPEX software 77 . Group theory analysis using the Bilbao crystallographic server 78 shows that the transition from the γ (P41212) to the β (Pna21) phase can take place through either of two common subgroups of both phases, namely P21 and P1. We designate these transition paths as Path-I and Path-II, respectively.

G. Phonon Instability and High Temperature Phase Transition

The high-pressure phases of LiAlO2 become unstable on heating and transform to the ambient-pressure phase 42,43,68,69 . The α-phase, which is used in molten carbonate fuel cells, undergoes the α-to-γ phase transition 42,43 above 600 °C. The slow kinetics of this transition makes it difficult to identify the transition temperature; however, the transformation is generally complete by 900 °C. In this transition the structural volume expands. We have calculated the phonon dispersion curve (Fig. 12) of the α-phase with its unit cell expanded to the corresponding volume of the γ-phase.
Three optic phonon branches are found to become unstable. These phonon modes may be responsible for the high-temperature instability of the structure and might lead to the α-to-γ phase transition. The eigenvectors of these modes are analyzed (Fig. 12). The δ-phase is experimentally known to transform to the γ-phase at around 1173 K 69 . We have calculated the phonon dispersion of the δ-phase (Fig. 13) at an expanded volume corresponding to that of the γ-phase, in order to understand the mechanism of this phase transition. Two optic phonon branches, degenerate at the zone centre of the Brillouin zone, become unstable; the eigenvectors of these modes are shown in Fig. 13.

H. Pressure Dependence of Elastic Properties

We have calculated the pressure dependence of the elastic constant tensor (Fig 14) for the various phases of LiAlO2. The elastic constants are calculated using the symmetry-general least-squares method 80 . For all the phases of LiAlO2, the Born stability criteria are found to be satisfied at ambient pressure.

I. Thermal Expansion Behavior of γ-LiAlO2

The linear thermal expansion coefficients along the a- and c-axes have been calculated within the quasiharmonic approximation 82-84 :

α_a(T) = (1/V0) Σ_{q,i} [(s11 + s12) Γ_a(q,i) + s13 Γ_c(q,i)] C_V(q,i,T)
α_c(T) = (1/V0) Σ_{q,i} [2 s13 Γ_a(q,i) + s33 Γ_c(q,i)] C_V(q,i,T)

where sij are the elements of the elastic compliance matrix, s = C⁻¹, at constant temperature T = 0 K, V0 is the volume at 0 K, and C_V(q,i,T) is the constant-volume specific heat of the i-th phonon mode at point q in the Brillouin zone. The calculated elastic compliance matrix is given in Table II. The volume thermal expansion coefficient of a tetragonal system is given by

α_V = 2α_a + α_c

The calculated temperature dependence of the lattice parameters is in excellent agreement (Fig. 15(b)) with recent experimental neutron diffraction data 29 . The calculated anisotropic linear thermal expansion coefficients at 300 K are α_a = 10.1×10⁻⁶ K⁻¹ and α_c = 16.5×10⁻⁶ K⁻¹, which compare very well with the available experimental values 29,44 . The thermal expansion along the c-axis is large in comparison to that in the a-b plane.
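For a tetragonal crystal, the quasiharmonic linear expansion coefficients are compliance-weighted sums of mode Grüneisen parameters and mode heat capacities, with the volume coefficient given by α_V = 2α_a + α_c. A minimal numerical sketch of that bookkeeping is below; the compliances are those of Table II for the γ-phase, while the two-mode list, Grüneisen values and cell volume are illustrative placeholders standing in for the full Brillouin-zone data.

```python
import math

KB_GPAA3 = 0.01380649   # Boltzmann constant in GPa*A^3/K (1 GPa*A^3 = 1e-21 J)
KB_MEV = 0.0861733      # Boltzmann constant in meV/K

# gamma-LiAlO2 elastic compliances (GPa^-1), Table II
s11, s12, s13, s33 = 0.00984, -0.00332, -0.00253, 0.00785

def cv_mode(E_meV, T):
    """Einstein heat capacity of a single phonon mode, in GPa*A^3/K."""
    x = E_meV / (KB_MEV * T)
    ex = math.exp(x)
    return KB_GPAA3 * x * x * ex / (ex - 1.0) ** 2

def linear_expansion(modes, V0, T):
    """modes: (E_meV, Gamma_a, Gamma_c) tuples; V0: cell volume (A^3).
    Returns (alpha_a, alpha_c, alpha_V) in K^-1 for a tetragonal cell."""
    aa = ac = 0.0
    for E, ga, gc in modes:
        cv = cv_mode(E, T)
        aa += ((s11 + s12) * ga + s13 * gc) * cv / V0
        ac += (2.0 * s13 * ga + s33 * gc) * cv / V0
    return aa, ac, 2.0 * aa + ac   # alpha_V = 2*alpha_a + alpha_c

# Two placeholder modes standing in for the Brillouin-zone sum:
modes = [(36.9, 2.0, 0.5), (55.7, 0.5, 2.0)]
aa, ac, av = linear_expansion(modes, V0=172.1, T=300.0)
```

In the production calculation the sum runs over all phonon modes in the Brillouin zone; the sketch only shows how positive Grüneisen parameters combine with the compliances to give positive α_a and α_c.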
The calculated lattice parameters are in very good agreement with the available experimental data even at very high temperatures, up to 1500 K. This implies that the quasiharmonic approximation is valid for this compound even at very high temperature. The phonon modes around 35 and 55 meV have large positive values of the Grüneisen parameter (Fig. 15(a)). The displacement patterns of two of the zone-centre modes in this energy range are shown in Fig 16. The eigenvector of the first mode (36.9 meV) shows (Fig. 16) that the mode is dominated by Li motion in the a-b plane along with a small component of AlO4 polyhedral vibration, so this mode can produce an expansion in the a-b plane. The other mode, at 55.7 meV (Fig. 16), has a high value of Γ_c and involves Li motion in the a-c and b-c planes along with some component of AlO4 polyhedral rotation. This type of mode can produce an expansion along the c-axis.

IV. Conclusions

We have used ab-initio density functional theory techniques to calculate the structural parameters of the various phases of LiAlO2 under variable pressure conditions. The application of pressure leads to structural changes in the γ-phase through a tetrahedral-to-octahedral coordination change about the Li/Al atoms, which gives rise to the high-pressure α and δ phases. The phonon entropy is found to play an important role in the stability of, and transitions among, the various phases. On the basis of the calculated free energies, the complete phase diagram of LiAlO2 is obtained. Moreover, the phonon modes responsible for the phase transition (upon heating) from α to γ are found to be dominated by the Li dynamics. This dynamics at high temperature could give rise to Li diffusion and hence to the phase transformation to the low-density γ phase. On the other hand, the phonon modes responsible for the δ-to-γ phase transformation upon heating are dominated by the dynamics of the Al and O atoms. This is accompanied by the breaking of two Al-O bonds of the AlO6 octahedra while converting to AlO4 tetrahedra.
Moreover, the calculated anisotropic thermal expansion behavior of γ-LiAlO2 obtained using ab-initio lattice dynamics agrees very well with experimental measurements, and the thermal expansion is largely governed by phonon modes involving the Li dynamics. The phonon calculations have been performed imposing the crystal acoustic sum rule. The phonon spectra in the entire Brillouin zone for the various phases at different pressures are calculated using the finite-displacement lattice dynamical method. The phonon densities of states are obtained by integrating the phonon dispersion curves over 8000 points in the entire Brillouin zone. Thermal expansion calculations were performed using the pressure dependence of the phonon frequencies 67 in the entire Brillouin zone; the details of the anisotropic thermal expansion calculations are given in Section III-H. The phonon densities of states of the various phases calculated at different pressures are used to obtain the phonon entropy, which is the only entropy contribution included in our calculation of the free energy. The Gibbs free energy of each phase is calculated as

G(P,T) = E + PV + F_ph(T)

where E is the lattice energy, PV the pressure-volume term and F_ph(T) the phonon free energy. The electronic band structure calculations (Fig 2) of the various phases of LiAlO2 are performed using the ab-initio density functional theory method, with a very dense k-point mesh of 20×20×20 to ensure a fine sampling of the curvature of the electronic bands. The electronic densities of states are obtained by integrating the band structure over the complete Brillouin zone. Negative energies denote the filled valence bands, while the empty conduction bands have positive energies. For all the phases, the minimum of the conduction band and the maximum of the valence band are least separated at the zone centre (Γ-point) of the Brillouin zone; it can be seen that all the phases show a direct band gap (Fig 2). The phonon dispersion curves along the high-symmetry directions in the Brillouin zone of the various high-pressure phases of LiAlO2 are shown in Fig 6.
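The free-energy bookkeeping described here, a lattice energy plus a PV term plus the harmonic phonon free energy compared phase by phase, can be sketched as follows. The short mode list stands in for the 8000-point Brillouin-zone integration, and the units are schematic; this is an illustration of the construction, not the production calculation.

```python
import math

KB_MEV = 0.0861733  # Boltzmann constant in meV/K

def f_phonon(energies_meV, T):
    """Harmonic phonon free energy (meV): sum over modes of
    E/2 + kB*T*ln(1 - exp(-E/(kB*T))).  At T = 0 only zero-point energy remains."""
    f = 0.0
    for E in energies_meV:
        f += 0.5 * E                                   # zero-point term
        if T > 0.0:
            f += KB_MEV * T * math.log(1.0 - math.exp(-E / (KB_MEV * T)))
    return f

def gibbs(E_lattice, P, V, energies_meV, T):
    """G(P,T) = E + P*V + F_ph(T); the phase with the lowest G is the stable one.
    E_lattice, P*V and F_ph must of course be expressed in one unit system."""
    return E_lattice + P * V + f_phonon(energies_meV, T)
```

Evaluating such a G for two phases over a (P, T) grid and recording where the sign of the difference flips is how phase boundaries like those of Fig 9 are traced.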
The various high-symmetry directions are chosen according to the crystal symmetry 72 . The number of phonon branches differs for each phase, as the branches are related to the number of atoms in, and the symmetry of, the primitive unit cell. The slopes of the acoustic phonon modes are related to the elastic properties of the crystal. As seen from Fig 6, the slopes of the acoustic phonon modes of the α, β and δ phases are large as compared to that of the γ-phase. This indicates the comparatively hard nature of these phases, which is related to their lower volume and high-pressure stability. The highest-energy phonon bands correspond to the polyhedral stretching vibrations of the Al-O and Li-O bonds. There is a significant band gap in the phonon dispersion and phonon density of states (from 80 to 90 meV) of the γ-phase, which narrows in the first high-pressure phase (β) and ultimately disappears in the successive high-pressure phases (α and δ). This disappearance of the phonon band gap, and the shift of the highest-energy phonon branches towards lower energy, signify a delocalization of the phonon bands due to the tetrahedral-to-octahedral coordination change in the high-pressure phases. The lowering of the Al-O and Li-O stretching frequencies in the high-pressure phases may come from the longer octahedral bonds as compared to the tetrahedral bonds. In the MSD relation, g_k(E) and m_k are the atomic partial density of states and the mass of the k-th atom in the unit cell, respectively. The calculated MSDs of the various atoms as a function of temperature are shown in Fig 7. The Li atoms in all the phases have higher MSD values than the O and Al atoms due to their lighter mass. Among all the phases, the largest MSD values occur in the α-phase due to the well-oriented layered structure of this phase, in which atoms preferentially vibrate in the a-b plane. There are two Wyckoff sites for the O atom in the β-phase, both of which show very similar MSD behavior as a function of temperature.
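The MSD relation, a Bose-weighted integral of the partial density of states g_k(E) scaled by the inverse atomic mass m_k, can be sketched numerically. The flat model DOS below is a placeholder for the calculated spectra; only the mass scaling and the temperature trend are illustrated.

```python
import math

KB_MEV = 0.0861733   # Boltzmann constant in meV/K
HBAR2 = 4.18         # hbar^2 in A^2*amu*meV, so hbar^2/(m*E) comes out in A^2

def msd(dos, mass_amu, T):
    """<u^2> (A^2) from a partial phonon DOS given as (E_meV, g, dE) bins,
    normalized so that sum(g*dE) = 1:
        <u^2> = Int (n(E,T) + 1/2) * hbar^2/(m*E) * g(E) dE."""
    u2 = 0.0
    for E, g, dE in dos:
        n = 1.0 / (math.exp(E / (KB_MEV * T)) - 1.0)   # Bose occupation factor
        u2 += (n + 0.5) * HBAR2 / (mass_amu * E) * g * dE
    return u2

# Placeholder flat DOS between 20 and 79 meV (not the LiAlO2 spectrum):
dos = [(float(E), 1.0 / 60.0, 1.0) for E in range(20, 80)]
u2_Li = msd(dos, 7.0, 300.0)    # light Li
u2_Al = msd(dos, 27.0, 300.0)   # heavier Al
# With identical spectra the ratio is just the inverse mass ratio; the real
# Li partial DOS is softer still, enhancing the Li MSD further.
```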
Further, the critical pressure for the γ (or β) to α phase transition is obtained as 0.6 (or 0.7) GPa at 600 K. This transition is reported experimentally at 0.5-3.5 GPa and 933-1123 K 41 . The γ-to-α phase transformation is of first order, as it involves a volume drop of about 20%. This transition takes place through an increase in the coordination numbers of both the Li and Al atoms from four to six (more details in Section E). The structure of α-LiAlO2 is hexagonal, containing AlO6 and LiO6 octahedral units. The octahedral units in the α phase are found to be square-bipyramidal in nature, with different planar and axial bond lengths. The calculated bond lengths (Fig 10) show that the AlO4 (and LiO4) tetrahedra are not regular in the γ and β phases, owing to the different Al-O (and Li-O) bond lengths of the tetrahedral units. The tetrahedral bond lengths decrease with increasing pressure and show a sudden increase at around 25 GPa. This sharp increase in the Li-O and Al-O bond lengths signifies the transformation of the LiO4 and AlO4 tetrahedral units into the corresponding LiO6 and AlO6 octahedra of δ-LiAlO2. The two different values of each Li-O (and Al-O) bond length imply the irregular nature of the octahedral units, which tend to become regular with further increase in pressure up to 100 GPa. Moreover, the bond lengths of atomic pairs such as Al-Al, Li-Li and Li-Al are well separated in γ-LiAlO2 and β-LiAlO2 at ambient pressure. These bond lengths also show a sudden jump at around 25 GPa and converge to the corresponding values for δ-LiAlO2. It is interesting to note that even up to the very high pressure of 100 GPa, the bond lengths of α-LiAlO2 do not converge to those of the corresponding δ-LiAlO2 phase. The group-subgroup transformation from P41212 to P21 occurs through Wyckoff site splitting from the 4a (Li), 8b (O) and 4a (Al) sites in P41212 to the corresponding 2a (Li, O and Al) sites in P21. Similarly, the group-subgroup transformation from Pna21 to P21 takes place through splitting of all the 4a (Li, O, Al) sites into 2a (Li, O, Al) sites.
For the γ-to-β LiAlO2 transition through their common subgroup P1, all the Wyckoff sites in P41212 (γ-LiAlO2) and Pna21 (β-LiAlO2) split into 1a sites of the corresponding P1 subgroup. The minimum-energy path for the γ-to-β transition has been calculated by transforming the actual structures to their common subgroups P21 and P1. The transformed lattice parameters (a′ b′ c′) and fractional atomic coordinates (x′ y′ z′) are related to those of the parent phase through the transformation matrix T, obtained using the Bilbao crystallographic server 78 :

(a′, b′, c′) = (a, b, c) T

Going from the initial image in the γ phase (P21 or P1 setting) to the final image in the β phase (P21 or P1 setting) involves changes in both the cell parameters and the atomic coordinates. A large number of structural images are created between the initial and final configurations, and total energy calculations are performed for these distorted structural images. The total energies along Path-I and Path-II as calculated using DFT are shown in Fig 11; the abscissa in Fig 11 represents the image configurations along the transition pathways. An activation energy of 1.24 eV per atom is required for this transformation through Path-I; similarly, an activation energy of 1.14 eV per atom is calculated for Path-II. These calculated values for the γ-to-β phase transition are lower than the activation energy barrier of 1.8 eV for the γ-to-α phase transition 73 reported in previous calculations, which may be expected since the latter transition also involves large coordination changes. The unstable phonon modes of the α-phase (Fig. 12) occur at the zone centre of the Brillouin zone. These modes are dominated by the vibrations of the Li atoms, with a small contribution from the oxygen atoms. The first phonon mode involves Li vibrations perpendicular to the layers of AlO6 octahedra and may break the layered structure. The other phonon mode involves a sliding motion of Li between successive layers of AlO6 octahedra. These types of motion may cause diffusion of Li within the 2D layers.
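The chain of structural images between the two end phases can be illustrated by a linear interpolation of cell parameters and fractional coordinates in the common subgroup setting. The sketch below only builds the starting guess that a nudged-elastic-band run would then relax; the structures passed in are placeholders.

```python
def interpolate_images(cell_i, cell_f, frac_i, frac_f, n_images):
    """Linear chain of intermediate structures between an initial and a final
    configuration written in a common subgroup setting (P2_1 or P1 here).
    cell_* are (a, b, c) tuples; frac_* are lists of fractional coordinates."""
    images = []
    for k in range(n_images + 1):
        t = k / n_images
        cell = tuple((1 - t) * ci + t * cf for ci, cf in zip(cell_i, cell_f))
        frac = []
        for pi, pf in zip(frac_i, frac_f):
            # move each coordinate along the shortest periodic path
            frac.append(tuple((1 - t) * a + t * (a + ((b - a + 0.5) % 1.0 - 0.5))
                              for a, b in zip(pi, pf)))
        images.append((cell, frac))
    return images
```

Each image would then be fed to a total-energy calculation; the maximum of the resulting energy profile along the chain gives the activation barrier, as plotted in Fig 11.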
This may give rise to the low binding energy of Li in α-LiAlO2 observed experimentally 79 . The diffusion of Li breaks the Li-O octahedral bonds. In a molten carbonate fuel cell, the temperature increases as the cell operates, which may cause large-amplitude vibrations of the Li atoms; these vibrations may trigger the phase transition from the α to the γ phase. In the δ-phase at expanded volume (Fig. 13), the unstable modes involve large amplitudes of the oxygen atoms along with some contribution from the Al atoms, while the Li atoms remain essentially steady in these particular phonon modes. Two of the six oxygen atoms of the AlO6 octahedra vibrate outwards, which may result in the breaking of Al-O bonds; the other four oxygens show bending of the Al-O bonds about the Al atom. This type of vibration may give rise to the formation of tetrahedral units from the existing AlO6 octahedra. Moreover, the vibration of the Al atoms towards the centre of the tetrahedral geometry gives rise to the shortening of the Al-O bond lengths. Therefore, the two high-pressure phases transform to the ambient phase through very different mechanisms, one initiated by Li vibrations and the other by O and Al vibrational motion. The elastic constants are calculated as implemented in VASP 5.2; the values are derived from the stress-strain relationships obtained from finite distortions of the equilibrium lattice. For small deformations we remain in the elastic domain of the solid and a quadratic dependence of the total energy on the strain is expected (Hooke's law). The number of independent components 81 of the elastic constant tensor is related to the symmetry of the crystal phase. It is clear from Fig. 14 that the elastic constants of the γ-phase have smaller values at ambient pressure than those of all the other phases. The C33 tensile component of the γ-phase is the largest and increases on applying pressure; this implies that the γ-phase resists compression along the tetragonal c-axis. The C11 component of the same phase shows a small softening with pressure and favors contraction in the a-b plane.
On the other hand, the tensile components of the β-phase show hardening along the a and b axes and a small softening along c. This implies that the a-b plane of the γ-phase behaves similarly to the c-axis of the β-phase. The other two high-pressure phases possess large values of the elastic constant components due to their higher density and low compressibility. Although the α-phase possesses higher values of the tensile components than the δ-phase (the highest-pressure phase), the hardening of these components with pressure is smaller in the α-phase than in the δ-phase. The large increase of the tensile elastic constants of the δ-phase with pressure implies the high rigidity and low compressibility of this phase. For a crystalline solid to be stable under given conditions, all the phonon frequencies must be positive and the Born elastic stability criteria must be fulfilled. The Born criteria demand that the elastic constant matrix be positive definite, which gives rise to simplified expressions for the different symmetry structures, e.g. for the tetragonal (γ and δ) phases 81 :

C11 > |C12|, 2C13² < C33(C11 + C12), C44 > 0, 2C16² < C66(C11 − C12)

The thermal expansion calculations require the anisotropic pressure dependence of the phonon energies in the entire Brillouin zone 67 . An anisotropic stress of 5 kbar is implemented by changing the lattice constant a while keeping the parameter c constant, and vice versa. These calculations are subsequently used to obtain the mode Grüneisen parameters using the relation 84

Γ_a(q,i) = −(a/E_{q,i}) (∂E_{q,i}/∂a),  Γ_c(q,i) = −(c/E_{q,i}) (∂E_{q,i}/∂c)

where E_{q,i} is the energy of the i-th phonon mode at wave-vector q in the Brillouin zone; the energy E and frequency ν of a phonon mode are related by E = hν (h is the Planck constant). In the tetragonal system the Grüneisen parameters satisfy Γ_a = Γ_b. The calculated mode Grüneisen parameters as a function of phonon energy along the different crystal directions are shown in Fig. 15(a).
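The tetragonal Born criteria quoted in the text translate directly into a small check; the elastic constants used below are placeholders for illustration, not the calculated LiAlO2 values.

```python
def tetragonal_born_stable(C):
    """Born mechanical-stability test for a tetragonal crystal, using the
    simplified criteria quoted in the text.  C maps names to values in GPa."""
    return (C["C11"] > abs(C["C12"])
            and 2.0 * C["C13"] ** 2 < C["C33"] * (C["C11"] + C["C12"])
            and C["C44"] > 0.0
            and 2.0 * C["C16"] ** 2 < C["C66"] * (C["C11"] - C["C12"]))

# Placeholder elastic constants for illustration only:
C_ok = dict(C11=150.0, C12=70.0, C13=60.0, C16=5.0, C33=200.0, C44=55.0, C66=60.0)
```

Re-evaluating such a test along calculated C_ij(P) curves, like those of Fig 14, is how mechanical stability is tracked with pressure.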
The anisotropic linear thermal expansion coefficients are obtained from these mode Grüneisen parameters and the elastic compliances 84,85 .

FIG 5 (Color online) Calculated partial and total phonon density of states in various phases of LiAlO2 using ab-initio lattice dynamics.

FIG 6 Calculated phonon dispersion curves for various phases of LiAlO2, using ab-initio lattice dynamics. The high-symmetry points are chosen according to the crystal symmetry 72 .

FIG 9 (Color online) Calculated pressure-temperature phase diagram of LiAlO2. The phase boundaries are calculated from the Gibbs free-energy differences for various phases of LiAlO2.

FIG 10 (Color online) Calculated pressure dependence of volume and various atomic-pair distances for the various phases of LiAlO2.

FIG 11 Calculated activation energy barriers through Path-I and Path-II, for the γ to β-LiAlO2 phase transition using the ab-initio DFT nudged elastic band method. The abscissa represents the image configurations along the transition pathways through Path-I and Path-II.

FIG 12 (Color online) Calculated phonon dispersion curves of α-LiAlO2 at the expanded volume. The eigenvector patterns of the corresponding unstable phonon modes at the Γ point are given on the right. The high-symmetry points are chosen according to the crystal symmetry 72 .

FIG 13 (Color online) Calculated phonon dispersion curves of δ-LiAlO2 at the expanded volume. The eigenvector pattern of the corresponding unstable phonon modes at the Γ point is given on the right. The high-symmetry points are chosen according to the crystal symmetry 72 .

FIG 14 (Color online) Calculated pressure dependence of elastic constants in various phases of LiAlO2.

TABLE I: Crystal structure of the various phases of LiAlO2 as calculated at zero pressure and temperature. The calculated parameters are comparable with experiments 41 .
TABLE II: The calculated non-zero components of the elastic compliance matrix (GPa⁻¹) for the γ-phase at ambient pressure.

S11=S22    S12=S21    S13=S23=S31=S32    S33       S44=S55    S66
0.00984    -0.00332   -0.00253           0.00785   0.01679    0.01599

Table I (structural data; Exp./Calc. values for each phase):

                  γ-LiAlO2          β-LiAlO2          δ-LiAlO2          α-LiAlO2
Structure         Tetragonal        Orthorhombic      Tetragonal        Hexagonal
Space group       P41212 (92)       Pna21 (33)        I41/amd (141)     R-3m (166)
                  Exp.    Calc.     Exp.    Calc.     Exp.    Calc.     Exp.    Calc.
a (Å)             5.169   5.226     5.280   5.316     3.887   3.906     2.799   2.827
b (Å)             5.169   5.226     6.300   6.335     3.887   3.906     2.799   2.827
c (Å)             6.268   6.300     4.900   4.946     8.300   8.452     14.180  14.348
V/Z (Å³)          41.59   43.02     40.75   41.65     31.35   32.23     32.07   33.11
ρ (g/cm³)         2.615             2.685             3.510             3.401
α, β, γ (°)       90, 90, 90        90, 90, 90        90, 90, 90        90, 90, 120
Z                 4                 4                 4                 3
Polyhedral units  LiO4, AlO4        LiO4, AlO4        LiO6, AlO6        LiO6, AlO6
D Zagorac, K Doll, J Zagorac, D Jordanov, B Matović, 56D. Zagorac, K. Doll, J. Zagorac, D. Jordanov and B. Matović, Inorganic Chemistry, 2017, 56, 10644- 10654. . Y Qi, W Shi, P Werner, P G Naumov, W Schnelle, L Wang, K G Rana, S Parkin, S A Medvedev, B Yan, C Felser, npj Quantum Materials. Y. Qi, W. Shi, P. Werner, P. G. Naumov, W. Schnelle, L. Wang, K. G. Rana, S. Parkin, S. A. Medvedev, B. Yan and C. Felser, npj Quantum Materials, 2018, 3, 4. . Y Li, Y Zhou, Z Guo, F Han, X Chen, P Lu, X Wang, C An, Y Zhou, J Xing, G Du, X Zhu, H Yang, J Sun, Z Yang, W Yang, H.-K Mao, Y Zhang, H.-H Wen, 66Y. Li, Y. Zhou, Z. Guo, F. Han, X. Chen, P. Lu, X. Wang, C. An, Y. Zhou, J. Xing, G. Du, X. Zhu, H. Yang, J. Sun, Z. Yang, W. Yang, H.-K. Mao, Y. Zhang and H.-H. Wen, npj Quantum Materials, 2017, 2, 66. . S Li, Y Chen, Physical Review B. 134104S. Li and Y. Chen, Physical Review B, 2017, 96, 134104. . R Zhang, P Gao, X Wang, Y Zhou, AIP Advances107233R. Zhang, P. Gao, X. Wang and Y. Zhou, AIP Advances, 2015, 5, 107233. . D Smith, K V Lawler, M Martinez-Canales, A W Daykin, Z Fussell, G A Smith, C Childs, J S Smith, C J Pickard, A Salamat, Physical Review Materials. 213605D. Smith, K. V. Lawler, M. Martinez-Canales, A. W. Daykin, Z. Fussell, G. A. Smith, C. Childs, J. S. Smith, C. J. Pickard and A. Salamat, Physical Review Materials, 2018, 2, 013605. . C Li, F Ke, Q Hu, Z Yu, J Zhao, Z Chen, H Yan, Journal of Applied Physics. 119135901C. Li, F. Ke, Q. Hu, Z. Yu, J. Zhao, Z. Chen and H. Yan, Journal of Applied Physics, 2016, 119, 135901. . S López-Moreno, P Rodríguez-Hernández, A Muñoz, D Errandonea, 56S. López-Moreno, P. Rodríguez-Hernández, A. Muñoz and D. Errandonea, Inorganic Chemistry, 2017, 56, 2697-2711. . G A S Ribeiro, L Paulatto, R Bianco, I Errea, F Mauri, M Calandra, Physical Review B. 14306G. A. S. Ribeiro, L. Paulatto, R. Bianco, I. Errea, F. Mauri and M. Calandra, Physical Review B, 2018, 97, 014306. . 
V Balédent, T T F Cerqueira, R Sarmiento-Pérez, A Shukla, C Bellin, M Marsi, J.-P Itié, M Gatti, M A L Marques, S Botti, J.-P Rueff, Physical Review B. 24107V. Balédent, T. T. F. Cerqueira, R. Sarmiento-Pérez, A. Shukla, C. Bellin, M. Marsi, J.-P. Itié, M. Gatti, M. A. L. Marques, S. Botti and J.-P. Rueff, Physical Review B, 2018, 97, 024107. . J A Alarco, P C Talbot, I D R Mackinnon, Physical Chemistry Chemical Physics. 17J. A. Alarco, P. C. Talbot and I. D. R. Mackinnon, Physical Chemistry Chemical Physics, 2015, 17, 25090-25099. . E Mozafari, B Alling, M P Belov, I A Abrikosov, Physical Review B. 35152E. Mozafari, B. Alling, M. P. Belov and I. A. Abrikosov, Physical Review B, 2018, 97, 035152. . C Liu, M Hu, K Luo, D Yu, Z Zhao, J He, Journal of Applied Physics. 119185101C. Liu, M. Hu, K. Luo, D. Yu, Z. Zhao and J. He, Journal of Applied Physics, 2016, 119, 185101. . S Lopez-Moreno, A H Romero, J Mejia-Lopez, A Munoz, Physical Chemistry Chemical Physics. 18S. Lopez-Moreno, A. H. Romero, J. Mejia-Lopez and A. Munoz, Physical Chemistry Chemical Physics, 2016, 18, 33250-33263. . Q Hu, X Yan, L Lei, Q Wang, L Feng, L Qi, L Zhang, F Peng, H Ohfuji, D He, Physical Review B. 14106Q. Hu, X. Yan, L. Lei, Q. Wang, L. Feng, L. Qi, L. Zhang, F. Peng, H. Ohfuji and D. He, Physical Review B, 2018, 97, 014106. . Q Hu, L Lei, X Yan, L Zhang, X Li, F Peng, D He, Applied Physics Letters. 10971903Q. Hu, L. Lei, X. Yan, L. Zhang, X. Li, F. Peng and D. He, Applied Physics Letters, 2016, 109, 071903. . L Lei, T Irifune, T Shinmei, H Ohfuji, L Fang, Journal of Applied Physics. 83531L. Lei, T. Irifune, T. Shinmei, H. Ohfuji and L. Fang, Journal of Applied Physics, 2010, 108, 083531. . H Cao, B Xia, Y Zhang, N Xu, Solid State Ionics. 176H. Cao, B. Xia, Y. Zhang and N. Xu, Solid State Ionics, 2005, 176, 911-914. . L Li, Z Chen, Q Zhang, M Xu, X Zhou, H Zhu, K Zhang, Journal of Materials Chemistry A. 3L. Li, Z. Chen, Q. Zhang, M. Xu, X. Zhou, H. Zhu and K. 
Zhang, Journal of Materials Chemistry A, 2015, 3, 894-904. . G Ceder, Y M Chiang, D R Sadoway, M K Aydinol, Y I Jang, B Huang, Nature. 392G. Ceder, Y. M. Chiang, D. R. Sadoway, M. K. Aydinol, Y. I. Jang and B. Huang, Nature, 1998, 392, 694-696. . M A K Lakshman Dissanayake, Ionics. 10M. A. K. Lakshman Dissanayake, Ionics, 10, 221-225. D Wiedemann, S Indris, M Meven, B Pedersen, H Boysen, R Uecker, P Heitjans, M Lerch, Zeitschrift für Kristallographie-Crystalline Materials. 231D. Wiedemann, S. Indris, M. Meven, B. Pedersen, H. Boysen, R. Uecker, P. Heitjans and M. Lerch, Zeitschrift für Kristallographie-Crystalline Materials, 2016, 231, 189-193. . D Wiedemann, S Nakhal, J Rahn, E Witt, M M Islam, S Zander, P Heitjans, H Schmidt, T Bredow, M Wilkening, M Lerch, Chemistry of Materials. 28D. Wiedemann, S. Nakhal, J. Rahn, E. Witt, M. M. Islam, S. Zander, P. Heitjans, H. Schmidt, T. Bredow, M. Wilkening and M. Lerch, Chemistry of Materials, 2016, 28, 915-924. . D Deng, Energy Science & Engineering. 3D. Deng, Energy Science & Engineering, 2015, 3, 385-418. . P Waltereit, O Brandt, A Trampert, H T Grahn, J Menniger, M Ramsteiner, M Reiche, K H Ploog, Nature. 406P. Waltereit, O. Brandt, A. Trampert, H. T. Grahn, J. Menniger, M. Ramsteiner, M. Reiche and K. H. Ploog, Nature, 2000, 406, 865-868. . J Charpin, F Botter, M Briec, B Rasneur, E Roth, N Roux, J Sannier, Fusion Engineering and Design. 8J. Charpin, F. Botter, M. Briec, B. Rasneur, E. Roth, N. Roux and J. Sannier, Fusion Engineering and Design, 1989, 8, 407-413. . F Botter, F Lefevre, B Rasneur, M Trotabas, E Roth, Journal of Nuclear Materials. 1PartF. Botter, F. Lefevre, B. Rasneur, M. Trotabas and E. Roth, Journal of Nuclear Materials, 1986, 141- 143, Part 1, 364-368. . M M Islam, J Uhlendorf, E Witt, H Schmidt, P Heitjans, T Bredow, The Journal of Physical Chemistry C. 121M. M. Islam, J. Uhlendorf, E. Witt, H. Schmidt, P. Heitjans and T. Bredow, The Journal of Physical Chemistry C, 2017, 121, 27788-27796. . 
E Witt, S Nakhal, C V Chandran, M Lerch, P Heitjans, Zeitschrift für Physikalische Chemie. 229E. Witt, S. Nakhal, C. V. Chandran, M. Lerch and P. Heitjans, Zeitschrift für Physikalische Chemie, 2015, 229, 1327-1339. The journal of physical chemistry letters. M M Islam, T Bredow, 6M. M. Islam and T. Bredow, The journal of physical chemistry letters, 2015, 6, 4622-4626. . D Wohlmuth, V Epp, P Bottke, I Hanzu, B Bitschnau, I Letofsky-Papst, M Kriechbaum, H Amenitsch, F Hofer, M Wilkening, Journal of Materials Chemistry A. 2D. Wohlmuth, V. Epp, P. Bottke, I. Hanzu, B. Bitschnau, I. Letofsky-Papst, M. Kriechbaum, H. Amenitsch, F. Hofer and M. Wilkening, Journal of Materials Chemistry A, 2014, 2, 20295-20306. . Q Hu, L Lei, X Jiang, Z C Feng, M Tang, D He, Solid State Sciences. 37Q. Hu, L. Lei, X. Jiang, Z. C. Feng, M. Tang and D. He, Solid State Sciences, 2014, 37, 103-107. . S Indris, P Heitjans, R Uecker, B Roling, The Journal of Physical Chemistry C. 116S. Indris, P. Heitjans, R. Uecker and B. Roling, The Journal of Physical Chemistry C, 2012, 116, 14243-14247. . B Singh, M K Gupta, S K Mishra, R Mittal, P U Sastry, S Rols, S L Chaplot, Physical Chemistry Chemical Physics. 19B. Singh, M. K. Gupta, S. K. Mishra, R. Mittal, P. U. Sastry, S. Rols and S. L. Chaplot, Physical Chemistry Chemical Physics, 2017, 19, 17967-17984. . L Lei, D He, Y Zou, W Zhang, Z Wang, M Jiang, M Du, Journal of Solid State Chemistry. 181L. Lei, D. He, Y. Zou, W. Zhang, Z. Wang, M. Jiang and M. Du, Journal of Solid State Chemistry, 2008, 181, 1810-1815. . M Marezio, J Remeika, The Journal of Chemical Physics. 44M. Marezio and J. Remeika, The Journal of Chemical Physics, 1966, 44, 3143-3144. . V Danek, M Tarniowy, L Suski, Journal of materials science. 39V. Danek, M. Tarniowy and L. Suski, Journal of materials science, 2004, 39, 2429-2435. . B Cockayne, B Lent, Journal of Crystal Growth. 54B. Cockayne and B. Lent, Journal of Crystal Growth, 1981, 54, 546-550. . 
Y X Wang, Q Wu, X R Chen, H Y Geng, Scientific Reports. 632419Y. X. Wang, Q. Wu, X. R. Chen and H. Y. Geng, Scientific Reports, 2016, 6, 32419. . W Peng, R Tétot, G Niu, E Amzallag, B Vilquin, J.-B Brubach, P Roy, W.-w. Peng, R. Tétot, G. Niu, E. Amzallag, B. Vilquin, J.-B. Brubach and P. Roy, Scientific Reports, 2017, 7, 2160. . B Cheng, M Ceriotti, Physical Review B. 54102B. Cheng and M. Ceriotti, Physical Review B, 2018, 97, 054102. . Y X Wang, H Y Geng, Q Wu, X R Chen, Y Sun, Journal of Applied Physics. 122235903Y. X. Wang, H. Y. Geng, Q. Wu, X. R. Chen and Y. Sun, Journal of Applied Physics, 2017, 122, 235903. . D Plašienka, R Martoňák, E Tosatti, Scientific Reports. 637694D. Plašienka, R. Martoňák and E. Tosatti, Scientific Reports, 2016, 6, 37694. . R Lizárraga, F Pan, L Bergqvist, E Holmström, Z Gercsi, L Vitos, Scientific Reports. 73778R. Lizárraga, F. Pan, L. Bergqvist, E. Holmström, Z. Gercsi and L. Vitos, Scientific Reports, 2017, 7, 3778. . S.-H Guan, X.-J Zhang, Z.-P Liu, The Journal of Physical Chemistry C. 120S.-H. Guan, X.-J. Zhang and Z.-P. Liu, The Journal of Physical Chemistry C, 2016, 120, 25110- 25116. . G Lan, B Ouyang, Y Xu, J Song, Y Jiang, Journal of Applied Physics. 119235103G. Lan, B. Ouyang, Y. Xu, J. Song and Y. Jiang, Journal of Applied Physics, 2016, 119, 235103. . N Yedukondalu, G Vaitheeswaran, P Anees, M C Valsakumar, Physical Chemistry Chemical Physics. 17N. Yedukondalu, G. Vaitheeswaran, P. Anees and M. C. Valsakumar, Physical Chemistry Chemical Physics, 2015, 17, 29210-29225. . S Hirata, K Gilliard, X He, J Li, O Sode, Accounts of Chemical Research. 47S. Hirata, K. Gilliard, X. He, J. Li and O. Sode, Accounts of Chemical Research, 2014, 47, 2721- 2730. . C Cazorla, A K Sagotra, M King, D Errandonea, The Journal of Physical Chemistry C. 122C. Cazorla, A. K. Sagotra, M. King and D. Errandonea, The Journal of Physical Chemistry C, 2018, 122, 1267-1279. . 
A Erba, A M Navarrete-Lopez, V Lacivita, P , C M Zicovich-Wilson, Physical Chemistry Chemical Physics. 17A. Erba, A. M. Navarrete-Lopez, V. Lacivita, P. D'Arco and C. M. Zicovich-Wilson, Physical Chemistry Chemical Physics, 2015, 17, 2660-2669. . C.-M Lin, I J Hsu, S.-C Lin, Y.-C Chuang, W.-T Chen, Y.-F Liao, J.-Y Juang, Scientific Reports. C.-M. Lin, I. J. Hsu, S.-C. Lin, Y.-C. Chuang, W.-T. Chen, Y.-F. Liao and J.-Y. Juang, Scientific Reports, 2018, 8, 1284. . J Engelkemier, D C Fredrickson, Chemistry of Materials. 28J. Engelkemier and D. C. Fredrickson, Chemistry of Materials, 2016, 28, 3171-3183. . R Mittal, M K Gupta, S L Chaplot, Progress in Materials Science. 92R. Mittal, M. K. Gupta and S. L. Chaplot, Progress in Materials Science, 2018, 92, 360-445. . G Kresse, J Furthmüller, Computational materials science. 6G. Kresse and J. Furthmüller, Computational materials science, 1996, 6, 15-50. . G Kresse, J Furthmüller, Physical review B. 5411169G. Kresse and J. Furthmüller, Physical review B, 1996, 54, 11169. . P E Blöchl, Physical review B. 5017953P. E. Blöchl, Physical review B, 1994, 50, 17953. . K Burke, J Perdew, M Ernzerhof, Phys. Rev. Lett. 1396K. Burke, J. Perdew and M. Ernzerhof, Phys. Rev. Lett, 1997, 78, 1396. . J P Perdew, K Burke, M Ernzerhof, Physical review letters. 3865J. P. Perdew, K. Burke and M. Ernzerhof, Physical review letters, 1996, 77, 3865. . H J Monkhorst, J D Pack, Physical review B. 5188H. J. Monkhorst and J. D. Pack, Physical review B, 1976, 13, 5188. . K Parlinksi, K. Parlinksi, 2003. . B Singh, M K Gupta, R Mittal, M Zbiri, S Rols, S J Patwe, S N Achary, H Schober, A K Tyagi, S L Chaplot, Journal of Applied Physics. 85106B. Singh, M. K. Gupta, R. Mittal, M. Zbiri, S. Rols, S. J. Patwe, S. N. Achary, H. Schober, A. K. Tyagi and S. L. Chaplot, Journal of Applied Physics, 2017, 121, 085106. . J Théry, A Lejus, D Briancon, R Collongues, Bull. Soc. Chim. Fr. J. Théry, A. Lejus, D. Briancon and R. Collongues, Bull. Soc. Chim. 
Fr, 1961, 973-975. . X Li, T Kobayashi, F Zhang, K Kimoto, T Sekine, Journal of Solid State Chemistry. 177X. Li, T. Kobayashi, F. Zhang, K. Kimoto and T. Sekine, Journal of Solid State Chemistry, 2004, 177, 1939-1943. . L Hedin, Physical Review. 139L. Hedin, Physical Review, 1965, 139, A796-A823. . F Aryasetiawan, O Gunnarsson, Reports on Progress in Physics. 237F. Aryasetiawan and O. Gunnarsson, Reports on Progress in Physics, 1998, 61, 237. . M I Aroyo, J Perez-Mato, D Orobengoa, E Tasci, G De La Flor, A Kirov, Bulg. Chem. Commun. 43M. I. Aroyo, J. Perez-Mato, D. Orobengoa, E. Tasci, G. De La Flor and A. Kirov, Bulg. Chem. Commun, 2011, 43, 183-197. . W Sailuam, K Sarasamak, S Limpijumnong, Integrated Ferroelectrics. 156W. Sailuam, K. Sarasamak and S. Limpijumnong, Integrated Ferroelectrics, 2014, 156, 15-22. . M A Salvado, R Franco, P Pertierra, T Ouahrani, J M Recio, Physical Chemistry Chemical Physics. 19M. A. Salvado, R. Franco, P. Pertierra, T. Ouahrani and J. M. Recio, Physical Chemistry Chemical Physics, 2017, 19, 22887-22894. . Y Yao, D D Klug, Physical Review B. 8814113Y. Yao and D. D. Klug, Physical Review B, 2013, 88, 014113. . S K Mishra, N Choudhury, S L Chaplot, P S R Krishna, R , Physical Review B. 24110S. K. Mishra, N. Choudhury, S. L. Chaplot, P. S. R. Krishna and R. Mittal, Physical Review B, 2007, 76, 024110. . A O Lyakhov, A R Oganov, H T Stokes, Q Zhu, Computer Physics Communications. 184A. O. Lyakhov, A. R. Oganov, H. T. Stokes and Q. Zhu, Computer Physics Communications, 2013, 184, 1172-1182. . C Capillas, J M Perez-Mato, M I Aroyo, Journal of Physics: Condensed Matter. C. Capillas, J. M. Perez-Mato and M. I. Aroyo, Journal of Physics: Condensed Matter, 2007, 19, 275203. R Dronskowski, Inorganic Chemistry. 32R. Dronskowski, Inorganic Chemistry, 1993, 32, 1-9. . Y , Le Page, P Saxe, Physical Review B. Y. Le Page and P. Saxe, Physical Review B, 2002, 65, 104104. . F Mouhat, F.-X Coudert, Physical Review B. F. Mouhat and F.-X. 
FIG 1 (Color online) The crystal structure of (a) γ-phase, (b) β-phase, (c) α-phase and (d) δ-phase of LiAlO2. The polyhedral units around Li and Al are shown in green and blue, respectively.
FIG 2 (Color online) Electronic band structure of various high-pressure phases of LiAlO2 as calculated using ab-initio DFT. The high-symmetry points are chosen according to the crystal symmetry [72]. EDOS stands for electronic density of states.
FIG 3 (Color online) Calculated pressure-dependent electronic band gap in various phases of LiAlO2 at 0 K.
FIG 4 (Color online) Calculated internal energy, enthalpy (H) and their differences with respect to that of the γ-phase, for various phases of LiAlO2, as a function of pressure at 0 K.
FIG 7 (Color online) Calculated mean-square displacements of atoms in various phases of LiAlO2. The oxygen atoms in the β-phase occupy two different sites that have slightly different displacements.
FIG 8 (Color online) Calculated Gibbs free-energy difference for various phases of LiAlO2 compared to …
FIG 15 (Color online) (a) The calculated Grüneisen parameter, and (b) the temperature-dependent experimental and calculated lattice parameters and volume [(l − l_300K)/l_300K, l = a, c, V].
FIG 16 (Color online) The displacement pattern of the zone-centre optic mode which makes a large contribution to thermal expansion in (a) the a-b plane, and (b) along the c-axis.
[]
[ "ON THE NOTION OF LOWER CENTRAL SERIES FOR LOOPS", "ON THE NOTION OF LOWER CENTRAL SERIES FOR LOOPS" ]
[ "Jacob Mostovoy " ]
[]
[]
The commutator calculus is one of the basic tools in group theory. However, its extension to the non-associative context, based on the usual definition of the lower central series of a loop, is not entirely satisfactory. Namely, the graded abelian group associated to the lower central series of a loop is not known to carry any interesting algebraic structure. In this note we construct a new generalization of the lower central series to arbitrary loops that is tailored to produce a set of multilinear operations on the associated graded group.
10.1201/9781420003451.ch23
[ "https://arxiv.org/pdf/math/0410515v1.pdf" ]
119,151,808
math/0410515
13d23f0558da2fd003faea8c29bc72c49072b038
ON THE NOTION OF LOWER CENTRAL SERIES FOR LOOPS 24 Oct 2004 Jacob Mostovoy The commutator calculus is one of the basic tools in group theory. However, its extension to the non-associative context, based on the usual definition of the lower central series of a loop, is not entirely satisfactory. Namely, the graded abelian group associated to the lower central series of a loop is not known to carry any interesting algebraic structure. In this note we construct a new generalization of the lower central series to arbitrary loops that is tailored to produce a set of multilinear operations on the associated graded group. Recall that for a normal subloop N of a loop L, [N, L] denotes the smallest normal subloop M of L such that N/M is contained in the centre of L/M, and that the lower central series of L is defined by setting γ_1 L = L and γ_{i+1} L = [γ_i L, L] for i ≥ 1. This definition can be found in [1]. For L associative it coincides with the usual definition of the lower central series. The terms of the lower central series of L are fully invariant normal subloops of L and the successive quotients γ_i L/γ_{i+1} L are abelian groups. If L is a group, the commutator operation on L induces a bilinear operation (Lie bracket) on the associated graded group ⊕γ_i L/γ_{i+1} L; this Lie bracket is compatible with the grading. (See Chapter 5 of [4] for a classical account of the commutator calculus in groups.) In general, however, there is no obvious algebraic structure on ⊕γ_i L/γ_{i+1} L. Commutators, associators and associator deviations. Here we introduce some terminology. The definitions of this paragraph are valid for L a left quasigroup, that is, a halfquasigroup with left division. Define the commutator of two elements a, b in L as [a, b] := (ba)\(ab), and the associator of three elements a, b, c in L as (a, b, c) := (a(bc))\((ab)c). The failure of the associator to be distributive in each variable is measured by three operations that we call associator deviations or simply deviations. These are defined as follows: (a, b, c, d)_1 := ((a, c, d)(b, c, d))\(ab, c, d), (a, b, c, d)_2 := ((a, b, d)(a, c, d))\(a, bc, d), (a, b, c, d)_3 := ((a, b, c)(a, b, d))\(a, b, cd).
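The defining identities of the commutator and the associator are easy to test numerically. The sketch below is not from the paper; its Cayley table is just a convenient nonassociative, noncommutative loop of order five, with 0 as the identity, on which the defining properties ba·[a, b] = ab and (a(bc))·(a, b, c) = (ab)c are verified by brute force.

```python
# A nonassociative loop of order 5 given by its Cayley table (0 = identity).
# Rows and columns are permutations, so both divisions exist and are unique.
T = [[0, 1, 2, 3, 4],
     [1, 0, 3, 4, 2],
     [2, 3, 4, 0, 1],
     [3, 4, 1, 2, 0],
     [4, 2, 0, 1, 3]]

def mul(a, b):                # the loop product ab
    return T[a][b]

def ldiv(a, b):               # left division a\b: the unique x with ax = b
    return T[a].index(b)

def com(a, b):                # commutator [a, b] := (ba)\(ab)
    return ldiv(mul(b, a), mul(a, b))

def ass(a, b, c):             # associator (a, b, c) := (a(bc))\((ab)c)
    return ldiv(mul(a, mul(b, c)), mul(mul(a, b), c))

# Defining properties: ba.[a,b] = ab and (a(bc)).(a,b,c) = (ab)c.
for a in range(5):
    for b in range(5):
        assert mul(mul(b, a), com(a, b)) == mul(a, b)
        for c in range(5):
            assert mul(mul(a, mul(b, c)), ass(a, b, c)) == mul(mul(a, b), c)

assert ass(1, 1, 2) == 1   # nontrivial associator: the loop is nonassociative
assert com(2, 3) == 1      # nontrivial commutator: the loop is noncommutative
```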
The deviations themselves are not necessarily distributive and their failure to be distributive is measured by the deviations of the second level. The general definition of a deviation of nth level is as follows. Given a positive integer n and an ordered set α_1, …, α_n of not necessarily distinct integers satisfying 1 ≤ α_k ≤ k + 2, the deviation (a_1, …, a_{n+3})_{α_1,…,α_n} is a function L^{n+3} → L defined inductively by (a_1, …, a_{n+3})_{α_1,…,α_n} := (A(a_{α_n})A(a_{α_n+1}))\A(a_{α_n} a_{α_n+1}), where A(x) stands for (a_1, …, a_{α_n−1}, x, a_{α_n+2}, …, a_{n+3})_{α_1,…,α_{n−1}}. The integer n is called the level of the deviation. There are (n + 2)!/2 deviations of level n. The associators are the deviations of level zero and the associator deviations are the deviations of level one. The commutator-associator filtration. Let us explain informally our approach to generalizing the lower central series to loops. We want to construct a filtration by normal subloops L_i (i ≥ 1) on an arbitrary loop L with the following properties: (a) the subloops L_i are fully invariant, that is, preserved by all automorphisms of L; (b) the quotients L_i/L_{i+1} are abelian for all i; (c) the commutator and the associator in L induce well-defined operations on the associated graded group ⊕L_i/L_{i+1}; these operations should be linear in each argument and should respect the grading. Clearly, we also want the filtration L_i to coincide with the lower central series for groups. A naïve method of constructing such a filtration would be setting L_1 = L and taking L_i to be the subloop normally generated by (a) commutators of the form [a, b] with a ∈ L_p and b ∈ L_q with p + q ≥ i, (b) associators of the form (a, b, c) with a ∈ L_p, b ∈ L_q, and c ∈ L_r with p + q + r ≥ i. The subloops L_i constructed in this way are fully invariant in L and the quotients L_i/L_{i+1} are abelian groups.
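The inductive definition of the deviations can be transcribed directly into code and checked, on a small concrete loop, against the three explicit level-one formulas given earlier. The Cayley table below is an arbitrary nonassociative loop of order five (0 is the identity); it is only a test bed, not an object from the paper.

```python
# Deviations on a toy loop: (a_1,...,a_{n+3})_{alpha_1,...,alpha_n}, level 0 = associator.
T = [[0, 1, 2, 3, 4],
     [1, 0, 3, 4, 2],
     [2, 3, 4, 0, 1],
     [3, 4, 1, 2, 0],
     [4, 2, 0, 1, 3]]

def mul(a, b):   return T[a][b]
def ldiv(a, b):  return T[a].index(b)   # a\b: unique x with ax = b
def ass(a, b, c):
    return ldiv(mul(a, mul(b, c)), mul(mul(a, b), c))

def deviation(args, alphas):
    """Deviation of level n = len(alphas) on a list of n+3 loop elements.
    The (1-based) indices must satisfy 1 <= alpha_k <= k + 2."""
    if not alphas:
        return ass(*args)
    al = alphas[-1]                     # the last index alpha_n
    def A(x):                           # A(x) = (a_1,...,a_{al-1}, x, a_{al+2},...)
        return deviation(args[:al - 1] + [x] + args[al + 1:], alphas[:-1])
    a, b = args[al - 1], args[al]
    return ldiv(mul(A(a), A(b)), A(mul(a, b)))

# Level-1 deviations agree with the three explicit formulas in the text.
for a in range(5):
    for b in range(5):
        for c in range(5):
            for d in range(5):
                assert deviation([a, b, c, d], [1]) == \
                    ldiv(mul(ass(a, c, d), ass(b, c, d)), ass(mul(a, b), c, d))
                assert deviation([a, b, c, d], [2]) == \
                    ldiv(mul(ass(a, b, d), ass(a, c, d)), ass(a, mul(b, c), d))
                assert deviation([a, b, c, d], [3]) == \
                    ldiv(mul(ass(a, b, c), ass(a, b, d)), ass(a, b, mul(c, d)))
```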
It can be seen that the commutator on L induces a bilinear operation on the associated graded group. However, the associator does not descend to a trilinear operation on ⊕L_i/L_{i+1}. The situation can be mended by adding to the generators of L_i for all i every element of the form (a, b, c, d)_α with a ∈ L_p, b ∈ L_q, c ∈ L_r and d ∈ L_s, where p + q + r + s ≥ i and 1 ≤ α ≤ 3. This forces the associator to be trilinear on ⊕L_i/L_{i+1}, but now we are faced with a new problem. Having introduced the deviations into the game, we would like them to behave in some sense like commutators and associators, namely, to induce multilinear operations that respect the grading on ⊕L_i/L_{i+1}. This requires adding deviations of the second level to the generators of the L_i, etc. The above reasoning is summarised in the following Definition. For a positive integer i, the ith commutator-associator subloop L_i of a loop L is L itself if i = 1, and is the subloop normally generated by (a) [L_p, L_q] with p + q ≥ i, (b) (L_p, L_q, L_r) with p + q + r ≥ i, (c) (L_{p_1}, …, L_{p_n})_{α_1,…,α_n} with p_1 + … + p_n ≥ i for all possible choices of α_1, …, α_n. We refer to the filtration by commutator-associator subloops as the commutator-associator filtration¹. In our terminology the usual commutator-associator subloop becomes the "second commutator-associator subloop". By virtue of its construction, the commutator-associator filtration has the desired properties: the L_i are fully invariant and normal, the quotients L_i/L_{i+1} are abelian, and the associator and the deviations induce multilinear operations on the associated graded group. The bilinearity of the commutator is also readily seen. Indeed, take a, b in L_p and c in L_q. Modulo L_{p+q+1}, the commutators [a, c] and [b, c] commute and associate with all elements of L. Also, any associator that involves one element of L_p, one element of L_q and any element of L, is trivial modulo L_{p+q+1}.
Hence, a(cb) · [b, c] ≡ a(bc) ≡ (ab)c and c(ab) · [a, c] ≡ (ca)b · [a, c] ≡ (ca · [a, c])b ≡ (ac)b ≡ a(cb) modulo L_{p+q+1}. Therefore we have (c(ab) · [a, c])[b, c] ≡ (ab)c mod L_{p+q+1} and it follows that [a, c][b, c] ≡ [ab, c] mod L_{p+q+1}. The linearity of the commutator in the second argument is proved in the same manner. Identifying the algebraic structure on ⊕L_i/L_{i+1} induced by the commutator, the associator and the deviations is a task beyond the scope of the present note. It is rather easy to see that the Akivis identity is satisfied in ⊕L_i/L_{i+1} for any L. One should expect that the structure on ⊕L_i/L_{i+1} generalizes Lie rings in the same way as Sabinin algebras² generalize Lie algebras. No axiomatic definition of such a structure is yet known. The lower central series and the commutator-associator subloops. Let us compare the commutator-associator filtration with the usual lower central series. It is clear that for any L, γ_2 L = L_2. More generally, an easy inductive argument establishes that γ_i L is contained in L_i for all i. There is, however, no converse to this statement. Theorem 1. Let F be a free loop. For any positive integer k the term F_k of the commutator-associator filtration is not contained in γ_3 F. Proof. Let X be a set freely generating the loop F and assume y is in X. Let y_m stand for the right-normed product (…((yy)y)…)y of m copies of y. We shall prove that for any i ≥ 0 and any positive m the deviation of ith level (y_m, y, …, y)_{1,1,…,1} does not belong to γ_3 F. The machinery suitable for this purpose was developed by Higman in [3]. Let L be a loop and assume there is a surjective homomorphism p : F → L with kernel N. Define B to be an additively written free abelian group, with free generators f(l_1, l_2) for all l_1 ≠ 1 and l_2 ≠ 1 in L, and g(x) for all x in X.
The product set L × B can be given the structure of a loop setting (l_1, b_1)(l_2, b_2) = (l_1 l_2, b_1 + b_2 + f(l_1, l_2)), (l_1, b_1)/(l_2, b_2) = (l_1/l_2, b_1 − b_2 − f(l_1/l_2, l_2)), (l_2, b_2)\(l_1, b_1) = (l_2\l_1, b_1 − b_2 − f(l_2, l_2\l_1)), where f(l, 1) = f(1, l) = 0 for all l ∈ L. The pair (1, 0) is the identity. Higman denotes this loop by (L, B). There is a homomorphism δ : F → (L, B) defined on the generators by δx := (px, g(x)). Higman proved (Lemma 3 of [3]) that the kernel of δ is precisely [N, F]. Without loss of generality we can assume that F is the free loop on one generator y; take N = γ_2 F. The quotient loop L = F/N can be identified with the group of integers Z. We shall see that for any positive m the element δ((y_m, y, …, y)_{1,1,…,1}) is not zero in (L, B) and, hence, that (y_m, y, …, y)_{1,1,…,1} does not belong to [N, F] = γ_3 F. Lemma. For any m ≥ 1 and any n ≥ 0, δ((y_m, y, …, y)_{1,1,…,1}) = (0, f(n + m − 1, 1) + Σ_{p<n+m−1, q} a_{p,q} f(p, q)), where there are n lower indices equal to 1 and the a_{p,q} are integers. …where p ≤ m − 2, q ≤ 2 and the c_{p,q} are some coefficients whose value is of no importance. Thus δ(y_m, y, y) is of the form … and it only remains to apply δ to both sides and use the induction assumption. Higman [3] has proved that the terms of the lower central series of a free loop intersect in the identity. We have no proof of a similar statement for the commutator-associator filtration; in view of Theorem 1, Higman's result is not sufficient to establish it. Theorem 1 reflects a fundamental difference between the lower central series and the commutator-associator filtration. In the proof of Theorem 1 we have, in fact, established that the quotient γ_2 F/γ_3 F is not finitely generated. On the other hand, we have the following Theorem 2. For any finitely generated loop L the abelian groups L_i/L_{i+1} are finitely generated for all i > 0. Proof.
For i = 1 the theorem is clearly true: L_1/L_2 is generated by the classes of the generators of L. Assume that for all k < n the group L_k/L_{k+1} has a finite set of generators a_{k,α} with α in some finite index set. Consider a commutator [x, y] with x in L_p and y in L_{n−p}. The class of [x, y] in L_n/L_{n+1} only depends on the classes of x and y in L_p/L_{p+1} and L_{n−p}/L_{n−p+1} respectively. As the commutator is bilinear on ⊕L_i/L_{i+1}, the class of [x, y] is a linear combination of elements of the form [a_{p,α}, a_{n−p,β}]. Similarly, all classes of associators and deviations can be expressed as linear combinations of associators and deviations of a finite number of elements. Hence, L_n/L_{n+1} is finitely generated. Further comments. In general, the lower central series of loops is difficult to treat. There is one notable exception, namely, the theory of associators in commutative Moufang loops, which is analogous to a large extent to the theory of commutators in groups, see [1]. There is also another filtration, called the distributor series, on commutative Moufang loops. In general, neither of these two filtrations coincides with the commutator-associator filtration. We note that the deviations first appeared in the nilpotency theory of commutative Moufang loops as the operations that produce the distributor series. There exists a filtration on any group that is very closely related to the lower central series, namely, the filtration by dimension subgroups. In particular, the graded abelian groups associated to both filtrations become isomorphic after tensoring with the field of rational numbers. A detailed comparison of the lower central series and the dimension subgroups can be found in [2]. The dimension filtration can be generalized to loops; it would be interesting to compare it with the commutator-associator filtration. The constructions of this note can be carried out in greater generality.
The definitions of the commutator, the associator and the deviations of all levels only involve left division and, therefore, make sense in any left halfquasigroup. In particular, the commutator-associator filtration can be defined for left loops.

Instituto de Matemáticas (Unidad Cuernavaca), Universidad Nacional Autónoma de México, A.P. 273 Admon. de Correos # 3, C.P. 62251, Cuernavaca, Morelos, MEXICO. E-mail address: [email protected]

The lower central series. Let N be a normal subloop of a loop L. There exists a unique smallest normal subloop M of L such that N/M is contained in the centre of the loop L/M, that is, all elements of N/M commute and associate with all elements of L/M. This subloop M is denoted by [N, L]. The lower central series of L is defined by setting γ_1 L = L and γ_{i+1} L = [γ_i L, L] for i ≥ 1.

[a, b] := (ba)\(ab), and the associator of three elements a, b, c in L as (a, b, c) := (a(bc))\((ab)c).

[[a, b], c] + [[b, c], a] + [[c, a], b] = (a, b, c) + (b, c, a) + (c, a, b) − (a, c, b) − (c, b, a) − (b, a, c).

The lemma is proved by induction on n. A straightforward calculation gives δ(y_m) = (m, m g(y) + f(1, 1) + f(2, 1) + … + f(m − 1, 1)). It follows that δ(y_{m−2} y_2) = (m, m g(y) + Σ_{p,q} c_{p,q} f(p, q)) and the lemma is satisfied for n = 0. Now, for arbitrary positive n we have (y_m, y, …, y)_{1,…}

¹ For the lack of a better name.
² Formerly known as hyperalgebras, see [5] and [6].

Acknowledgment. I would like to thank Liudmila Sabinina for many discussions.

[1] R. H. Bruck, Survey of Binary Systems, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1958.
[2] B. Hartley, Topics in the theory of nilpotent groups, in: Group theory: essays for Philip Hall, Academic Press, London, 1984.
[3] G. Higman, The lower central series of a free loop, Quart. J. Math. Oxford (2) 14 (1963), 131-140.
[4] W. Magnus, A. Karrass and D. Solitar, Combinatorial Group Theory, second revised edition, Dover, New York, 1976.
[5] P. Miheev and L. Sabinin, Quasigroups and differential geometry, in: Quasigroups and loops: theory and applications, 357-430, Heldermann, Berlin, 1990.
[6] I. Shestakov and U. Umirbaev, Free Akivis algebras, primitive elements, and hyperalgebras, J. Algebra 250 (2002), no. 2, 533-548.
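Returning to the proof of Theorem 1: Higman's loop (L, B) can be implemented directly for L = Z (written additively, so the identity is 0) and B the free abelian group on the symbols f(p, q) and g. The sketch below follows the displayed formulas for (L, B) and uses left-normed powers y_m; with these conventions the leading generator of the B-component of δ applied to the all-ones deviation comes out as f(n + m + 1, 1) (the Lemma's index n + m − 1 corresponds to a different normalization of the first argument). The point needed for the theorem, namely that the image is nonzero with vanishing L-component, is checked for a range of m and n.

```python
# Higman's loop (L, B) with L = Z: elements are pairs (n, coeffs), where coeffs
# is a dict over the free generators ('g',) and ('f', p, q) of B.

def f(p, q):
    # generators f(l1, l2) vanish when either argument is the identity 0 of L
    return {} if p == 0 or q == 0 else {('f', p, q): 1}

def add(*ds):
    out = {}
    for d in ds:
        for k, v in d.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def neg(d):
    return {k: -v for k, v in d.items()}

def mul(u, v):                      # (l1,b1)(l2,b2) = (l1+l2, b1+b2+f(l1,l2))
    (l1, b1), (l2, b2) = u, v
    return (l1 + l2, add(b1, b2, f(l1, l2)))

def ldiv(u, v):                     # u\v: the unique w with mul(u, w) = v
    (l2, b2), (l1, b1) = u, v
    return (l1 - l2, add(b1, neg(b2), neg(f(l2, l1 - l2))))

y = (1, {('g',): 1})                # delta(y) = (p(y), g(y))

def power(m):                       # delta(y_m), the left-normed mth power
    u = y
    for _ in range(m - 1):
        u = mul(u, y)
    return u

def ass(a, b, c):                   # (a,b,c) = (a(bc)) \ ((ab)c)
    return ldiv(mul(a, mul(b, c)), mul(mul(a, b), c))

def deviation(args, n):             # (a_1,...,a_{n+3})_{1,1,...,1}, all indices 1
    if n == 0:
        return ass(*args)
    A = lambda x: deviation([x] + args[2:], n - 1)
    return ldiv(mul(A(args[0]), A(args[1])), A(mul(args[0], args[1])))

for m in range(1, 4):
    for n in range(0, 3):
        d = deviation([power(m)] + [y] * (n + 2), n)
        assert d[0] == 0                          # the L-component vanishes
        assert d[1], "B-component must be nonzero"
        assert d[1][('f', n + m + 1, 1)] == 1     # leading generator, coefficient 1
```

Since the B-component is nonzero, δ of the deviation is nonzero in (L, B), so the deviation does not lie in [N, F] = γ_3 F, exactly as the proof of Theorem 1 requires.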
[]
[ "INTEGRATION ON ALGEBRAIC QUANTUM GROUPOIDS", "INTEGRATION ON ALGEBRAIC QUANTUM GROUPOIDS" ]
[ "Thomas Timmermann " ]
[]
[]
In this article, we develop a theory of integration on algebraic quantum groupoids in the form of regular multiplier Hopf algebroids, and establish the main properties of integrals obtained by Van Daele for algebraic quantum groups before - faithfulness, uniqueness up to scaling, existence of a modular element and existence of a modular automorphism - for algebraic quantum groupoids under reasonable assumptions. The approach to integration developed in this article forms the basis for the extension of Pontrjagin duality to algebraic quantum groupoids, and for the passage from algebraic quantum groupoids to operator-algebraic completions, which both will be studied in separate articles.
10.1142/s0129167x16500142
[ "https://arxiv.org/pdf/1507.00660v1.pdf" ]
118,161,904
1507.00660
36cb1ee84f994b7d80088e9da55a85fb4070f16c
INTEGRATION ON ALGEBRAIC QUANTUM GROUPOIDS 2 Jul 2015 Thomas Timmermann

In this article, we develop a theory of integration on algebraic quantum groupoids in the form of regular multiplier Hopf algebroids, and establish the main properties of integrals obtained by Van Daele for algebraic quantum groups before - faithfulness, uniqueness up to scaling, existence of a modular element and existence of a modular automorphism - for algebraic quantum groupoids under reasonable assumptions. The approach to integration developed in this article forms the basis for the extension of Pontrjagin duality to algebraic quantum groupoids, and for the passage from algebraic quantum groupoids to operator-algebraic completions, which both will be studied in separate articles.

Introduction

In this article, we develop a theory of integration on algebraic quantum groupoids in the form of regular multiplier Hopf algebroids [18], and establish the main properties of integrals that were obtained by Van Daele in [21] for algebraic quantum groups - faithfulness, uniqueness up to scaling, existence of a modular element and existence of a modular automorphism - for algebraic quantum groupoids under reasonable assumptions. The approach to integration developed in this article forms the basis for two important constructions. To every algebraic quantum groupoid equipped with suitable integrals, we can associate a generalized Pontrjagin dual, which is an algebraic quantum groupoid again. This construction generalizes corresponding results of Van Daele for algebraic quantum groups [21] and of Enock and Lesieur for measured quantum groupoids [8,10], and will be studied in a separate article [14], see also [16]. In the involutive case, we can construct operator-algebraic completions in the form of Hopf-von Neumann bimodules [20] and of Hopf C*-bimodules [15], and thus link the algebraic approaches to quantum groupoids to the operator-algebraic one.
This construction generalizes corresponding results of Kustermans and Van Daele for algebraic quantum groups [9] and of the author for dynamical quantum groups [17], and is detailed in a forthcoming article [13]. To explain the main ideas and results of our approach, let us first look at multiplier Hopf algebras [21]. Given such a multiplier Hopf algebra A with comultiplication ∆, a non-zero linear functional φ on A is called a left integral if (ι ⊗ φ)(∆(a)(b ⊗ 1)) = φ(a)b (1.1) for all a, b ∈ A, where the product ∆(a)(b ⊗ 1) lies in A ⊗ A by assumption and ι ⊗ φ is the ordinary slice map from A ⊗ A to A. Van Daele showed that every such integral (1) is faithful: if b ≠ 0, then φ(ab) ≠ 0 and φ(bc) ≠ 0 for some a, c ∈ A; (2) is unique up to scaling: every left integral has the form λφ with λ ∈ C; (3) admits a modular automorphism σ such that φ(ab) = φ(bσ(a)) for all a, b. Corresponding results hold for every right integral ψ, which is a non-zero linear functional on A satisfying (ψ ⊗ ι)((1 ⊗ a)∆(b)) = ψ(b)a (1.2) for all a, b ∈ A. The last key result of Van Daele on integrals is (4) existence of an (invertible) modular element δ such that ψ(a) = φ(aδ) for all a ∈ A. Our aim is to establish corresponding results for integrals on algebraic quantum groupoids and to provide the basis for the two applications outlined above. In the framework of weak multiplier Hopf algebras, this will be done by Van Daele in a forthcoming paper. A weak multiplier Hopf algebra consists of an algebra A and a comultiplication ∆ that is, in a sense (when extended to the multiplier algebras), no longer unital but still takes values in multipliers of A ⊗ A. In that setting, the invariance conditions (1.1) and (1.2) still make sense for functionals φ and ψ on A, and the results (1)-(4) above can be carried over from [21] with additional arguments. In the present paper, we develop the theory in the considerably more general and challenging framework of regular multiplier Hopf algebroids.
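A concrete finite-dimensional illustration of properties (1)-(4) is Sweedler's four-dimensional Hopf algebra, the smallest Hopf algebra whose left and right integrals differ; it is not discussed in the paper and serves only as an example. The sketch below encodes its multiplication and comultiplication by structure constants and verifies left invariance in the form (ι ⊗ φ)(∆(a)(b ⊗ 1)) = φ(a)b, right invariance in the form (ψ ⊗ ι)((1 ⊗ a)∆(b)) = ψ(b)a, the modular element δ = −g and the modular automorphism σ.

```python
# Sweedler's Hopf algebra H4 = <g, x | g^2 = 1, x^2 = 0, xg = -gx>.
# Basis: 0 -> 1, 1 -> g, 2 -> x, 3 -> gx; an element is a list of 4 coefficients.
# MULT[i][j] lists (coefficient, basis index) terms of the product of basis i, j.
MULT = [
    [[(1, 0)], [(1, 1)], [(1, 2)], [(1, 3)]],   # 1 * {1, g, x, gx}
    [[(1, 1)], [(1, 0)], [(1, 3)], [(1, 2)]],   # g * {...}: g x = gx, g gx = x
    [[(1, 2)], [(-1, 3)], [], []],              # x * {...}: x g = -gx, x x = 0
    [[(1, 3)], [(-1, 2)], [], []],              # gx * {...}: gx g = -x
]
DELTA = [
    [(1, 0, 0)],              # Delta(1)  = 1 (x) 1
    [(1, 1, 1)],              # Delta(g)  = g (x) g
    [(1, 2, 0), (1, 1, 2)],   # Delta(x)  = x (x) 1 + g (x) x
    [(1, 3, 1), (1, 0, 3)],   # Delta(gx) = gx (x) g + 1 (x) gx
]

def mul(u, v):
    w = [0, 0, 0, 0]
    for i in range(4):
        for j in range(4):
            if u[i] and v[j]:
                for c, k in MULT[i][j]:
                    w[k] += c * u[i] * v[j]
    return w

def basis(i):
    e = [0, 0, 0, 0]; e[i] = 1
    return e

phi = [0, 0, 0, 1]   # left integral:  phi(gx) = 1, zero on 1, g, x
psi = [0, 0, 1, 0]   # right integral: psi(x)  = 1, zero elsewhere

for a in range(4):
    for b in range(4):
        # left invariance: (iota (x) phi)(Delta(a)(b (x) 1)) = phi(a) b
        lhs = [0, 0, 0, 0]
        for c, i, j in DELTA[a]:
            term = mul(basis(i), basis(b))
            lhs = [l + c * phi[j] * t for l, t in zip(lhs, term)]
        assert lhs == [phi[a] * t for t in basis(b)]
        # right invariance: (psi (x) iota)((1 (x) a)Delta(b)) = psi(b) a
        rhs = [0, 0, 0, 0]
        for c, i, j in DELTA[b]:
            term = mul(basis(a), basis(j))
            rhs = [r + c * psi[i] * t for r, t in zip(rhs, term)]
        assert rhs == [psi[b] * t for t in basis(a)]

# (4) a modular element: psi(a) = phi(a delta) with delta = -g, and delta^2 = 1
delta = [0, -1, 0, 0]
assert mul(delta, delta) == basis(0)
for a in range(4):
    assert mul(basis(a), delta)[3] == psi[a]    # phi(v) is the coefficient of gx

# (3) the modular automorphism of phi: phi(ab) = phi(b sigma(a)), sigma = diag(1,-1,-1,1)
sigma = lambda u: [u[0], -u[1], -u[2], u[3]]
for a in range(4):
    for b in range(4):
        assert mul(basis(a), basis(b))[3] == mul(basis(b), sigma(basis(a)))[3]
```

Here σ is nontrivial on g and x, so even in four dimensions the left integral fails to be tracial, which is the phenomenon that properties (3) and (4) control in general.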
The latter were introduced by Van Daele and the author in [18] and simultaneously generalize the regular weak multiplier Hopf algebras studied by Van Daele and Wang [24,19] and Böhm [4], and Hopf algebroids studied by [5,11,25], see also [2]. A regular multiplier Hopf algebroid consists of a total algebra A, commuting subalgebras B, C of the multiplier algebra of A with anti-isomorphisms S_B : B → C and S_C : C → B, and a left and a right comultiplication ∆_B and ∆_C which map A to certain multiplier algebras such that one can form products of the form ∆_B(a)(1 ⊗ b), ∆_B(b)(a ⊗ 1), (a ⊗ 1)∆_C(b), (1 ⊗ b)∆_C(a). These products do no longer lie in the tensor product A ⊗ A but rather in certain balanced tensor products A ⊗_B A and A ⊗_C A, respectively, which are formed by considering A as a module over B or C in various ways. None of the algebras A, B, C needs to be unital; if all are, then one has a Hopf algebroid. What is the appropriate notion of a left or right integral for regular multiplier Hopf algebroids? Unlike the case of (weak) multiplier Hopf algebras, the key invariance relations (1.1) and (1.2) do no longer make sense for functionals φ or ψ on A. The products ∆_B(a)(1 ⊗ b) and (a ⊗ 1)∆_C(b) do not lie in the ordinary tensor product A ⊗ A but in the balanced tensor products, and on these balanced tensor products, slice maps of the form ι ⊗ φ or ψ ⊗ ι can only be defined if φ and ψ are maps from A to B or C, respectively, that are compatible with certain module structures. In the case of Hopf algebroids, such left- or right-invariant module maps from the total algebra A to the base algebras B and C were studied already by Böhm [1]; see also Böhm and Szlachyani [5]. The key idea of our approach is to regard not only such left- or right-invariant module maps from A to B or C, which correspond to partial, relative or fiber-wise integration, but total integrals obtained by composition with suitable functionals µ_B and µ_C on the base algebras B and C.
Why is it natural, necessary and useful to study such scalar-valued total integrals?

(1) The main results of Van Daele (uniqueness, and existence of a modular automorphism and of a modular element) do not hold for partial integrals or cannot even be formulated. We shall prove that all of these results carry over to scalar-valued total integrals.

(2) The situation is similar for locally compact groupoids [12], where total integration of functions on a groupoid is given by fiber-wise integration with respect to a left or right Haar system, followed by integration over the unit space with respect to a quasi-invariant measure; and for measured quantum groupoids [10], [8], which are given by a Hopf-von Neumann bimodule, the operator-algebraic counterpart to a multiplier Hopf algebroid, together with a left- and a right-invariant partial integral and a suitable weight on the base algebra. Again, the interplay of the partial integrals and the weight on the base is crucial for the whole theory.

(3) To construct a generalized Pontrjagin dual of a (multiplier) Hopf algebroid, one first has to define a dual algebra with a convolution product. If one regards the total algebra A as a module over the base algebras B and C, one obtains four dual modules with natural convolution products, two dual to the left and two dual to the right comultiplication. Our approach yields an embedding of these four modules into the dual vector space of A and one subspace of the intersection where the four products coincide. In [14], we will show that this subspace can be equipped with the structure of a multiplier Hopf algebroid again; see also [16].

(4) Total integrals form the key to relate the algebraic approach to quantum groupoids to the operator-algebraic one, as we shall show in a forthcoming paper [13].
Given a multiplier Hopf * -algebroid with positive total integrals, one can define a natural Hilbert space of "square-integrable functions on the quantum groupoid" and construct a * -representation of the total algebra which gives rise to a Hopf-von Neumann bimodule.

How should the functionals µ B and µ C on the base algebras B and C then be chosen? Obviously, they should be faithful. Next, we demand that they are antipodal in the sense that

µ C = µ B • S C and µ B = µ C • S B . (1.3)

Our third assumption involves the left and the right counit B ε and ε C of the multiplier Hopf algebroid, which map A to B and C, respectively, and reads

µ B • B ε = µ C • ε C . (1.4)

This counitality condition appeared already in [19] and has strong implications, for example, that the two equations in (1.3) are equivalent and that the anti-isomorphisms S B and S C combine to Nakayama automorphisms or modular automorphisms for µ B and µ C , that is,

µ B (xx′) = µ B (S C S B (x′)x) and µ C (yy′) = µ C (y′S B S C (y)) (1.5)

for all x, x′ ∈ B and y, y′ ∈ C. It also implies that on a natural subspace of functionals on A, the two convolution products induced by the left and by the right comultiplication coincide, which is crucial for the construction of the generalized Pontrjagin dual. Finally, we demand that µ B and µ C are quasi-invariant, in a natural sense, with respect to the partial integrals, which map A to B or C, respectively. This condition is easily seen to be necessary for the existence of a modular element, and similar conditions are used in the theories of locally compact quantum groupoids and of measured quantum groupoids.

What can we say about existence and uniqueness of such functionals µ B and µ C ? We shall give simple examples which show that neither existence nor uniqueness can be expected in general. This may seem disappointing but is quite natural.
Indeed, the situation is similar to the question whether an action of a non-compact group on a non-compact space admits an invariant or quasi-invariant measure. We also give examples where condition (1.4) cannot be satisfied directly, but where the left and the right comultiplication ∆ B and ∆ C can be modified so that condition (1.4) can be satisfied for the new left and right counits. The basic idea is that for every pair of automorphisms (Θ λ , Θ ρ ) of the underlying algebra A which fix B and C and satisfy (Θ λ × ι) • ∆ B = (ι × Θ ρ ) • ∆ B , this composition forms a left comultiplication, and one obtains a regular multiplier Hopf algebroid again. The right comultiplication can be modified similarly and independently. This modification procedure considerably generalizes a construction of Van Daele [23], and is of interest in its own right because it illustrates how loosely the left and the right comultiplication of a (multiplier) Hopf algebroid are related.

Plan. This article is organized as follows. First, we recall the definition and main properties of regular multiplier Hopf algebroids from [18] (Section 2), and introduce the examples that will be used throughout this article. Then, we introduce the partial integrals, the functionals on the base algebras mentioned above, the quasi-invariance condition relating the two, and the total integrals obtained by composition (Section 3). Next, we prove uniqueness of total integrals, relative to fixed base functionals µ B and µ C , up to rescaling (Section 4). We then turn to condition (1.4), which is the last missing ingredient for our definition of measured multiplier Hopf algebroids (Section 5). Next, we prove the remaining key results on integrals, namely existence of a modular automorphism and of a modular element, and faithfulness (Section 6). Along the way, we study various convolution operators and obtain a dual algebra.
Finally, we present the modification procedure mentioned above (Section 7) and consider further examples (Section 8).

Preliminaries. We shall use the following conventions and terminology. All algebras and modules will be complex vector spaces and all homomorphisms will be linear maps, but much of the theory developed in this article applies in wider generality. The identity map on a set X will be denoted by ι X or simply ι.

Let B be an algebra, not necessarily unital. We denote by B op the opposite algebra, which has the same underlying vector space as B, but the reversed multiplication. For left modules, we obtain the corresponding notation and terminology by identifying left B-modules with right B op -modules. We denote by R( B M ) := Hom( B B, B M ) the space of right multipliers of a left B-module B M . We write B B or B B when we regard B as a right or left module over itself with respect to right or left multiplication. We say that the algebra B is non-degenerate, idempotent, or has local units if the modules B B and B B both are non-degenerate, idempotent, or both have local units in B, respectively. Note that the last property again implies the preceding two.

We denote by L(B) = End(B B ) and R(B) = End( B B) op the algebras of left or right multipliers of B, respectively, where the multiplication in the latter algebra is given by (f g)(b) := g(f (b)). Note that B B or B B is non-degenerate if and only if the natural map from B to L(B) or R(B), respectively, is injective. If B B is non-degenerate, we define the multiplier algebra of B to be the subalgebra M (B) := {t ∈ L(B) : Bt ⊆ B} ⊆ L(B), where we identify B with its image in L(B). Likewise, we could define M (B) = {t ∈ R(B) : tB ⊆ B} if B B is non-degenerate. If both definitions make sense, that is, if B is non-degenerate, then they evidently coincide up to a natural identification, and a multiplier is given by a pair of maps t R , t L : B → B satisfying t R (a)b = at L (b) for all a, b ∈ B.
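The multiplier constructions above can be illustrated by a standard commutative example (our own illustration, not taken from the text): the algebra of finitely supported functions on a set.

```latex
% Standard illustration: let B = C_c(X) be the algebra of finitely
% supported complex functions on a set X, with pointwise product.
% B is non-degenerate and has local units: if f_1, ..., f_n are supported
% in a finite set F, the indicator function e = 1_F satisfies
% e f_i = f_i = f_i e.  Every function t : X -> \mathbb{C} defines a
% multiplier by pointwise multiplication, (t f)(x) = t(x) f(x), and every
% multiplier arises in this way, so
\[
  L(B) \;=\; R(B) \;=\; M(B) \;\cong\; \{\, t : X \to \mathbb{C} \,\},
\]
% the algebra of all complex functions on X.
```

Here left and right multipliers coincide because B is commutative; for noncommutative B the pair of maps (t R , t L ) in the definition above is genuinely needed.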
Given a left or right B-module M and a space N , we regard the space of linear maps from M to N as a right or left B-module, where (f · b)(m) = f (bm) or (b · f )(m) = f (mb) for all maps f and all elements b ∈ B and m ∈ M , respectively. In particular, we regard the dual space B ∨ of a non-degenerate, idempotent algebra B as a bimodule over M (B), where (a · ω · b)(c) = ω(bca), and call a functional ω ∈ B ∨ faithful if the maps B → B ∨ given by d → d · ω and d → ω · d are injective, that is, ω(dB) ≠ 0 and ω(Bd) ≠ 0 whenever d ≠ 0. We say that a functional ω ∈ B ∨ admits a modular automorphism if there exists an automorphism σ of B such that ω(ab) = ω(bσ(a)) for all a, b ∈ B. One easily verifies that this condition holds if and only if B · ω = ω · B, and that then σ is characterised by the relation σ(b) · ω = ω · b for all b ∈ B.

We equip the dual space B ∨ with a preorder ≼, where

υ ≼ ω :⇔ Bυ ⊆ Bω and υB ⊆ ωB. (1.6)

The following result is straightforward:

1.0.1. Lemma. Suppose that ω ∈ B ∨ is faithful and that υ ∈ B ∨ .
(1) Then υ ≼ ω if and only if there exist δ ∈ R(B) and δ′ ∈ L(B) such that ω(xδ) = υ(x) = ω(δ′x) for all x ∈ B.
(2) If the conditions in (1) hold and ω admits a modular automorphism σ, then δ, δ′ lie in M (B) and δ = σ(δ′).

Proof. (1) We have υ ≼ ω if and only if there exist maps δ, δ′ : B → B such that x · υ = δ(x) · ω and υ · x = ω · δ′(x) for all x ∈ B. If ω is faithful, then necessarily δ ∈ R(B) and δ′ ∈ L(B).
(2) In this case, we find that for all x, x′ ∈ B,

ω((xδ)σ(x′)) = ω(x′xδ) = υ(x′x) = ω(δ′x′x) = ω(xσ(δ′x′)).

Since ω is faithful and x ∈ B was arbitrary, we can conclude that (xδ)σ(x′) = xσ(δ′x′), whence the assertion follows.

Assume that B is a * -algebra. We call a functional ω ∈ B ∨ self-adjoint if it coincides with ω * = * • ω • * , that is, ω(a * ) = ω(a) * for all a ∈ B, and positive if additionally ω(a * a) ≥ 0 for all a ∈ B.
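The preorder (1.6) and Lemma 1.0.1 can be illustrated in a commutative toy case (our own illustration, with B = C_c(X) the finitely supported functions on a set X as before):

```latex
% Let w : X \to \mathbb{C}^\times be a nowhere vanishing weight.  Then
\[
  \omega(f) \;=\; \sum_{x \in X} w(x)\, f(x)
\]
% is faithful (test against indicator functions), and since B is
% commutative, its modular automorphism is \sigma = \iota.  For a second
% weight v : X \to \mathbb{C}, the functional
% \upsilon(f) = \sum_x v(x) f(x) satisfies \upsilon \preccurlyeq \omega with
\[
  \delta = \delta' = v/w \in M(B), \qquad
  \omega(f\delta) = \sum_{x} w(x)\, f(x)\, \frac{v(x)}{w(x)} = \upsilon(f),
\]
% in accordance with Lemma 1.0.1; here \delta = \sigma(\delta') trivially.
```

This already shows why faithfulness of ω is needed in part (1): if w vanished somewhere, δ would not be determined on that point.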
2. Regular multiplier Hopf algebroids

Regular multiplier Hopf algebroids were introduced in [18] as non-unital generalizations of Hopf algebroids and are special multiplier bialgebroids. As such, they consist of a left and a right multiplier bialgebroid with comultiplications related by a mixed coassociativity condition.

2.1. Left multiplier bialgebroids. Let A be an algebra, not necessarily unital, such that the right module A A is idempotent and non-degenerate. Then we can form the (left) multiplier algebras L(A) and M (A) ⊆ L(A) as explained above. Let B be an algebra, not necessarily unital, with a homomorphism s : B → M (A) and an anti-homomorphism t : B → M (A) such that s(B) and t(B) commute. We denote elements of B by x, x′, y, y′, . . . and reserve a, b, c, . . . for elements of A.

We write A B and A B when we regard A as a left or right B-module via left multiplication along s or t, respectively, that is, x · a = s(x)a and a · x = t(x)a. Similarly, we write A B and A B when we regard A as a right or left B-module via right multiplication along s or t, respectively. Without further notice, we also regard A B , A B and A B , A B as right or left B op -modules, respectively.

Regard the tensor product A B ⊗ A B of B op -modules as a right module over A ⊗ 1 or 1 ⊗ A in the obvious way, and denote by A B ×A B ⊆ End( A B ⊗ A B ) the subspace formed by all endomorphisms T of A B ⊗ A B satisfying the following condition: for every a, b ∈ A, there exist elements T (a ⊗ 1) ∈ A B ⊗ A B and T (1 ⊗ b) ∈ A B ⊗ A B such that

T (a ⊗ b) = (T (a ⊗ 1))(1 ⊗ b) = (T (1 ⊗ b))(a ⊗ 1).

This subspace is a subalgebra and commutes with the right A ⊗ A-module action.
2.1.1. Definition. A left multiplier bialgebroid is a tuple (A, B, s, t, ∆) consisting of
(1) algebras A and B, where A is non-degenerate and idempotent as a right A-module;
(2) a homomorphism s : B → M (A) ⊆ L(A) and an anti-homomorphism t : B → M (A) ⊆ L(A) such that the images of s and t commute, the B-modules A B and A B are faithful and idempotent, and A B ⊗ A B is non-degenerate as a right module over A ⊗ 1 and over 1 ⊗ A;
(3) a homomorphism ∆ : A → A B ×A B , called the left comultiplication, satisfying

∆(s(x)t(y)as(x′)t(y′)) = (t(y) ⊗ s(x))∆(a)(t(y′) ⊗ s(x′)), (2.1)
(∆ ⊗ ι)(∆(b)(1 ⊗ c))(a ⊗ 1 ⊗ 1) = (ι ⊗ ∆)(∆(b)(a ⊗ 1))(1 ⊗ 1 ⊗ c) (2.2)

for all a, b, c ∈ A and x, x′, y, y′ ∈ B.

A left counit for such a left multiplier bialgebroid is a map ε : A → B satisfying

ε(s(x)a) = xε(a), ε(t(y)a) = ε(a)y, (2.3)
(ε ⊗ ι)(∆(a)(1 ⊗ b)) = ab = (ι ⊗ ε)(∆(a)(b ⊗ 1)) (2.4)

for all a, b ∈ A and x, y ∈ B. Note that (2.3) implies that the slice maps

ε ⊗ ι : A B ⊗ A B → A, c ⊗ d → t(ε(c))d,
ι ⊗ ε : A B ⊗ A B → A, c ⊗ d → s(ε(d))c

occurring in (2.4) are well-defined.

2.1.2. Notation. We will need to consider iterated tensor products of vector spaces or modules over B or B op , and if several module structures are used in an iterated tensor product, we mark the module structures that go together by primes. For example, we denote by A B ⊗ A ⊗ A B and A B ⊗ A B B′ ⊗ A B′ the quotients of A ⊗ A ⊗ A by the subspaces spanned by all elements of the form s(x)a ⊗ b ⊗ c − a ⊗ b ⊗ t(x)c, where x ∈ B, a, b, c ∈ A, in the case of A B ⊗ A ⊗ A B , or of the form s(x)a ⊗ b ⊗ c − a ⊗ t(x)b ⊗ c or a ⊗ s(x′)b ⊗ c − a ⊗ b ⊗ t(x′)c in the case of A B ⊗ A B B′ ⊗ A B′ .

Let (A, B, s, t, ∆) be a left multiplier bialgebroid. Then the maps

T λ : A ⊗ A → A B ⊗ A B , a ⊗ b → ∆(b)(a ⊗ 1),
T ρ : A ⊗ A → A B ⊗ A B , a ⊗ b → ∆(a)(1 ⊗ b),

are well-defined because of the non-degeneracy assumption on A B ⊗ A B .
By definition and (2.2), they make the diagrams (2.5) commute; in formulas,

(ι ⊗ m) • (T λ ⊗ ι) = (mΣ ⊗ ι) • (ι ⊗ T ρ ) as maps A ⊗ A ⊗ A → A B ⊗ A B ,
(T λ ⊗ ι) • (ι ⊗ T ρ ) = (ι ⊗ T ρ ) • (T λ ⊗ ι) as maps A ⊗ A ⊗ A → A B ⊗ A B B′ ⊗ A B′ , (2.5)

where Σ denotes the flip map and m the multiplication, and by (2.1), they factorize to maps

T λ : A B ⊗ A B → A B ⊗ A B , T ρ : A B ⊗ A B → A B ⊗ A B , (2.6)

which we call the canonical maps of the left multiplier bialgebroid.

2.2. Right multiplier bialgebroids. The notion of a right multiplier bialgebroid is opposite to the notion of a left multiplier bialgebroid in the sense that in all assumptions, left and right multiplication are reversed.

Let A be an algebra, not necessarily unital, such that the left module A A is non-degenerate and idempotent. Then we can form the (right) multiplier algebras R(A) and M (A) ⊆ R(A). Let C be an algebra with a homomorphism s : C → M (A) ⊆ R(A) and an anti-homomorphism t : C → M (A) ⊆ R(A) such that the images of s and t commute. We write A C and A C if we regard A as a right or left C-module such that a · y = as(y) or y · a = at(y) for all a ∈ A and y ∈ C. We also regard A C and A C as a left or right C op -module, and similarly use the notation A C and A C when we use multiplication on the left hand side instead of the right hand side.

We consider the opposite algebra End( A C ⊗ A C ) op and write (a ⊗ b)T for the image of an element a ⊗ b under an element T ∈ End( A C ⊗ A C ) op , so that (a ⊗ b)(ST ) = ((a ⊗ b)S)T for all a, b ∈ A and S, T ∈ End( A C ⊗ A C ) op . Denote by A C ×A C ⊆ End( A C ⊗ A C ) op the subspace formed by all endomorphisms T such that for all a, b ∈ A, there exist elements (a ⊗ 1)T ∈ A C ⊗ A C and (1 ⊗ b)T ∈ A C ⊗ A C such that

(a ⊗ b)T = (1 ⊗ b)((a ⊗ 1)T ) = (a ⊗ 1)((1 ⊗ b)T ).

2.2.1. Definition.
A right multiplier bialgebroid is a tuple (A, C, s, t, ∆) consisting of
(1) algebras A and C, where A is non-degenerate and idempotent as a left A-module;
(2) a homomorphism s : C → M (A) ⊆ R(A) and an anti-homomorphism t : C → M (A) ⊆ R(A) such that the images of s and t commute, the C-modules A C and A C are faithful and idempotent, and A C ⊗ A C is non-degenerate as a left module over A ⊗ 1 and over 1 ⊗ A;
(3) a homomorphism ∆ : A → A C ×A C , called the right comultiplication, satisfying

∆(s(y)t(x)as(y′)t(x′)) = (s(y) ⊗ t(x))∆(a)(s(y′) ⊗ t(x′)), (2.7)
(a ⊗ 1 ⊗ 1)((∆ ⊗ ι)((1 ⊗ c)∆(b))) = (1 ⊗ 1 ⊗ c)((ι ⊗ ∆)((a ⊗ 1)∆(b))) (2.8)

for all a, b, c ∈ A and x, y ∈ C.

A right counit for such a right multiplier bialgebroid is a map ε : A → C satisfying

ε(as(y)) = ε(a)y, ε(at(x)) = xε(a), (2.9)
(ε ⊗ ι)((1 ⊗ b)∆(a)) = ba = (ι ⊗ ε)((b ⊗ 1)∆(a)) (2.10)

for all a, b ∈ A and x, y ∈ C. Again, (2.9) ensures that the slice maps

ε ⊗ ι : A C ⊗ A C → A, c ⊗ d → ds(ε(c)),
ι ⊗ ε : A C ⊗ A C → A, c ⊗ d → ct(ε(d))

are well-defined. Associated to a right multiplier bialgebroid as above are the canonical maps

λ T : A ⊗ A → A C ⊗ A C , a ⊗ b → (a ⊗ 1)∆(b),
ρ T : A ⊗ A → A C ⊗ A C , a ⊗ b → (1 ⊗ b)∆(a).

They make diagrams similar to those in (2.5) commute and factorize to maps

λ T : A C ⊗ A C → A C ⊗ A C , ρ T : A C ⊗ A C → A C ⊗ A C . (2.11)

2.3. Regular multiplier Hopf algebroids. We now combine the two structures.

2.3.1. Definition.
A multiplier bialgebroid A = (A, B, C, S B , S C , ∆ B , ∆ C ) consists of
(1) a non-degenerate, idempotent algebra A,
(2) subalgebras B, C ⊆ M (A) with anti-isomorphisms S B : B → C and S C : C → B,
(3) maps ∆ B : A → A B ×A B and ∆ C : A → A C ×A C ,
such that A B = (A, B, ι B , S B , ∆ B ) is a left multiplier bialgebroid, A C = (A, C, ι C , S C , ∆ C ) is a right multiplier bialgebroid, and the following mixed coassociativity conditions hold:

((∆ B ⊗ ι)((1 ⊗ c)∆ C (b)))(a ⊗ 1 ⊗ 1) = (1 ⊗ 1 ⊗ c)((ι ⊗ ∆ C )(∆ B (b)(a ⊗ 1))),
(a ⊗ 1 ⊗ 1)((∆ C ⊗ ι)(∆ B (b)(1 ⊗ c))) = ((ι ⊗ ∆ B )((a ⊗ 1)∆ C (b)))(1 ⊗ 1 ⊗ c) (2.12)

for all a, b, c ∈ A.

We call left counits of A B and right counits of A C just left and right counits, respectively, of A. Likewise, we call the canonical maps T λ , T ρ of A B and λ T , ρ T of A C just the canonical maps of A.

Given a multiplier bialgebroid (A, B, C, S B , S C , ∆ B , ∆ C ), consider the subspaces

I B := ⟨ω(a) : ω ∈ Hom( B A, B B), a ∈ A⟩, I B := ⟨ω(a) : ω ∈ Hom(A B , B B ), a ∈ A⟩,
I C := ⟨ω(a) : ω ∈ Hom(A C , C C ), a ∈ A⟩, I C := ⟨ω(a) : ω ∈ Hom( A C , C C), a ∈ A⟩

of B and C, respectively.

2.3.2. Definition. We call a multiplier bialgebroid (A, B, C, S B , S C , ∆ B , ∆ C ) a regular multiplier Hopf algebroid if the following conditions hold:
(1) the subspaces S B ( I B ) · A, I B · A, A · S C (I C ) and A · I C are equal to A;
(2) the canonical maps T λ , T ρ , λ T , ρ T are bijective.

If A is a multiplier Hopf algebroid as above, then the maps ∆ B and ∆ C can be extended to homomorphisms from M (A) to End( A B ⊗ A B ) and End( A C ⊗ A C ) op , respectively, such that

∆ B (T )∆ B (a)(b ⊗ c) = ∆ B (T a)(b ⊗ c), (a ⊗ b)∆ C (c)∆ C (T ) = (a ⊗ b)∆ C (cT )

for all T ∈ M (A) and a, b, c ∈ A, and then (2.1) and (2.7) take the form

∆ B (xy) = y ⊗ x, ∆ C (xy) = y ⊗ x, (2.13)

where y ⊗ x is regarded as an element of End( A B ⊗ A B ) and End( A C ⊗ A C ) op , respectively, via left or right multiplication.
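For orientation, in the special case B = C = C1, the balanced tensor products are the ordinary ones and the canonical maps in condition (2) of Definition 2.3.2 are the familiar Galois maps; their bijectivity is then the defining condition for a regular multiplier Hopf algebra. The following standard computation (our own, in Sweedler notation, assuming an antipode S with the usual counit property) makes the inverse explicit:

```latex
% For B = C = \mathbb{C}1 one has, in Sweedler notation,
%   T_\rho(a \otimes b) = \Delta(a)(1 \otimes b) = a_{(1)} \otimes a_{(2)} b,
% and the inverse is given by the standard formula
%   T_\rho^{-1}(a \otimes b) = a_{(1)} \otimes S(a_{(2)}) b,
% since
\[
  T_\rho\bigl(a_{(1)} \otimes S(a_{(2)})\,b\bigr)
  \;=\; a_{(1)} \otimes a_{(2)} S(a_{(3)})\, b
  \;=\; a_{(1)}\,\varepsilon(a_{(2)}) \otimes b
  \;=\; a \otimes b,
\]
% and similarly in the other order.  The maps T_\lambda, {}_\lambda T and
% {}_\rho T are treated analogously.
```

In the general algebroid case no such global antipode is assumed at the outset; instead, its existence is characterized by the main result of [18] recalled next.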
The main result in [18] is the characterization of regular multiplier Hopf algebroids in terms of an invertible antipode: a multiplier bialgebroid A as above is a regular multiplier Hopf algebroid if and only if there exists a bijective map S : A → A such that

(1) S(xyax′y′) = S C (y′)S B (x′)S(a)S C (y)S B (x) for all x, x′ ∈ B, y, y′ ∈ C, a ∈ A;
(2) there exist a left counit B ε and a right counit ε C for A making the diagrams (2.14) commute; in formulas,

m • (S ⊗ ι) • T ρ = S C ε C ⊗ ι and m • (ι ⊗ S) • λ T = ι ⊗ S B B ε (2.14)

as maps into A, where m denotes the multiplication maps.

In that case, the map S, the left counit B ε and the right counit ε C are uniquely determined.

Let A be a regular multiplier Hopf algebroid. Then the map S above is called the antipode of A, and the diagrams (2.15) and (2.16) commute; in formulas,

T ρ • (ι ⊗ S) • ρ T = ι ⊗ S, λ T • (S ⊗ ι) • T λ = S ⊗ ι, (2.15)
ρ T • Σ(S ⊗ S) = Σ(S ⊗ S) • T λ , T ρ • Σ(S ⊗ S) = Σ(S ⊗ S) • λ T , (2.16)

where Σ denotes the flip maps on the varying tensor products; see Theorem 6.8, Proposition 6.11 and Proposition 6.12 in [18]. Furthermore, by Corollary 5.12 in [18],

S B • B ε = ε C • S and S C • ε C = B ε • S. (2.17)

We shall also use the following multiplicativity of the counits, see (3.5) and (4.9) in [18]:

B ε(ab) = B ε(a B ε(b)) = B ε(aS B ( B ε(b))), ε C (ab) = ε C (ε C (a)b) = ε C (S C (ε C (a))b) (2.18)

for all a, b ∈ A.

Let us finally consider involutions. Here, condition (2) ensures that the map

(−) * ⊗ (−) * : A B ⊗ A B → A C ⊗ A C , a ⊗ b → a * ⊗ b *

is well-defined. If A is a multiplier Hopf * -algebroid as above, then its left and right counits B ε, ε C and its antipode S satisfy

ε C • * = * • S B • B ε, B ε • * = * • S C • ε C , S • * • S • * = ι A . (2.19)

We call a multiplier bialgebroid A unital if the algebras A, B, C and the maps S B , S C , ∆ B , ∆ C are unital. In that case, it is easy to see that also the antipode and the left and the right counit are unital.
Such unital multiplier bialgebroids correspond to usual bialgebroids as defined, for example, in [5,2], and regular multiplier Hopf algebroids correspond to Hopf algebroids whose antipode is invertible, see [18, Propositions 3.2, 5.13].

2.4.2. Example (Weak multiplier Hopf algebras). Let (A, ∆) be a regular weak multiplier Hopf algebra with counit ε and antipode S; see [24]. Then one can define maps ε s , ε t : A → M (A) such that

ε s (a)b = S(a (1) )a (2) b, bε t (a) = ba (1) S(a (2) ) (2.20)

for all a, b ∈ A. Let B = ε s (A) and C = ε t (A). Then the extension of the antipode S to M (A) restricts to anti-isomorphisms S B : B → C and S C : C → B. Denote by π B : A ⊗ A → A B ⊗ A B and π C : A ⊗ A → A C ⊗ A C the quotient maps. Then the formulas

∆ B (a)(b ⊗ c) := π B (∆(a)(b ⊗ c)), (a ⊗ b)∆ C (c) := π C ((a ⊗ b)∆(c))

define a left comultiplication ∆ B and a right comultiplication ∆ C such that A = (A, B, C, S B , S C , ∆ B , ∆ C ) becomes a regular multiplier Hopf algebroid, see [19, Theorem 4.8]. Its antipode coincides with S, and its left counit and right counit are given by B ε = S −1 • ε t and ε C = S −1 • ε s , respectively.

2.4.3. Example (Function algebra of an étale groupoid). Let G be a locally compact, Hausdorff groupoid which is étale in the sense that the source and the target maps s, t : G → G 0 are open and local homeomorphisms [12]. Denote by C c (G) and C c (G 0 ) the algebras of compactly supported continuous functions on G and on G 0 , respectively, and by s * , t * : C c (G 0 ) → M (C c (G)) the pull-back of functions along s and t, respectively. Let A = C c (G), B = s * (C c (G 0 )) and C = t * (C c (G 0 )), and denote by S B , S C the isomorphisms B ⇄ C mapping s * (f ) to t * (f ) and vice versa.
Since G is étale, the natural map A ⊗ A → C c (G × G) factorizes to an isomorphism B A ⊗ A B = A C ⊗ A C ≅ C c (G s × t G), where G s × t G denotes the set of composable pairs of elements of G. Denote by ∆ B , ∆ C : C c (G) → M (C c (G s × t G)) the pull-back of functions along the groupoid multiplication, that is,

(∆ B (f )(g ⊗ h))(γ, γ′) = f (γγ′)g(γ)h(γ′) = ((g ⊗ h)∆ C (f ))(γ, γ′)

for all f, g, h ∈ A and γ, γ′ ∈ G. Then (A, B, C, S B , S C , ∆ B , ∆ C ) is a multiplier Hopf * -algebroid with counits and antipode given by

B ε(f ) = s * (f | G 0 ), ε C (f ) = t * (f | G 0 ), (S(f ))(γ) = f (γ −1 )

for all f ∈ C c (G).

2.4.4. Example (Convolution algebra of an étale groupoid). Let G be a locally compact, étale, Hausdorff groupoid again. Then the space C c (G) can also be regarded as a * -algebra with respect to the convolution product and involution given by

(f * g)(γ) = Σ_{γ=γ′γ′′} f (γ′)g(γ′′), f * (γ) = f (γ −1 ).

Since G is étale, G 0 is closed and open in G, and the function algebra C c (G 0 ) embeds into the convolution algebra C c (G). Denote by Â this convolution algebra, let B̂ = Ĉ = C c (G 0 ) ⊆ Â, and let Ŝ B̂ = Ŝ Ĉ = ι C c (G 0 ) . Then the natural map Â ⊗ Â → C c (G × G) factorizes to isomorphisms

B̂ Â ⊗ Â B̂ ≅ C c (G t × t G), Ĉ Â ⊗ Â Ĉ ≅ C c (G s × s G), (2.21)

and we obtain a multiplier Hopf * -algebroid (Â, B̂, Ĉ, Ŝ B̂ , Ŝ Ĉ , ∆ B̂ , ∆ Ĉ ), where

(∆ B̂ (f )(g ⊗ h))(γ′, γ′′) = Σ_{t(γ)=t(γ′)} f (γ)g(γ −1 γ′)h(γ −1 γ′′),
((g ⊗ h)∆ Ĉ (f ))(γ′, γ′′) = Σ_{s(γ)=s(γ′)} g(γ′γ −1 )h(γ′′γ −1 )f (γ).

Its counits and antipode are given by

( B̂ ε(f ))(u) = Σ_{t(γ′)=u} f (γ′), (ε Ĉ (f ))(u) = Σ_{s(γ′)=u} f (γ′), (Ŝ(f ))(γ) = f (γ −1 ).

2.4.5. Example (Tensor product). Let B and C be non-degenerate, idempotent algebras with anti-isomorphisms S B : B → C and S C : C → B. Then the tensor product A = C ⊗ B becomes a regular multiplier Hopf algebroid (A, B, C, S B , S C , ∆ B , ∆ C ), where

∆ B (y ⊗ x)(a ⊗ a′) = ya ⊗ xa′, (a ⊗ a′)∆ C (y ⊗ x) = ay ⊗ a′x,
B ε(y ⊗ x) = xS −1 B (y), ε C (y ⊗ x) = S −1 C (x)y, S(y ⊗ x) = S B (x) ⊗ S C (y)

for all x ∈ B, y ∈ C, a, a′ ∈ A. The following example is a special case of the extension of scalars considered in [2, §4.1.5].

2.4.6. Example (Symmetric crossed product).
Let C be a non-degenerate, idempotent, commutative algebra with a unital left action of a regular multiplier Hopf algebra (H, ∆ H ), that is, h ⊲ (yy′) = (h (1) ⊲ y)(h (2) ⊲ y′) for all y, y′ ∈ C and h ∈ H [7], and assume that the action is symmetric in the sense that

h (1) ⊗ h (2) ⊲ y = h (2) ⊗ h (1) ⊲ y (2.22)

for all h ∈ H and y ∈ C. If the action is faithful, then symmetry follows easily from commutativity of C, but in general, it is an extra assumption and equivalent to the Yetter-Drinfeld condition for the given action and the trivial coaction of H on C. As an example, in the case that H is the function algebra of a discrete group Γ, a symmetric action of C c (Γ) corresponds to a grading by the center of Γ.

Denote by A = C#H the usual smash product or crossed product, that is, the vector space C ⊗ H with multiplication given by

(y ⊗ h)(y′ ⊗ h′) = y(h (1) ⊲ y′) ⊗ h (2) h′.

Then C and H can naturally be identified with subalgebras of M (A). We obtain a regular multiplier Hopf algebroid (A, B, C, S B , S C , ∆ B , ∆ C ), where B = C, S B = S C = ι C and

∆ B (yh)(a ⊗ a′) = yh (1) a ⊗ h (2) a′ = h (1) a ⊗ yh (2) a′,
(a ⊗ a′)∆ C (hy) = ah (1) y ⊗ a′h (2) = ah (1) ⊗ a′h (2) y

for all y ∈ C, h ∈ H and a, a′ ∈ A. Its antipode and counits are given by

S(yh) = S H (h)y, B ε(yh) = yε H (h) = ε C (hy) (2.23)

for all y ∈ C and h ∈ H, where S H and ε H denote the antipode and counit of (H, ∆ H ). The verification is straightforward.

2.4.7. Example (A two-sided crossed product). Let B, C, S B and S C be as above and let H be a regular multiplier Hopf algebra with a unital left action on C and a unital right action on B such that for all h ∈ H, x, x′ ∈ B, y, y′ ∈ C,

(xx′) ⊳ h = (x ⊳ h (1) )(x′ ⊳ h (2) ), h ⊲ (yy′) = (h (1) ⊲ y)(h (2) ⊲ y′), (2.24)
S B (x ⊳ h) = S H (h) ⊲ S B (x), S C (h ⊲ y) = S C (y) ⊳ S H (h), (2.25)

where S H denotes the antipode of H.
Then the space A = C ⊗ H ⊗ B is a non-degenerate, idempotent algebra with respect to the product

(y ⊗ h ⊗ x)(y′ ⊗ h′ ⊗ x′) = y(h (1) ⊲ y′) ⊗ h (2) h′ (1) ⊗ (x ⊳ h′ (2) )x′,

and one obtains a regular multiplier Hopf algebroid (A, B, C, S B , S C , ∆ B , ∆ C ), where

∆ B (yhx)(a ⊗ a′) = yh (1) a ⊗ h (2) xa′, (a ⊗ a′)∆ C (yhx) = ayh (1) ⊗ a′h (2) x

for all x ∈ B, y ∈ C, h ∈ H, a, a′ ∈ A. Its counits and antipode are given by

B ε(xhy) = xS −1 B (h ⊲ y), ε C (xhy) = S −1 C (x ⊳ h)y, S(yhx) = S B (x)S H (h)S C (y).

3. Partial integrals and quasi-invariant base weights

This section introduces the basic ingredients for integration on a regular multiplier Hopf algebroid A = (A, B, C, S B , S C , ∆ B , ∆ C ). These are partial left and partial right integrals, which are maps C φ C : A → C and B ψ B : A → B satisfying suitable invariance conditions with respect to the comultiplications, and base weights (µ B , µ C ), which are functionals on B and C, respectively, that are quasi-invariant with respect to C φ C and B ψ B .

We first formulate the appropriate left- and right-invariance for maps from A to B and C (subsection 3.1). In the case of Hopf algebroids, such invariant maps were studied already in [1], and yield conditional expectations onto the orbit algebra of A (subsection 3.2). We then discuss the quasi-invariance assumption on the functionals µ B and µ C (subsection 3.3) and study the algebraic implications thereof (subsection 3.4). Along the way, we keep an eye on the examples of multiplier Hopf algebroids introduced in subsection 2.4.

3.1. Partial integrals. We use the notation introduced in Section 2.

3.1.1. Proposition. Let A be a regular multiplier Hopf algebroid.
(1) For every linear map B ψ B : A → B, the following conditions are equivalent:
(a) for all a, b ∈ A,

( B ψ B ⊗ ι)(∆ B (a)(1 ⊗ b)) = B ψ B (a)b; (3.1)

(b) B ψ B ∈ Hom(A B , B B ) and for all a, b ∈ A,

(S −1 C • B ψ B ⊗ ι)((1 ⊗ b)∆ C (a)) = b B ψ B (a); (3.2)

(c) B ψ B ∈ Hom( A B B , B B B ) and the diagram (3.3) commutes, that is,

( B ψ B ⊗ ι) • T λ • Σ = S • (S −1 C • B ψ B ⊗ ι) • λ T as maps A C ⊗ C A → A. (3.3)

(2) For every linear map C φ C : A → C, the following conditions are equivalent:
(a) C φ C ∈ Hom( C A, C C) and for all a, b ∈ A,

(ι ⊗ S −1 B • C φ C )(∆ B (b)(a ⊗ 1)) = C φ C (b)a; (3.4)

(b) C φ C ∈ Hom(A C , C C ) and for all a, b ∈ A,

(ι ⊗ C φ C )((a ⊗ 1)∆ C (b)) = a C φ C (b); (3.5)

(c) C φ C ∈ Hom( A C C , C C C ) and the diagram (3.6) commutes, that is,

(ι ⊗ C φ C ) • ρ T • Σ = S • (ι ⊗ S −1 B • C φ C ) • T ρ as maps A B ⊗ B A → A. (3.6)

Proof. We only prove (1) because (2) is similar. If (a) holds, then

B ψ B (ax)b = ( B ψ B ⊗ ι)(∆ B (ax)(1 ⊗ b)) = ( B ψ B ⊗ ι)(∆ B (a)(1 ⊗ xb)) = B ψ B (a)xb

for all x ∈ B and a, b ∈ A by (2.1), and hence B ψ B ∈ Hom( A B B , B B B ). A similar application of (2.7) shows that the same conclusion holds if (b) is satisfied.

Suppose now that B ψ B ∈ Hom( A B B , B B B ). We show that (1a) and (1b) are equivalent. The equations (3.2) and (3.1) are equivalent to commutativity of the triangle on the left hand side or on the right hand side, respectively, in a diagram whose outer square is the first identity in (2.15) and whose triangles are formed by the slice maps B ψ B ⊗ ι and S −1 C • B ψ B ⊗ ι together with the antipode S : A → A. [Diagram not reproduced.]

To see that (1a) and (1c) are equivalent, consider a second diagram, built from T λ , λ T • Σ, ι ⊗ S and T ρ together with the same slice maps. The upper left cell commutes by [18, Proposition 5.8], and the lower cell commutes by inspection. Therefore, the outer cell commutes if and only if the triangle on the right commutes. [Diagram not reproduced.]

3.1.2. Definition. Let A be a regular multiplier Hopf algebroid. A partial right integral on A is a map B ψ B : A → B satisfying the equivalent conditions in Proposition 3.1.1 (1), and a partial left integral on A is a map C φ C : A → C satisfying the equivalent conditions in Proposition 3.1.1 (2).
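For orientation (our own remark, not from the text): when B = C = C1, a partial integral is just a linear functional, and condition (3.5) becomes Van Daele's left invariance for integrals on a multiplier Hopf algebra. For the function algebra of a discrete group G it is verified by summation:

```latex
% For A = C_c(G), G a discrete group (so G^0 is a point), consider
% \varphi(f) = \sum_{\gamma \in G} f(\gamma).  Then, for g, f \in A,
\[
  (\iota \otimes \varphi)\bigl((g \otimes 1)\Delta(f)\bigr)(\gamma)
  \;=\; g(\gamma) \sum_{\gamma' \in G} f(\gamma\gamma')
  \;=\; g(\gamma)\,\varphi(f),
\]
% because \gamma' \mapsto \gamma\gamma' is a bijection of G for each
% fixed \gamma.  Hence (\iota \otimes \varphi)((g \otimes 1)\Delta(f))
% = g\,\varphi(f), i.e. \varphi is a left integral, and here the partial
% and the total integral coincide.
```

In the genuinely groupoid-valued examples below, the same computation happens fiber-wise over the unit space, which is exactly what the partial integrals of Definition 3.1.2 encode.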
Regard Hom( A C C , C C C ) as an M (B)-bimodule and Hom( A B B , B B B ) as an M (C)-bimodule, where (x · C φ C · x′)(a) = C φ C (x′ax) and (y · B ψ B · y′)(a) = B ψ B (y′ay).

3.1.3. Proposition. Let A be a regular multiplier Hopf algebroid.
(1) All partial left integrals form an M (B)-sub-bimodule of Hom( A C C , C C C );
(2) all partial right integrals form an M (C)-sub-bimodule of Hom( A B B , B B B );
(3) the maps C φ C → S ±1 • C φ C • S ∓1 are bijections between all partial left and all partial right integrals.

Proof. This follows easily from (2.1), (2.7) and (2.16).

Thus, a regular multiplier Hopf algebroid has a surjective partial left integral if and only if it has a surjective partial right integral. In the following result, we use the extension of ∆ B and ∆ C to multipliers as described in Section 2 and explained after (2.13), and identify M (B) and M (C) with subalgebras of M (A).

3.1.4. Proposition. Let A be a regular multiplier Hopf algebroid which has a surjective partial integral. Then

M (B) = {z ∈ M (A) ∩ C′ : ∆ B (z) = 1 ⊗ z} = {z ∈ M (A) ∩ C′ : ∆ C (z) = 1 ⊗ z},
M (C) = {z ∈ M (A) ∩ B′ : ∆ B (z) = z ⊗ 1} = {z ∈ M (A) ∩ B′ : ∆ C (z) = z ⊗ 1}.

Proof. We only prove the first equality. The inclusion ⊆ follows from (2.1). To prove the reverse inclusion, suppose that B ψ B is a surjective right integral and that z ∈ M (A) ∩ C′ satisfies ∆ B (z) = 1 ⊗ z. Then for all a, b ∈ A,

B ψ B (za)b = ( B ψ B ⊗ ι)(∆ B (za)(1 ⊗ b)) = z( B ψ B ⊗ ι)(∆ B (a)(1 ⊗ b)) = z B ψ B (a)b,
B ψ B (az)b = ( B ψ B ⊗ ι)(∆ B (az)(1 ⊗ b)) = ( B ψ B ⊗ ι)(∆ B (a)(1 ⊗ zb)) = B ψ B (a)zb,

and hence z B ψ B (a) = B ψ B (za) ∈ B and B ψ B (a)z = B ψ B (az) ∈ B. Since B ψ B is surjective, we can conclude that z ∈ M (B).

Let us return to the examples introduced in subsection 2.4.

3.1.5. Example. Let G be a locally compact, étale Hausdorff groupoid and regard the function algebra C c (G) as a multiplier Hopf * -algebroid as in Example 2.4.3.
Then for each h ∈ C(G 0 ), the maps C φ (h) C : C c (G) → t * (C c (G 0 )) and B ψ (h) B : C c (G) → s * (C c (G 0 )) given by

( C φ (h) C (f ))(γ) = Σ_{γ′∈G, t(γ′)=t(γ)} f (γ′)h(s(γ′)), ( B ψ (h) B (f ))(γ) = Σ_{γ′∈G, s(γ′)=s(γ)} f (γ′)h(t(γ′))

are a partial left and a partial right integral, respectively.

3.1.6. Example. Let G be as above and regard the convolution algebra C c (G) as a multiplier Hopf * -algebroid as in Example 2.4.4. Then for each h ∈ C(G 0 ), the map C φ (h) C : C c (G) → C c (G 0 ) given by ( C φ (h) C (f ))(u) = f (u)h(u) is easily seen to be a partial left and a partial right integral.

3.1.7. Example. Consider the tensor product A = C ⊗ B discussed in Example 2.4.5. For all υ ∈ B ∨ and ω ∈ C ∨ , the maps C φ (υ) C := ι ⊗ υ : A → C and B ψ (ω) B := ω ⊗ ι : A → B are a partial left and a partial right integral, respectively, as one can easily check again.

3.1.8. Example. Consider the symmetric crossed product A = C#H constructed in Example 2.4.6. If φ H is a left integral and ψ H is a right integral for the multiplier Hopf algebra H, then the maps

C φ C : C#H → C, yh → yφ H (h), B ψ B : C#H → C = B, yh → yψ H (h) (3.7)

are a partial left and a partial right integral, respectively. Note that

C φ C (hy) = (h (1) ⊲ y)φ H (h (2) ) = yφ H (h), B ψ B (hy) = (h (2) ⊲ y)ψ H (h (1) ) = yψ H (h). (3.8)

3.1.9. Example. Consider the two-sided crossed product A = C#H#B discussed in Example 2.4.7. Suppose that the multiplier Hopf algebra H has a left integral φ H , and let υ ∈ B ∨ . Then the map

C φ (υ) C : A → C, yhx → yφ H (h)υ(x),

is a partial left integral. Indeed, clearly C φ (υ) C ∈ Hom( C A, C C), and

(ι ⊗ S −1 B • C φ (υ) C )(∆ B (yhx)(y′h′x′ ⊗ 1)) = y(h (1) ⊲ y′)h (2) h′x′ · φ H (h (3) )υ(x) = yy′h′x′ φ H (h)υ(x) = C φ (υ) C (yhx) y′h′x′

for all y, y′ ∈ C, x, x′ ∈ B and h, h′ ∈ H, by left-invariance of φ H .
Similarly, if ψ H is a right integral of H and if ω ∈ C ∨ , then the map B ψ (ω) B : A → B, yhx → ω(y)ψ H (h)x, is right-invariant.

3.2. Expectations onto the orbit algebra in the proper case. We show that for a proper regular multiplier Hopf algebroid, partial integrals restrict to conditional expectations onto the orbit algebra and are completely determined by these restrictions. Let us first define these terms.

3.2.1. Definition. We call a regular multiplier Hopf algebroid A as above proper if BC ⊆ A in M (A). The orbit algebra of A is the subalgebra O := M (B) ∩ M (C) ⊆ M (A). We call A ergodic if O = C1.

Clearly, the orbit algebra O is central in M (B) and in M (C).

3.2.2. Proposition. The antipode S of a regular multiplier Hopf algebroid A acts trivially on its orbit algebra.

Proof. Let z ∈ O. By Proposition 3.1.4, we have for all a, b ∈ A
a ⊗ S B (z)b = za ⊗ b = ∆ B (z)(a ⊗ b) = a ⊗ zb
in A B ⊗ A B . We apply B ε ⊗ ι, use the relations B ε(A)A = A and z ∈ C ′ ∩ B ′ ∩ M (A), and find S B (z)c = zc for all c ∈ A.

In the proper case, partial integrals extend to conditional expectations on the base algebras and are completely determined by these extensions.

3.2.3. Proposition. Let A be a proper regular multiplier Hopf algebroid with a partial left integral C φ C and a partial right integral B ψ B . Then the extensions B ψ B | C : C → ZM (B) and C φ C | B : B → ZM (C) defined by B ψ B | C (y)x = B ψ B (yx) and C φ C | B (x)y = C φ C (xy) take values in the orbit algebra O and we have
C φ C | B • B ψ B = B ψ B | C • C φ C (3.9)
as maps from A to O.

Proof. To see that the extension C φ C | B takes values in O, let a ∈ A, x, x ′ ∈ B, y ∈ C. Then
C φ C | B (x)yx ′ a = C φ C (xy)x ′ a = (ι ⊗ S −1 B C φ C )(∆ B (xy)(x ′ a ⊗ 1)) = (ι ⊗ S −1 B C φ C )(ya ⊗ xS B (x ′ )) = S −1 B ( C φ C (xS B (x ′ )))ya = S −1 B ( C φ C | B (x))x ′ ya,
and hence C φ C | B (x) ∈ O. A similar argument shows that B ψ B | C (y) ∈ O for all y ∈ C.

To prove (3.9), let a ∈ A, x, x ′ , x ′′ ∈ B and y ∈ C.
Then the expression ( B ψ B ⊗ S −1 B C φ C )(∆ B (a)(yx ′ ⊗ S B (x ′′ )x)) is equal to
B ψ B ( C φ C (ax)x ′′ yx ′ ) = B ψ B | C ( C φ C (axy))x ′′ x ′
by left-invariance of C φ C , and to
S −1 B C φ C ( B ψ B (ay)S B (x ′′ x ′ )x) = S −1 B ( C φ C | B ( B ψ B (axy)))x ′′ x ′
by right-invariance of B ψ B . Hence, B ψ B | C • C φ C = S −1 B • C φ C | B • B ψ B . With Proposition 3.2.2, the claim follows.

3.2.4. Remark. (1) If there are sufficiently many partial integrals in the sense that (extensions of) the partial left integrals separate the points of B, which by Proposition 3.1.3 is equivalent to partial right integrals separating the points of C, then (3.9) implies that every partial left integral C φ C and every partial right integral B ψ B is uniquely determined by the extension C φ C | B or B ψ B | C , respectively.
(2) In the proper and ergodic case, every partial left integral C φ C determines a functional µ B on B such that C φ C | B (x) = µ B (x)1 ∈ M (A), and every partial right integral B ψ B determines a functional µ C on C such that B ψ B | C (y) = µ C (y)1 ∈ M (A). Under the assumption in (1), C φ C and B ψ B are uniquely determined by these functionals.

3.3. Quasi-invariant base weights. Let A be a regular multiplier Hopf algebroid, not necessarily proper. To obtain total integrals on A, we compose partial integrals with suitable functionals µ B and µ C on the base algebras B and C, respectively. On these functionals, we impose several conditions.

3.3.1. Definition. A base weight for A is a pair of faithful functionals (µ B , µ C ) on the algebras B and C, respectively. We call such a base weight
(1) antipodal if µ B • S C = µ C and µ C • S B = µ B ,
(2) modular if µ B = µ B • S C • S B and µ C = µ C • S B • S C . (3.10)
We shall later introduce another condition, counitality, which implies condition (2) and in the unital case also (1).

3.3.2. Definition.
We call a base weight (µ B , µ C ) quasi-invariant with respect to
(1) a partial left integral C φ C if the functional φ := µ C • C φ C can be written
φ = µ B • B φ = µ B • φ B (3.11)
with maps B φ ∈ Hom( B A, B B) and φ B ∈ Hom(A B , B B );
(2) a partial right integral B ψ B if the functional ψ := µ B • B ψ B can be written
ψ = µ C • C ψ = µ C • ψ C (3.12)
with maps C ψ ∈ Hom( C A, C C) and ψ C ∈ Hom(A C , C C ).
We call a functional φ or ψ of the form above a total left or total right integral.

We next consider some special cases and examples. In these examples, we shall frequently deduce quasi-invariance from the existence of certain modular multipliers:

3.3.3. Lemma. Let C φ C be a partial left and B ψ B a partial right integral for A. Suppose that (µ B , µ C ) is a base weight and that there exist invertible multipliers δ ∈ R(A), δ ′ ∈ L(A) such that the functionals φ := µ C • C φ C and ψ := µ B • B ψ B satisfy ψ(aδ) = φ(a) = ψ(δ ′ a) for all a ∈ A. Then (µ B , µ C ) is quasi-invariant with respect to C φ C and B ψ B .

Proof. The maps B φ, φ B , C ψ, ψ C defined by the formulas B φ(a) := B ψ B (aδ), φ B (a) := B ψ B (δ ′ a), C ψ(a) := C φ C (aδ −1 ) and ψ C (a) := C φ C (δ ′−1 a) satisfy (3.11) and (3.12).

Under suitable assumptions, a converse holds; see Corollary 6.2.1 and Theorem 6.3.1.

In the proper case, quasi-invariant base weights can be constructed by an analogue of [12, Proposition 3.6] from functionals on the orbit algebra O = M (B) ∩ M (C). If A is also ergodic, this choice picks just a scalar; see Remark 3.2.4.

3.3.4. Lemma. Let A be a proper regular multiplier Hopf algebroid with a partial left integral C φ C , a partial right integral B ψ B and a faithful functional τ on its orbit algebra. Suppose that the extensions C φ C | B and B ψ B | C are faithful.
(1) The functionals µ B := τ • C φ C | B and µ C := τ • B ψ B | C form a base weight and µ B • B ψ B = µ C • C φ C . In particular, (µ B , µ C ) is quasi-invariant with respect to C φ C and B ψ B .
(2) This base weight is antipodal if and only if S −1 • C φ C • S = B ψ B = S • C φ C • S −1 .

Proof. (1) First, observe that the functionals µ B and µ C are faithful. For example, if µ B (xx ′ ) = 0 for some x ∈ B and all x ′ ∈ B, then τ ( C φ C | B (xx ′ )z) = µ B (xx ′ z) = 0 for all z ∈ O, and by faithfulness of τ and C φ C , we first conclude that C φ C | B (xx ′ ) = 0 for all x ′ ∈ B and then that x = 0. Next, Proposition 3.2.3 implies
µ C • C φ C = τ • B ψ B | C • C φ C = τ • C φ C | B • B ψ B = µ B • B ψ B .

(2) Suppose that (µ B , µ C ) is antipodal. Then by Proposition 3.2.2,
τ (z C φ C (S C (y))) = µ B (S C (zy)) = µ C (zy) = τ (z B ψ B (y))
for all z ∈ O and y ∈ C, whence C φ C | B • S C = B ψ B | C . By Propositions 3.1.3 (3) and 3.2.2 and Remark 3.2.4 (1), we can conclude that S −1 • C φ C • S = B ψ B . Similarly, the relation µ C • S B = µ B implies that S • C φ C • S −1 = B ψ B . The converse implication follows easily from the definitions and Proposition 3.2.2.

In the case of regular multiplier Hopf algebroids arising from weak multiplier Hopf algebras as in Example 2.4.2, there exists a canonical base weight which satisfies all of our conditions.

3.3.5. Lemma. Suppose that A is a regular multiplier Hopf algebroid associated to a regular weak multiplier Hopf algebra (A, ∆).
(1) The functionals on B = ε s (A) and C = ε t (A) defined by
µ B (ε s (a)) = ε(a) and µ C (ε t (a)) = ε(a) (3.13)
form an antipodal, modular base weight that is quasi-invariant with respect to every partial integral.
(2) The assignment C φ C → µ C • C φ C defines a bijection between all partial left integrals C φ C and all functionals φ on A satisfying (ι ⊗ φ)((b ⊗ 1)∆(a)) = (ι ⊗ φ)((b ⊗ a)E) for all a, b ∈ A, where E = ∆(1) ∈ M (B ⊗ C) is the canonical separability idempotent of (A, ∆).

Assertion (1) is a particular case of a more general result, see Example 3.4.3 and the comments there. Of course, (2) has an analogue for right integrals.

Proof.
(1) The results in [22, Propositions 1.7, 2.1 and 4.8] and the remarks following Definition 2.2 in [22] show that the functionals µ B and µ C form an antipodal, modular base weight and that the separability idempotent E satisfies
(µ B ⊗ ι)(E) = ι C , (ι ⊗ µ C )(E) = ι B , (3.14)
(µ B ⊗ ι)(E(x ⊗ 1)) = S B (x), (ι ⊗ µ C )((1 ⊗ y)E) = S C (y). (3.15)
Let C φ C be a partial left integral, write φ = µ C • C φ C and define B φ, φ B : A → M (B) by
φ B (a) := (φ ⊗ S C )((a ⊗ 1)E), B φ(a) := (φ ⊗ S −1 B )(E(a ⊗ 1)).
These maps take values in B because (a ⊗ 1)E and E(a ⊗ 1) lie in A ⊗ C. By antipodality of (µ B , µ C ) and by (3.14),
µ B (φ B (a)) = (φ ⊗ µ C )((a ⊗ 1)E) = φ(a) = (φ ⊗ µ C )(E(a ⊗ 1)) = µ B ( B φ(a)).
Moreover, by (3.15),
φ B (aS C (y)) = (φ ⊗ S C )((aS C (y) ⊗ 1)E) = (φ ⊗ S C )((a ⊗ y)E) = (φ ⊗ S C )((a ⊗ 1)E)S C (y) = φ B (a)S C (y),
and a similar calculation shows that B φ(xa) = x B φ(a) for all a ∈ A, x ∈ B. A similar argument shows that (µ B , µ C ) is quasi-invariant with respect to every partial right integral.

(2) Let φ be a functional on A. A similar argument as above shows that we can write φ = µ C • φ C with some φ C ∈ Hom(A C , C C ). Let now a, b ∈ A and write (a ⊗ 1)∆(b) = Σ i c i ⊗ d i with c i , d i ∈ A. Then (a ⊗ 1)∆ C (b) = Σ i c i ⊗ d i ∈ A C ⊗ A C . Since ∆(b) = ∆(b)E and φ(ey) = µ C (φ C (e)y) for all e ∈ A and y ∈ C,
(ι ⊗ φ)((a ⊗ 1)∆(b)) = Σ i (ι ⊗ φ)((c i ⊗ d i )E) = Σ i c i (ι ⊗ µ C )((1 ⊗ φ C (d i ))E).
But (3.15) implies that (ι ⊗ µ C )((1 ⊗ φ C (d i ))E) = S C (φ C (d i )) and hence
(ι ⊗ φ)((a ⊗ 1)∆(b)) = Σ i c i S C (φ C (d i )) = (ι ⊗ S C • φ C )((a ⊗ 1)∆ C (b)).
On the other hand, a similar calculation shows that (ι ⊗ φ)((a ⊗ b)E) = aS C (φ C (b)). The assertion follows.

Let us next consider the examples introduced in Subsection 2.4.

3.3.6. Example. Consider the function algebra of a locally compact, étale Hausdorff groupoid G; see Example 2.4.3. Suppose that µ is a Radon measure on the space of units G 0 with full support.
Then the functionals µ B and µ C given on B = s * (C c (G 0 )) and C = t * (C c (G 0 )) by
µ B (s * (f )) = ∫ G 0 f dµ = µ C (t * (f )) (3.16)
for all f ∈ C c (G 0 ) form an antipodal, modular, positive base weight. Consider the partial left integral C φ C and the partial right integral B ψ B given by
( C φ C (f ))(γ) = Σ γ ′ ∈G, t(γ ′ )=t(γ) f (γ ′ ), ( B ψ B (f ))(γ) = Σ γ ′ ∈G, s(γ ′ )=s(γ) f (γ ′ ). (3.17)
The compositions φ := µ C • C φ C and ψ := µ B • B ψ B correspond to integration with respect to the measures ν and ν −1 on G defined by
∫ G f dν = φ(f ) = ∫ G 0 Σ t(γ)=u f (γ) dµ(u), ∫ G f dν −1 = ψ(f ) = ∫ G 0 Σ s(γ)=u f (γ) dµ(u). (3.18)
Assume that the measure µ on G 0 is continuously quasi-invariant in the sense that the measures ν and ν −1 on G are related by a continuous Radon-Nikodym derivative, ν = Dν −1 for some D ∈ C(G). Then φ(f ) = ψ(f D) for all f ∈ C c (G) and hence (µ B , µ C ) is quasi-invariant with respect to C φ C and B ψ B . Conversely, we shall see in Example 6.3.3 that if (µ B , µ C ) is quasi-invariant with respect to C φ C (or B ψ B ), then µ is continuously quasi-invariant.

3.3.7. Example. Let G be as above and consider the convolution algebra C c (G) as in Example 2.4.4. Suppose that µ is a Radon measure on the space of units G 0 with full support and consider the functional
µ̂ : C c (G 0 ) → C, f → ∫ G 0 f dµ. (3.19)
Since both base algebras B̂ and Ĉ coincide with C c (G 0 ), the pair (µ̂, µ̂) is an antipodal, modular, positive base weight, and quasi-invariant with respect to every partial integral.

In the following example, we use the preorder ≼ on B ∨ defined in (1.6).

3.3.8. Example. Consider a tensor product A = C ⊗ B as discussed in Example 2.4.5, and the partial integrals C φ (υ) C := ι ⊗ υ : A → C, B ψ (ω) B := ω ⊗ ι : A → B from Example 3.1.7. If υ ≼ µ B , witnessed by multipliers δ, δ ′ of B with υ(x) = µ B (xδ) = µ B (δ ′ x) for all x ∈ B, then (µ B , µ C ) is quasi-invariant with respect to C φ (υ) C : indeed,
µ C ( C φ (υ) C (y ⊗ x)) = µ C (y)υ(x) = µ B (xµ C (y)δ) = µ B (µ C (y)δ ′ x),
and the maps B φ : y ⊗ x → µ C (y)xδ and φ B : y ⊗ x → µ C (y)δ ′ x satisfy (3.11).
Conversely, if (µ B , µ C ) is quasi-invariant with respect to C φ (υ) C and if y ∈ C is chosen such that µ C (y) = 1, then the formulas xδ := B φ(y ⊗ x) and δ ′ x := φ B (y ⊗ x) define a right and a left multiplier of B such that µ B (xδ) = υ(x) = µ B (δ ′ x). Similarly, (µ B , µ C ) is quasi-invariant with respect to B ψ (ω) B if and only if ω ≼ µ C .

3.3.9. Example. Consider a symmetric crossed product A = C#H as discussed in Example 2.4.6. Suppose that µ is a faithful functional on C and that φ H is a left and ψ H is a right integral on (H, ∆ H ). Then (µ, µ) is an antipodal, modular base weight and as such quasi-invariant with respect to the partial integrals C φ C and B ψ B defined in Example 3.1.8. Indeed, by [21, Theorem 3.7, Propositions 3.10, 3.12], there exist invertible multipliers δ H , δ ′ H such that ψ H = δ H · φ H = φ H · δ ′ H ; the compositions φ = µ • C φ C and ψ = µ • B ψ B then satisfy
ψ(yhδ H ) = µ(y)ψ H (hδ H ) = µ(y)φ H (h) = φ(yh)
for all y ∈ C and h ∈ H, and a similar calculation using (3.8) shows that φ(a) = ψ(δ ′ H a) for all a ∈ A. Now, quasi-invariance follows from Lemma 3.3.3.

3.3.10. Example. Consider a two-sided crossed product A = C#H#B as discussed in Example 2.4.7. Suppose that µ B and µ C are faithful functionals on B and C, respectively, which are H-invariant in the sense that
µ C (h ⊲ y) = ε H (h)µ C (y), µ B (x ⊳ h) = ε H (h)µ B (x) (3.20)
for all y ∈ C, x ∈ B and h ∈ H. Then the base weight (µ B , µ C ) is quasi-invariant with respect to the partial integrals C φ C and B ψ B defined by C φ C (yhx) := yφ H (h)µ B (x) and B ψ B (yhx) := µ C (y)φ H (h)x; see also Example 3.1.9. Indeed, if δ H , δ ′ H ∈ M (H) are as in the preceding example, then (3.20) implies that the compositions φ = µ C • C φ C and ψ = µ B • B ψ B satisfy φ(a) = ψ(aδ H ) = ψ(δ ′ H a) for all a ∈ A, and quasi-invariance follows from Lemma 3.3.3.

3.4. Factorizable functionals. We now focus on the algebraic aspects of the quasi-invariance condition introduced above.
The proper context for this is that of functionals on bimodules. For the moment, B and C denote arbitrary non-degenerate, idempotent algebras, not necessarily coming from a regular multiplier Hopf algebroid.

3.4.1. Definition. Let B and C be non-degenerate, idempotent algebras with faithful functionals µ B and µ C , respectively, and let M be an idempotent B-C-bimodule. We call a functional ω ∈ M ∨ factorizable (with respect to µ B and µ C ) if there exist maps B ω ∈ Hom( B M, B B) and ω C ∈ Hom(M C , C C ) such that
µ B • B ω = ω = µ C • ω C . (3.21)
We denote by M ⊔ ⊆ M ∨ the subspace of all such factorizable functionals.

Using the fact that µ B and µ C are faithful and that the bimodule M is idempotent, one can reformulate the condition above as follows. A functional ω ∈ M ∨ is factorizable if and only if there exist maps B ω : M → B and ω C : M → C such that ω(xm) = µ B (x B ω(m)) and ω(my) = µ C (ω C (m)y) for all x ∈ B, y ∈ C and m ∈ M . Note that such maps, if they exist, are uniquely determined by ω.

The assignment M → M ⊔ is functorial. Indeed, if T : M → N is a morphism of idempotent B-C-bimodules, then the dual map T ∨ : N ∨ → M ∨ restricts to a map T ⊔ : N ⊔ → M ⊔ , ω → ω • T.

The key property of factorizable functionals is that one can form slice maps and tensor products for such functionals as follows.

3.4.2. Lemma. Let B M C be an idempotent B-C-bimodule and C N D an idempotent C-D-bimodule, where B, C, D are non-degenerate, idempotent algebras with faithful functionals µ B , µ C , µ D , respectively, and consider the balanced tensor product B M C ⊗ C N D . Suppose that υ ∈ M ∨ and ω ∈ N ∨ are factorizable.
(1) The formulas (υ ⊗ µ C ι)(m ⊗ n) := υ C (m)n and (ι ⊗ µ C ω)(m ⊗ n) := m C ω(n) define morphisms of modules
υ ⊗ µ C ι : M C ⊗ C N D → N D , ι ⊗ µ C ω : B M C ⊗ C N → B M .
(2) The formula (υ ⊗ µ C ω)(m ⊗ n) = µ C (υ C (m) C ω(n)) defines a factorizable functional υ ⊗ µ C ω on M C ⊗ C N , and
B (υ ⊗ µ C ω) = B υ • (ι ⊗ µ C ω), (υ ⊗ µ C ω) D = ω D • (υ ⊗ µ C ι).

The proof is straightforward and left to the reader. Note that υ • (ι ⊗ µ C ω) = (υ ⊗ µ C ω) = ω • (υ ⊗ µ C ι).

Clearly, the product (υ, ω) → υ ⊗ µ C ω is associative and unital in the sense that if υ, ω are as above and θ is a factorizable functional on an idempotent D-E-bimodule O, where E is a non-degenerate, idempotent algebra with a fixed faithful functional, then
((υ ⊗ µ C ω) ⊗ µ D θ)((m ⊗ n) ⊗ o) = ω(υ C (m) · n · D θ(o)) = (υ ⊗ µ C (ω ⊗ µ D θ))(m ⊗ (n ⊗ o)),
(µ B ⊗ µ B υ)(b ⊗ m) = υ(bm), (υ ⊗ µ C µ C )(m ⊗ c) = υ(mc)
for all m ∈ M , n ∈ N and o ∈ O.

3.4.3. Example. Assume that B and C are separable Frobenius algebras with separating linear functionals µ B and µ C , respectively, as defined in [22]; see also Section 26 in [6]. Then similar arguments as in the proof of Lemma 3.3.5 show that every linear functional on an idempotent, non-degenerate B-C-bimodule is factorizable; see also [3, Proposition 3.3].

Of particular interest for us is the case where the fixed functionals µ B and µ C admit modular automorphisms σ B and σ C . Regard M ∨ as an M (C)-M (B)-bimodule. Then M ⊔ ⊆ M ∨ is a sub-bimodule: for ω ∈ M ⊔ , x 0 ∈ M (B) and all x ∈ B, m ∈ M ,
(ωx 0 )(xm) = ω(x 0 xm) = µ B (x 0 x B ω(m)) = µ B (x B ω(m)σ B (x 0 )),
showing that the functional ωx 0 is factorizable and
B (ωx 0 )(m) = B ω(m)σ B (x 0 ), (ωx 0 ) C (m) = ω C (x 0 m).
A similar argument shows that y 0 ω is factorizable for every y 0 ∈ M (C) and that
B (y 0 ω)(m) = B ω(my 0 ), (y 0 ω) C (m) = σ −1 C (y 0 )ω C (m).

3.4.5. Remark. In the situation above, suppose that y 0 ω = ωx 0 for some ω ∈ M ⊔ , x 0 ∈ M (B) and y 0 ∈ M (C). Then the equations above imply that for all m ∈ M ,
B ω(my 0 ) = B ω(m)σ B (x 0 ), ω C (x 0 m) = σ −1 C (y 0 )ω C (m).

Under the assumptions above, the assignment M → M ⊔ becomes a functor from B-C-bimodules to C-B-bimodules or, equivalently, to B op -C op -bimodules.
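The well-definedness left to the reader in Lemma 3.4.2 amounts to a balancing check over C; the following sketch uses only that υ C is right C-linear and C ω is left C-linear, as in Definition 3.4.1:

```latex
% Balancing of \upsilon \otimes_{\mu_C} \omega over C:
% \upsilon_C(my) = \upsilon_C(m)y and {}_C\omega(yn) = y\,{}_C\omega(n).
\begin{align*}
(\upsilon \otimes_{\mu_C} \omega)(my \otimes n)
  &= \mu_C\bigl(\upsilon_C(my)\,{}_C\omega(n)\bigr)
   = \mu_C\bigl(\upsilon_C(m)\,y\,{}_C\omega(n)\bigr) \\
  &= \mu_C\bigl(\upsilon_C(m)\,{}_C\omega(yn)\bigr)
   = (\upsilon \otimes_{\mu_C} \omega)(m \otimes yn)
\end{align*}
```

for all m ∈ M , n ∈ N and y ∈ C, so the functional descends to the balanced tensor product M C ⊗ C N.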
Moreover, if we assume in the situation of Lemma 3.4.2 that the functionals µ C , µ D and µ E admit modular automorphisms, then the assignment υ ⊗ ω → υ ⊗ µ C ω factorizes to a morphism of B op -D op -bimodules
B op (M ⊔ ) C op ⊗ C op (N ⊔ ) D op → B op ((M C ⊗ C N ) ⊔ ) D op .

We now return to the discussion of base weights and integrals. Let A = (A, B, C, S B , S C , ∆ B , ∆ C ) be a regular multiplier Hopf algebroid with an antipodal base weight (µ B , µ C ).

3.4.8. Definition. We call a functional ω ∈ A ∨ factorizable (with respect to (µ B , µ C )) if it satisfies the equivalent conditions in Lemma 3.4.7. We denote by A ⊔ ⊆ A ∨ the subspace of all such factorizable functionals.

Remarks. (1) The definition of quasi-invariance can now be reformulated as follows. The base weight (µ B , µ C ) is quasi-invariant with respect to a partial left integral C φ C (or a partial right integral B ψ B ) if and only if the functional φ := µ C • C φ C (or ψ := µ B • B ψ B ) is factorizable.
(2) If A arises from a regular weak multiplier Hopf algebra (A, ∆) as in Example 2.4.2, then the base algebras B and C are Frobenius separable in the sense of [22] and every functional on A is factorizable; see Example 3.4.3 and Lemma 3.3.5.

The following auxiliary result will be used later. Denote by * the involution on C.

3.4.9. Lemma. Let (µ B , µ C ) be an antipodal base weight for A and let ω ∈ A ⊔ .
(1) Suppose that the base weight is modular. Let x, x ′ ∈ M (B), y, y ′ ∈ M (C) and ω ′ := xy · ω · x ′ y ′ . Then ω ′ ∈ A ⊔ and
B ω ′ (a) = B ω(y ′ axy)σ B (x ′ ), ω ′ B (a) = (σ B ) −1 (x)ω B (x ′ y ′ ay),
C ω ′ (a) = C ω(x ′ axy)σ C (y ′ ), ω ′ C (a) = (σ C ) −1 (y)ω C (x ′ y ′ ax).
(2) Let k ∈ Z be odd and ω ′ := ω • S k . Then ω ′ ∈ A ⊔ and
B (ω ′ ) = S −k • ω C • S k , (ω ′ ) B = S −k • C ω • S k , C (ω ′ ) = S −k • ω B • S k , (ω ′ ) C = S −k • B ω • S k .
(3) Assume that A is a multiplier Hopf * -algebroid and that the base weight is positive.
Then ω * := * • ω • * lies in A ⊔ and
B (ω * ) = * • ω B • * , (ω * ) B = * • B ω • * , C (ω * ) = * • ω C • * , (ω * ) C = * • C ω • * .

Proof. Assertion (1) follows from the computations preceding Remark 3.4.5; assertions (2) and (3) are verified similarly.

3.4.10. Corollary. Suppose that the base weight (µ B , µ C ) is modular. If φ is a left integral for (A, µ B , µ C ), then so are the functionals a → φ(xa) and a → φ(ax) for every x ∈ M (B), and φ • S is a right integral; the analogous statements hold for right integrals.

Proof. This follows easily from Lemma 3.4.9 and Proposition 3.1.3.

4. Uniqueness of integrals relative to a base weight

Let A be a regular multiplier Hopf algebroid with a modular base weight (µ B , µ C ) and a left integral φ. Then for every x ∈ M (B), the rescaled functionals a → φ(ax) and a → φ(xa) are left integrals again by Corollary 3.4.10. We now show that under a certain non-degeneracy assumption on φ and local projectivity of A as a module over B and C, every left integral is of this form. Of course, a similar statement holds for right integrals.

4.1.1. Lemma. Let φ be a left and ψ a right integral for (A, µ B , µ C ). Then for all a ∈ A,
ψ(a B φ(1)) = ψ(φ B (1)a) = φ(a C ψ(1)) = φ(ψ C (1)a). (4.1)

Proof. By definition of B φ and of φ B , we can write φ( B ψ B (a)) in the form µ B ( B ψ B (a) B φ(1)) = ψ(a B φ(1)) or µ B (φ B (1) B ψ B (a)) = ψ(φ B (1)a). Similarly, we can write ψ( C φ C (a)) in the form µ C (ψ C (1) C φ C (a)) or µ C ( C φ C (a) C ψ(1)). Now, (4.1) follows because by Proposition 3.2.3,
φ • B ψ B = µ C • C φ C • B ψ B = µ B • B ψ B • C φ C = ψ • C φ C .

We can now conclude the following equivalence:

4.1.2. Lemma. Suppose that the base weight (µ B , µ C ) is antipodal. Then for every functional h on A, the following conditions are equivalent:
(1) h is a left integral and h| B = µ B ;
(2) h is a right integral and h| C = µ C .
If these conditions hold, then h = h • S, and with τ := µ B | O = µ C | O , we have
C h C (1) = 1, B h B (1) = 1, µ B = τ • C h C | B , µ C = τ • B h B | C .

Proof. Suppose that (1) holds. Since µ B is faithful and µ B (x) = h(x) = µ B (h B (1)x) = µ B (x B h(1)) for all x ∈ B, we conclude that h B (1) = 1 = B h(1).

4.1.4. Example. Suppose that C φ C is a partial left integral on a unital regular multiplier Hopf algebroid A such that C φ C (1) = 1, C φ C • S 2 = S 2 • C φ C and C φ C | B is faithful.
Let B ψ B = S • C φ C • S −1 .

Lemma. Let φ be a left and ψ a right integral for (A, µ B , µ C ). Then
(A B φ(A)) · ψ = (A C ψ(A)) · φ and ψ · (φ B (A)A) = φ · (ψ C (A)A).

Proof. We only prove the first equation. Let a ∈ A and b ⊗ c ∈ A B ⊗ A B and write
b ⊗ c = Σ i ∆ B (d i )(1 ⊗ e i ) = Σ j ∆ B (f j )(g j ⊗ 1)
with d i , e i , f j , g j ∈ A. Then (ψ ⊗ µ B φ)(∆ B (a)(b ⊗ c)) is equal to
Σ i (ψ ⊗ µ B φ)(∆ B (ad i )(1 ⊗ e i )) = Σ i φ( B ψ B (ad i )e i ) = Σ i ψ(ad i B φ(e i ))
because ψ is a right integral, and to
Σ j (ψ ⊗ µ B φ)(∆ B (af j )(g j ⊗ 1)) = Σ j ψ( C φ C (af j )g j ) = Σ j φ(af j C ψ(g j ))
because φ is a left integral. Since the maps T λ and T ρ are bijective, we can conclude that (A B φ(A)) · ψ = (A C ψ(A)) · φ.

The preceding result suggests to consider the following non-degeneracy condition.

Definition. Let A be a regular multiplier Hopf algebroid with an antipodal base weight (µ B , µ C ). We call a left integral φ (right integral ψ) for (A, µ B , µ C ) full if B φ and φ B ( C ψ and ψ C , respectively) are surjective.

· φ = b · ψ ′ . Then C ψ ′ (ab) = C φ C (ac). Therefore, C = C ψ ′ (A) = C φ C (A). A similar argument shows that B ψ B (A) = B for every right integral ψ.

To show that integrals are unique up to scaling, we need some further preparations.

4.2.6. Lemma. Let φ be a left and ψ a right integral for a regular multiplier Hopf algebroid A with a modular base weight. Then for all x ∈ B and y ∈ C,
φ · y = S 2 (y) · φ and x · ψ = ψ · S 2 (x).

Proof. We only prove the first assertion. If (µ B , µ C ) denotes the base weight and a ∈ A, y ∈ C, then φ(ya) = µ C (y C φ C (a)) = µ C ( C φ C (a)S 2 (y)) = φ(aS 2 (y)).

To prove the desired uniqueness result for integrals, we need a further assumption. Let M be a right module over an algebra D. Recall that M is called firm if the multiplication map induces an isomorphism M D ⊗ D D → M , and locally projective [26] if for every finite number of elements m 1 , . . .
, m k ∈ M , there exist finitely many υ i ∈ Hom(M D , D D ) and e i ∈ Hom(D D , M D ) such that m j = Σ i e i (υ i (m j )) for all j = 1, . . . , k. The corresponding definition for left modules is obvious. The algebra D is firm if it is so as a right (or, equivalently, as a left) module over itself. The following results may be well-known, but we could not find a reference and leave the straightforward proof to the reader.

4.2.7. Lemma. Let D be a firm algebra. (1) Every idempotent, locally projective right D-module is firm.

φ · βy = φ ′ · y = S 2 (y) · φ ′ = S 2 (y) · φ · β = φ · yβ for all y ∈ C, and similarly, α commutes with C. Choose a full right integral ψ, for example, φ • S. We will show that for all a, b ∈ A,
a B ψ B (b)α = a B ψ B (bα), β B ψ B (b)a = B ψ B (βb)a. (4.2)
These equations imply Bα ⊆ B and βB ⊆ B. We then conclude, similarly as in the proof of Lemma 1.0.1 (2), that
µ B (xαS −2 (φ B (a))) = µ B (φ B (a)xα) = φ(axα) = φ(βax) = µ B (φ B (βa)x) = µ B (xS −2 (φ B (βa)))
for all a ∈ A, x ∈ B. Since µ B is faithful and φ is full, this relation implies αB ⊆ B, that is, α ∈ M (B). A similar argument shows that also β ∈ M (B). Therefore, we only need to prove (4.2). We focus on the second equation; the first equation follows similarly.

Let a, b ∈ A. Since C φ ′ C = C φ C · β and C φ C are partial left integrals,
(ι ⊗ S −1 B • C φ C • β)(∆ B (b)(a ⊗ 1)) = C φ C (βb)a = (ι ⊗ S −1 B • C φ C )(∆ C (βb)(a ⊗ 1)).
Since T λ is surjective, we can conclude
(ι ⊗ S −1 B • C φ C • β)(∆ B (b)(a ⊗ cd)) = (ι ⊗ S −1 B • C φ C )(∆ B (βb)(a ⊗ cd))
for all a, b, c, d ∈ A. Since φ is faithful and A non-degenerate, maps of the form d · C φ C separate the points of A. By assumption and Lemma 4.2.7, slice maps of the form ι ⊗ S −1 B • (d · C φ C ) separate the points of A B ⊗ A B . Consequently, (ι ⊗ β)(∆ B (b)(1 ⊗ c)) = ∆ B (βb)(1 ⊗ c) for all b, c ∈ A.
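The rescaling relation of Lemma 4.2.6, which drives the computation with α and β above, can be written out step by step. The sketch below assumes modularity of the base weight in the form µ C (yc) = µ C (cS 2 (y)) for y ∈ C, matching the proof of Lemma 4.2.6:

```latex
% Rescaling a left integral \varphi = \mu_C \circ {}_C\varphi_C by y \in C.
% Assumption: \mu_C(y\,c) = \mu_C(c\,S^2(y)) for all y, c \in C.
\begin{align*}
(\varphi \cdot y)(a) = \varphi(ya)
  &= \mu_C\bigl(y\,{}_C\varphi_C(a)\bigr)
   = \mu_C\bigl({}_C\varphi_C(a)\,S^2(y)\bigr) \\
  &= \mu_C\bigl({}_C\varphi_C(a\,S^2(y))\bigr)
   = \varphi\bigl(a\,S^2(y)\bigr) = (S^2(y)\cdot\varphi)(a),
\end{align*}
```

where the second line uses that C φ C is right C-linear and that S 2 (y) = S B (S C (y)) lies in C again.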
5. Counital base weights and measured multiplier Hopf algebroids

We now introduce the missing last assumption on our base weights, which is the existence of a factorizable counit functional. This condition appeared in a related context in [19] and implies a much closer relation between the left and the right comultiplication of a regular multiplier Hopf algebroid than the mixed co-associativity condition alone. After a discussion of this condition, we finally define the notion of a measured regular multiplier Hopf algebroid and look at examples.

5.1. Counital base weights. We call an antipodal base weight (µ B , µ C ) counital if
µ B • B ε = µ C • ε C . (5.1)
In this case, we call this composition the associated counit functional and denote it by ε.

We shall give examples in the next subsection and first discuss the condition above.

5.1.1. Remarks. Let (µ B , µ C ) be an antipodal base weight.
(1) Relation (2.17) implies
(µ B • B ε) • S = µ C • ε C , (µ C • ε C ) • S = µ B • B ε, (5.2)
so that (5.1) is equivalent to invariance of either side under the antipode. Thus, the counit functional associated to a counital base weight is invariant under the antipode.
(2) Conversely, suppose that A is unital and (5.1) holds. Then (µ B , µ C ) is antipodal because then B ε(1 A ) = 1 B , ε C (1 A ) = 1 C and hence µ C (S −1 C (x)) = µ C (ε C (x)) = µ B ( B ε(x)) = µ B (x) for all x ∈ B and similarly µ B (S −1 B (y)) = µ C (y) for all y ∈ C.
(3) Suppose that the base weight (µ B , µ C ) is counital. Then the associated counit functional ε is factorizable and the associated module maps are
B ε, ε B = S C • ε C , C ε = S B • B ε, ε C . (5.3)
Indeed, (2.3) implies ε(xa) = µ B (x B ε(a)) and ε(ax) = µ C (ε C (ax)) = µ C (S −1 C (x)ε C (a)) = µ B (S C (ε C (a))x) for all a ∈ A and x ∈ B, and (2.9) implies ε(ay) = µ C (ε C (a)y) and ε(ya) = µ C (yS B ( B ε(a))) for all a ∈ A and y ∈ C.

We can therefore form the relative tensor products ε ⊗ µ B ε and ε ⊗ µ C ε, and
(ε ⊗ µ B ε)(a ⊗ b) = ε(ab) = (ε ⊗ µ C ε)(a ⊗ b)
for all a, b ∈ A.
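The antipode invariance of the counit functional noted in Remark 5.1.1 (1) is a one-line derivation combining (5.1) with the first relation in (5.2):

```latex
% Invariance of the counit functional under the antipode:
\varepsilon \circ S
  \;=\; (\mu_B \circ {}_B\varepsilon) \circ S
  \;\stackrel{(5.2)}{=}\; \mu_C \circ \varepsilon_C
  \;\stackrel{(5.1)}{=}\; \mu_B \circ {}_B\varepsilon
  \;=\; \varepsilon.
```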
In the involutive case, (5.1) is equivalent to several natural conditions.

5.1.2. Remarks. Suppose that A is a multiplier Hopf * -algebroid and that (µ B , µ C ) is positive and antipodal.
(1) Equation (5.1) is equivalent to self-adjointness of either side because by (2.19),
µ B • B ε • * = µ B • * • S C • ε C = * • µ C • ε C .
(2) Equip B with the inner product ⟨x|x ′ ⟩ := µ B (x * x ′ ) and consider the map π B : A → End(B), π B (a)x := B ε(ax). This is a homomorphism because of (2.18). Now, (5.1) holds if and only if ⟨x|π B (a)x ′ ⟩ = ⟨π B (a * )x|x ′ ⟩ for all x, x ′ ∈ B and a ∈ A because the inner products above are given by µ B (x * B ε(ax ′ )) = (µ B • B ε)(x * ax ′ ) and µ B ( B ε(a * x) * x ′ ) = µ B (x ′ * B ε(a * x)) * = ( * • µ B • B ε • * )(x * ax ′ ), respectively. Similarly, condition (5.1) can be reformulated in terms of the map π C : A op → End(C) given by π C (a op )y := ε C (ya) and the inner product on C induced by µ C .

Recall that without loss of generality, one can assume the left and the right counit of a regular multiplier Hopf algebroid to be surjective; see [18, Lemma 3.6 and 3.7].

5.1.3. Proposition. Suppose that A is a regular multiplier Hopf algebroid with surjective left and surjective right counit. Then every counital base weight for A is modular.

Proof. We only show that S −1 B S −1 C is a modular automorphism of µ B . Let x ∈ B and a ∈ A. Then (2.18) implies
µ B (x B ε(a)) = µ B ( B ε(xa)) = µ C (ε C (xa)) = µ C (ε C (S −1 C (x)a)) = µ B ( B ε(S −1 C (x)a)) = µ B ( B ε(a)S −1 B (S −1 C (x))).

The preceding result fits with the theory of measured quantum groupoids, where the square of the antipode generates the scaling group and the latter restricts to the modular automorphism groups on the base algebras, that is, in the notation of [8], S 2 = τ i and τ t • α = α • σ ν t , τ t • β = β • σ ν t .

5.2. Measured regular multiplier Hopf algebroids. We have now gathered all ingredients and assumptions to define the main objects of interest of this article.

5.2.1. Definition.
A measured regular multiplier Hopf algebroid consists of a regular multiplier Hopf algebroid A, a base weight (µ B , µ C ), a faithful partial right integral B ψ B and a faithful partial left integral C φ C such that
(1) the base weight is counital, and quasi-invariant with respect to B ψ B and C φ C ,
(2) the right integral ψ := µ B • B ψ B and the left integral φ := µ C • C φ C are full.
We call it a measured multiplier Hopf * -algebroid if additionally A is a multiplier Hopf * -algebroid and the functionals µ B , µ C , φ, ψ are positive.

Faithfulness of ψ and φ, and hence also of B ψ B and C φ C , follows from the other assumptions if A is locally projective as a module over B and C, as we shall see in Theorem 6.4.1.

Let us consider our list of examples.

5.2.2. Example. Consider the regular multiplier Hopf algebroid associated to a regular weak multiplier Hopf algebra (A, ∆) as in Example 2.4.2. Then the base weight in (3.13) is counital because it is antipodal and the left and the right counit of A are given by B ε = S −1 • ε t and ε C = S −1 • ε s , respectively. The associated counit functional is just the counit of (A, ∆).

5.2.3. Example. Let G be a locally compact, étale Hausdorff groupoid with a Radon measure µ on the space of units G 0 that has full support and is continuously quasi-invariant as in Example 3.3.6. Then the multiplier Hopf * -algebroid A of functions on G defined in Example 2.4.3 together with the base weight (µ B , µ C ) and the partial integrals C φ C and B ψ B defined in (3.17) forms a measured multiplier Hopf * -algebroid. Indeed, the base weight (µ B , µ C ) is counital and its associated counit functional is given by ε(f ) = ∫ G 0 f | G 0 dµ, and the integrals φ and ψ are easily seen to be full and faithful.

5.2.4. Example. Let G and µ be as above and consider the multiplier Hopf * -algebroid A associated to the convolution algebra C c (G) as in Example 2.4.4.
Then the base weight (µ̂, µ̂) associated to µ as in Example 3.3.7 is counital if and only if for every f ∈ C c (G), the integrals
µ̂( B̂ ε(f )) = ∫ G 0 Σ t(γ)=u f (γ) dµ(u) and µ̂(ε Ĉ (f )) = ∫ G 0 Σ s(γ)=u f (γ) dµ(u)
coincide, that is, if and only if µ is invariant in the sense of [12, Definition 3.12]. If this condition holds, then together with the partial integral B̂ ψ B̂ = Ĉ φ Ĉ : C c (G) → C c (G 0 ), f → f | G 0 , we obtain a measured multiplier Hopf * -algebroid again; see also Examples 3.1.6 and 3.3.7. Invariance of the measure µ on G 0 is a rather strong condition, and in subsection 8.1, we shall see that after a slight modification of the multiplier Hopf * -algebroid, it suffices to assume µ to be continuously quasi-invariant in the sense defined in Example 3.3.6.

5.2.5. Example. Consider the tensor product A = C ⊗ B discussed in Example 2.4.5. In this case, an antipodal base weight (µ B , µ C ) is counital if and only if the two expressions µ B ( B ε(y ⊗ x)) = µ B (xS −1 B (y)) and µ C (ε C (y ⊗ x)) = µ C (S −1 C (x)y) coincide for all x ∈ B, y ∈ C, and therefore if and only if it is modular. Suppose that this condition holds. The maps C φ C := ι ⊗ µ B : A → C, B ψ B := µ C ⊗ ι : A → B are left- and right-invariant, respectively, by Example 3.1.7, and the resulting integral φ = ψ = µ C ⊗ µ B is full and faithful. We therefore obtain a measured regular multiplier Hopf algebroid again.

5.2.6. Example. Consider the symmetric crossed product A = C#H introduced in Example 2.4.6, and let µ be a faithful functional on the algebra C. Then the base weight (µ, µ) is counital if and only if the expressions µ( B ε(hy)) = µ( B ε((h (1) ⊲ y)h (2) )) = µ(h ⊲ y) and µ(ε C (hy)) = µ(y)ε H (h) coincide for all y ∈ C and h ∈ H, that is, if and only if µ is invariant under the action of H. Suppose that this condition holds, and that φ H is a left and ψ H is a right integral on (H, ∆ H ).
With the partial integrals C φ C and B ψ B defined as in Example 3.1.8, we obtain a measured regular multiplier Hopf algebroid; see also Example 3.3.9. Again, a weaker quasi-invariance condition on µ turns out to be sufficient after a suitable modification of the multiplier Hopf algebroid, see subsection 8.2. 5.2.7. Example. Consider the two-sided crossed product A = C#H#B discussed in Example 2.4.7. In this case, an antipodal base weight (µ B , µ C ) is counital if and only if the two expressions µ B ( B ε(xhy)) = µ B (xS −1 (h ⊲ y)) and µ C (ε C (xhy)) = µ C (S −1 (x ⊳ h)y) (5.4) coincide for all x ∈ B, h ∈ H and y ∈ C. Suppose that (µ B , µ C ) is modular. Then µ B (xS −1 (h ⊲ y)) = µ C ((h ⊲ y)S(x)) = µ C (S −1 (x)(h ⊲ y)), and then equality of the expressions in (5.4) is equivalent to invariance of µ C and µ B under H in the sense of (3.20). Suppose that also this condition holds, and that φ H is a left and ψ H is a right integral on (H, ∆ H ). Together with the partial integrals C φ C (yhx) := yφ H (h)µ B (x) and B ψ B (yhx) := µ C (y)φ H (h) x defined in Example 3.1.9, we obtain a measured regular multiplier Hopf algebroid; see also Example 3.3.10. In subsection 8.3, we shall treat the case where µ C and µ C are only quasi-invariant with respect to the action of H. The key results on integration With all the assumptions in place, we now establish the key results on integration listed in the introduction -existence of a modular automorphism (subsection 6.1), existence of a modular element (subsection 6.3), and faithfulness of the integrals (subsection 6.4). Along the way, we use and study left and right convolution operators naturally associated to factorizable functionals (subsection 6.2). Here, the counitality assumption on the base weight comes into play, and ensures that the convolutions formed with respect to the left and with respect to the right comultiplication coincide; see Corollary 6.2.4. 6.1. Convolution operators and the modular automorphism. 
Let A be a regular multiplier Hopf algebroid with a counital base weight (µ B , µ C ). We show that integrals for (A, µ B , µ C ) which are full and faithful automatically admit modular automorphisms. As a tool, we use the following left convolution operators associated to elements B υ ∈ ( B A) ∨ = Hom( B A, B B) and υ ′ B ∈ (A B ) ∨ = Hom(A B , B B ), λ( B υ) : A → L(A), λ( B υ)(a)b = ( B υ ⊗ ι)(∆ B (a)(1 ⊗ b)), λ(υ ′ B ) : A → R(A), bλ(υ ′ B )(a) = (S −1 C • υ ′ B ⊗ ι)((1 ⊗ b)∆ C (a)) , and the following right convolution operators associated to elements C ω ∈ ( C A) ∨ = Hom( C A, C C) and ω ′ C ∈ (A C ) ∨ = Hom(A C , C C ), associated to maps ρ( C ω) : A → L(A), ρ( C ω)(a)b = (ι ⊗ S −1 B • C ω)(∆ B (a)(b ⊗ 1)), ρ(ω ′ C ) : A → R(A), bρ(ω ′ C )(a) = (ι ⊗ ω ′ C )((b ⊗ 1)∆ C (a) ). The notation above can be ambiguous for elements B υ B ∈ Hom( A B B , B B B ) and C ω C ∈ Hom( A C C , C C C ), and we shall always write ρ( B υ), ρ(υ B ), λ( C ω) or λ(ω C ) to indicate which convolution operator we mean. This ambiguity will be resolved in Lemma 6.1.1 (4) below. Let us collect a few easy observations. For all B υ, υ ′ B , C ω, ω ′ C as above and a, c ∈ A, the multipliers λ(c · B υ)(a), λ(υ ′ B · c)(a), ρ(c · C ω)(a), ρ(ω ′ C · c)(a) lie in A; for example, λ(c · B υ)(a) = ( B υ ⊗ ι)(∆ B (a)(c ⊗ 1)). By Proposition 3.1.1, a map B ψ B ∈ ( A B B ) ∨ is a partial right integral if and only if the following equivalent conditions hold, λ( B ψ) = B ψ, λ(ψ B ) = ψ B , λ(a · B ψ)(b) = S(λ(ψ B · b)(a)) for all a, b ∈ A. (6.1) Similarly, C φ C ∈ ( A C C ) ∨ is a partial left integral if and only if the following equivalent conditions hold: ρ( C φ) = C φ, ρ(φ C ) = φ C , ρ(φ C · a)(b) = S(ρ(b · C φ)(a)) for all a, b ∈ A. (6.2) Finally, (2.4) and (2.10) imply λ( B ε) = ρ( C ε) = λ(ε B ) = ρ(ε C ) = ι A , (6.3) where C ε = S B • B ε and ε B = S C • ε C . As before, ε denotes the counit functional. 6.1.1. Lemma. 
Let A be a regular multiplier Hopf algebroid with a counital base weight (µ B , µ C ), let B υ ∈ A · ( B A) ∨ , υ ′ B ∈ (A B ) ∨ · A, C ω ∈ A · ( C A) ∨ and ω ′ C ∈ (A C ) ∨ · A, and write υ := µ B • B υ, ω := µ C • C ω, υ ′ := µ B • υ ′ B , ω ′ := µ C • ω ′ C . Then (1) ε • λ( B υ) = υ, ε • ρ( C ω) = ω, ε • λ(υ ′ B ) = υ ′ , ε • ρ(ω ′ C ) = ω ′ ; (2) λ( B υ) and λ(υ ′ B ) commute with both ρ( C ω) and ρ(ω ′ C ); (3) υ • ρ( C ω) = ω • λ( B υ), υ • ρ(ω ′ C ) = ω ′ • λ( B υ), υ ′ • ρ( C ω) = ω • λ(υ ′ B ) and υ ′ • ρ(ω ′ C ) = ω ′ • λ(υ ′ B ). (4) Suppose that factorizable functionals separate the points of A. Then λ( B υ) = λ(υ ′ B ) whenever υ = υ ′ , and ρ( C ω) = ρ(ω ′ C ) whenever ω = ω ′ . Proof. The relations in (1) and (2) follow immediately from the counit property and the coassociativity conditions relating ∆ B and ∆ C , respectively. Combined, they imply υ • ρ( C ω) = ε • λ( B υ) • ρ( C ω) = ε • ρ( C ω) • λ( B υ) = ω • λ( B υ), which is (3). Let us prove (4). Suppose that υ = υ ′ . Then the assumption and non-degeneracy of A imply that functionals of the same form as ω separate the points of A, and by (3), ω • λ( B υ) = υ • ρ( C ω) = υ ′ • ρ( C ω) = ω • λ(υ ′ B ), whence λ( B υ) = λ(υ ′ B ). A similar argument proves the assertion concerning ρ( C ω) and ρ(ω ′ C ). We proceed with the study of convolution operators in the next subsection. The next result is the key step towards the existence of modular automorphisms. 6.1.2. Theorem. Let A be a regular multiplier Hopf algebroid with a counital base weight (µ B , µ C ). Then A · φ = φ · A for every full left integral φ and A · ψ = ψ · A for every full right integral ψ for (A, µ B , µ C ). Proof. We only prove the assertion concerning a full left integral φ. Let ψ := φ • S. Then ψ is a full right integral and A · φ = A · ψ by Proposition 4.2.4. We show that φ · A ⊆ A · ψ, and a similar argument proves the reverse inclusion. Let a, b, c ∈ A.
By Lemma 6.1.1 (3), ((a · φ) • λ(ψ B · b))(c) = ((ψ · b) • ρ(a · C φ))(c) = ((S(b) · φ) • S • ρ(a · C φ))(c). Choose b ′ ∈ A with S(b) · φ = b ′ · ψ and use (6.1) to rewrite the expression above in the form ((b ′ · ψ) • ρ(φ C · c))(a) = ((φ · c) • λ(b ′ · B ψ))(a) = ((φ · c) • S • λ(ψ B · a))(b ′ ) = ((S −1 (c) · ψ ′ ) • λ(ψ B · a))(b ′ ). Choose c ′ ∈ A with S −1 (c) · ψ = c ′ · φ and use Lemma 6.1.1 (3) again to rewrite this expression in the form ((c ′ · φ) • λ(ψ B · a))(b ′ ) = ((ψ · a) • ρ(c ′ · C φ))(b ′ ). We thus obtain φ(λ(ψ B · b)(c)a) = ((a · φ) • λ( B ψ · b))(c) = ((ψ · a) • ρ(c ′ · C φ))(b ′ ) = ψ(aρ(c ′ · C φ)(b ′ )). Here, b and c ∈ A were arbitrary, and the linear span of all elements of the form λ(ψ B · b)(c) = (S −1 C • ψ B ⊗ ι)((b ⊗ 1)∆ C (c)) is equal to AS −1 C (ψ B (A)) = AC = A because λ T and ψ B are surjective. Thus, φ · A ⊆ A · ψ = A · φ and consequently A · φ = φ · A. 6.1.3. Theorem. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. Then φ and ψ admit modular automorphisms σ φ and σ ψ , respectively, and σ φ | C = S 2 | C , ∆ B • σ φ = (S 2 ⊗ σ φ ) • ∆ B , ∆ C • σ φ = (S 2 ⊗ σ φ ) • ∆ C , σ ψ | B = S −2 | B , ∆ B • σ ψ = (σ ψ ⊗ S −2 ) • ∆ B , ∆ C • σ ψ = (σ ψ ⊗ S −2 ) • ∆ C . If A is locally projective, then σ φ (M (B)) = M (B) and σ ψ (M (C)) = M (C). If these equations hold, then for all x ∈ M (B), y ∈ M (C) and a ∈ A, φ B (xa) = (S 2 • σ φ )(x)φ B (a), B φ(ax) = B φ(a)(σ φ • S 2 ) −1 (x), (6.4) ψ C (ya) = (S −2 • σ ψ )(y)ψ C (a), C ψ(ay) = C ψ(a)(σ ψ • S −2 ) −1 (y). (6.5) Proof. From Theorem 6.1.2, we conclude existence of a unique bijection σ φ : A → A such that φ·a = σ φ (a)·φ for all a ∈ A. This map is easily seen to be an algebra automorphism. Lemma 4.2.6 implies that σ φ (y) = S 2 (y) all y ∈ C. In particular, the tensor product S 2 ⊗ σ φ is well-defined on A B ⊗ A B and on A C ⊗ A C . 
Two applications of (6.2) show that for all a, b ∈ A, ρ(φ · a)(σ φ (b)) = S(ρ(σ φ (b) · φ)(a)) = S(ρ(φ · b)(a)) = S 2 (ρ(a · φ)(b)) = S 2 (ρ((φ · a) • σ φ )(b)). Since a ∈ A was arbitrary, we can conclude the desired formulas for ∆ B •σ φ and ∆ C •σ φ . If A is locally projective, then the relation σ φ (M (B)) = M (B) follows immediately from the relation M (B) · φ = φ · M (B) obtained in Theorem 4.2.9. The last equations follow easily from Remark 3.4.5. Let us look at our examples again. 6.1.4. Example. Let G be a locally compact, étale Hausdorff groupoid. Then the function algebra C c (G) is commutative and hence every integral is tracial. The case of the convolution algebra will be considered in subsection 8.1. 6.1.5. Example. For the measured regular multiplier Hopf algebroid associated to the tensor product A = C ⊗B and suitable functionals µ B and µ C on B and C as in Example 5.2.5, we have φ = ψ = µ C ⊗ µ B and σ φ = σ ψ = S 2 ⊗ S −2 . 6.1.6. Example. For the measured regular multiplier Hopf algebroid associated to a symmetric crossed product A = C#H, a faithful, H-invariant functional µ on C and left and right integrals φ H and ψ H of (H, ∆ H ) as in Example 5.2.6, the integrals φ and ψ are given by φ(yh) = µ(y)φ H (h) = φ(hy), ψ(yh) = µ(y)ψ H (h) = ψ(hy), for all y ∈ C and h ∈ H, see (3.8), and hence their modular automorphisms are given by σ φ (yh) = yσ H (h), σ ψ (yh) = yσ ′ H (h), where σ H and σ ′ H denote the modular automorphisms of φ H and ψ H . 6.1.7. Example. Consider the measured regular multiplier Hopf algebroid associated to the two-sided crossed product A = C#H#B, suitable H-invariant functionals µ B and µ C on B and C and left and right integrals φ H and ψ H of (H, ∆ H ) as in Example 5.2.7. The left integral φ is given by φ(yhx) = µ C (y)φ H (h)µ B (x) for all y ∈ C, h ∈ H, x ∈ B. Let us compute its modular automorphism σ φ . By Lemma 4.2.6, σ φ (y) = S 2 (y) for all y ∈ C. Denote by δ H the modular element of φ H , see [21,Proposition 3.8]. 
Then φ H • S H = δ H · φ H is right-invariant on H and hence ψ ′ := δ H · φ is right-invariant; see also Example 5.2.7. The modular automorphism σ ψ ′ of ψ ′ satisfies σ ψ ′ (x) = S −2 (x) for all x ∈ B by Lemma 4.2.6 again, and hence σ φ (x) = δ −1 H σ ψ ′ (x)δ H = S −2 (x) ⊳ δ H . Finally, using H-invariance of µ C and µ B , we find φ(h ′ yhx) = µ C (y)φ H (h ′ h)µ B (x) = µ C (y)φ H (hσ H (h ′ ))µ B (x) = φ(yhxσ H (h ′ )), where σ H denotes the modular automorphism of φ H , and hence σ φ (h) = σ H (h). 6.2. Convolution operators and the dual algebra. The results obtained so far immediately imply the existence of modular elements: 6.2.1. Corollary. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. Then there exists a unique invertible multiplier δ ∈ M (A) such that ψ = δ · φ. Proof. By Proposition 4.2.4, ψ φ and φ ψ, and by Theorem 6.1.3 φ and ψ admit modular automorphisms. Now, apply Lemma 1.0.1 (2). To determine the behaviour of the comultiplication, counits and antipode on δ, we need a few more results on the convolution operators introduced above. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. Then Proposition 4.2.4 and Theorem 6.1.3 imply that the subspaces A · φ, φ · A, A · ψ, ψ · A of A ∨ coincide, and Theorem 4.2.9 implies that it does not depend on the choice of C φ C and B ψ B . Since φ, ψ are factorizable, it follows that all functionals in the spacê A := A · φ = φ · A = A · ψ = ψ · A (6.6) are factorizable, that is, ⊆ A ⊔ . Functionals in naturally extend to M (A): 6.2.2. Lemma. There exists a unique embedding j : → M (A) ∨ such that j(a · φ)(T ) = φ(T a), j(φ · a)(T ) = φ(aT ), j(a · ψ)(T ) = ψ(T a), j(ψ · a)(T ) = ψ(aT ) for every T ∈ M (A) and a ∈ A. Proof. The point is to show that the formulas given above are compatible in the sense that for each υ ∈Â, the extension j(υ) is well-defined, and this can easily be done using Theorem 6.1.3 and Corollary 6.2.1. 
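For ease of reference, the space of integral functionals from (6.6) and the extension formulas of Lemma 6.2.2 established above can be collected in display form; this is purely a restatement in LaTeX notation of formulas already in the text, with no new assumptions:

```latex
% Restatement of (6.6): the space of integral functionals.
\widehat{A} \;:=\; A\cdot\phi \;=\; \phi\cdot A \;=\; A\cdot\psi \;=\; \psi\cdot A
\;\subseteq\; A^{\sqcup}.
% Restatement of Lemma 6.2.2: the embedding j\colon \widehat{A}\to M(A)^{\vee}
% satisfies, for all T\in M(A) and a\in A,
j(a\cdot\phi)(T)=\phi(Ta), \qquad j(\phi\cdot a)(T)=\phi(aT),
\qquad j(a\cdot\psi)(T)=\psi(Ta), \qquad j(\psi\cdot a)(T)=\psi(aT).
```

In particular, every functional in the space above is factorizable, and its value on a multiplier T is computed by moving the fixed element a inside the argument of φ or ψ.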
We henceforth regard elements of as functionals on M (A) without mentioning the embedding j explicitly. 6.2.3. Proposition. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. For every υ ∈ A ⊔ and b ∈ A, there exist ρ(υ)(b), λ(υ)(b) ∈ M (A) such that ρ(υ)(b) = ρ(υ B )(b) in R(A), ρ(υ)(b) = ρ( B υ)(b) in L(A), λ(υ)(b) = λ(υ C )(b) in R(A), λ(υ)(b) = λ( C υ)(b) in L(A). Moreover, the following relations hold for all υ ∈ A ⊔ and ω ∈Â: υ • ρ(ω) = ω • λ(υ) ∈ A ⊔ , ρ(υ • ρ(ω)) = ρ(υ)ρ(ω), υ • λ(ω) = ω • ρ(υ) ∈ A ⊔ , λ(υ • λ(ω)) = λ(υ)λ(ω). Proof. Let υ ∈ A ⊔ , b ∈ A and ω ∈Â. Since φ is faithful, elements of ⊆ A ⊔ separate the points of A, and so Lemma 6.1.1 (4) implies λ( B ω)(b) = λ(ω B )(b) and ρ( C ω)(b) = ρ(ω C )(b) . We therefore drop the subscripts B and C and write λ(ω) and ρ(ω) from now on. Suppose ω = φ · a with a ∈ A. Then ω(ρ(υ C )(b)) = φ(aρ(υ C )(b)) = (φ ⊗ µ C υ)((a ⊗ 1)∆ C (b)) = υ(λ(φ · a)(b)) = υ(λ(ω)(b)). A similar calculation shows that ω(ρ C (υ)(b)) = υ(λ(ω)(b)). For ω ∈ of the form ω = c · φ · a with a, c ∈ A, we obtain φ((aρ(υ C )(b))c) = υ(λ(ω)(b)) = φ(a(ρ( C υ)(b)c)). Since a, c ∈ A were arbitrary and φ is faithful, we can conclude that (aρ(υ C )(b))c = a(ρ( C υ)(b)c) for all a, c ∈ A so that ρ( C υ)(b) and ρ C (υ)(b) form a two-sided multiplier ρ(υ)(b) as claimed. Along the way, we just showed that ω • ρ(υ) = υ • λ(ω). One easily verifies that this composition belongs to A ⊔ , for example, B (υ • λ(ω)) = B υ • λ(ω). Let now also ω ′ ∈Â. Then λ(ω ′ ) commutes with ρ(ω) by Lemma 6.1.1 and hence ω ′ • ρ(υ) • ρ(ω) = υ • λ(ω ′ ) • ρ(ω) = υ • ρ(ω) • λ(ω ′ ) = ω ′ • ρ(υ • ρ(ω)). Since ω ′ ∈ was arbitrary and separates the points of A, we can conclude ρ(υ•ρ(ω)) = ρ(υ)ρ(ω). A similar argument proves the remaining equation. The first part of the preceding result can be rewritten as follows. 6.2.4. Corollary. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. 
Then for all υ ∈ A ⊔ and a, b, c ∈ A, a((υ ⊗ µ B ι)(∆ B (b)(1 ⊗ c))) = aλ(υ)(b)c = (υ ⊗ µ C ι)((1 ⊗ a)∆ C (b))c, (6.7) a((ι ⊗ µ B υ)(∆ B (b)(c ⊗ 1))) = aρ(υ)(b)c = ((ι ⊗ µ C υ)((a ⊗ 1)∆ C (b)))c. (6.8) The results above imply that Â is an algebra and A and A ⊔ are Â-bimodules: 6.2.5. Theorem. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. Then the subspace Â = A · φ ⊆ A ⊔ is an algebra and A and A ⊔ are Â-bimodules with respect to the products given by ωω ′ := ω • ρ(ω ′ ) = ω ′ • λ(ω) for all ω, ω ′ ∈ Â, ω * a = ρ(ω)(a), a * ω = λ(ω)(a) for all a ∈ A, ω ∈ Â, ωυ = υ • λ(ω), υω = υ • ρ(ω) for all υ ∈ A ⊔ , ω ∈ Â. As such, all are non-degenerate, Â and A are idempotent, and A and A ⊔ are faithful. Proof. We first show that the product defined on Â takes values in Â again. Let ω, ω ′ ∈ Â. Then ω • ρ(ω ′ ) = ω ′ • λ(ω) by Proposition 6.2.3. To see that this functional lies in Â, write ω = a · φ and ω ′ = b · ψ with a, b ∈ A and a ⊗ b = ∑ i ∆ B (d i )(e i ⊗ 1) with d i , e i ∈ A. Then (ω • ρ(ω ′ ))(c) is equal to (φ ⊗ µ B φ)(∆ B (c)(a ⊗ b)) = ∑ i (φ ⊗ µ B φ)(∆ B (cd i )(e i ⊗ 1)) = ∑ i φ( C φ C (cd i )e i ) = ∑ i φ(cd i C φ C (e i )) for all c ∈ A and hence ω • ρ(ω ′ ) = f · φ if ω = a · φ, ω ′ = b · φ, f = ( C φ C ⊗ ι)(T −1 λ (a ⊗ b)). (6.9) Proposition 6.2.3 now implies that the products defined above turn Â into an algebra and A and A ⊔ into Â-bimodules. Equation (6.9) and bijectivity of the canonical maps T λ , T ρ imply that Â is idempotent as an algebra and A is idempotent as an Â-bimodule. These facts and non-degeneracy of the pairing (υ, a) → υ(a) between Â and A imply that A is non-degenerate and faithful as an Â-bimodule. But A being faithful and idempotent as an Â-bimodule implies that the algebra Â is non-degenerate, and that A ⊔ is non-degenerate and faithful as an Â-bimodule. In [16], we show that the algebra Â constructed above can be endowed with the structure of a measured regular multiplier Hopf algebroid again, which can be regarded as a generalized Pontrjagin dual to the original measured regular multiplier Hopf algebroid. We shall also need the following relations. 6.2.6. Lemma. Let υ ∈ A ⊔ . Then the following relations hold: (1) ρ(υ) • S = S • λ(υ • S) and λ(υ) • S = S • ρ(υ • S); (2) for all x, x ′ , x ′′ ∈ B, y, y ′ , y ′′ ∈ C, ρ(S B (x ′′ )x · υ · y ′′ x ′ )(yby ′ ) = S C (y ′′ )yρ(υ)(xbx ′ )y ′ x ′′ , λ(x ′′ y · υ · S C (y ′′ )y ′ )(xbx ′ ) = y ′′ xλ(υ)(y ′ by)x ′ S B (x ′′ ); (3) if (A, µ B , µ C , B ψ B , C φ C ) is a measured multiplier Hopf * -algebroid, then we have ρ( * • υ • * ) = * • ρ(υ) • * and λ( * • υ • * ) = * • λ(υ) • * . Proof. Straightforward. 6.3. The modular element. We now determine the behaviour of the comultiplication, counit and antipode on the modular elements relating a left integral φ to the right integrals φ • S −1 and φ • S. 6.3.1. Theorem. Let (A, µ B , µ C , B ψ B , C φ C ) be a measured regular multiplier Hopf algebroid. Write ψ − = φ • S −1 and ψ + = φ • S. Then there exist unique invertible multipliers δ − , δ + ∈ M (A) such that ψ + = δ + · φ and ψ − = φ · δ − . These elements satisfy B φ(a)δ + = λ(φ)(a) = δ − φ B (a) for all a ∈ A, S(δ + ) = (δ − ) −1 , ε · δ − = ε = δ + · ε, ∆ B (δ + ) = δ + ⊗ δ + , ∆ B (δ − ) = δ + ⊗ δ − , ∆ C (δ − ) = δ − ⊗ δ − , ∆ C (δ + ) = δ − ⊗ δ + . If (A, µ B , µ C , B ψ B , C φ C ) is a measured multiplier Hopf * -algebroid, then δ + = (δ − ) * . Finally, if σ φ (M (B)) = M (B), then xδ − = δ − S 2 (σ φ (x)), xδ + = δ + σ φ (S 2 (x)) for all x ∈ M (B), and δ − y = S −2 (σ ψ − (y))δ − , yδ + = δ + σ ψ + (S −2 (y)) for all y ∈ M (C). Proof. By Proposition 6.2.3 and (6.2), φ(λ(φ)(a)b) = ((b · φ) • λ(φ))(a) = (φ • ρ(b · φ))(a) = ((φ • S −1 ) • ρ(φ · a))(b).
Another application of Proposition 6.2.3 and (6.1) shows that this is equal to ((φ · a) • λ(ψ − ))(b) = (φ · a)( B ψ − B (b)) = φ(a B ψ − B (b)) = ψ − (φ B (a)b) = φ(δ − φ B (a)b) for all a, b ∈ A, and hence λ(φ)(a) = δ − φ B (a) for all a ∈ A. A similar calculation shows that λ(φ)(a) = B φ(a)δ + for all a ∈ A. We have (δ − ) −1 = S(δ + ) because φ = (δ + · φ) • S −1 = (φ • S −1 ) · S(δ + ) = φ · δ − S(δ + ). The properties of the counit imply that ε(δ − φ B (a)b) = ε(λ(φ)(a)b) = (φ ⊗ µ B ε)(∆ B (a)(1 ⊗ b)) = φ(a B ε(b)) = ε(φ B (a)b) for all a, b ∈ A, whence ε · δ − = ε. A similar calculation shows that δ + · ε = ε. The relations for the comultiplication require a bit more work. For all a, b ∈ A, (φ ⊗ µ B φ)(∆ B (δ − a)(1 ⊗ b)) = φ(λ(φ)(δ − a)b) = φ(δ − φ B (δ − a)b) = (ψ − ⊗ µ B ψ − )(∆ B (a)(1 ⊗ b)). The map T ρ being surjective, we can conclude that for all c, d ∈ A, (φ ⊗ µ B φ)(∆ B (δ − )(c ⊗ d)) = (ψ − ⊗ µ B ψ − )(c ⊗ d) = φ(δ − S B (φ B (δ − c))d). Since φ B (δ − c)S −1 (δ − ) = φ B (δ − c)(δ + ) −1 = (δ − ) −1 B φ(δ − c), the expression above equals φ(S(φ B (δ − c)S −1 (δ − ))d) = φ(S((δ − ) −1 B φ(δ − c))d) = φ(S B ( B φ(δ − c))S(δ − ) −1 d) = (φ ⊗ µ B φ)(δ − c ⊗ S(δ − ) −1 d). Consequently, ∆ B (δ − ) = δ − ⊗ S(δ − ) −1 . Since the antipode reverses the comultiplication (see (2.15)), we can conclude ∆ C (δ + ) = ∆ C (S −1 (δ − ) −1 ) = δ − ⊗ S −1 (δ − ) −1 = δ − ⊗ δ + . A similar argument shows that ∆ B (δ − ) = δ + ⊗ δ − . Let us compute ∆ C (δ − ). For all a ∈ A, ∆ C (δ − )(1 ⊗ φ B (a)) = ∆ C (δ − φ B (a)) = ∆ C ( B φ(a)δ + ) = δ − ⊗ B φ(a)δ + = (δ − ⊗ δ − )(1 ⊗ φ B (a)). Since φ B (A)A = A and A C ⊗ A C is non-degenerate as a right module over 1 ⊗ A by assumption, this relation implies ∆ C (δ − ) = δ − ⊗ δ − . A similar reasoning shows that ∆ B (δ + ) = δ + ⊗ δ + .
If (A, µ B , µ C , B ψ B , C φ C ) is a measured multiplier Hopf * -algebroid, then φ(a * (δ − ) * ) = φ(δ − a) = φ • S −1 (a) = φ(S −1 (a) * ) = φ(S(a * )) = φ(a * δ + ) for all a ∈ A, where we used (2.19), and hence (δ − ) * = δ + . Finally, suppose that σ φ (M (B)) = M (B). Of the intertwining relations for δ + , δ − and multipliers of B or C, we only prove the first one; the others follow similarly. From Theorem 6.1.3, we conclude that for all a ∈ A and x ∈ B, δ − S 2 (σ φ (x))φ B (a) = δ − φ B (xa) = λ(φ)(xa) = xλ(φ)(a) = xδ − φ B (a). Let (A, µ B , µ C , C φ C , B ψ B ) be a measured regular multiplier Hopf algebroid. Suppose that A is proper and that σ φ (B) = B. Under a mild non-degeneracy assumption, the left integral φ can be rescaled such that it becomes a left and right integral, like a Haar integral in the unital case except for the normalization. To formulate this condition, we use the following observation. By (6.4), we can define a multiplier B φ(y) ∈ M (B) such that for all x ∈ B, x B φ(y) = B φ(xy) = B φ(yx) = B φ(y)(σ φ • S 2 ) −1 (x). (6.10) 6.3.2. Theorem. Let (A, µ B , µ C , C φ C , B ψ B ) be a measured regular multiplier Hopf algebroid. Assume that A is proper, that σ φ (M (B)) = M (B) and that B φ(C) contains an invertible multiplier z. Then the functional h := z −1 · φ is a left and a right integral for (A, µ B , µ C ), and (A, µ B , µ C , B h B , C h C ) is a measured regular multiplier Hopf algebroid. Proof. Denote by δ + ∈ M (A) the multiplier satisfying δ + · φ = φ • S as in Theorem 6.3.1. Let a ∈ A, x, x ′ ∈ B and choose y ∈ C such that B φ(y) = z. Then xzδ + S(x ′ )a = B φ(xy)δ + S(x ′ )a = λ(φ)(xy)S(x ′ )a = ( B φ ⊗ ι)(∆ B (xy)(1 ⊗ S(x ′ )a) which is equal to ( B φ ⊗ ι)(yx ′ ⊗ xa) = S(x ′ z)xa = xS(z)S(x ′ )a. Since x, x ′ ∈ B and a ∈ A were arbitrary, we can conclude that δ + = z −1 S(z). The functional z −1 · h is a left integral by Corollary 3.4.10, and clearly full and faithful. 
Subsequently using Lemma 4.2.6, the definition of δ + and the formula above, we find h(S(a)) = φ(S(a)z −1 ) = φ(S(S −1 (z −1 )a)) = φ(S −1 (z −1 )aδ + ) = φ(aδ + S(z −1 )) = φ(az −1 ) = h(a) for all a ∈ A. By Corollary 3.4.10, h • S = h is also a right integral. Let us look at the examples listed in Subsection 5.2 again: 6.3.3. Example. Let G be a locally compact, étale Hausdorff groupoid and let µ be a Radon measure on the space of units G 0 with full support. Consider the tuple (A, µ B , µ C , B ψ B , C φ C ) formed by the multiplier Hopf * -algebroid of functions on G defined in Example 2.4.3, the base weight defined in (3.16) and the left-and the right-invariant maps defined in (3.17). Note that the compositions φ = µ C • C φ C and ψ = µ B • B ψ B satisfy ψ = φ • S = φ • S −1 . We saw in Example 5.2.3 that this tuple is a measured multiplier Hopf * -algebroid if the measure µ is continuously quasi-invariant. Conversely, if the tuple is a measured multiplier Hopf * -algebroid, then by Theorem 6.3.1, ψ = δ · φ with δ := δ + = δ − so that µ is continuously quasi-invariant with Radon-Nikodym derivative D = δ −1 . For the measured multiplier Hopf algebroids associated to the convolution algebra of a locally compact, étale, Hausdorff groupoid, see Example 5.2.4, and to a tensor product A = C ⊗ B, see Example 5.2.5, the modular elements δ − and δ + are evidently trivial. 6.3.4. Example. Consider the measured regular multiplier Hopf algebroid associated to a two-sided crossed product C#H#B and suitable H-invariant functionals µ B and µ C as in Example 5.2.7. By [21,Proposition 3.10], H has a modular element δ H which satisfies φ H • S H = δ H · φ H , and therefore φ H • S −1 H = φ H · S H (δ −1 H ) = φ H · δ H . Short calculations show that the modular multipliers δ + and δ − of Theorem 6.3.1 coincide with δ H . 6.4. Faithfulness of integrals. Non-zero integrals on multiplier Hopf algebras are always faithful [21]. 
We now prove a corresponding statement for integrals on regular multiplier Hopf algebroids, where the former need to be full, the latter locally projective, and the base algebra full. Note that all of these assumptions become vacuous in the case of multiplier Hopf algebras. Proof. We only prove the assertion for left integrals, closely following the argument in [21]. A similar reasoning applies to right integrals. Let φ be a full left integral on a projective regular multiplier Hopf algebroid A with base weight (µ B , µ C ), let a ∈ A and suppose a · φ = 0. Since µ C is faithful, we can conclude that then also a · C φ C = 0. Let b ∈ A and υ B ∈ Hom(A B , B B ). We show that ε(c) = 0, where c := λ(υ B · b)(a),(6.11) and then we can conclude from Lemma 6.1.1 that (µ B • υ B )(ba) = (ε • λ(υ B · b))(a) = ε(c) = 0. Using the facts that µ B is faithful, b ∈ A is arbitrary and A is non-degenerate, and that υ B ∈ (A B ) ∨ is arbitrary and (A B ) ∨ separates the points of A because the module A B is locally projective, we can deduce that a = 0. Let us prove (6.11). Lemma 6.1.1 and (6.2) imply for all d ∈ A that ρ(φ C · d)(c) = (λ(υ B · b) • ρ(φ C · d))(a) = (λ(υ B · b) • S • ρ(a · C φ))(d) = 0, and hence for all f ∈ A and ω ∈ A ∨ of the form ω = µ B • ω B with ω B ∈ Hom(A B , B B ), 0 = (ω · f )(ρ(φ C · d)(c)) = (φ · d)(λ(ω B · f )(c)) . Writing (f ⊗ 1)∆ C (c) = j f j ⊗ c j with f j , c j ∈ A, the equation above becomes 0 = j φ(dc j S −1 (ω B (f j ))). (6.12) Since A is locally projective, we can find finitely many ω i B ∈ Hom(A B , B B ) and e i ∈ Hom(B B , A B ) such that i e i (ω i B (f j )) = f j for all j, and since B has local units, we can without loss of generality assume that e i ∈ A in the sense that e i (x) = e i x for all i. By Theorem 6.1.2, we can find elements d i ∈ A such that S −1 (e i ) · φ = φ · d i for all i. Now, (6.12) implies 0 = i,j φ(d i c j S −1 (ω i B (f j ))) = i,j φ(c j S −1 (e i ω i B (f j ))) = j φ(c j S −1 (f j )). 
where we used Lemma 6.1.1 again. The second diagram in (2.14) shows that j c j S −1 (f j ) = j S −1 (f j S(c j )) = S −1 (f S B ( B ε(c))) = B ε(c)S −1 (f ), and hence 0 = φ( B ε(c)S −1 (f )) = µ B ( B ε(c) B φ(S −1 (f )) ). Since f ∈ A was arbitrary, B φ is surjective and µ B is faithful, we can conclude B ε(c) = 0 and hence ε(c) = 0. Modification Our key assumption on a base weight for a regular multiplier Hopf algebroid, the counitality condition µ B • B ε = µ C • ε C , turned out to be quite restrictive in several examples considered in subsection 5.2, where it corresponded to invariance of µ B and µ C under certain actions of an underlying groupoid or a multiplier Hopf algebra. In case that the functionals are only quasi-invariant with respect to these actions, one can independently modify the left and the right comultiplication and accordingly the left and right counits such that µ B and µ C will form a base weight for this modified multiplier Hopf algebroid. We now describe this modification, which is inspired by [23], in a systematic manner, starting with left and right multiplier bialgebroids and then turning to multiplier Hopf algebroids. Examples will be given in the next section. Given an automorphism θ of B, we write θ A and A θ when we regard A as a left or right B-module via x · a := s(θ(x))a or a · x := as(θ(x)), respectively, and θ A and A θ when x · a := at(θ −1 (x)) or a · x := t(θ −1 (x))a for all a ∈ A and x ∈ B. Note that then the quotients θ A ⊗ A B and B A ⊗ A θ of A ⊗ A coincide. Suppose that Θ λ and Θ ρ are automorphisms of A which, when extended to multipliers, satisfy Θ λ • s = s • θ, Θ ρ • t = t • θ −1 . 
(7.1) Then the isomorphisms Θ λ ⊗ ι, ι ⊗ Θ ρ : B A ⊗ A B → θ A ⊗ A B = A B ⊗ A θ (7.2) induce, by conjugation, isomorphisms Evidently, such modifiers form a group with respect to the composition given by Θ λ ×ι, ι×Θ ρ : End( B A ⊗ A B ) → End( θ A ⊗ A B ) = End( B A ⊗ A θ ).(θ, Θ λ , Θ ρ )(θ ′ , Θ ′ λ , Θ ′ ρ ) = (θθ ′ , Θ λ Θ ′ λ , Θ ′ ρ Θ ρ ).T λ = (ι ⊗ Θ ρ ) • T λ = (Θ λ ⊗ ι) • T λ • (Θ λ ⊗ ι) −1 , (7.4) T ρ = (Θ λ ⊗ ι) • T ρ = (ι ⊗ Θ ρ ) • T ρ • (ι ⊗ Θ ρ ) −1 . (7.5) Proof. To see that the space A B ⊗ A θ = θ A ⊗ A B is non-degenerate as a right module over A ⊗ 1 and 1 ⊗ A, use the isomorphisms (7.2). To check that∆ is bilinear with respect to s and t and co-associative is straightforward, for example, ∆(s(x)as(x ′ )) = (Θ λ× ι)((1 ⊗ s(x))∆(a)(1 ⊗ s(x ′ ))) = (1 ⊗ s(x))∆(a)(1 ⊗ s(x ′ )) for all x, x ′ ∈ B and a ∈ A, and (∆ ⊗ ι)(∆(b)(1 ⊗ c))(a ⊗ 1 ⊗ 1) = (Θ λ ⊗ ι ⊗ Θ ρ )((∆ ⊗ ι)(∆(b)(1 ⊗ c))(a ⊗ 1 ⊗ 1)) = (Θ λ ⊗ ι ⊗ Θ ρ )((ι ⊗ ∆)(∆(b)(a ⊗ 1))(1 ⊗ 1 ⊗ c)) = (ι ⊗∆)(∆(b)(a ⊗ 1))(1 ⊗ 1 ⊗ c) for all a, b, c ∈ A. The formulas (7.4) and (7.5) follow from the definition of∆. We callà B the modified left multiplier bialgebroid or briefly modification associated to (θ, Θ λ , Θ ρ ). 7.1.3. Remark. In the situation above, the map (θ ′ , Θ ′ λ , Θ ′ ρ ) → (θ ′ θ −1 , Θ ′ λ Θ −1 λ , Θ ′ ρ Θ −1 ρ ) is aΘ λ • t = t, Θ ρ • s = s, ∆ • Θ λ = (ι×Θ λ ) • ∆, ∆ • Θ ρ = (Θ ρ ×ι) • ∆. Proof. We only prove the relations involving Θ λ ; similar arguments apply to Θ ρ . Formula (7.5) implies (Θ λ ⊗ ι)((t(x) ⊗ 1)T ρ (a ⊗ b)) = (Θ λ ⊗ ι)(T ρ (t(x)a ⊗ b)) = (t(x) ⊗ 1)(Θ λ ⊗ ι)(T ρ (a ⊗ b)) for all a, b ∈ A and x ∈ B. Applying slice maps, we find that t( x)Θ λ (c) = Θ λ (t(x)c) for all elements c ∈ A of the form (ι ⊗ ω)(T ρ (a ⊗ b)), where a, b ∈ A and ω ∈ Hom(A B , B B ). Since A B is full, such elements span A and hence Θ λ • t = t. 
Next, the formula for T̃ ρ and the second diagram in (2.5) show that the outer cell and the left cell in the following diagram commute. [The diagram, built from the maps ι ⊗ T ρ , T λ ⊗ ι and ι ⊗ Θ λ ⊗ ι on three-fold module tensor products of A, did not survive the transcription and is omitted here.] Here, we use the notation explained in Notation 2.1.2. We apply slice maps of the form ι ⊗ ι ⊗ ω, where ω ∈ Hom(A B , B B ), use the assumption that A B is full, and conclude that (ι ⊗ Θ λ )T λ = T λ (ι ⊗ Θ λ ) and hence ∆ • Θ λ = (ι×Θ λ )∆. The preceding result implies that in the full case, the modified left multiplier bialgebroid is isomorphic to the original one in the following sense. 7.1.5. Definition. An isomorphism between left multiplier bialgebroids A 1 = (A 1 , B 1 , s 1 , t 1 , ∆ 1 ) and A 2 = (A 2 , B 2 , s 2 , t 2 , ∆ 2 ) is a pair of isomorphisms Θ : A 1 → A 2 and θ : B 1 → B 2 such that for all a, b, c ∈ A, Θ • s 1 = s 2 • θ, Θ • t 1 = t 2 • θ, (Θ ⊗ Θ)(∆ 1 (a)(b ⊗ c)) = ∆ 2 (Θ(a))(Θ(b) ⊗ Θ(c)). 7.1.6. Proposition. Let A B be a full left multiplier bialgebroid with a modifier (θ, Θ λ , Θ ρ ). Then (Θ λ , θ) and (Θ ρ , ι) are isomorphisms from A B to the modification Ã B . Proof. We only prove the assertion for (Θ λ , θ). Lemma 7.1.4 and the definition of ∆̃ imply that for all a, b, c ∈ A, ∆̃(Θ λ (a))(Θ λ (b) ⊗ Θ λ (c)) = (Θ λ ⊗ ι)(∆(Θ λ (a))(b ⊗ Θ λ (c))) = (Θ λ ⊗ Θ λ )(∆(a)(b ⊗ c)). As in subsection 6.1, we consider convolution operators λ( B υ), ρ(ω B ) : A → L(A) associated to module maps B υ ∈ Hom( B A, B B) and ω B ∈ Hom(A B , B B ) by the formulas λ( B υ)(a)b := ( B υ ⊗ ι)(∆(a)(1 ⊗ b)), ρ(ω B )(a)b := (ι ⊗ ω B )(∆(a)(b ⊗ 1)). 7.1.7. Proposition. (1) If A B is full, then θ • B ε • Θ −1 λ = B ε • Θ −1 ρ . [Part of the statement and of the proof, including a commutative diagram relating T ρ , T̃ ρ , B ε ⊗ ι and the maps ι ⊗ Θ ρ and Θ ρ , did not survive the transcription.] The diagram commutes and shows that Θ ρ (λ( B ε)(a)b) = aΘ ρ (b) for all a, b ∈ A. Suppose that A B is unital in the sense that the algebras A, B and the maps s, t, ∆ B are unital.
Then modifiers have a nice description in terms of the maps from A to B considered above. Consider the convolution product on the space Hom( B A B , B B B ), given by B υ B * B ω B := ( B υ B ⊗ B ω B ) • ∆ : a → B ω B (a (2) ) B υ B (a (1) ), where B υ B , B ω B ∈ Hom( A B B , B B B ) and A B ×A B is identified with the Takeuchi prod- uct inside A B ⊗ A B . If it exists, then the left counit B ε is the unit for this product. We call an element B χ B ∈ Hom( B A B , B (1) all modifiers (ι, Θ λ , Θ ρ ) for A B ; B B ) a character if for all a, b ∈ A, B χ B (ab) = B χ B (as( B χ B (b))) = B χ B (at( B χ B (b))). (2) all automorphisms Θ λ of A satisfying Θ λ •s = s, Θ λ •t = t, ∆•Θ λ = (ι×Θ λ )•∆; (3) all automorphisms Θ ρ of A satisfying Θ ρ • t = t, Θ ρ • s = s, ∆ • Θ ρ = (Θ ρ ×ι); (4) all invertible characters B χ B ∈ Hom( B A B , B B B ). Proof. For every modifier (ι, Θ λ , Θ ρ ), the automorphisms Θ λ and Θ ρ satisfy the conditions in (2) := B ε • Θ −1 λ is its convolution inverse because B χ B * Bχ B = B χ B • (ι ⊗ Bχ B ) • ∆ = B χ B • (ι ⊗ B ε) • ∆ • Θ −1 λ = B χ B • Θ −1 λ = B ε and similarly Bχ B * B χ B = B ε. Similar arguments show that for every automorphism Θ ρ as in (3), the map B ε • Θ ρ is an invertible character. Finally, assume that B χ B is an invertible character as in (4). Then the maps Θ λ := ρ( B χ B ) and Θ ρ := λ( B χ B ) are bijections of A because B χ B is invertible in the convolution algebra, and they are automorphisms because B χ B is a character. Using co-associativity, one easily verifies that (ι, Θ λ , Θ ρ ) is a modifier. Right multiplier bialgebroids can be modified similarly. 7.1.9. Definition. A modifier of a right multiplier bialgebroid A C = (A, C, s, t, ∆) consists of an automorphism θ of C and automorphisms λ Θ and ρ Θ of A satisfying λ Θ • t = t • θ −1 , ρ Θ • s = s • θ, ( λ Θ×ι) • ∆ = (ι× ρ Θ) • ∆. The results obtained above carry over to right multiplier bialgebroids in a straightforward way. 
In particular, for every modifier (θ, _λΘ, _ρΘ) of a right multiplier bialgebroid A_C = (A, C, s, t, ∆), we obtain a modified right multiplier bialgebroid Ã_C := (A, C, s, t ∘ θ^{-1}, ∆̃), where ∆̃ = (_λΘ×ι) ∘ ∆ = (ι×_ρΘ) ∘ ∆. If A_C is full, then we have isomorphisms (_λΘ, ι) and (_ρΘ, θ) from A_C to Ã_C, where the notion of an isomorphism between right multiplier bialgebroids is evident.

7.2.1. Definition. A modifier for a multiplier bialgebroid A = (A, B, C, S_B, S_C, ∆_B, ∆_C) is a tuple (Θ_λ, Θ_ρ, _λΘ, _ρΘ) such that

(1) Θ_λ extends to an automorphism of B and (Θ_λ|_B, Θ_λ, Θ_ρ) is a modifier of the associated left multiplier bialgebroid A_B,
(2) _ρΘ extends to an automorphism of C and (_ρΘ|_C, _λΘ, _ρΘ) is a modifier of the associated right multiplier bialgebroid A_C.

We call such a modifier trivial on the base if all four automorphisms act trivially on B and on C. We call it self-adjoint if A is a multiplier *-bialgebroid and * ∘ Θ_λ = _λΘ ∘ * and * ∘ Θ_ρ = _ρΘ ∘ *. Note that every modifier as above satisfies

Θ_ρ ∘ S_B ∘ Θ_λ = S_B and _λΘ ∘ S_C ∘ _ρΘ = S_C,  (7.6)

and that modifiers form a group with respect to the composition

(Θ_λ, Θ_ρ, _λΘ, _ρΘ) · (Θ′_λ, Θ′_ρ, _λΘ′, _ρΘ′) := (Θ_λΘ′_λ, Θ′_ρΘ_ρ, _λΘ′ _λΘ, _ρΘ _ρΘ′).

7.2.2. Proposition. Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a multiplier bialgebroid with a modifier (Θ_λ, Θ_ρ, _λΘ, _ρΘ) and let

S̃_B := S_B ∘ Θ_λ|_B^{-1},  S̃_C := S_C ∘ _ρΘ|_C^{-1},  ∆̃_B := (Θ_λ×ι) ∘ ∆_B,  ∆̃_C := (_λΘ×ι) ∘ ∆_C.

Then Ã = (A, B, C, S̃_B, S̃_C, ∆̃_B, ∆̃_C) is a multiplier bialgebroid. If moreover A is a multiplier *-bialgebroid and the modifier is self-adjoint, then also Ã is a multiplier *-bialgebroid.

Proof. Assume that A is a multiplier *-bialgebroid and that the modifier is self-adjoint. Then

S̃_B ∘ * ∘ S̃_C ∘ * = S_B ∘ Θ_λ^{-1} ∘ * ∘ S_C ∘ _ρΘ^{-1} ∘ * = S_B ∘ * ∘ _λΘ^{-1} ∘ S_C ∘ _ρΘ^{-1} ∘ * = S_B ∘ * ∘ S_C ∘ * = ι_C

and similarly S̃_C ∘ * ∘ S̃_B ∘ * = ι_B. Finally, self-adjointness of the modifier immediately implies that ∆̃_B and ∆̃_C satisfy condition (3) in Definition 2.3.4.
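The group law for modifiers stated above can be checked directly on the defining relation (Θ_λ×ι) ∘ ∆ = (ι×Θ_ρ) ∘ ∆. The following computation is ours, not in the source; it explains, in particular, why the right legs compose in the reversed order Θ′_ρΘ_ρ:

```latex
% Sketch (ours): composing two modifiers of the same (left) multiplier bialgebroid.
% The left legs compose as \Theta_\lambda\Theta'_\lambda, forcing the right legs
% to compose in reversed order \Theta'_\rho\Theta_\rho.
\begin{aligned}
(\Theta_\lambda\Theta'_\lambda \times \iota)\circ\Delta
  &= (\Theta_\lambda \times \iota)\circ(\Theta'_\lambda \times \iota)\circ\Delta
   = (\Theta_\lambda \times \iota)\circ(\iota \times \Theta'_\rho)\circ\Delta\\
  &= (\iota \times \Theta'_\rho)\circ(\Theta_\lambda \times \iota)\circ\Delta
   = (\iota \times \Theta'_\rho)\circ(\iota \times \Theta_\rho)\circ\Delta
   = (\iota \times \Theta'_\rho\Theta_\rho)\circ\Delta.
\end{aligned}
```

Here the third equality holds because the two maps act on different tensor legs, and the analogous computation for _λΘ, _ρΘ yields the remaining two components of the composition.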
In the situation above, we call Ã the modification of A. We can now formulate the main result of this section:

7.2.3. Theorem. Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a regular multiplier Hopf algebroid with a modifier (Θ_λ, Θ_ρ, _λΘ, _ρΘ). Then the modification Ã is a regular multiplier Hopf algebroid again. The counits and antipode _Bε, ε_C, S of A and the counits and antipode _Bε̃, ε̃_C, S̃ of Ã are related by

_Bε̃ = _Bε ∘ Θ_ρ^{-1} = Θ_λ ∘ _Bε ∘ Θ_λ^{-1},  (7.7)
ε̃_C = ε_C ∘ _λΘ^{-1} = _ρΘ ∘ ε_C ∘ _ρΘ^{-1},  (7.8)
S̃ = Θ_ρ ∘ S ∘ _ρΘ^{-1} = _λΘ ∘ S ∘ Θ_λ^{-1}.  (7.9)

If A is a multiplier Hopf *-algebroid and the modifier is self-adjoint, then also Ã is a multiplier Hopf *-algebroid.

Proof. The canonical maps of the multiplier bialgebroid Ã are bijective by construction, see Proposition 7.1.2, and the formulas for the counits follow from Proposition 7.1.7 and its right-handed analogue. To prove (7.9), consider the following diagrams:

[two commutative diagrams built from the canonical maps T_ρ, T̃_ρ, T_λ, T̃_λ, the maps ε_C ⊗ ι, S ⊗ ι, S̃ ⊗ ι, ΣT_ρ^{-1}, the multiplication m and the automorphisms Θ_λ^{±1}, Θ_ρ, _λΘ; not reproducible from the extraction]

In the diagram on the left hand side, the outer and inner square commute by the defining property of the antipode, and since the left, the right and the upper cells commute, so does the lower one, showing that S̃ = Θ_ρ ∘ S ∘ _ρΘ^{-1}. In the diagram on the right hand side, the outer and inner square commute by [18, Proposition 5.8]. Since the left, the right and the lower cell commute as well, so does the upper one, showing that S̃ = _λΘ ∘ S ∘ Θ_λ^{-1}.

The following example is the counterpart to Van Daele's modification [23]:

7.2.4. Example.
Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a regular multiplier Hopf algebroid and let u, v ∈ B be invertible. Then the formulas

Θ_λ(a) = v^{-1}av,  Θ_ρ(a) = S_B(v^{-1})aS_B(v),  _λΘ(a) = uau^{-1},  _ρΘ(a) = S_C^{-1}(u)aS_C^{-1}(u^{-1})

define a modifier (Θ_λ, Θ_ρ, _λΘ, _ρΘ).

Partial integrals do not change when a multiplier bialgebroid is modified and the modifier is trivial on the base:

7.2.5. Lemma. Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a regular multiplier Hopf algebroid with a modifier (Θ_λ, Θ_ρ, _λΘ, _ρΘ) that is trivial on the base. Then the partial integrals of A and of the modification Ã coincide.

Proof. Straightforward and left to the reader.

As outlined above and illustrated in the next section, some naturally appearing multiplier Hopf algebroids admit a counital base weight only after modification. We now investigate when such a modification exists. Similarly as before, we consider for a map _Cχ_C ∈ Hom(_CA_C, _CC_C) the convolution operators λ(_Cχ_C), ρ(_Cχ_C) : A → R(A) defined by

aλ(_Cχ_C)(b) = (_Cχ_C ⊗ ι)((1 ⊗ a)∆_C(b)),  aρ(_Cχ_C)(b) = (ι ⊗ _Cχ_C)((a ⊗ 1)∆_C(b)).

7.2.6. Proposition. Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a regular multiplier Hopf algebroid with an antipodal base weight (µ_B, µ_C). The following conditions are equivalent:

(1) there exists a modifier (Θ_λ, Θ_ρ, _λΘ, _ρΘ), trivial on the base, such that (µ_B, µ_C) is a counital base weight for the associated modification Ã;
(2) there exists a functional χ ∈ A^⊔ such that
 (a) _Bχ ∈ Hom(_BA_B, _BB_B) and χ_C ∈ Hom(_CA_C, _CC_C),
 (b) the maps ρ(_Bχ), λ(_Bχ), ρ(χ_C), λ(χ_C) are automorphisms of A.

Proof. Suppose that (Θ_λ, Θ_ρ, _λΘ, _ρΘ) is a modifier as in (1). Then the counit functional χ := ε̃ of the associated modification Ã satisfies (2a) and (2b) by Proposition 7.1.7 and by its right-handed analogue. Conversely, suppose that χ ∈ A^⊔ satisfies the conditions in (2), and denote by Θ_λ, Θ_ρ, _λΘ, _ρΘ the inverses of the convolution operators in (2b).
Using coassociativity, one easily verifies that these automorphisms form a modifier of A. By Proposition 7.1.7, the left counit of the associated modification Ã satisfies λ(_Bε̃) = Θ_ρ^{-1} = λ(_Bχ), whence _Bε̃ = _Bχ. Likewise, ε̃_C = χ_C. Therefore, µ_B ∘ _Bε̃ = χ = µ_C ∘ ε̃_C.

8. Examples of modified measured multiplier Hopf algebroids

For several examples of regular multiplier Hopf algebroids considered in the preceding sections, our assumptions on base weights translated into quite restrictive invariance conditions. We now show that if weaker and quite natural quasi-invariance assumptions hold, then the original multiplier Hopf algebroid can be modified in such a way that the modification meets all of our assumptions and becomes a measured multiplier Hopf algebroid.

Let furthermore µ be a Radon measure on G^0 and consider the associated functional µ̂ : C_c(G^0) → ℂ, f ↦ ∫_{G^0} f dµ. We saw in Example 5.2.4 that the base weight (µ̂, µ̂) for Â is counital if and only if µ is invariant. Suppose now that µ is only continuously quasi-invariant in the sense explained in Example 3.3.6, that is, the measures ν and ν^{-1} on G defined by (3.18) are related by a continuous Radon-Nikodym derivative D ∈ C(G) such that ν = Dν^{-1}. We can then modify Â and obtain a measured multiplier Hopf *-algebroid as follows.

The Radon-Nikodym cocycle D yields a one-parameter family of automorphisms

σ_t : C_c(G) → C_c(G),  (σ_t(f))(γ) = f(γ)D^t(γ),

on the convolution algebra C_c(G). The automorphisms Θ_λ := Θ_ρ := σ_{1/2} and _λΘ := _ρΘ := σ_{-1/2} form a modifier of Â which is trivial on the base and self-adjoint. The associated modification is the multiplier Hopf *-algebroid Ã = (Â, B̂, Ĉ, Ŝ_B̂, Ŝ_Ĉ, ∆̃_B̂, ∆̃_Ĉ), where Â = C_c(G) and B̂ = Ĉ = C_c(G^0) with Ŝ_B̂ = Ŝ_Ĉ = ι_{C_c(G^0)} as before, but

(∆̃_B̂(f)(g ⊗ h))(γ′, γ″) = Σ_{s(γ)=r(γ′)} f(γ)D^{1/2}(γ)g(γ^{-1}γ′)h(γ^{-1}γ″),
((g ⊗ h)∆̃_Ĉ(f))(γ′, γ″) = Σ_{r(γ)=s(γ′)} f(γ)D^{-1/2}(γ)g(γ′γ^{-1})h(γ″γ^{-1})

for all f, g, h ∈ C_c(G).
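As a sanity check (ours, not spelled out in the source), each σ_t is indeed an automorphism of the convolution algebra C_c(G); the only input is the cocycle identity D(γ_1γ_2) = D(γ_1)D(γ_2) for composable γ_1, γ_2, which the Radon-Nikodym derivative satisfies:

```latex
% Sketch (ours): multiplicativity of \sigma_t on C_c(G) for an etale groupoid G,
% assuming the cocycle identity D(\gamma_1\gamma_2) = D(\gamma_1)D(\gamma_2).
\begin{aligned}
(\sigma_t(f) * \sigma_t(g))(\gamma)
  &= \sum_{\gamma_1\gamma_2 = \gamma} f(\gamma_1)D^t(\gamma_1)\,g(\gamma_2)D^t(\gamma_2)\\
  &= \Bigl(\sum_{\gamma_1\gamma_2 = \gamma} f(\gamma_1)g(\gamma_2)\Bigr)D^t(\gamma)
   = \sigma_t(f*g)(\gamma).
\end{aligned}
```

Moreover σ_t ∘ σ_s = σ_{s+t} holds trivially, so the σ_t form a one-parameter group, as used above for t = ±1/2.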
Here, we use the isomorphisms (2.21). Short calculations show that the antipode remains unchanged, that is, (S̃(f))(γ) = f(γ^{-1}), and the counits _B̂ε̃ and ε̃_Ĉ are given by

(_B̂ε̃(f))(u) = Σ_{r(γ)=u} f(γ)D^{-1/2}(γ),  (ε̃_Ĉ(f))(u) = Σ_{s(γ)=u} f(γ)D^{1/2}(γ).

For the modification Ã, the base weight (µ̂, µ̂) is counital because for all f ∈ C_c(G),

µ̂(_B̂ε̃(f)) = ∫_G fD^{-1/2} dν = ∫_G fD^{1/2} dν^{-1} = µ̂(ε̃_Ĉ(f)).

By Example 3.1.6 and Lemma 7.2.5, the restriction map C_c(G) → C_c(G^0) is left- and right-invariant with respect to the modified comultiplications, and the composition with µ̂ gives a total left and right integral

φ̂ = ψ̂ : C_c(G) → ℂ,  f ↦ ∫_{G^0} f|_{G^0} dµ.

We thus obtain a measured multiplier Hopf *-algebroid (Â, µ̂, µ̂, φ̂, ψ̂). Since φ̂ ∘ S = φ̂, the modular element is trivial. The modular automorphism of φ̂ is σ_1 because for all f, g ∈ C_c(G),

φ̂(f * g) = ∫_G f(γ)g(γ^{-1}) dν(γ) = ∫_G g(γ^{-1})f(γ)D(γ) dν^{-1}(γ) = φ̂(g * σ_1(f)).

8.2. Crossed products for symmetric actions on commutative algebras. Let C be a non-degenerate, idempotent and commutative algebra with a left action of a regular multiplier Hopf algebra (H, ∆_H) which is symmetric in the sense that (2.22) holds, denote by A = C#H the associated crossed product, and consider the regular multiplier Hopf algebroid A = (A, C, C, ι, ι, ∆_B, ∆_C) defined in Example 2.4.6. Suppose moreover that (H, ∆_H) has a left and a right integral φ_H and ψ_H, and define a partial left integral _Cφ_C and a partial right integral _Bψ_B as in (3.7). We saw in Example 5.2.6 that for every faithful H-invariant functional µ, the tuple (A, µ, µ, _Bψ_B, _Cφ_C) is a measured regular multiplier Hopf algebroid. We now show that if µ only satisfies a weaker quasi-invariance condition, then we can modify A so that we obtain a measured regular multiplier Hopf algebroid again. To simplify the discussion, we shall only consider the unital case. We start with a few preliminaries on functionals that are quasi-invariant with respect to an action of a Hopf algebra.
For the application in the next subsection, we drop the commutativity assumption on C and the symmetry assumption for a moment. Recall that a Hopf algebra is regular if its antipode is invertible. Let C be a unital algebra with a left action of a regular Hopf algebra (H, ∆_H) so that C becomes a left H-module algebra. As before, we identify C and H with subalgebras of C#H. A unital one-cocycle for (H, ∆_H) with values in C is a map ω : H → C satisfying the following equivalent conditions:

(1) ω(1_H) = 1_C and ω(hg) = ω(h_(1))(h_(2) ⊲ ω(g)) for all h, g ∈ H;
(2) the map α_ω : H → C#H given by h ↦ ω(h_(1))h_(2) is a unital homomorphism.

We call a faithful functional µ on C quasi-invariant with respect to H if there exists a map D : H → C, h ↦ D_h, such that µ(S(h) ⊲ y) = µ(D_h y) for all h ∈ H, y ∈ C. We then call D the Radon-Nikodym cocycle of µ. This terminology is justified:

8.2.1. Lemma. If µ is a faithful and quasi-invariant functional on C, then its Radon-Nikodym cocycle D is a one-cocycle.

Proof. Let h ∈ H and y ∈ C. Then by definition,

µ(y′(S(h) ⊲ y)) = µ(S(h_(1)) ⊲ ((h_(2) ⊲ y′)y)) = µ(D_{h_(1)}(h_(2) ⊲ y′)y).  (8.1)

Taking y′ = D_g, we find

µ(D_{hg}y) = µ(S(hg) ⊲ y) = µ(D_g(S(h) ⊲ y)) = µ(D_{h_(1)}(h_(2) ⊲ D_g)y).

Since µ is faithful, the assertion follows.

Regard the left action of H on C as a left action of the co-opposite Hopf algebra H^co on the opposite algebra C^op. If C is commutative and the action of H is symmetric, then C^op#H^co is canonically isomorphic to C#H.

8.2.2. Lemma. Let µ be a faithful, quasi-invariant functional on C and suppose that

D_{h_(1)}(h_(2) ⊲ y) = (h_(1) ⊲ y)D_{h_(2)}  (8.2)

for all h ∈ H and y ∈ C. Then there exist automorphisms β_D of C#H and β†_D of C^op#H^co such that

β_D(y#h) = yD_{h_(1)}#h_(2),  β†_D(y#h) = D_{h_(2)}y#h_(1).  (8.3)

Proof. We only prove the assertion on β_D; the existence of β†_D follows similarly.
The formula for β_D defines a homomorphism because the map α_D : h ↦ D_{h_(1)}#h_(2) is a homomorphism and

β_D(1#h)β_D(y#1) = D_{h_(1)}(h_(2) ⊲ y)#h_(3) = (h_(1) ⊲ y)D_{h_(2)}#h_(3) = β_D((h_(1) ⊲ y)#h_(2))

for all h ∈ H and y ∈ C. It is bijective because the map

β̄_D : C#H → C#H,  y#h ↦ y(h_(1) ⊲ D_{S_H(h_(2))})#h_(3),

where S_H denotes the antipode of (H, ∆_H), is inverse to β_D. Indeed, both maps are C-linear on the left hand side and satisfy

β̄_D(β_D(1#h)) = D_{h_(1)}(h_(2) ⊲ D_{S_H(h_(3))})#h_(4) = D_{h_(1)S_H(h_(2))}#h_(3) = 1#h,
β_D(β̄_D(1#h)) = (h_(1) ⊲ D_{S_H(h_(2))})D_{h_(3)}#h_(4) = (h_(1) ⊲ (D_{S_H(h_(3))}(S_H(h_(2)) ⊲ D_{h_(4)})))#h_(5) = (h_(1) ⊲ D_{S_H(h_(2))h_(3)})#h_(4) = 1#h.

We now apply the preceding considerations to our example.

8.2.3. Proposition. Let C be a unital, commutative algebra with a left action of a regular Hopf algebra (H, ∆_H) that is symmetric in the sense that for all h ∈ H and y ∈ C, h_(1) ⊗ (h_(2) ⊲ y) = h_(2) ⊗ (h_(1) ⊲ y). Suppose that µ is a faithful, quasi-invariant functional on C, and that φ_H is an integral on (H, ∆_H). Define the regular multiplier Hopf algebroid A = (A, B, C, S_B, S_C, ∆_B, ∆_C) as in Example 2.4.6, the maps β_D and β†_D as in (8.3), and _Cφ_C as in (3.7). Then:

(1) ((β†_D)^{-1}, (β_D)^{-1}, ι, ι) is a modifier of A;
(2) (µ, µ) is a counital base weight for the associated modification Ã;
(3) (Ã, µ, µ, _Cφ_C, _Cφ_C) is a measured multiplier Hopf algebroid;
(4) the modular automorphism σ^φ of φ = µ ∘ _Cφ_C is given by σ^φ(yh) = yσ_H(h_(2))D_{S^{-1}(h_(1))} for all y ∈ C and h ∈ H, where σ_H denotes the modular automorphism of φ_H.

Proof. (1) Since the action is symmetric, we can apply Lemma 8.2.2 and conclude that β†_D and β_D are automorphisms. They form a modifier of the left multiplier bialgebroid A_B because

(β†_D×ι)(∆_B(y#h)) = (yD_{h_(2)}#h_(1)) ⊗ (1#h_(3)) = (y#h_(1)) ⊗ (D_{h_(2)}#h_(3)) = (ι×β_D)(∆_B(y#h)).

Therefore, their inverses form a modifier as well.
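The map α_D : h ↦ D_{h_(1)}#h_(2) used above is a homomorphism precisely because D is a one-cocycle (Lemma 8.2.1). The following computation is ours, not in the source; it sketches this for a general unital one-cocycle ω, in Sweedler notation, using the crossed-product relation hy = (h_(1) ⊲ y)h_(2) in C#H:

```latex
% Sketch (ours): why condition (1) for a unital one-cocycle \omega implies
% condition (2), i.e. \alpha_\omega(h) = \omega(h_{(1)})h_{(2)} is multiplicative.
% Uses hy = (h_{(1)} \rhd y)h_{(2)} in C\#H and the cocycle identity.
\begin{aligned}
\alpha_\omega(h)\,\alpha_\omega(g)
  &= \omega(h_{(1)})\,h_{(2)}\,\omega(g_{(1)})\,g_{(2)}\\
  &= \omega(h_{(1)})\,(h_{(2)} \rhd \omega(g_{(1)}))\,h_{(3)}g_{(2)}\\
  &= \omega(h_{(1)}g_{(1)})\,h_{(2)}g_{(2)}
   = \alpha_\omega(hg).
\end{aligned}
```

Here the last line uses ω(hg) = ω(h_(1))(h_(2) ⊲ ω(g)) together with ∆_H(hg) = h_(1)g_(1) ⊗ h_(2)g_(2); unitality of α_ω is immediate from ω(1_H) = 1_C.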
(2) Let y ∈ C and h ∈ H. Then by (7.7) and (2.23),

µ(_Bε̃(y#h)) = µ(_Bε(β_D(y#h))) = µ(_Bε(D_{h_(1)}y#h_(2))) = µ(D_h y)

and, because the right comultiplication remains unchanged and the action is symmetric,

µ(ε̃_C(y#h)) = µ(ε_C(h_(1)(S(h_(2)) ⊲ y))) = µ(S(h) ⊲ y).

(3) By Example 3.1.8 and Lemma 7.2.5, _Cφ_C is a partial integral for Ã; by (2), (µ, µ) is a counital base weight; and using the fact that φ_H is faithful [21, Theorem 3.7], it is not difficult to see that φ is faithful as well.

(4) By Lemma 4.2.6, σ^φ(y) = S^2(y) = y for all y ∈ C. Let h, h′ ∈ H and y ∈ C. Then

φ(yhσ_H(h′_(2))D_{S^{-1}(h′_(1))}) = φ(hσ_H(h′_(2))D_{S^{-1}(h′_(1))}y) = µ(D_{S^{-1}(h′_(1))}y)φ_H(hσ_H(h′_(2))) = µ(h′_(1) ⊲ y)φ_H(h′_(2)h) = φ((h′_(1) ⊲ y)h′_(2)h) = φ(h′yh).

The assertion follows.

8.3. Two-sided crossed products. Consider the regular multiplier Hopf algebroid A obtained from a two-sided crossed product A = C#H#B associated to compatible left and right actions of a regular multiplier Hopf algebra (H, ∆_H) on idempotent, non-degenerate algebras C and B with given anti-isomorphisms S_B and S_C as in Example 2.4.7. In Example 5.2.7, we saw that an antipodal, modular base weight (µ_B, µ_C) for A is counital if and only if µ_B and µ_C are invariant with respect to the actions of H. We now show that if µ_B and µ_C are only quasi-invariant, then we can modify A so that we obtain a measured regular multiplier Hopf algebroid. To simplify the discussion, we assume the algebras B, C and H to be unital again.

First, we need further preliminaries about quasi-invariant functionals. Let C be a unital algebra with a left action of a regular Hopf algebra (H, ∆_H) and a faithful, quasi-invariant functional µ. We regard the action also as a left action of the co-opposite Hopf algebra H^co on the opposite algebra C^op again. Given a functional µ on C, we denote by µ^op the corresponding functional on C^op.

8.3.1. Lemma.
Let µ be a faithful, quasi-invariant functional on C that admits a modular automorphism σ such that σ(h ⊲ y) = S^2(h) ⊲ σ(y) for all h ∈ H and y ∈ C. Then:

(1) σ(D_h) = D_{S^2(h)} for all h ∈ H;
(2) the functional µ^op is quasi-invariant and its Radon-Nikodym cocycle is D;
(3) D_{h_(1)}(h_(2) ⊲ y) = (h_(1) ⊲ y)D_{h_(2)} for all h ∈ H and y ∈ C.

Proof. We repeatedly use faithfulness of µ. Let y, y′ ∈ C and h ∈ H.

(1) The relation µ ∘ σ = µ and the assumption on σ imply that

µ(D_h y) = µ(S(h) ⊲ y) = µ(S^3(h) ⊲ σ(y)) = µ(D_{S^2(h)}σ(y)) = µ(yD_{S^2(h)}).

(2) The antipode S_{H^co} of H^co is the inverse of the antipode S_H of (H, ∆_H), whence

µ^op(S_{H^co}(h) ⊲ c) = µ(S_H^{-1}(h) ⊲ c) = µ(D_{S^{-2}(h)}c) = µ(cD_h).

(3) By the assumption on σ,

µ((h_(1) ⊲ y′)D_{h_(2)}y) = µ(D_{h_(2)}y(S^2(h_(1)) ⊲ σ(y′))) = µ(S(h_(2)) ⊲ (y(S^2(h_(1)) ⊲ σ(y′)))) = µ((S(h) ⊲ y)σ(y′)) = µ(y′(S(h) ⊲ y)),  (8.4)

and by (8.1), this is equal to µ(D_{h_(1)}(h_(2) ⊲ y′)y).

In the following proposition, we write y^op if we regard an element y of an algebra C as an element of the opposite algebra C^op.

8.3.2. Proposition. Let (H, ∆_H) be a Hopf algebra with invertible antipode S_H and let C be a unital left H-module algebra with a faithful functional µ that is quasi-invariant with respect to H and admits a modular automorphism σ such that for all h ∈ H, c ∈ C,

σ(h ⊲ c) = S_H^2(h) ⊲ σ(c).

(2) There exist automorphisms Θ_λ, Θ_ρ of A such that

Θ_λ(yhx) = y(D_{h_(2)})^op h_(1) x,  Θ_ρ(yhx) = yD_{h_(1)}h_(2)x

for all y ∈ C, h ∈ H, x ∈ B, and (Θ_λ^{-1}, Θ_ρ^{-1}, ι_A, ι_A) is a modifier of A.

Proof of (2). The map Θ_ρ acts trivially on the subalgebra B ⊆ A, and like the automorphism β_D of C#H defined in Lemma 8.2.2 on the subalgebra C#H ≅ CH ⊆ A. Thus, the canonical linear isomorphism A ≅ (C#H) ⊗ B identifies Θ_ρ with the map β_D ⊗ ι, which is bijective.
Now, Θ_ρ is an automorphism because for all x ∈ B, y ∈ C, h ∈ H,

Θ_ρ(x)Θ_ρ(yh) = xyD_{h_(1)}h_(2) = yD_{h_(1)}h_(2)(x ⊳ h_(3)) = Θ_ρ(yh_(1))Θ_ρ(x ⊳ h_(2)).

To prove the assertions concerning the map Θ_λ, note that the subalgebra BH ⊆ A is isomorphic to C^op#H^co because

hy^op = (y^op ⊳ S^{-1}(h_(2)))h_(1) = (h_(2) ⊲ y)^op h_(1)

for all h ∈ H, y ∈ C. Now, similar arguments as above show that the desired properties of Θ_λ follow easily from the corresponding properties of the automorphism β†_D of C^op#H^co. The tuple (Θ_λ, Θ_ρ, ι, ι) is a modifier of A because

(Θ_λ×ι)(∆_B(yhx)) = y(D_{h_(2)})^op h_(1) ⊗ h_(3)x = yh_(1) ⊗ D_{h_(2)}h_(3)x = (ι×Θ_ρ)(∆_B(yhx))

for all y ∈ C, h ∈ H and x ∈ B.

(3) By assumption and definition, the base weight (µ^op, µ) is antipodal and modular, and the relations (7.7), (7.8) and (5.4) imply

(µ^op ∘ _Bε̃)(y^op hy′) = (µ^op ∘ _Bε)(Θ_λ(y^op hy′)) = (µ^op ∘ _Bε)(y^op(D_{h_(2)})^op h_(1)y′) = µ((h_(1) ⊲ y′)D_{h_(2)}y),
(µ ∘ ε̃_C)(y^op hy′) = (µ_C ∘ ε_C)(y^op hy′) = µ(S^{-1}(h ⊲ y)y′) = µ(y′(S(h) ⊲ y))

for all y, y′ ∈ C and h ∈ H. By (8.4), both expressions coincide.

…of ⊕_{γ,γ′,γ″} A_{γ,γ′} ⊗ C_{γ′,γ″} by the subspace spanned by all elements of the form (1 ⊗ x)a ⊗ c − a ⊗ (x ⊗ 1)c, which becomes a (B, Γ)^ev-algebra in a natural way; the unit object is the crossed product B ⋊ Γ with diagonal Γ×Γ-grading and the multiplication map B ⊗ B → B ↪ M(B ⋊ Γ). A multiplier (B, Γ)-Hopf *-algebroid consists of a (B, Γ)^ev-algebra A with a comultiplication, counit and antipode, which are morphisms as specified in [17, Definition 1.3.5]; the resulting tuple A = (A, B, C, S_B, S_C, ∆_B, ∆_C) becomes a multiplier *-bialgebroid. Its canonical maps are bijective by [17, Proposition 1.3.8], so it is a multiplier Hopf *-algebroid. One easily verifies that its antipode is S and its left and right counits _Bε and ε_C are equal to the composition of ε with the linear maps B ⋊ Γ → B given by Σ_γ x_γγ ↦ Σ_γ x_γ and Σ_γ γx_γ ↦ Σ_γ x_γ, respectively.

Integration.
The ingredients for integration considered in [17, Definition 1.6.1] and here are related as follows. Assume that the multiplier (B, Γ)-Hopf *-algebroid (A, ∆, S, ε) is measured in the sense of [17, Definition 1.6.1], that is, it comes with

(1) a map _Cφ_C : A → B ≅ C, called a left integral in [17], which has to be C-linear, left-invariant with respect to ∆, and to vanish on A_{γ,γ′} if γ ≠ e;
(2) a map _Bψ_B : A → B, called a right integral in [17], which has to be B-linear, right-invariant with respect to ∆, and to vanish on A_{γ,γ′} if γ′ ≠ e;
(3) a faithful, positive linear functional µ on B that is quasi-invariant with respect to the action of Γ in a suitable sense and satisfies µ ∘ _Cφ_C = µ ∘ _Bψ_B.

Let us first consider the conditions in (1) and (2). The first two conditions are easily seen to be equivalent to _Cφ_C being left-invariant and _Bψ_B being right-invariant with respect to ∆_B and ∆_C in the sense of Definition 3.1.2. The third conditions in (1) and (2) follow from the first ones if the action of Γ on B is free in the sense that for every non-zero x ∈ B and every γ ≠ e, there is some x′ ∈ B such that x(x′ − γ(x′)) is non-zero.

Let us next consider the conditions in (3). Write µ_B and µ_C for µ, regarded as a functional on B or C, respectively. Then µ_B ∘ S_C = µ_C and µ_C ∘ S_B = µ_B by definition. If a ∈ A and ε(a) = Σ_γ x_γγ ∈ B ⋊ Γ, then

µ_B(_Bε(a)) = Σ_γ µ(x_γ),  µ_C(ε_C(a)) = Σ_γ µ(γ^{-1}(x_γ)).

If µ is invariant under the action of Γ on B, then (µ_B, µ_C) is a counital base weight for A and we obtain a measured multiplier Hopf *-algebroid.

Modification. The condition of quasi-invariance imposed on the functional µ in (3) is strengthened in [17, Section 2.1, condition (A2)] as follows. There, one assumes the existence of a family of self-adjoint, invertible multipliers D_γ satisfying a quasi-invariance identity with respect to µ. Thus, every measured multiplier (B, Γ)-Hopf *-algebroid satisfying [17, Section 2.1, condition (A2)] gives rise to a measured multiplier Hopf *-algebroid.
Theorem 6.3.2 shows that in the proper case, the assumption µ ∘ _Cφ_C = µ ∘ _Bψ_B in [17] does not restrict generality. The measured quantum groupoid associated to (A, ∆, ε, S) in [17] is rather a completion of the modification Ã than of A. Indeed, the formula for the comultiplication on the Hopf-von Neumann bimodule given in [17, Lemma 2.5.8] looks like the modified comultiplication ∆̃_B, and the antipode of the associated measured quantum groupoid extends the modified antipode S̃ = D^{1/2}SD^{1/2} instead of the original antipode S, see [17, Proposition 2.7.13].

Given a right module M over B, we write M_B if we want to emphasize that M is regarded as a right B-module. We call M_B faithful if for each non-zero b ∈ B there exists an m ∈ M such that mb is non-zero, non-degenerate if for each non-zero m ∈ M there exists a b ∈ B such that mb is non-zero, idempotent if MB = M, and we say that M_B has local units in B if for every finite subset F ⊂ M there exists a b ∈ B with mb = m for all m ∈ F. Note that the last property implies the preceding two. We denote by (M_B)^∨ := Hom(M_B, B_B) the dual module, and by f^∨ : (N_B)^∨ → (M_B)^∨ the dual of a morphism f : M → N of right B-modules, given by f^∨(χ) = χ ∘ f. We use the same notation for duals of vector spaces and of linear maps. We furthermore denote by L(M_B) := Hom(B_B, M_B) the space of left multipliers of the module M_B.

2.1.1. Definition. A left multiplier bialgebroid is a tuple (A, B, s, t, ∆) consisting of

(1) algebras A and B, where A is non-degenerate and idempotent as a right A-module;
(2) a homomorphism s : B → M(A) and an anti-homomorphism t : B → M(A) such that the images of s and t commute, the B-modules A_B

2.3.3. Theorem ([18, Theorem 5.6]). Let A = (A, B, C, S_B, S_C, ∆_B, ∆_C) be a multiplier bialgebroid. Then A is a regular multiplier Hopf algebroid if and only if there exists an anti-automorphism S of A satisfying the following conditions:

2.3.4. Definition.
A multiplier Hopf * -algebroid is a regular multiplier Hopf algebroid A = (A, B, C, S B , S C , ∆ B , ∆ C ) with an involution on the underlying algebra A such that (1) B and C are * -subalgebras of M (A); (2) S B • * • S C • * = ι C and S C • * • S B • * = ι B ; (3) ∆ B (a * )(b * ⊗ c * ) = ((b ⊗ c)∆ C (a)) (−) * ⊗(−) * for all a, b, c ∈ A. 2. 4 . 5 . 45Example (Tensor product). Let B and C be non-degenerate, idempotent algebras with anti-isomorphisms S B : B → C and S C : C → B, form the the tensor product A = C ⊗ B, and identify B and C with their images in M (A) under the canonical inclusions. Then we obtain a regular multiplier Hopf algebroid (A, B, C, S B , S C , ∆ B , ∆ C ) with comultiplication, counits and antipode given by The algebras C, H, B embed naturally into M (A), and we identify them with their images in M (A). Then the products yhx, yxh, hyx lie in A ⊆ M (A) for all x ∈ B, y ∈ C and h ∈ H, and we obtain a regular multiplier Hopf algebroid (A, B, C, S B , S C 3.1. 1 . 1Proposition. Let A = (A, B, C, S B , S C , ∆ B , ∆ C ) be a regular multiplier Hopf algebroid. (1) For every linear map B ψ B : A → B, the following conditions are equivalent: (a) B ψ B ∈ Hom( B A, B B) and for all a, b ∈ A, The large rectangle commutes by(2.15), and the upper and lower cells commute by inspection. Since all horizontal and vertical arrows are bijections, the triangle on the left hand side commutes if and only if the triangle on the right hand side does. modular if σ B := S −1 B S −1 C and σ C := S B S C are modular automorphisms of µ B and µ C , respectively; (3) positive if A is a multiplier Hopf * -algebroid and µ B and µ C are positive. Conditon (1) is quite natural. Like (2), it implies associated to arbitrary functionals υ and ω on B and C, respectively, as in Example 3.1.7. Then a base weight (µ B , µ C ) is quasi-invariant with respect to C φ(υ) C if and only if υ µ B . Indeed, if υ µ B and δ ∈ R(B) and δ ′ ∈ L(B) are as in Lemma 1.0.1, then 3.4. 4 . 4Lemma. 
Assume that µ B and µ C admit modular automorphisms σ B and σ C , respectively. Then M ⊔ is an M (C)-M (B)-sub-bimodule of M ∨ . Proof. Let ω ∈ M ⊔ and x 0 ∈ M (B). Then ω(x 0 my) = µ C (ω C (x 0 m)y) and Finally, let us look at the factorizable functionals on B B B in case where µ B admits a modular automorphism. 3.4.6. Example. Regard B as a B-bimodule and assume that µ B admits a modular automorphism σ B . Then for every multiplier T ∈ M (B), the functional µ B T : b → µ B (T b), is factorizable, and the assignment T → µ B T defines a bijection between M (B) and B ⊔ . This follows easily from Lemma 3.4.4 and 1.0.1 (2). 3. 4 . 7 . 47Lemma. A functional ω ∈ A ∨ is factorizable as a functional on the B-bimodule B A B and on the C-bimodule C A C if and only if it is factorizable as a functional on the B-bimodule B A B and on the C-bimodule C A C . follows from Lemma 3.4.4, (2) from functoriality of the assignment M → M ⊔ , and (3) is straightforward and left to the reader. 3.4.10. Corollary. Let A be a regular multiplier Hopf algebroid A with an antipodal base weight (µ B , µ C ). (1) If the base weight is modular, then all left (right) integrals form an M (B)-bimodule (M (C)-bimodule). (2) The maps φ → φ • S ±1 are bijections between all left and all right integrals. 4. 1 . 1The unital case. Let us first consider the case of a unital regular multiplier Hopf algebroid A with a base weight(µ B , µ C ) satisfying µ B | O = µ C | O , where O = B ∩ Cdenotes the orbit algebra. For example, this relation holds for antipodal base weights by Proposition 3.2.2, and for the base weights constructed in Lemma 3.3.4. 4.1.1. Lemma. we must have h B (1) = 1 = B h(1). By Corollary 3.4.10, ψ := h • S is a right integral and by Lemma 3.4.9, C ψ(1) = 1 = ψ C (1). In particular, ψ| C = µ C . Now, ψ = h by Lemma 4.1.2 and (2) follows. The reverse implication follows similarly. 4.1.3. Definition. 
Let A be a unital regular multiplier Hopf algebroid with an antipodal base weight (µ B , µ C ). We call a functional h on A a Haar integral for (A, µ B , µ C ) if it satisfies the equivalent conditions in Lemma 4.1.2. 1 and choose a faithful functional τ on the orbit algebra O. Then µ B := τ • C φ C | B and µ C := τ • B ψ B | C form an antipodal base weight by Lemma 3.3.4 and φ = µ C • C φ C is a Haar integral. The preceding results immediately imply the following uniqueness result: 4.1.5. Corollary. Let A be a unital regular multiplier Hopf algebroid with an antipodal base weight (µ B , µ C ). If a Haar integral h exists, then it is unique, and then h(a B φ(1)) = φ(a) = h(φ B (1)a) and h(a C ψ(1)) = h(a) = h(ψ C (1)a) for every left integral φ, every right integral ψ and all a ∈ A. 4. 2 . 3 . 23Remark. Note that ω is a full left or right integral, then the right or left integrals ω • S and ω • S −1 are full again by Lemma 3.4.9(2).We obtain the following corollaries, which involve the relation (1.6) on A ∨ : 4.2.4. Proposition. Let ω and ω ′ be left or right integrals on a regular multiplier Hopf algebroid A with a fixed antipodal base weight. If ω is full, then ω ′ ω.Proof. This follows immediately from the preceding result.4.2.5.Proposition. For every full left integral φ and every full right integral ψ on a regular multiplier Hopf algebroid A with a fixed antipodal base weight, the partial integrals C φ C and B ψ B are surjective. Proof. Suppose that φ is a full left integral. Then ψ ′ := φ • S is a full right integral and Aφ = Aψ ′ by Proposition 4.2.4. Let a, b ∈ A and choose c ∈ A such that c ( 2 ) 2Let M D be a locally projective right D-module, D N a firm left D-module, and suppose that the set of maps P ⊆ Hom( D N , D D) separates the points of N . Then the slice mapsι ⊗ ω : M D ⊗ D N → M, m ⊗ n → mω(n),where ω ∈ P , separate the points of M D ⊗ D N .4.2.8. Definition. 
We call a regular multiplier Hopf algebroid (A, B, C, S B , S C , ∆ B , ∆ C ) locally projective if the algebras B and C are firm and the modules B A, A B , C A, A C are locally projective. 4.2.9. Theorem. Let A be a locally projective regular multiplier Hopf algebroid with modular base weight (µ B , µ C ). Then for every full and faithful left integral φ for (A, µ B , µ C ), M (B) · φ = {left integrals φ ′ for (A, µ B , µ C )} = φ · M (B), and for every full and faithful right integral ψ for (A, µ B , µ C ), M (C) · ψ = {right integrals ψ ′ for (A, µ B , µ C )} = ψ · M (C). Proof. We only prove the assertion concerning a full and faithful left integral φ. Every element of M (B) · φ and of φ · M (B) is a left integral by Corollary 3.4.10.Conversely, assume that φ ′ is a left integral on A. By Proposition 4.2.4 and Lemma 1.0.1, there exist a unique multipliers α ∈ L(A) and β ∈ R(A) such that α·φ = φ ′ = φ·β. The multiplier β commutes with C because φ is faithful and by Lemma 4.2.6, implies β ∈ M (B). For every full and faithful left integral, we thus obtain bijections between M (B) and the space of left integrals, and between invertible multipliers of B and full and faithful left integrals. Of course, a similar remark applies right integrals. 5. 1 . 1Counital base weights. Let A = (A, B, C, S B , S C , ∆ B , ∆ C ) be a regular multiplier Hopf algebroid with antipode S, left counit B ε and right counit ε C . 5.1.1. Definition. A base weight (µ B , µ C ) for A is counital if it is antipodal and satisfies where σ ψ − = S •(σ φ ) −1 •S −1 and σ ψ + = S −1 •(σ φ ) −1 •S are the modular automorphisms of ψ − and ψ + , respectively.Proof. The compositions ψ + and ψ − are full and faithful right integrals by Corollary 3.4.10 and Remark 4.2.3, and of the form ψ + = δ + · φ and ψ − = φ · δ − with unique invertible multipliers δ − , δ + ∈ M (A) by Corollary 6.2.1 and Theorem 6.1.3. By Proposition 6.2.3 and (6.2), 6. 4 . 1 . 41Theorem. 
Let A = (A, B, C, S B , S C , ∆ B , ∆ C ) be a regular multiplier Hopf algebroid with counital base weight (µ B , µ C ). If A is locally projective and B has local units, then every full left or right integral for (A, µ B , µ C ) is faithful. 7. 1 . 1Modification of left multiplier bialgebroids. Let A B = (A, B, s, t, ∆) be a left multiplier bialgebroid. 7. 1 . 1 . 11Definition. A modifier of a left multiplier bialgebroid A B = (A, B, s, t, ∆) consists of an automorphism θ of B and automorphisms Θ λ , Θ ρ of A satisfying (7.1) and (Θ λ ×ι) • ∆ = (ι×Θ ρ ) • ∆. (7.3) 7.1.2. Proposition. Let A B = (A, B, s, t, ∆) be a left multiplier bialgebroid with a modifier (θ, Θ λ , Θ ρ ). Denote by∆ the composition in (7.3). Thenà B := (A, B, s, t • θ −1 ,∆) is a left multiplier bialgebroid with canonical maps (T λ ,T ρ ) given bỹ bijection between all modifiers of A B and all modifiers ofà B , as one can easily check. 7.1.4. Lemma. Let (θ, Θ λ , Θ ρ ) be a modifier of a full left multiplier bialgebroid A B . Then 7. 1 . 7 . 17Proposition. Let A B = (A, B, s, t, ∆ B ) be a left multiplier bialgebroid with a left counit B ε and a modifier (θ, Θ λ , Θ ρ ). and this composition, denoted by Bε , is the unique left counit of the modified left multiplier bialgebroidà B . (2) Ifà B has a left counit Bε , then λ( Bε ) = Θ −1 ρ and ρ(θ • Bǫ ) = Θ −1 λ , where the convolution operators are formed with respect to ∆. Proof. (1) The isomorphisms in Proposition 7.1.6 yield two left counits B ε • Θ −1 ρ and θ • B ε • Θ −1 λ ofÃ, which necessarily coincide by [18, Proposition 3.5] becauseà B is full. (2) We only prove the first equation. The following diagram commutes, 7.1.8. Lemma. Let A B = (A, B, s, t, ∆) be a unital full left multiplier bialgebroid with a left counit. Then there exist canonical bijections between that Θ λ is an automorphism as in(2)and denote by B ε the counit of A B . 
Then the map _Bχ_B := _Bε ∘ Θ_λ : A → B lies in Hom(_AB_B, _BB_B) because of (2.3), is a character because of [18, Proposition 3.5], and the composition _Bχ̃_B …

7.2. Modification of regular multiplier Hopf algebroids. Let us now consider the two-sided case.

7.2.1. Definition. A modifier for a multiplier bialgebroid A = (A, B, C, S_B, S_C, …) … By Proposition 7.1.2 and its right-handed analogue, Ã_B := (A, B, ι_B, S̃_B, ∆̃_B) is a left and Ã_C := (A, C, ι_C, S̃_C, ∆̃_C) is a right multiplier bialgebroid. The mixed coassociativity relations follow by arguments similar to those for the coassociativity of ∆̃_B; see the proof of Proposition 7.1.2. Thus, Ã is a multiplier bialgebroid. … and the counits and antipode of the associated modification Ã are given by

_Bε̃(a) = _Bε(av⁻¹)v,  ε̃_C(a) = S_C⁻¹(u⁻¹)ε_C(ua),  S̃(a) = uS(vav⁻¹)u⁻¹;

compare with the formulas given in [23, Propositions 1.12 and 1.13] and (2.20).

8.1. The convolution algebra of an étale groupoid. Let G be a locally compact, étale Hausdorff groupoid, and consider the multiplier Hopf *-algebroid Â = (Â, B̂, Ĉ, Ŝ_B̂, Ŝ_Ĉ, ∆̂_B̂, ∆̂_Ĉ) associated to the convolution algebra Â = C_c(G) of G as in Example 2.4.4.

Regard the opposite algebra B := C^op as a right H-module algebra via x ⊳ h := S_H(h) ⊲ x.

(1) The anti-isomorphisms S_B : B = C^op → C, y^op ↦ y, and S_C : C → C^op = B, y ↦ σ(y)^op, satisfy S_B(x ⊳ h) = S_H(h) ⊲ S_B(x) and S_C(h ⊲ y) = S_C(y) ⊳ S_H(h). Define the associated regular multiplier Hopf algebroid A = (A, B, C, S_B, S_C, ∆_B, ∆_C), where A = C#H#B, as in Example 2.4.7.

(3) The pair (µ^op, µ) is a counital base weight for the associated modification Ã.

(4) Suppose that (H, ∆_H) has a left and right integral φ_H. Then the formulas

_Cφ_C(yhx) := y φ_H(h) µ^op(x),  _Bψ_B(yhx) := µ(y) φ_H(h) x,

where y ∈ C, h ∈ H and x ∈ B, define a partial left and a partial right integral, and (Ã, µ^op, µ, _Bψ_B, _Cφ_C) is a measured regular multiplier Hopf algebroid.
(5) The modular automorphism of φ = µ ∘ _Cφ_C is given by

σ^φ(y) = σ(y),  σ^φ(y^op) = σ⁻¹(y)^op,  σ^φ(h) = σ_H(h_(2)) D_{S(h_(1))} (D_{S⁻¹(h_(3))})^op

for all y ∈ C and h ∈ H, where σ_H denotes the modular automorphism of φ_H.

Proof. (1) This follows immediately from the definitions and (8.5). …

… maps ∆ : A → M(A ⊗ A), ε : A → B ⋊ Γ and S : A → A^{co,op} satisfying natural conditions, see [17, Definition 1.3.5], where A^{co,op} is a suitably defined bi-opposite of A. Such a multiplier (B, Γ)-Hopf *-algebroid can be regarded as a multiplier Hopf *-algebroid with certain extra structure as follows. The *-homomorphism B ⊗ B → M(A) extends to embeddings of B ⊗ 1 and 1 ⊗ B into M(A). Denote by B and C the respective images, and by S_B : B → C and S_C : C → B the canonical (anti-)isomorphisms. Then A_B ⊗ A_B is a left and A_C ⊗ A_C a right A ⊗ A-module, and the formulas ∆_B(a)(b ⊗ c) := ∆(a)(b ⊗ c) and (b ⊗ c)∆_C(a) := (b ⊗ c)∆(a) define a left and a right comultiplication so that A := (A, B, C, S_B, S_C, ∆_B, ∆_C) …

… x D_γ)) = µ(x) for all γ, γ′ ∈ Γ and x ∈ B. Given such a family, we can modify A and obtain a measured multiplier Hopf *-algebroid as follows. … a ∈ A_{γ,γ′}, define automorphisms of A, and (D^{1/2}, D^{1/2}, D^{−1/2}, D^{−1/2}) is a modifier for A which is trivial on the base and self-adjoint. This can be checked easily; see also [17, Lemma 1.6.3]. We thus obtain a modified multiplier Hopf *-algebroid Ã = (A, B, C, S_B, S_C, ∆̃_B, ∆̃_C). Now, (µ_B, µ_C) is a counital base weight for Ã. Indeed, if ε(a) = Σ_γ γ(x_γ), then by … µ_C(ε_C(a)).

4.2. Uniqueness of integrals. Let us now consider the general case. The first step towards the proof of uniqueness is the following result.

4.2.1. Lemma. Let φ be a left and ψ a right integral on a regular multiplier Hopf algebroid A with antipodal base weight. Then …

Proof. Combine Lemmas 4.1.1 and 4.1.2.

Proof. This is straightforward.

Acknowledgements. The author would like to thank Alfons Van Daele for inspiring and fruitful discussions.
By Example 3.1.9, _Cφ_C is a partial left and _Bψ_B is a partial right integral. Since µ ∘ _Cφ_C = µ^op ∘ _Bψ_B, the base weight (µ^op, µ) is quasi-invariant with respect to these partial integrals, and by (3), it is counital. Hence, it only remains to verify that the functional φ = µ ∘ _Cφ_C is faithful, and this is easy.

(5) By Lemma 4.2.6, σ^φ(y) = S²(y) = σ(y) for all y ∈ C. The same lemma and …

Dynamical quantum groups. In [17], we studied integration on dynamical quantum groups in order to construct operator-algebraic completions in the form of measured quantum groupoids. The present results generalize and clarify the results obtained in [17]; for example, they explain the deviation of the antipode on the operator-algebraic level from the algebraic antipode observed in [17, Proposition 2.7.13].

Multiplier (B, Γ)-Hopf *-algebroids. To recall the notion of a dynamical quantum group as defined in [17], we need some preliminaries.
Let B be a commutative *-algebra with local units and a left action of a discrete group Γ. Denote by e ∈ Γ the unit element. A (B, Γ)_ev-algebra is a *-algebra A with a grading by Γ × Γ, local units in A_{e,e}, and a non-degenerate *-homomorphism B ⊗ B → M(A) satisfying a(x ⊗ y) = (γ(x) ⊗ γ′(y))a ∈ A_{γ,γ′} for all a ∈ A_{γ,γ′} and x, y ∈ B. Such (B, Γ)_ev-algebras form a monoidal category as follows. Morphisms are non-degenerate B ⊗ B-linear *-homomorphisms into multipliers that preserve the grading. The product A ⊗ C of (B, Γ)_ev-algebras A and C is the quotient of …

References

[1] G. Böhm. Integral theory for Hopf algebroids. Algebr. Represent. Theory, 8(4):563-599, 2005.
[2] G. Böhm. Hopf algebroids. In Handbook of algebra, Vol. 6, Handb. Algebr., pages 173-235. Elsevier/North-Holland, Amsterdam, 2009.
[3] G. Böhm. Comodules over weak multiplier bialgebras. Internat. J. Math., 25(5):1450037, 57, 2014.
[4] G. Böhm, J. Gómez-Torrecillas, and E. López-Centella. Weak multiplier bialgebras. To appear in Trans. Amer. Math. Soc. arXiv:1306.1466, June 2013.
[5] G. Böhm and K. Szlachányi. Hopf algebroids with bijective antipodes: axioms, integrals, and duals. J. Algebra, 274(2):708-750, 2004.
[6] T. Brzezinski and R. Wisbauer. Corings and comodules. London Mathematical Society Lecture Note Series 309. Cambridge University Press, Cambridge, 2003.
[7] B. Drabant, A. Van Daele, and Y. Zhang. Actions of multiplier Hopf algebras. Comm. Algebra, 27(9):4117-4172, 1999.
[8] M. Enock. Measured quantum groupoids in action. Mém. Soc. Math. Fr. (N.S.), (114):ii+150 pp. (2009), 2008.
[9] J. Kustermans and A. Van Daele. C*-algebraic quantum groups arising from algebraic quantum groups. Internat. J. Math., 8(8):1067-1139, 1997.
[10] F. Lesieur. Measured quantum groupoids. Mém. Soc. Math. Fr. (N.S.), (109):iv+158 pp. (2008), 2007.
[11] J.-H. Lu. Hopf algebroids and quantum groupoids. Internat. J. Math., 7(1):47-70, 1996.
[12] J. Renault. A groupoid approach to C*-algebras. Lecture Notes in Mathematics 793. Springer, Berlin, 1980.
[13] T. Timmermann. The Kac-Takesaki operator of algebraic quantum groupoids. In preparation.
[14] T. Timmermann. Pontrjagin duality for algebraic quantum groupoids. In preparation.
[15] T. Timmermann. C*-pseudo-multiplicative unitaries, Hopf C*-bimodules and their Fourier algebras. J. Inst. Math. Jussieu, 11:189-229, 2011.
[16] T. Timmermann. Regular multiplier Hopf algebroids II. Integration on and duality of algebraic quantum groupoids. arXiv:1403.5282, March 2014.
[17] T. Timmermann. Measured quantum groupoids associated to proper dynamical quantum groups. J. Noncommut. Geom., 9(1):35-82, 2015.
[18] T. Timmermann and A. Van Daele. Regular multiplier Hopf algebroids. Basic theory and examples. arXiv:1307.0769, July 2013.
[19] T. Timmermann and A. Van Daele. Multiplier Hopf algebroids arising from weak multiplier Hopf algebras. arXiv:1406.3509, June 2014.
[20] J.-M. Vallin. Bimodules de Hopf et poids opératoriels de Haar. J. Operator Theory, 35(1):39-65, 1996.
[21] A. Van Daele. An algebraic framework for group duality. Adv. Math., 140(2):323-366, 1998.
[22] A. Van Daele. Separability idempotents and multiplier algebras. arXiv:1301.4398, 2013.
[23] A. Van Daele. Modified weak multiplier Hopf algebras. arXiv:1407.0513, July 2014.
[24] A. Van Daele and S. Wang. Weak multiplier Hopf algebras. The main theory. arXiv:1210.4395, 2015. To appear in J. Reine Angew. Math., DOI 10.1515/crelle-2013-0053.
[25] P. Xu. Quantum groupoids. Comm. Math. Phys., 216(3):539-581, 2001.
[26] B. Zimmermann-Huisgen. Pure submodules of direct products of free modules. Math. Ann., 224(3):233-245, 1976.

FB Mathematik und Informatik, University of Muenster, Einsteinstr. 62, 48149 Muenster, Germany
E-mail address: [email protected]
Mixed displacement-pressure-phase field framework for finite strain fracture of nearly incompressible hyperelastic materials

Fucheng Tian, Jun Zeng, Mengnan Zhang, Liangbin Li

National Synchrotron Radiation Laboratory, Anhui Provincial Engineering Laboratory of Advanced Functional Polymer Film, CAS Key Laboratory of Soft Matter Chemistry, University of Science and Technology of China, Hefei 230026, China

Abstract: The favored phase field method (PFM) has encountered challenges in the finite strain fracture modeling of nearly or truly incompressible hyperelastic materials. We identified that the underlying cause lies in the innate contradiction between incompressibility and smeared crack opening. Drawing on the stiffness-degradation idea in PFM, we resolved this contradiction by loosening the incompressibility constraint of the damaged phase without affecting the incompressibility of the intact material. By modifying the perturbed Lagrangian approach, we derived a novel mixed formulation. In numerical aspects, the finite element discretization uses the classical Q1/P0 and the high-order P2/P1 schemes, respectively. To ease mesh distortion at large strains, an adaptive mesh deletion technology is also developed. The validity and robustness of the proposed mixed framework are corroborated by four representative numerical examples. By comparing the performance of Q1/P0 and P2/P1, we conclude that the Q1/P0 formulation is the better choice for finite strain fracture in nearly incompressible cases. Moreover, the numerical examples also show that the combination of the proposed framework and methodology has vast potential for simulating complex peeling and tearing problems.

DOI: 10.1016/j.cma.2022.114933
arXiv: 2112.00294 (https://arxiv.org/pdf/2112.00294v1.pdf)
Keywords: phase field model; incompressible; finite strain fracture; hyperelastic materials

Introduction

With the traits of bearing large deformations and high recoverability, hyperelastic materials occupy a unique status in industrial and biomedical applications such as seals, tires, and artificial soft tissues. They also serve as an ideal vehicle for fundamental research in material physics and nonlinear mechanics [1]. The dual demands of practical application and academic inquiry have spurred in-depth exploration of the service behavior of hyperelastic materials. Among these topics, finite-strain (large-strain) fracture, the main failure channel of such materials, has attracted widespread experimental and numerical interest [2][3][4][5][6]. Yet discontinuous fracture coupled with finite strain, especially the tracking of the complex crack surfaces involved in three-dimensional fracture, poses a formidable obstacle for numerical treatment. To overcome this challenge, the state-of-the-art diffusive crack models came into being, in which the sharp crack topology is approximated by a continuous scalar damage field [7][8][9]. As a prominent member of this family, the phase field model (PFM) has been widely adopted by the computational mechanics community over the past two decades [10,11], owing to inherent features that can be naturally incorporated into the framework of continuum mechanics.
The PFM prevailing in the computational mechanics community is rooted in classical Griffith theory [12], and shares some ideas with models originating from Ginzburg-Landau phase transition theory [13][14][15]. Limiting our horizon to the former, the pioneering works are the variational approach to fracture established by Francfort and Marigo and the Ambrosio-Tortorelli regularized version reformulated by Bourdin et al. [16,17]. Since then, research on phase field methodology and its applications has entered a flourishing stage [18][19][20][21][22]. Although the generic phase field model was developed for quasi-static brittle fracture, its application scenarios have expanded to brittle cohesive fracture [23], dynamic fracture [24,25], and finite-strain fracture [26][27][28][29], covering almost all material categories. Instead of presenting an exhaustive research list, we review here the studies on the fracture of hyperelastic materials. In this respect, Miehe et al. first presented a rate-independent phase field fracture (PFF) model for rubber-like polymers [30]. Loew et al. further extended the model to a rate-dependent type, which was validated by experiments on rubber fracture [27]. To better distinguish the contributions of tension and compression to finite-strain fracture, Tang et al. proposed a novel energy decomposition scheme for Neo-Hookean materials [31]. Moreover, Tian et al. and Peng et al. recast the PFM in the framework of the edge-based smoothed finite element method (ES-FEM), relieving the mesh distortion of large-strain fracture [32,33]. Other related research involves hydrogel systems [34,35], anisotropy [36], and composite materials [37]. Note, however, that most existing studies on the phase field modeling of hyperelastic materials have focused on compressible cases and rarely addressed incompressibility.
Although compressibility is commonly presumed in PFF modeling, we remark that numerous hyperelastic materials, such as rubber-like polymers and biological tissues, can withstand large strains without distinct volume changes. These materials can therefore be reasonably treated as nearly or fully incompressible. In such circumstances, standard single-field displacement solutions suffer from the well-known volumetric locking problem [38,39]. To alleviate or avert locking at finite deformation, many sophisticated numerical strategies and techniques have been developed, e.g., the F-bar and J-bar methods [40][41][42], the enhanced-strain approach [43], the mixed two-field displacement-pressure (u-p) formulation, and the three-field displacement-pressure-Jacobian (u-p-J) formulation [44,45]. A discussion of the specific pros and cons of these methodologies is beyond the scope of this article. As stated earlier, incompressibility is an inescapable challenge in the modeling of most hyperelastic materials. However, previous studies on PFF modeling have seldom touched on this issue, regardless of small or finite deformation. To our knowledge, Ma et al. first reported the use of the mixed u-p formulation to handle the phase field modeling of incompressible hyperelastic materials [46]. They modified the Lagrangian multiplier by introducing slight compressibility, thereby realizing simultaneous damage of the bulk and shear moduli. Along this route, Li et al. pointed out that the bulk modulus should be degraded faster than the shear modulus to free the damaged material from the incompressibility constraint [47]. However, this notion has not gained enough attention, owing to the lack of a solid theoretical foundation and a detailed numerical implementation. More recently, Ye et al. proposed a new phase field model for nearly incompressible hyperelastic materials based on an enhanced three-field variational formulation [48].
They stated that introducing the assumed strain method also mediates the contradiction between energy decomposition and incompressibility constraints. Although the three-field formulation is mathematically elegant, we must recognize that its computational cost is expensive, especially for PFF modeling. Against this background, we endeavor to formulate a novel multi-field framework for modeling the fracture of nearly incompressible hyperelastic materials in finite strain scenarios. The underlying contradiction between incompressibility and diffuse crack opening is elaborated from the perspective of the physical crack topology. To resolve this conflict, we set forth a strategy that loosens the incompressibility constraint of the damaged material via the phase field degradation function. Such a scheme does not affect the incompressibility of the intact material, complying with physical insight. With this idea in mind, the classical perturbed Lagrangian approach is modified, yielding a novel multi-field variational formulation for fracture problems. We then derive the governing equations and outline the concrete numerical implementation of the mixed formulation. The previously developed adaptive mesh deletion technology is also utilized to alleviate the mesh distortion problem in the 3D tearing test [49]. In addition, four representative numerical examples are chosen to demonstrate the validity and robustness of the proposed mixed framework. By comparison with the higher-order P2/P1 formulation, the superior performance of the classical Q1/P0 scheme in the modeling of nearly incompressible finite strain fracture is ascertained. The rest of this paper is organized as follows. Section 2 uncovers the root cause of incompressibility thwarting diffuse crack opening and conceives a countermeasure, laying the groundwork for the subsequent mixed formulation.
In Section 3, a variational form of the multi-field fracture problem is established after revisiting the perturbed Lagrangian approach. The governing equations in strong form are also derived. On the numerical side, Section 4 elaborates the discretization and linearization formulations, as well as some crucial numerical techniques. Section 5 demonstrates the validity of the proposed mixed framework via several representative numerical examples. Finally, Section 6 concludes this paper with the main contributions.

Let χ denote the deformation mapping from a point X ∈ Ω₀ to x ∈ Ω. The displacement field is then defined as

u = χ(X) − X.  (1)

With this definition, the deformation gradient tensor F, the Jacobian determinant J, and the right Cauchy-Green deformation tensor C are given by

F = ∂χ/∂X = I + ∇₀u,  J = det F,  C = FᵀF,  (2)

where I is the second-order identity tensor and ∇₀ denotes the gradient with respect to X.

…

The total energy functional of the regularized fracture problem takes the form

Π(u, φ) = ∫_{Ω₀} g(φ) Ψ̄(C) dV + G_c ∫_{Ω₀} γ_l(φ, ∇φ) dV + ∫_{Ω₀} (η/2) φ̇² dV − Π_ext,  (6)

where γ_l(φ, ∇φ) denotes the regularized crack surface density. Herein, g(φ) is the degradation function, and η (N m⁻² s) is the artificial viscosity coefficient that regulates the energy dissipation rate, added for numerical robustness.

Incompressible hyperelastic model

Numerous hyperelastic materials can be regarded as nearly or truly incompressible, and the related constitutive models have evolved into a large family. Herein, we postulate that the strain energy function Ψ(C) follows the general decomposition [50]

Ψ(C) = Ψ̄(C) + Ψ_vol(J),  Ψ_vol(J) = κ U(J),  (7)

where κ is the bulk modulus and U(J) is the volumetric part. …

In terms of the principal stretches λ_a, the associated principal stresses S_a and the principal directions N_a, the material elasticity tensor can be expressed as

ℂ = Σ_{a,b} 2 (∂S_a/∂λ_b²) N_a ⊗ N_a ⊗ N_b ⊗ N_b + Σ_{a≠b} [(S_b − S_a)/(λ_b² − λ_a²)] 𝔾_{ab},  (13)

with

𝔾_{ab} := N_a ⊗ N_b ⊗ N_a ⊗ N_b + N_a ⊗ N_b ⊗ N_b ⊗ N_a.  (14)

Remark. An alternative decomposition format of Eq.
7 takes the form [52]

Ψ(C) = Ψ_iso(C̄) + Ψ_vol(J),  C̄ = J^{−2/3} C,  (15)

where Ψ_iso and Ψ_vol are the decoupled isochoric and volumetric parts, respectively. Accordingly, the second Piola-Kirchhoff (PK2) stress and the elasticity tensor can be written as

S = S_iso + S_vol,  S_iso = J^{−2/3} (𝕀 − (1/3) C^{−1} ⊗ C) : S̄,  S_vol = J p C^{−1},  (16)

with S̄ = 2 ∂Ψ_iso/∂C̄, and

ℂ = ℂ_iso + ℂ_vol = …  (17)

Note that the above formulation is appealing in the study of nearly incompressible hyperelastic materials; however, our preliminary tests indicate that it lacks numerical robustness in the phase field modeling of fracture coupled with incompressibility. Therefore, this decomposition form is not adopted in this work.

Incompressibility - a barrier for diffuse crack opening

The diffusive depiction of sharp cracks circumvents the laborious tracking of discontinuous surfaces, which is the principal reason that PFM is favored in fracture modeling. However, this advantage may turn into a barrier in certain circumstances, such as finite strain fracture of incompressible hyperelastic materials. Revisiting Fig. 1, we remark that the physically material-free crack opening zone remains part of the computational domain, which inevitably generates volume expansion compared to the discrete crack configuration. In this context, enforcing the incompressibility constraint frustrates the parabolic opening of the crack and may even cause the numerical solution to collapse (see Appendix A for an intuitive presentation). Given this intrinsic conflict, simply seeking an alternative solution algorithm is doomed to be futile. Enlightened by the idea of attenuating the strain energy by multiplying the degradation function g(φ) in PFF modeling, we loosen the incompressibility constraint in the same manner.
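Before the loosening idea is formalized, it can be illustrated with a minimal numerical sketch of our own (not from the paper; the quadratic degradation g(φ) = (1 − φ)² + k and the modulus values are assumed for concreteness): degrading the bulk modulus by g² while the shear modulus is degraded by g releases the incompressibility ratio κ/µ only where the material is damaged.

```python
# Degrade the bulk modulus faster than the shear modulus, so that the
# incompressibility ratio kappa/mu collapses only in the damaged phase.
k = 1e-6                                   # small residual stiffness
g = lambda phi: (1.0 - phi) ** 2 + k       # assumed quadratic degradation
kappa0, mu0 = 1000.0, 1.0                  # nearly incompressible: kappa0 >> mu0

def moduli(phi):
    """Return (kappa(phi), mu(phi)) with kappa degraded by g^2 and mu by g."""
    return g(phi) ** 2 * kappa0, g(phi) * mu0

for phi in (0.0, 0.5, 1.0):
    kap, mu = moduli(phi)
    print(f"phi={phi}: kappa/mu = {kap / mu:.4g}")
```

At φ = 0 the ratio stays at κ₀/µ₀ (the constraint on intact material is untouched), while at φ = 1 it collapses to k·κ₀/µ₀, freeing the fully damaged phase from the incompressibility constraint.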
Now, using the ratio κ₀/µ₀ to measure the level of incompressibility, we assume that

κ(φ)/µ(φ) = f(φ) κ₀/µ₀,  (18)

where f(φ) satisfies

f(1) = 0,  f(0) = 1.  (19)

This definition indicates that formula (18) only relieves the incompressibility constraint of the damaged phase, and does not loosen the binding of the undamaged material. Considering that the damaged material within the crack opening outline makes no substantial contribution to the physical entity, such a treatment is sound in the limit of Γ-convergence [53]. Moreover, since the moduli κ and µ are already degraded in the pristine phase field formulation, Eq. (18) can be rephrased as

κ(φ)/µ(φ) = [g(φ) f(φ) κ₀] / [g(φ) µ₀].  (20)

For simplicity, we let f(φ) = g(φ), thus yielding

κ(φ) = g²(φ) κ₀.  (21)

This expression implies that the bulk modulus κ needs to be degraded faster than the shear modulus µ. For truly incompressible hyperelastic materials, however, the above countermeasure fails as the bulk modulus κ approaches infinity. Consequently, slight compressibility (quasi-incompressibility) is introduced in the modeling, arriving at the same destination as the subsequent perturbed Lagrangian approach.

Mixed formulation

Perturbed Lagrangian approach

To date, the classical Lagrangian multiplier approach is the sole workable technique for fully incompressible cases [54]. Without involving phase field damage, the potential energy functional of the Lagrangian multiplier approach is written as

Π^{LagMul}(u, p) = ∫_{Ω₀} [Ψ̄(C) + p (J − 1)] dV − Π_ext,  (22)

with the external potential energy Π_ext comprised of body and surface loads:

Π_ext = ∫_{Ω₀} b₀ · u dV + ∫_{∂Ω₀ᴺ} t₀ · u dA.  (23)

Herein, the scalar p acts as the Lagrangian multiplier, which can be interpreted as an independent variable representing the hydrostatic pressure. Nonetheless, this formulation yields a saddle-point problem that is prone to numerical difficulties.
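As a concrete check of the kinematics of Eqs. (1)-(2) and of the constrained energy density of Eq. (22), the following sketch (our own illustration; the Neo-Hookean form Ψ̄ = µ/2 (tr C − 3) is an assumed example, not necessarily the paper's choice) evaluates the Lagrangian-multiplier density for a simple shear, which is isochoric, so the constraint term p(J − 1) vanishes.

```python
import numpy as np

mu = 1.0  # shear modulus (arbitrary units)

def kinematics(grad_u):
    """F = I + grad(u), J = det F, C = F^T F  (Eq. (2))."""
    F = np.eye(3) + grad_u
    return F, np.linalg.det(F), F.T @ F

def energy_density(grad_u, p):
    """Lagrangian-multiplier density: psi_bar(C) + p*(J - 1)  (Eq. (22)),
    with an assumed Neo-Hookean psi_bar = mu/2 * (tr C - 3)."""
    F, J, C = kinematics(grad_u)
    psi_bar = 0.5 * mu * (np.trace(C) - 3.0)
    return psi_bar + p * (J - 1.0)

# simple shear of amount 0.3 is isochoric, so J = 1 and p drops out
grad_u = np.zeros((3, 3)); grad_u[0, 1] = 0.3
F, J, C = kinematics(grad_u)
print(round(J, 12))                              # 1.0 (volume preserved)
print(round(energy_density(grad_u, p=5.0), 6))   # 0.045 = mu/2 * 0.3**2
```

For this isochoric deformation the energy is insensitive to p; for any volume-changing grad_u, the p(J − 1) term penalizes the violation of the constraint.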
To address this numerical issue, a perturbed Lagrangian approach was proposed [45], with the energy functional reformulated as

Π^{PerMul}(u, p) = ∫_{Ω₀} [Ψ̄(C) + p (J − 1) − p²/(2κ)] dV − Π_ext.  (24)

Note that such an approach is compatible with both nearly and fully incompressible materials, thanks to the slightly compressible modification. In the incompressible limit, i.e., κ → ∞, the Lagrangian multiplier approach (Eq. (22)) is recovered from Eq. (24). It is worth mentioning that Kadapa and Hossain have proposed a unified framework that further generalizes the perturbed Lagrangian method to incorporate compressible materials [55]. However, we do not intend to embrace this formulation, in keeping with the keynote of the current work.

Variational form for multi-field fracture problem

In the previous displacement-phase field (u-φ) solution scheme, the mechanics sub-problem solves only a single displacement field. We remark that such an approach is incapable of modeling nearly and truly incompressible materials; thus the functional Π^{u-φ} requires amendment by the perturbed Lagrangian approach. A straightforward extension is to multiply the perturbed Lagrangian term by g(φ), resulting in

Π^{test}(u, p, φ) = ∫_{Ω₀} g(φ) [Ψ̄(C) + p (J − 1) − p²/(2κ)] dV + G_c ∫_{Ω₀} γ_l(φ, ∇φ) dV + ∫_{Ω₀} (η/2) φ̇² dV − Π_ext.  (25)

This format excels in modeling incompressible (quasi-incompressible) brittle fracture [56]; however, it fails in the finite deformation context due to the forced incompressibility of the damaged material (see Appendix A). For this reason, guided by the ideas illuminated in sub-section 2.3, we liberate the incompressibility constraint of the damaged phase, namely κ(φ) = g²(φ)κ₀, to eliminate the barrier to crack opening.
Assuming the damaged volumetric energy function

$\psi_{vol} = \dfrac{1}{2}\,g^2(\phi)\,\kappa_0\,(J-1)^2$    (26)

the pressure takes the form

$p = \dfrac{\partial\psi_{vol}}{\partial J} = g^2(\phi)\,\kappa_0\,(J-1)$    (27)

Remarkably, this expression can be identified as a constraint, i.e., $p/\kappa_0 - g^2(\phi)\,(J-1) = 0$, enforced via the Lagrange multiplier $p$. In this manner, we now arrive at the mixed energy functional for the multi-field fracture problem

$\Pi(\mathbf{u},p,\phi) = \int_{\Omega_0} g(\phi)\,\bar\psi(\mathbf{C})\,\mathrm{d}V + \int_{\Omega_0} \left[g^2(\phi)\,p\,(J-1) - \dfrac{p^2}{2\kappa_0}\right]\mathrm{d}V + \int_{\Omega_0} G_C\,\gamma(\phi,\nabla_X\phi)\,\mathrm{d}V + \int_{\Omega_0} \dfrac{\eta}{2\Delta t}\,(\phi-\phi_n)^2\,\mathrm{d}V - \Pi_{ext}$    (28)

Taking the variation of Eq. (28) in terms of $\mathbf{u}$, $p$, $\phi$, we obtain

$\delta\Pi = \int_{\Omega_0} \left[2\,g(\phi)\,\dfrac{\partial\bar\psi}{\partial\mathbf{C}} + g^2(\phi)\,Jp\,\mathbf{C}^{-1}\right] : \dfrac{1}{2}\,\delta\mathbf{C}\,\mathrm{d}V + \int_{\Omega_0} \left[g^2(\phi)\,(J-1) - \dfrac{p}{\kappa_0}\right]\delta p\,\mathrm{d}V + \int_{\Omega_0} \left[g'(\phi)\,\bar\psi + 2g(\phi)g'(\phi)\,p\,(J-1) + \dfrac{3G_C}{8\,l_0} + \eta\,\dfrac{\phi-\phi_n}{\Delta t}\right]\delta\phi\,\mathrm{d}V + \int_{\Omega_0} \dfrac{3G_C l_0}{4}\,\nabla_X\phi\cdot\nabla_X\delta\phi\,\mathrm{d}V - \delta\Pi_{ext}$    (29)

After integration by parts, Eq. (29) is reformulated as

$\delta\Pi = -\int_{\Omega_0} \left[\mathrm{Div}\left(\mathbf{P}+\mathbf{P}^{vol}\right) + \mathbf{b}_0\right]\cdot\delta\mathbf{u}\,\mathrm{d}V + \int_{\partial\Omega_0^N} \left[\left(\mathbf{P}+\mathbf{P}^{vol}\right)\mathbf{n}_0 - \bar{\mathbf{t}}\right]\cdot\delta\mathbf{u}\,\mathrm{d}A + \int_{\Omega_0} \left[g^2(\phi)\,(J-1) - \dfrac{p}{\kappa_0}\right]\delta p\,\mathrm{d}V + \int_{\Omega_0} \left[g'(\phi)\,\bar\psi + 2g(\phi)g'(\phi)\,p\,(J-1) + \dfrac{3G_C}{8\,l_0} - \dfrac{3G_C l_0}{4}\,\Delta_X\phi + \eta\,\dfrac{\phi-\phi_n}{\Delta t}\right]\delta\phi\,\mathrm{d}V + \int_{\partial\Omega_0} \dfrac{3G_C l_0}{4}\,(\nabla_X\phi\cdot\mathbf{n}_0)\,\delta\phi\,\mathrm{d}A$    (30)

with the nominal stress

$\mathbf{P} = 2\,g(\phi)\,\mathbf{F}\,\dfrac{\partial\bar\psi(\mathbf{C})}{\partial\mathbf{C}}$    (31)

and

$\mathbf{P}^{vol} = g^2(\phi)\,Jp\,\mathbf{F}^{-T}$    (32)

Governing equations

On account of the independence of the variations $\delta\mathbf{u}$, $\delta p$, $\delta\phi$, the equilibrium condition of Eq. (30) requires satisfying the following three equations:

$-\int_{\Omega_0} \left[\mathrm{Div}\left(\mathbf{P}+\mathbf{P}^{vol}\right) + \mathbf{b}_0\right]\cdot\delta\mathbf{u}\,\mathrm{d}V + \int_{\partial\Omega_0^N} \left[\left(\mathbf{P}+\mathbf{P}^{vol}\right)\mathbf{n}_0 - \bar{\mathbf{t}}\right]\cdot\delta\mathbf{u}\,\mathrm{d}A = 0$    (33)

$\int_{\Omega_0} \left[g^2(\phi)\,(J-1) - \dfrac{p}{\kappa_0}\right]\delta p\,\mathrm{d}V = 0$    (34)

together with the phase-field stationarity condition (35) below.
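The pressure law of Eq. (27) can be verified directly by differentiating the damaged volumetric energy of Eq. (26). A small pure-Python sketch, assuming the common quadratic degradation g(φ) = (1 − φ)² (the residual parameter is omitted for clarity):

```python
def g(phi):
    # quadratic degradation function (assumed form)
    return (1.0 - phi)**2

def psi_vol(J, phi, kappa0):
    # damaged volumetric energy, Eq. (26)
    return 0.5 * g(phi)**2 * kappa0 * (J - 1.0)**2

def pressure(J, phi, kappa0, h=1e-6):
    # p = d(psi_vol)/dJ via a central finite difference
    return (psi_vol(J + h, phi, kappa0) - psi_vol(J - h, phi, kappa0)) / (2.0 * h)

kappa0 = 1.0e3
for phi in (0.0, 0.5, 1.0):
    J = 1.05
    p_fd = pressure(J, phi, kappa0)
    p_cl = g(phi)**2 * kappa0 * (J - 1.0)   # closed form, Eq. (27)
    assert abs(p_fd - p_cl) < 1e-4
# the fully damaged phase (phi = 1) carries no pressure, so the crack can open
assert pressure(1.5, 1.0, kappa0) == 0.0
```

The last assertion is the essence of the proposed relaxation: inside the crack (φ = 1) the volumetric term vanishes, so arbitrary volume change costs no energy.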
$\int_{\Omega_0} \left[g'(\phi)\,\bar\psi + 2g(\phi)g'(\phi)\,p\,(J-1) + \dfrac{3G_C}{8\,l_0} - \dfrac{3G_C l_0}{4}\,\Delta_X\phi + \eta\,\dfrac{\phi-\phi_n}{\Delta t}\right]\delta\phi\,\mathrm{d}V + \int_{\partial\Omega_0} \dfrac{3G_C l_0}{4}\,(\nabla_X\phi\cdot\mathbf{n}_0)\,\delta\phi\,\mathrm{d}A = 0$    (35)

Notwithstanding that the strong-form governing equations are inessential for the numerical treatment, we still derive them for the sake of mathematical succinctness, as follows:

$\mathrm{Div}\left(\mathbf{P}+\mathbf{P}^{vol}\right) + \mathbf{b}_0 = \mathbf{0}$  in $\Omega_0$

$\dfrac{p}{\kappa_0} - g^2(\phi)\,(J-1) = 0$  in $\Omega_0$

$g'(\phi)\,\bar\psi(\mathbf{C}) + 2g(\phi)g'(\phi)\,p\,(J-1) + \dfrac{3G_C}{8\,l_0} - \dfrac{3G_C l_0}{4}\,\Delta_X\phi + \eta\,\dot\phi = 0$  in $\Omega_0$

$\left(\mathbf{P}+\mathbf{P}^{vol}\right)\mathbf{n}_0 = \bar{\mathbf{t}}$  on $\partial\Omega_0^N$,  $\nabla_X\phi\cdot\mathbf{n}_0 = 0$  on $\partial\Omega_0$    (36)

Note that Eq. (27), which serves to relax the incompressible constraint of the damaged phase, is strictly recovered; see the second equality of Eq. (36).

Numerical aspects

Weak form of the mixed formulation

The standard finite element method (FEM) first derives the weak form of the governing equations via the Galerkin weighted-residual method. Let $w_u$, $w_p$, $w_\phi$ denote the test functions for the mechanical, pressure and phase-field equations, respectively. Multiplying the strong form (Eq. 36) by these test functions and performing integration by parts, the weak form of the mixed formulation in residual form can then be derived as

$R^u = \int_{\Omega_0} \left(\mathbf{P}+\mathbf{P}^{vol}\right) : \nabla_X w_u\,\mathrm{d}V - \int_{\Omega_0} \mathbf{b}_0\cdot w_u\,\mathrm{d}V - \int_{\partial\Omega_0^N} \bar{\mathbf{t}}\cdot w_u\,\mathrm{d}A = 0, \quad \forall\, w_u \in \mathcal{V}_u$    (37)

$R^p = \int_{\Omega_0} \left\{\left[(1-\phi)^4 + \varepsilon\right](J-1) - \dfrac{p}{\kappa_0}\right\} w_p\,\mathrm{d}V = 0, \quad \forall\, w_p \in \mathcal{V}_p$    (38)

$R^\phi = \int_{\Omega_0} \left[-2(1-\phi)\,\bar\psi - 4(1-\phi)^3\,p\,(J-1)\right] w_\phi\,\mathrm{d}V + \dfrac{3G_C}{8}\int_{\Omega_0} \left[\dfrac{w_\phi}{l_0} + 2\,l_0\,\nabla_X\phi\cdot\nabla_X w_\phi\right]\mathrm{d}V + \int_{\Omega_0} \eta\,\dfrac{\phi-\phi_n}{\Delta t}\,w_\phi\,\mathrm{d}V = 0, \quad \forall\, w_\phi \in \mathcal{V}_\phi$    (39)

Herein, $\mathcal{V}_u$, $\mathcal{V}_p$ and $\mathcal{V}_\phi$ are the trial (test) function spaces for the displacement field $\mathbf{u}$, pressure field $p$ and phase field $\phi$, and $\varepsilon$ is a small residual parameter retained for numerical conditioning.

FEM discretization

To ensure the stability of the mixed finite element formulation, the combination of discretization elements should be selected carefully.
Now, we exploit the well-known Q1/P0 element [38], as well as the P2/P1 element (a member of the Taylor-Hood element family [57]) with inf-sup stability, for the discretization; the configuration and node layout of these two elements are illustrated in Fig. 2. In the Q1/P0 formulation, the displacement $\mathbf{u}$ and phase field $\phi$ are discretized by linear quadrilateral (Q4) elements in the 2D case or linear hexahedral (H8) elements in the 3D case, while the pressure $p$ remains constant within an element. Concerning the P2/P1 formulation, $\mathbf{u}$ is discretized using quadratic 6-noded triangular (10-noded tetrahedral) elements, whilst $p$ and $\phi$ are discretized with linear 3-noded (4-noded) ones. As a demonstration, the Q1/P0 element is applied for the discretization in space; the fundamental field variables $\mathbf{u}$, $p$, $\phi$ and their test functions $w_u$, $w_p$, $w_\phi$ are then approximated as given in Eqs. (40)-(41). Regarding the time-dependent dissipation term $\eta\,\dot\phi$, a concise backward Euler difference method is adopted for the temporal discretization, i.e., $\dot\phi \approx (\phi_{n+1}-\phi_n)/\Delta t$.

Remark. Albeit suffering some complaints due to the lack of inf-sup stability, the classical Q1/P0 formulation works very robustly in most computing scenarios involving (near) incompressibility, including the present ones. As for higher-order elements, we exclusively chose the P2/P1 element because of the extremely fine meshes demanded by phase-field modeling of fracture. Other alternative higher-order elements such as BQ2/BQ1 are also available [58], and yet the computational cost is daunting, especially in 3-D phase-field modeling of fracture.
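The backward Euler treatment of the viscous term ηφ̇ amounts to evaluating the rate implicitly at the new time level, which makes the update unconditionally stable. A minimal single-degree-of-freedom sketch (the driving term is a stand-in for the assembled phase-field forces, not the paper's residual):

```python
# Backward-Euler update for  eta*phi_dot + k*(phi - phi_drive) = 0:
# phi_dot ~ (phi_{n+1} - phi_n)/dt evaluated implicitly at t_{n+1},
# giving the closed-form nodal update below.
def backward_euler_step(phi_n, phi_drive, eta, dt, k=1.0):
    return (eta * phi_n + dt * k * phi_drive) / (eta + dt * k)

phi, phi_drive = 0.0, 1.0
eta, dt = 1e-3, 1e-2
for _ in range(2000):
    phi = backward_euler_step(phi, phi_drive, eta, dt)
assert abs(phi - phi_drive) < 1e-6   # relaxes to the driving value
assert 0.0 <= phi <= 1.0             # iterates stay bounded for any dt
```

The viscosity η only retards the evolution of φ; it does not change the converged state, which is why a small value (here 10⁻³, as used later in the examples) suffices for regularization.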
Specifically,

$\mathbf{u} = \sum_{i=1}^{m} \mathbf{N}_i^u\,\mathbf{u}_i, \quad p = \sum_{i=1}^{n} N_i^p\,p_i, \quad \phi = \sum_{i=1}^{m} N_i\,\phi_i, \qquad w_u = \sum_{i=1}^{m} \mathbf{N}_i^u\,\mathbf{w}_i^u, \quad w_p = \sum_{i=1}^{n} N_i^p\,w_i^p, \quad w_\phi = \sum_{i=1}^{m} N_i\,w_i^\phi$    (40)

with (in the 2D case)

$\mathbf{N}_i^u = \begin{bmatrix} N_i & 0 \\ 0 & N_i \end{bmatrix}$    (41)

Linearization

To solve the coupled Eqs. (37)-(39), we linearize them using the Newton-Raphson method, as follows:

$\begin{bmatrix} \mathbf{K}^{uu} & \mathbf{K}^{up} & \mathbf{K}^{u\phi} \\ \mathbf{K}^{pu} & \mathbf{K}^{pp} & \mathbf{K}^{p\phi} \\ \mathbf{K}^{\phi u} & \mathbf{K}^{\phi p} & \mathbf{K}^{\phi\phi} \end{bmatrix} \begin{bmatrix} \Delta\mathbf{u} \\ \Delta\mathbf{p} \\ \Delta\boldsymbol{\phi} \end{bmatrix} = -\begin{bmatrix} \mathbf{f}^u \\ \mathbf{f}^p \\ \mathbf{f}^\phi \end{bmatrix}$    (43)

Here, Eq. (43) involves the three-field coupling of $\mathbf{u}$, $p$ and $\phi$. It is solved in a staggered manner, alternating between the displacement-pressure block and the phase-field block:

$\begin{bmatrix} \mathbf{K}^{uu} & \mathbf{K}^{up} \\ \mathbf{K}^{pu} & \mathbf{K}^{pp} \end{bmatrix} \begin{bmatrix} \Delta\mathbf{u} \\ \Delta\mathbf{p} \end{bmatrix} = -\begin{bmatrix} \mathbf{f}^u \\ \mathbf{f}^p \end{bmatrix}, \qquad \mathbf{K}^{\phi\phi}\,\Delta\boldsymbol{\phi} = -\mathbf{f}^\phi$    (45)

The residual vectors follow from Eqs. (37)-(39):

$\mathbf{f}^u = \int_{\Omega_0} \mathbf{B}_X^T\left(\hat{\mathbf{S}} + \mathbf{S}^{vol}\right)\mathrm{d}V - \int_{\Omega_0} \left(\mathbf{N}^u\right)^T\mathbf{b}_0\,\mathrm{d}V - \int_{\partial\Omega_0^N} \left(\mathbf{N}^u\right)^T\bar{\mathbf{t}}\,\mathrm{d}A$

$\mathbf{f}^p = \int_{\Omega_0} \left(N^p\right)^T\left\{\left[(1-\phi)^4+\varepsilon\right](J-1) - \dfrac{p}{\kappa_0}\right\}\mathrm{d}V$

$\mathbf{f}^\phi = \int_{\Omega_0} N^T\left[-2(1-\phi)\,\bar\psi - 4(1-\phi)^3\,p\,(J-1) + \dfrac{3G_C}{8\,l_0} + \eta\,\dfrac{\phi-\phi_n}{\Delta t}\right]\mathrm{d}V + \int_{\Omega_0} \dfrac{3G_C l_0}{4}\,(\nabla_X N)^T\,\nabla_X\phi\,\mathrm{d}V$    (46)

Going by the matrix derivation rule, the tangent stiffness matrices thereof read

$\mathbf{K}^{uu} = \dfrac{\partial\mathbf{f}^u}{\partial\mathbf{u}} = \int_{\Omega_0} \mathbf{B}_X^T\left(\hat{\mathbb{C}}+\mathbb{C}^{vol}\right)\mathbf{B}_X\,\mathrm{d}V + \int_{\Omega_0} \mathbf{G}^T\left(\hat{\mathbf{S}}+\mathbf{S}^{vol}\right)\mathbf{G}\,\mathrm{d}V$

$\mathbf{K}^{up} = \dfrac{\partial\mathbf{f}^u}{\partial\mathbf{p}} = \int_{\Omega_0} \mathbf{B}_X^T\left[(1-\phi)^4+\varepsilon\right]J\,\mathbf{C}^{-1}\,N^p\,\mathrm{d}V, \qquad \mathbf{K}^{pu} = \left(\mathbf{K}^{up}\right)^T$

$\mathbf{K}^{pp} = \dfrac{\partial\mathbf{f}^p}{\partial\mathbf{p}} = -\dfrac{1}{\kappa_0}\int_{\Omega_0} \left(N^p\right)^T N^p\,\mathrm{d}V$

$\mathbf{K}^{\phi\phi} = \dfrac{\partial\mathbf{f}^\phi}{\partial\boldsymbol{\phi}} = \int_{\Omega_0} N^T\left[2\,\bar\psi + 12(1-\phi)^2\,p\,(J-1) + \dfrac{\eta}{\Delta t}\right]N\,\mathrm{d}V + \int_{\Omega_0} \dfrac{3G_C l_0}{4}\,(\nabla_X N)^T\,\nabla_X N\,\mathrm{d}V$    (47)

Herein, the damaged quantities $\hat{\mathbf{S}}$, $\hat{\mathbb{C}}$, $\mathbf{S}^{vol}$ and $\mathbb{C}^{vol}$ are defined in Eqs. (48)-(51), while the gradient matrices are (in 3D, Voigt ordering 11, 22, 33, 12, 23, 13)

$\mathbf{B}_X^i = \begin{bmatrix} N_{i,X}F_{11} & N_{i,X}F_{21} & N_{i,X}F_{31} \\ N_{i,Y}F_{12} & N_{i,Y}F_{22} & N_{i,Y}F_{32} \\ N_{i,Z}F_{13} & N_{i,Z}F_{23} & N_{i,Z}F_{33} \\ N_{i,X}F_{12}+N_{i,Y}F_{11} & N_{i,X}F_{22}+N_{i,Y}F_{21} & N_{i,X}F_{32}+N_{i,Y}F_{31} \\ N_{i,Y}F_{13}+N_{i,Z}F_{12} & N_{i,Y}F_{23}+N_{i,Z}F_{22} & N_{i,Y}F_{33}+N_{i,Z}F_{32} \\ N_{i,X}F_{13}+N_{i,Z}F_{11} & N_{i,X}F_{23}+N_{i,Z}F_{21} & N_{i,X}F_{33}+N_{i,Z}F_{31} \end{bmatrix}$    (52)

and

$\mathbf{G}_i^T = \begin{bmatrix} N_{i,X} & N_{i,Y} & N_{i,Z} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & N_{i,X} & N_{i,Y} & N_{i,Z} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & N_{i,X} & N_{i,Y} & N_{i,Z} \end{bmatrix}$    (53)

where $N_{i,X}$ symbolizes $\partial N_i/\partial X$.

Enforced constraints

As far as the topic of this article is concerned, we touch upon two categories of constraints, i.e., the incompressibility constraints and the irreversibility constraints on crack growth. The basic tenet for disposing of the former is to degrade the incompressibility of the damaged material, which has been elaborated earlier and will not be reiterated here.
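The Newton-Raphson iteration K·Δx = −R used above is generic; its mechanics can be illustrated on a tiny stand-alone system. A pure-Python sketch on a hypothetical 2×2 residual (not the paper's equations), showing the residual/tangent/update pattern:

```python
# Newton-Raphson on the toy residual R(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5],
# which has the exact root (1, 2). Illustrates the K*delta = -R update loop.
def residual(x):
    return [x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0]

def jacobian(x):                       # consistent tangent "K"
    return [[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]]

def solve2x2(K, b):                    # direct solve of K * d = b
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(b[0] * K[1][1] - b[1] * K[0][1]) / det,
            (b[1] * K[0][0] - b[0] * K[1][0]) / det]

x = [1.0, 1.0]
for _ in range(30):
    R = residual(x)
    if max(abs(r) for r in R) < 1e-12:
        break
    d = solve2x2(jacobian(x), [-R[0], -R[1]])
    x = [x[0] + d[0], x[1] + d[1]]
assert max(abs(r) for r in residual(x)) < 1e-10
assert abs(x[0] - 1.0) < 1e-8 and abs(x[1] - 2.0) < 1e-8
```

In the staggered scheme of Eq. (45), the same loop runs on the u-p block with the phase field frozen, and then on the φ block with the kinematics frozen.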
Referring to the irreversibility constraint

$\phi_{n+1} - \phi_n \ge 0$    (54)

which is imposed by using an active set method that can be well embedded in the Newton-Raphson iterations [27,49]. For the sake of clarity, we state this method in the form of pseudocode, as given in Algorithm 1.

Algorithm 1 Active set method.
1. Define the active set $\mathbb{A} = \varnothing$ and the free set $\mathbb{A}'$ = all phase-field DOFs
2. while $\min\left(\Delta\phi_{\mathbb{A}'}\right) < 0$ do
3.   $\mathbb{A} \leftarrow \mathbb{A} \cup \{i \in \mathbb{A}' : \Delta\phi_i < 0\}$
4.   $\mathbb{A}' \leftarrow \mathbb{A}' \setminus \mathbb{A}$
5.   $\Delta\phi_{\mathbb{A}'} = -\left(\mathbf{K}^{\phi\phi}_{\mathbb{A}'\mathbb{A}'}\right)^{-1}\mathbf{f}^\phi_{\mathbb{A}'}$
6.   $\Delta\phi_{\mathbb{A}} = 0$
7. end
8. $\phi_{n+1} = \phi_n + \Delta\phi$

Adaptive element deletion

In the Lagrangian coordinate frame, the grid deforms with the material, which may engender severe element distortion, especially in large-strain fracture problems involving discontinuous surfaces. Although the phase-field representation of fracture erases the explicit discontinuity, the mesh distortion issue is not settled but made even worse. As displayed in Fig. 3(a), the crack-opening zone that should contain no material is now filled with damaged elements. To delineate the crack-opening outline, these elements with degraded stiffness must undergo large deformations; moreover, the most severe mesh deformation arises precisely in these damaged zones. A consequential idea is that, if these damaged elements can be properly removed, the mesh distortion dilemma facing large-strain fracture will be significantly alleviated. Following this route, an adaptive element deletion technique for phase-field modeling of large-strain fracture is employed. We first formulate a rule for element deletion: the elements that satisfy

$\max_i\left(\phi_i^e\right) \ge c$    (55)

will be removed from the solution domain (for the brief algorithmic procedure, refer to [49]). From our experience, the setting $c \in [0.95, 0.98]$ is apposite. After eliminating the labeled elements, the crack-opening profile is visualized, as illustrated in Fig. 3(b).

Solution schemes

Throughout the solving procedure, the pre-processing mesh-generation module is developed on the foundation of the open-source software packages termed iFEM [59] and ameshref [60].
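The logic of Algorithm 1 can be mimicked on a decoupled toy problem: minimize a per-node quadratic subject to the bound φ ≥ φₙ, pinning nodes whose unconstrained update violates irreversibility. A pure-Python sketch with a diagonal "stiffness" so the reduced solve is trivial (all names are illustrative):

```python
# Active-set enforcement of irreversibility on a decoupled quadratic:
# minimize 0.5*k_i*phi_i^2 - f_i*phi_i per node, subject to phi_i >= phi_n_i.
def active_set_solve(k, f, phi_n):
    free = set(range(len(k)))
    phi = list(phi_n)
    while True:
        for i in free:
            phi[i] = f[i] / k[i]                 # unconstrained stationary point
        violated = {i for i in free if phi[i] < phi_n[i]}
        if not violated:
            return phi
        for i in violated:                        # pin violating nodes to the bound
            phi[i] = phi_n[i]
        free -= violated

k = [2.0, 2.0, 2.0]
f = [3.0, -1.0, 0.5]
phi_n = [0.5, 0.2, 0.4]
phi = active_set_solve(k, f, phi_n)
assert phi[0] == 1.5                              # admissible unconstrained minimum
assert phi[1] == 0.2 and phi[2] == 0.4            # pinned at the previous state
assert all(p >= pn for p, pn in zip(phi, phi_n))  # irreversibility holds everywhere
```

In the actual formulation the per-node solve is replaced by the reduced linear system on the free set (step 5 of Algorithm 1), but the pin-and-resolve structure is the same.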
The proposed mixed displacement-pressure-phase field formulation for finite-strain fracture of quasi-incompressible hyperelastic materials is realized in parallel in-house MATLAB code and post-processed with ParaView [61]. To give insight into the complete solution scheme, the basic procedure and core techniques of the proposed approach are recapitulated in Algorithm 2.

Algorithm 2
1. Generate the FEM mesh using Q4 (H8) elements.
…
For load step n+1, run:
5. while ( $\|\mathbf{f}^u\| > tol$ or $\|\mathbf{f}^\phi\| > tol$ ) do
   …
   end do
6. Perform adaptive element deletion (optional).

Validation via numerical examples

In this section, the competence of the proposed framework for modeling the fracture of quasi-incompressible hyperelastic materials is validated by several numerical tests. The first example considers a well-known benchmark test of a 3-D block under compression, demonstrating the effectiveness of the mixed Q1/P0 and P2/P1 formulations in coping with quasi-incompressibility by quantitative comparison with the standard displacement solution. The ensuing two examples focus on plane-strain problems, consisting of a bilaterally notched specimen in tension rooted in the experiments of Hocine et al. and an amusing peeling test with weak interfaces. The last one is the more challenging tearing test of a 3-D sheet at large deformation, showing the superior performance of the proposed formulation and numerical treatment.

Quasi-incompressible block under compression

This example is a widely accepted benchmark test for evaluating the performance of numerical algorithms approaching the incompressible limit. The strain energy function takes the same form as in Reese et al. [62], viz. the compressible Neo-Hookean model

$\psi = \dfrac{\mu}{2}\left(\mathrm{tr}\,\mathbf{C} - 3\right) - \mu\,\ln J + \dfrac{\lambda}{2}\left(\ln J\right)^2$    (58)

Herein, the incompressibility is guaranteed by setting $\lambda \gg \mu$. Besides, the identical mesh shown in Fig. 4(b) is also utilized to obtain the standard displacement solution, denoted by Q1S.
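The deletion criterion of Eq. (55) reduces to a simple filter over the element connectivity. A minimal pure-Python sketch (data layout and names are illustrative, not the in-house code):

```python
# Adaptive element deletion, Eq. (55): drop every element whose maximum
# nodal phase-field value reaches the threshold c, exposing the crack opening.
def delete_damaged_elements(connectivity, phi_nodal, c=0.97):
    kept = []
    for elem in connectivity:
        if max(phi_nodal[n] for n in elem) < c:
            kept.append(elem)
    return kept

phi = [0.1, 0.99, 0.2, 0.3, 0.98, 0.05]     # nodal phase-field values
elems = [(0, 1, 2), (2, 3, 5), (1, 4, 5)]   # element-node connectivity
kept = delete_damaged_elements(elems, phi)
assert kept == [(2, 3, 5)]                  # only the intact element survives
```

The threshold 0.97 lies in the recommended range c ∈ [0.95, 0.98]; in practice the reduced connectivity is what the subsequent load step assembles over.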
By setting the pressure load $p_0 = 320\ \mathrm{N/mm^2}$, the displacement contours in the deformed configuration for the three schemes, Q1S, Q1/P0 and P2/P1, are presented together in Fig. 5. Thereinto, the standard Q1S exhibits over-stiffening on account of the well-known volumetric locking, while Q1/P0 averts this issue, closely resembling the response of the P2/P1 formulation visually. For quantitative comparison, the relative z-displacement of point A, known as the compression level, is exported for the three schemes. Fig. 6(a) depicts the convergence of the compression level with the number of elements per edge. As shown, the compression levels obtained by the Q1/P0 and P2/P1 formulations are almost indistinguishable once the number of elements per edge exceeds 12. The result calculated by the Q1S scheme, by contrast, is significantly lower, even when the mesh is refined. A similar phenomenon is also reflected in Fig. 6(b), which depicts the evolution of the compression level with loading for a mesh configuration of 16 elements per edge. We hereby underline that this benchmark corroborates the efficacy of the basic framework for incompressible problems, laying a solid foundation for the subsequent incorporation of the phase field.

Double edge notched specimen in tension

This example aims to test the performance of the proposed mixed formulation for fracture modeling of incompressible hyperelastic materials by comparison with experiments [63], with the geometry shown in Fig. 7(a). To better visualize the crack-opening morphology, the level set $\phi \ge 0.8$ is removed in ParaView†. The post-processed snapshots of the crack growth patterns and the corresponding Q1/P0 meshes at various fracture states are exhibited in Fig. 8 (see the supplemental material named mov1 for the complete crack evolution video), bearing a strong resemblance to those in the literature.
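As a quick sanity check on the compressible Neo-Hookean energy of Eq. (58), the reference state should be energy- and stress-free. A pure-Python sketch under an assumed isotropic stretch s (so C = s²I, tr C = 3s², J = s³; this reduction is only for the check, not from the paper):

```python
import math

# Compressible Neo-Hookean energy of Eq. (58) evaluated on C = s^2 * I:
# psi = mu/2*(tr C - 3) - mu*ln J + lam/2*(ln J)^2, with tr C = 3 s^2, J = s^3.
def psi(s, mu, lam):
    J = s**3
    return 0.5 * mu * (3.0 * s**2 - 3.0) - mu * math.log(J) \
           + 0.5 * lam * math.log(J)**2

def dpsi_ds(s, mu, lam, h=1e-6):
    # central finite difference in the stretch s
    return (psi(s + h, mu, lam) - psi(s - h, mu, lam)) / (2.0 * h)

mu, lam = 0.178, 1.0e3          # lam >> mu mimics the incompressible limit
assert psi(1.0, mu, lam) == 0.0            # zero energy at the reference
assert abs(dpsi_ds(1.0, mu, lam)) < 1e-6   # stress-free reference state
```

The −μ ln J term is what cancels the isochoric stress at s = 1; without it the reference configuration would not be an equilibrium state.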
We also derived smoothly distributed Cauchy stress and pressure fields in the Q1/P0 formulation; the contour of the Cauchy stress component $\sigma_{22}$ is demonstrated in Fig. 9. It is conspicuous that the crack-opening outlines solved by the proposed scheme are smoother than in past reports on this issue, owing to the degraded incompressibility of the damaged material. Furthermore, Fig. 10 reports the volume evolution after eliminating the damaged phase throughout the fracture process, corroborating that incompressibility is ensured for the undamaged material in the mixed Q1/P0 formulation. As for the P2/P1 scheme, the resulting crack patterns are indistinguishable from those of Q1/P0 (see Appendix B), except for the poorer robustness. For this reason, the P2/P1 scheme is shelved in the upcoming tests.

Two-dimensional peeling test

The design of this example is inspired by an intriguing gel peeling experiment [64], involving a more complicated configuration and stronger nonlinearities. We consider a rectangular sheet of 80 mm length and 16 mm width, containing a weak interface with a width of 0.5 mm at mid-height, as depicted in Fig. 11(a). The left edge is split into two segments by a notch of 20 mm length at the interface. With the bottom edge fixed, the seed crack can be driven by imposing a vertical displacement loading $\bar{u}$ on the upper segment of the left edge. Fig. 11(b) presents the finite element meshes used for this example, where the elements in the vicinity of the interface are refined (effective element size $h_f = l_0/5$). For the sake of testing the serviceability of the mixed formulation, we switch to a more realistic Mooney-Rivlin model (see Eq. 7 for the strain energy function), in which the material parameter is set as $c_1 = 0.078\ \mathrm{N/mm^2}$. Fig. 12 depicts the entire peeling process from the pre-crack opening (Fig. 12(a)), through germination (Fig. 12(b)) and growth (Figs. 12(c)-(e)), to the final separation at the weak interface (Fig. 12(f)).
The evolution of the crack patterns in the deformed configuration is quite similar to the experimental observations. We also extracted the pressure and Cauchy stress fields; a snapshot of their smooth distributions at the loading $\bar{u} = 74.82$ mm is presented in Fig. 13. For better visualization, a complete landscape of the entire fracture process is placed in the supplemental material termed mov2. Although this example demonstrates the outstanding performance of the proposed mixed framework, a more challenging three-dimensional fracture scenario is still lacking, which motivates the following test.

Tearing test in three-dimensional sheet

The idea of this example stems from the classical out-of-plane tearing experiment [65], somewhat like the test of Yin et al. [66]. We consider a 3-D rubber sheet of 48 mm length (L), 36 mm width (W) and 1 mm thickness (H), containing a seed crack of 12 mm length, as illustrated in Fig. 14(a). Constraining the right side, the sheet is torn apart by exerting antisymmetric z-displacements on the two left sides split by the crack. The finite element mesh used for this test is presented in Fig. 14(b), where the a priori crack path is refined to meet an effective element size of 0.125 mm. We assume that the strain energy function is portrayed by the Neo-Hookean model for simplicity. Fig. 15 displays the deformation history of the out-of-plane tearing through a series of snapshots at various loading states. The sheet first undergoes a purely elastic deformation phase in the early stage of loading, see Figs. 15(a) and (b). As the loading approaches $\bar{u} = 30.08$ mm, the damage initiates at the forefront of the crack-opening outline (Fig. 15(c)). After that, the loading is further increased to ignite the tearing process, as depicted in Figs. 15(d)-(g). We also present the distributions of other physical fields at an arbitrarily selected loading state, comprising the pressure $p$ and the magnitudes of the displacement $\mathbf{u}$ and Cauchy stress $\boldsymbol{\sigma}$, as shown in Fig. 16. A complete tearing animation termed mov3 is provided in the supplemental material, in close resemblance to the experimental scene [66].
As a final note, we hereby illuminate that the adaptive element deletion technique is vital to realizing this tearing test, although it is optional in the previous examples.

Conclusion

In the finite-strain regime, we have posed a novel mixed displacement-pressure-phase field framework for addressing the fracture of nearly incompressible hyperelastic materials. Based on the modified perturbed Lagrangian approach, a multi-field variational form is derived. The concrete numerical implementation of the proposed formulation is also presented, and its excellent performance is demonstrated via four numerical examples. For conciseness, we summarize the main contributions of this study as follows:
(i) The innate contradiction between incompressibility and diffuse crack opening at finite deformation was unveiled.
(ii) A scheme that loosens the incompressibility constraint of the damaged phase without affecting the intact material was proposed to resolve the underlying conflict.
(iii) On the numerical side, an adaptive element deletion technique was developed for the current issue, alleviating mesh distortion under large deformations, especially in 3-D scenarios.
(iv) In terms of precision and robustness, the classical Q1/P0 scheme performs better than the higher-order P2/P1 formulation in the fracture modeling of nearly incompressible hyperelastic materials.
To sum up, the proposed mixed framework eliminates the barriers set by incompressibility for phase-field crack opening, thus initiating a new avenue for finite-strain fracture in nearly incompressible cases. The current formulation will be further extended to study dynamic fracture issues with richer crack morphologies in future work.

ACKNOWLEDGMENTS

We acknowledge support from the National Natural Science Foundation of China (Grant Nos. 51890872 and 51790500).

Appendix A. Incompressibility frustrates crack opening

Based on the geometric configuration illustrated in Fig. 1, the consequences of imposing incompressibility on damaged materials are demonstrated here (material parameters refer to Section 5.2). We first consider a diffuse pre-crack, i.e., set $\phi = 1$ along the pre-crack. As shown in Fig. A1(a) and its partial enlargement Fig. A1(d), the pre-crack cannot open due to the limited volume change of the damaged elements. Then, Fig. A1(b) presents the results of using a geometric pre-crack. Although the crack opening was unhindered, the mesh at the crack tip was severely deformed, causing the crack outline to be unsmooth (see Fig. A1(e)); subsequent calculations even encountered mesh distortion, resulting in numerical collapse. For comparison, we also demonstrate the result of relaxing the incompressibility constraint of the damaged material, see Fig. A1(c). The crack-opening profile is very similar to the physical one, yet there is no mesh distortion at the crack tip, as depicted in Fig. A1(f). Given that the damaged material makes almost no substantial contribution to the computational system, it is sound to relax the incompressibility with the damage field.

Appendix B. Recalculated test 1 with the P2/P1 formulation

We recalculated test 1 using the higher-order P2/P1 formulation. Fig. A2 exhibits snapshots of the crack patterns and the corresponding P2/P1 meshes at various loading states. Evidently, the results obtained by P2/P1 are almost identical to those of the Q1/P0 scheme, which is also validated by the force-displacement curves in Fig. 7. From the numerical perspective, however, the P2/P1 scheme is less robust: with the same parameter settings, this formulation cannot reach the final complete-break state despite the higher computational cost.

2. Phase field description for finite strain fracture with incompressibility

2.1. Phase-field representation of cracks at finite strain

Fig. 1. (a) Illustration of a soft rectangular sheet containing a sharp crack $\Gamma$ in its undeformed configuration $\Omega_0$.
(b) Diffuse crack pattern depicted by the phase field $\phi$ in the deformed configuration $\Omega$.

The highlight of the phase-field representation of cracks consists in smearing the sharp crack topology with a time-dependent scalar damage field $\phi(\mathbf{X},t)$, as illustrated in Fig. 1(a). The separated crack surfaces that ought to open up parabolically in the new configuration $\Omega$ are now replaced by the phase field $\phi$, with a smooth transition from 0 (pristine state) to 1 (fully broken state), see Fig. 1(b).

Fig. 2. Shape and node arrangement of the Q1/P0 and P2/P1 elements in the 2D and 3D cases. The three marker symbols indicate displacement $\mathbf{u}$, pressure $p$ and phase-field $\phi$ DOFs, respectively.

Fig. 3. The mesh configuration before (a) and after (b) the damaged elements are deleted.

Fig. 4. 3D block under compression. (a) Quarter geometry and boundary conditions. The mesh configurations for the Q1/P0 (b) and P2/P1 (c) formulations.

The quarter geometric configuration and boundary conditions are presented in Fig. 4(a). A uniform pressure $p_0$ is imposed on a quarter-square zone encompassing the top-surface center (point A, see Fig. 4(a)). The meshes used for the Q1/P0 and P2/P1 formulations are demonstrated in Figs. 4(b) and (c), respectively.

Fig. 5. Displacement contour plots of the 3D block under compression in the deformed configuration for the Q1S solution (a), the mixed Q1/P0 formulation (b) and the P2/P1 formulation (c).

Fig. 6. Comparison of the calculation results of the three formulations. (a) Convergence study of the compression level (point A) with the number of elements per edge. (b) Comparison of the evolution of the compression level with the loading.

Fig. 7. Double edge notched specimen in tension. (a) Geometry and boundary conditions allowing for symmetry. (b) Comparison of simulated (Q1/P0 and P2/P1) and experimental force-displacement responses.

Fig. 8. Crack patterns and the corresponding meshes of the double-edge tensile specimen.

Fig. 9.
Distributions of the Cauchy stress $\sigma_{22}$ for the Q1/P0 formulation at various loading states.

Fig. 10. Evolution of the incompressible level throughout the fracture process.

Fig. 11. Peeling test in soft materials containing the weak interface. (a) Geometry configuration and boundary conditions. (b) Finite element meshes used for the Q1/P0 formulation.

Fig. 13. Distributions of the pressure p and Cauchy stress component $\sigma_{11}$.

Fig. 14. Tearing test in the 3-D sheet. (a) Geometrical setup and boundary conditions. (b) Finite element mesh used for the Q1/P0 formulation.

Fig. 15. Snapshots of the 3-D tear test in the deformed configuration at various loading states.

Fig. 16. Tear test in the 3-D sheet. Distributions of the magnitude of displacement $\mathbf{u}$, the pressure $p$ and the magnitude of the Cauchy stress $\boldsymbol{\sigma}$ at the loading $\bar{u} = 58$ mm.

Fig. A1. Incompressible damaged phase obstructing crack opening with diffuse (a) and geometric (b) pre-cracks, respectively. (c) The result obtained by the proposed framework. (d)-(f) are the partial enlargements corresponding to (a)-(c).

Fig. A2. Crack patterns and the corresponding meshes of test 1 recalculated with the P2/P1 formulation. The snapshots (a)-(e) (or (f)-(j)) are at loading displacements of $\bar{u} = 10$, $40$, $60.2573$, $61.7294$ and $61.7296$ mm.
Herein, the damaged PK2 stress Ŝ and elastic tensor  read    2 1 ε     S S  (48) and     2 1 ε        (49) respectively, while the pure volumetric parts are given by     4 1 1 ε vol vol T Jp         S P F C (50)         4 1 1 1 1 1 1 2 1 ε C C C C C C vol vol ij kl il kj ik jl pJ pJ                S C  (51) Besides, the gradient matrices X B and  are defined by , 11 , 21 , 31 , 12 , 22 , 32 , 13 , 23 , 33 , 13 , 12 , 23 , 22 , 33 , 32 , 13 , 11 , 23 , 21 , 33 , 31 , 12 , 11 , 22 , 21 , 32 , 3 ).From our experience, the setting [0.95, 0.98] c   is apposite. After eliminating the labeled elements, the crack opening profile is visualized, as illustrated in presents the geometry and boundary conditions considering symmetry, aligning with the experiment of Hocine et al. The pre-crack length ia take 12 mm and 24mm, respectively. Using the Neo-Hookean model to characterize the mechanical response, the material parameters are set as: 2 0.178 N/mm   , 0.499 v  and 1.67 N/mm c G  . Following the previous reports, the phase-field regularization parameter is set to 0 1 mm l  , the effective element size 0.2 mm f h  and viscosity coefficient 3 1 10     . With the above parameters, this example was simulated in terms of Q1/P0 and P2/P1 schemes, respectively, and the resulting force-displacement responses are plotted in Fig 7(b) together with the experimental ones. While the curves obtained by Q1/P0 and P2/P1 are in good agreement with the experimental measurements [63], the P2/P1 formulation breaks down when approaching the complete fracture state for the identical parameter settings. † This operation is a post-processing technique, unlike the foregoing adaptive mesh deletion method. (57)with the Euclidean norm  .End do 6. Perform adaptive mesh deletion (optional). Fracture and adhesion of soft materials: a review. C Creton, M Ciccotti, Reports on Progress in Physics. 7946601C. Creton, M. 
Ciccotti, Fracture and adhesion of soft materials: a review, Reports on Progress in Physics, 79 (2016) 046601. Oscillating fracture paths in rubber. R D Deegan, P J Petersan, M Marder, H L Swinney, Physical review letters. 8814304R.D. Deegan, P.J. Petersan, M. Marder, H.L. Swinney, Oscillating fracture paths in rubber, Physical review letters, 88 (2001) 014304. Cracks in rubber under tension exceed the shear wave speed. P J Petersan, R D Deegan, M Marder, H L Swinney, Physical review letters. 9315504P.J. Petersan, R.D. Deegan, M. Marder, H.L. Swinney, Cracks in rubber under tension exceed the shear wave speed, Physical review letters, 93 (2004) 015504. Supersonic rupture of rubber. M Marder, Journal of the Mechanics and Physics of Solids. 54M. Marder, Supersonic rupture of rubber, Journal of the Mechanics and Physics of Solids, 54 (2006) 491-532. Intrinsic nonlinear scale governs oscillations in rapid fracture. T Goldman, R Harpaz, E Bouchbinder, J Fineberg, Physical review letters. 108104303T. Goldman, R. Harpaz, E. Bouchbinder, J. Fineberg, Intrinsic nonlinear scale governs oscillations in rapid fracture, Physical review letters, 108 (2012) 104303. Instability in dynamic fracture and the failure of the classical theory of cracks. C.-H Chen, E Bouchbinder, A Karma, Nature Physics. C.-H. Chen, E. Bouchbinder, A. Karma, Instability in dynamic fracture and the failure of the classical theory of cracks, Nature Physics, 13 (2017) 1186-1190. Damage, gradient of damage and principle of virtual power. M Frémond, B Nedjar, International Journal of Solids and Structures. 33M. Frémond, B. Nedjar, Damage, gradient of damage and principle of virtual power, International Journal of Solids and Structures, 33 (1996) 1083-1103. Reformulation of elasticity theory for discontinuities and long-range forces. S A Silling, Journal of the Mechanics and Physics of Solids. 48S.A. 
Silling, Reformulation of elasticity theory for discontinuities and long-range forces, Journal of the Mechanics and Physics of Solids, 48 (2000) 175-209. The variational approach to fracture. B Bourdin, G A Francfort, J.-J Marigo, Journal of elasticity. 91B. Bourdin, G.A. Francfort, J.-J. Marigo, The variational approach to fracture, Journal of elasticity, 91 (2008) 5-148. A review on phase-field models of brittle fracture and a new fast hybrid formulation. M Ambati, T Gerasimov, L De Lorenzis, Computational Mechanics. 55M. Ambati, T. Gerasimov, L. De Lorenzis, A review on phase-field models of brittle fracture and a new fast hybrid formulation, Computational Mechanics, 55 (2015) 383-405. A review of phase-field models, fundamentals and their applications to composite laminates. T Q Bui, X Hu, Engineering Fracture Mechanics. 107705T.Q. Bui, X. Hu, A review of phase-field models, fundamentals and their applications to composite laminates, Engineering Fracture Mechanics, (2021) 107705. Revisiting brittle fracture as an energy minimization problem. G A Francfort, J J Marigo, Journal of the Mechanics and Physics of Solids. 46G.A. Francfort, J.J. Marigo, Revisiting brittle fracture as an energy minimization problem, Journal of the Mechanics and Physics of Solids, 46 (1998) 1319-1342. Continuum field description of crack propagation. I Aranson, V Kalatsky, V Vinokur, Physical review letters. 85118I. Aranson, V. Kalatsky, V. Vinokur, Continuum field description of crack propagation, Physical review letters, 85 (2000) 118. Phase-field model of mode III dynamic fracture. A Karma, D A Kessler, H Levine, Physical Review Letters. 8745501A. Karma, D.A. Kessler, H. Levine, Phase-field model of mode III dynamic fracture, Physical Review Letters, 87 (2001) 045501. Laws of crack motion and phase-field models of fracture. V Hakim, A Karma, Journal of the Mechanics Physics of Solids. 57V. Hakim, A. 
Packet Latency of Deterministic Broadcasting in Adversarial Multiple Access Channels *

Lakshmi Anantharamu (Department of Computer Science and Engineering, University of Colorado Denver, Denver, Colorado, U.S.A.)
Bogdan S. Chlebus (Department of Computer Science and Engineering, University of Colorado Denver, Denver, Colorado, U.S.A.)
Dariusz R. Kowalski (Department of Computer Science, University of Liverpool, Liverpool, U.K.)
Mariusz A. Rokicki (Department of Computer Science, University of Liverpool, Liverpool, U.K.)

Abstract. We study broadcasting on multiple access channels with dynamic packet arrivals and jamming. The communication environment is represented by adversarial models which specify constraints on packet arrivals and jamming. We consider deterministic distributed broadcast algorithms and give upper bounds on the worst-case packet latency and the number of queued packets in relation to the parameters defining adversaries. Packet arrivals are determined by the rate of injections and the number of packets that can arrive in one round. Jamming is constrained by the rate with which the adversary can jam rounds and by the number of consecutive rounds that can be jammed.

Keywords: multiple access channel, adversarial queuing, jamming, distributed algorithm, deterministic algorithm, packet latency, queue size.

* The results of this paper appeared in a preliminary form in [6] and [7].

DOI: 10.1016/j.jcss.2018.07.001
arXiv: 1701.00186 (https://arxiv.org/pdf/1701.00186v1.pdf)
1 Jan 2017

Introduction

We study broadcasting on multiple access channels by deterministic distributed algorithms. The communication medium is considered either with jamming or without it. We evaluate the performance of communication algorithms by bounds on packet latency and on the number of packets queued at stations, both metrics understood in their worst-case sense. The traditional approach to dynamic broadcasting on multiple access channels uses randomization to arbitrate for access to a channel in a distributed manner.
Typical examples of randomized protocols include backoff protocols, like the binary exponential backoff employed in the Ethernet. It is a popular opinion that using randomization is the only option for coping with bursty traffic, subject to the challenges of collision resolution, and that the effectiveness of deterministic solutions for dynamic broadcasting is limited. The preliminary conference presentations of this work [6, 7] reported outcomes of simulation experiments in which we measured the average packet latency of various broadcasting algorithms, including deterministic and backoff ones. These experiments indicated that when injections are intense and sustained, even simple deterministic algorithms perform better than randomized backoff ones in terms of average packet latency. This indicates that deterministic algorithms for broadcasting on multiple-access channels have untapped potential.

We explore algorithmic paradigms useful for deterministic distributed broadcasting with dynamic continuous packet injection. This is done not so much to compare deterministic algorithms with randomized solutions as to study an algorithmic problem interesting in its own right. The goal is to investigate worst-case bounds on packet latency and queue size achievable by deterministic solutions to dynamic broadcasting. It is supported by a model of continuous packet injection without any stochastic assumptions about how packets are generated and where and when they are injected. This model of adversarial queuing is an alternative to models of stochastic packet injection. It has proved useful in studying dynamic communication with only minimal constraints on how traffic is generated.

There are numerous reasons why the traditional approach to broadcasting by employing randomization has been considered essentially the only viable one.
The methodological underpinnings of key performance metrics, like queue size and latency, have normally been studied with stochastic assumptions in mind. The methodology of simulations has been geared towards models of regular data injection defined by simple stochastic constraints. Most importantly, in real-world applications, most stations stay idle for most of the time, so that periods of inactivity are interspersed with unexpected bursts of activity by stations in unpredictable configurations. The success of Ethernet, as a real-world implementation of local area networks, has confirmed that randomization works very well in practice.

Jamming in wireless networks is often understood as an effect of deliberate transmissions of radio signals that disrupt the flow of information by creating interference of legitimate signals with such additional disrupting transmissions. We use the word "jamming" in a different sense. A jammed round has the same effect as one with multiple simultaneous transmissions, in how it is perceived by the stations attached to the channel. Stations cannot distinguish a jammed round from a round with multiple transmissions. This means that jamming is understood as logical or virtual, in the sense that we do not make any assumptions about the physical reasons justifying a possibility that one station transmits in a round and the transmitted message is not heard on the channel. This logical approach to jamming allows us to capture a situation in which jamming occurs because groups of stations execute their independent communication protocols, so that for each group an interference caused by "foreign" transmissions is logically equivalent to jamming. A similar motivation arises from a situation in which a degradation-of-service attack produces dummy packets that interfere with legitimate packets.

We investigate deterministic broadcast algorithms for dynamic packet injection.
Neither randomization in algorithms nor any stochastic component determining packet injection is present in the considered communication environments. The obtained upper bounds on packet latency and queue sizes of broadcast algorithms are worst-case. The studied communication algorithms are distributed in that they are executed with no centralized control. The stations attached to the channel are assumed to be activated at the same initial round with empty queues. We consider broadcasting against adversaries that control both injections of packets into stations and jamming of the communication medium. Packet injection is limited only by the rate of injecting new packets and the number of packets that can be injected simultaneously. Jamming is limited by the rate of jamming different rounds and by how many consecutive rounds can be jammed.

We use the slotted model of synchrony, in which an execution of a communication algorithm is partitioned into rounds, so that a transmission of a message with one packet takes one round. The set of stations attached to the channel is fixed and their number n is known, in that it can be used in codes of algorithms. Stations are equipped with private queues, in which they can store packets until they are transmitted successfully. Synchrony is the underlying model's component that allows us to define the rate of injecting packets and the rate of jamming different rounds. Similarly, rounds are needed to determine burstiness of traffic, defined as the maximum number of packets that can be injected "at the same time," and the burstiness of jamming, understood as the maximum size of a time interval that is unavailable for successful transmissions because of continuous jamming. All the considered algorithms have bounded packet latency for each fixed injection rate ρ and jamming rate λ, subject only to the necessary constraint that ρ + λ < 1. The obtained results are recapitulated in the final Section 7.
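Constraints of this kind are often formalized as leaky-bucket conditions. The sketch below is our own illustration, not a definition from this paper: it assumes an adversary of injection rate rho with injection burstiness b and jamming rate lam with jamming burstiness c, and requires that every interval of t consecutive rounds contains at most rho*t + b injected packets and at most lam*t + c jammed rounds. All parameter names are ours.

```python
def admissible(arrivals, jams, rho, b, lam, c):
    """Check leaky-bucket constraints over every contiguous round interval.

    arrivals[r]: number of packets injected in round r.
    jams[r]: 1 if round r is jammed, else 0.
    In any interval of t consecutive rounds, the adversary may inject at
    most rho*t + b packets and jam at most lam*t + c rounds.
    """
    total_rounds = len(arrivals)
    for i in range(total_rounds):
        injected = jammed = 0
        for j in range(i, total_rounds):
            injected += arrivals[j]
            jammed += jams[j]
            t = j - i + 1
            if injected > rho * t + b or jammed > lam * t + c:
                return False
    return True
```

For example, under rho = 0.5 and b = 1, a history injecting one packet every other round is admissible, while two packets in a single round already exceeds the budget of the one-round interval.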
Previous work on adversarial multiple access channels. Now we review previous work on broadcasting in multiple-access channels in the framework of adversarial queuing. The first such work, by Bender et al. [15], concerned throughput of randomized backoff for multiple-access channels, in the queue-free model. Deterministic distributed broadcast algorithms for multiple-access channels, in the model of stations with queues, were first considered by Chlebus et al. [24]. They specified the classes of acknowledgment based and full sensing deterministic distributed algorithms, along the lines of the respective randomized protocols [21]. The maximum throughput, defined to mean the maximum rate for which stability is achievable, was studied by Chlebus et al. [23]. Their model was of a fixed set of stations with queues, whose size n is known. They developed a stable deterministic distributed broadcast algorithm with queues of O(n^2 + burstiness) size against leaky-bucket adversaries of injection rate 1, where "burstiness" is the maximum number of packets that the adversary can inject in one round. That work demonstrated that throughput 1 was achievable in the model of a fixed set of stations whose number n is known. They also showed some restrictions on traffic with throughput 1; in particular, communication algorithms have to be adaptive (use control bits in messages), achieving bounded packet latency is impossible, and queues have to be Ω(n^2 + burstiness). Anantharamu et al. [8] extended work on throughput 1 in adversarial settings by studying the impact of limiting a window-type adversary by assigning individual rates of injecting data for each station. They gave a non-adaptive algorithm for channels without collision detection of O(n + w) queue size and O(nw) packet latency, where w is the window size; this is in contrast with general adversaries, against whom bounded packet latency for injection rate 1 is impossible to achieve. Bieńkowski et al.
[18] studied online broadcasting against adversaries that are unbounded in the sense that they can inject packets into arbitrary stations with no constraints on their numbers nor rates of injection. They gave a deterministic algorithm optimal with respect to competitive performance, when measuring either the total number of packets in the system or the maximum queue size. They also showed that the algorithm is stochastically optimal for any expected injection rate smaller than or equal to 1. Anantharamu and Chlebus [5] considered a multiple access channel with an unbounded supply of anonymous stations attached to it, among which only stations activated by the adversary with injected packets participate in broadcasting. They studied deterministic distributed broadcast algorithms against adversaries that are restricted to be able to activate at most one station per round. Their algorithms provide bounded packet latency for injection rates up to 1/2, with specific rates depending on additional features of the algorithms, and they showed that no injection rate greater than 3/4 can be handled with bounded packet latency in this model.

Related work. The simplest communication problem on multiple access channels is about collision resolution, in which there is a group of active stations, being a subset of all stations connected to the channel, and we want either some station in the group or all of them to transmit successfully at least once. For recent work on this topic, see the papers by Kowalski [32], Anta et al. [13], and De Marco and Kowalski [25]. Most related work on broadcasting on multiple access channels has been carried out with randomization playing an integral part. Randomness can affect the behavior of protocols either directly, by being a part of the mechanism of a communication algorithm, or indirectly, when packets are generated subject to stochastic constraints.
With randomness affecting communication in either way, the communication environment can be represented as a Markov chain, with stability understood as ergodicity. Stability of randomized communication algorithms can be considered in the queue-free model, in which a packet gets associated with a new station at the time of injection, and the station dies after the packet has been heard on the channel. Full sensing protocols were shown to fare well in this model; some protocols stable for injection rate slightly below 1/2 were developed, see Chlebus [21]. The model of a fixed set of stations with private queues was considered to be less radical, as queues appear to have a stabilizing effect. Håstad et al. [31], Al-Ammal et al. [2] and Goldberg et al. [29] studied bounds on the rates for which the binary exponential backoff was stable, as functions of the number of stations. For recent work related to the exponential backoff see the papers by Bender et al. [16] and Bender et al. [17], who proposed modifications to exponential backoff with the goal of improving some of its characteristics. Raghavan and Upfal [34] and Goldberg et al. [30] proposed randomized broadcast algorithms based on different paradigms than those used in backoff protocols. Paper [21] includes a survey of randomized communication in multiple-access channels.

The methodology of adversarial queuing allows one to capture the notion of stability of communication protocols without resorting to randomness, and serves as a framework for worst-case bounds on performance of deterministic protocols. Borodin et al. [19] proposed this approach in the context of routing protocols in store-and-forward networks. This was followed by Andrews et al. [9], who emphasized the notion of universality in adversarial settings. The adversarial approach to modeling communication proved to be inspirational and versatile. Alvarez et al.
[3] applied adversarial models to capture phenomena related to routing of packets with varying priorities and failures in networks. Álvarez et al. [4] addressed the impact of link failures on stability of communication algorithms by way of modeling them in adversarial terms. Andrews and Zhang [11] considered adversarial networks in which nodes operate as switches connecting inputs with outputs, so that routed packets encounter additional congestion constraints at nodes when they compete with other packets for input and output ports and need to be queued when delayed. Andrews and Zhang [12] investigated routing and scheduling in adversarial wireless networks in which every node can transmit data to at most one neighboring node per time step and where data arrivals and transmission rates are governed by an adversary.

Worst-case packet latency of routing protocols for store-and-forward wired networks has been studied in the framework of adversarial queuing. Aiello et al. [1] demonstrated that polynomial packet latency can be achieved by a distributed algorithm even when the adversaries do not disclose the paths they assigned to packets in order to comply with congestion restrictions. Andrews et al. [10] studied packet latency of adversarial routing when the entire path of a packet is known at the source. Broder et al. [20] discussed conditions under which protocols effective for static routing provide bounded packet latency when applied in dynamic routing. Scheideler and Vöcking [37] investigated how to transform static store-and-forward routing algorithms, designed to handle packets injected at the same time, into efficient algorithms able to handle packets injected continuously into the network, so that packet delays in the static case are close to those occurring in the dynamic case. Rosén and Tsirkin [36] studied bounded packet delays against the ultimately powerful adversaries of rate 1.
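Several of the stability results cited above [2, 29, 31] concern the binary exponential backoff. As a point of reference, its retransmission rule in the truncated form used by the Ethernet can be sketched as follows; the function name and the cap parameter are our own illustrative choices, not taken from this paper.

```python
import random

def binary_exponential_backoff(attempt, max_exp=10, rng=random):
    """Draw the random delay before the next retransmission attempt.

    After the attempt-th consecutive collision, a station waits a number
    of rounds chosen uniformly from {0, 1, ..., 2^min(attempt, max_exp) - 1}.
    The contention window doubles after each collision and is truncated
    at 2^max_exp.
    """
    window = 2 ** min(attempt, max_exp)
    return rng.randrange(window)
```

The doubling window spreads contending stations out over time, which is why such protocols cope well with occasional bursts, while sustained heavy traffic can keep windows large and delays long.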
Jamming in multiple-access channels and wireless networks is usually understood as disruptions occurring in individual rounds that prevent successful transmissions in spite of the lack of collisions caused by concurrent interfering transmissions. Awerbuch et al. [14] studied jamming in multiple-access channels in an adversarial setting with the goal of estimating the saturation throughput of randomized protocols. Gilbert et al. [28] studied jammed transmissions on a multiple-access channel with the goal of optimizing energy consumption per transmitting station. Broadcasting on multiple channels with jamming controlled by adversaries was studied by Chlebus et al. [22], Gilbert et al. [26], Gilbert et al. [27], and Meier et al. [33]. Richa et al. [35] considered broadcasting on wireless networks modeled as unit disc graphs with one communication channel, in which a constant fraction of rounds can be jammed.

Preliminaries

In this section, we review the technical specifications of the underlying model of communication and of the algorithms. This is a preparation for considering packet latency of deterministic distributed communication algorithms, depending on the number of stations and on an adversary whose combined injection and jamming rates are less than 1. In subsequent sections, we discuss specific deterministic communication algorithms, for which we estimate upper bounds on the queue size and packet latency, as functions of the size of the system and the adversary at hand. A communication environment we study consists of some n stations attached to a channel. The stations receive packets continuously and their goal is to have them broadcast successfully. Each station is equipped with a private buffer space, which is organized as a queue. A station may hold multiple packets at a time, which are stored in this queue. The queues' capacities are assumed to be unbounded, in that a queue can accommodate an arbitrary number of packets.
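The buffering just described amounts to an unbounded FIFO queue per station. The following minimal Python sketch illustrates it; the class and method names are ours and not part of the model.

```python
from collections import deque

class Station:
    """Minimal sketch of a station's unbounded FIFO packet queue
    (illustrative only; names are not from the model)."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()          # unbounded FIFO queue of packets

    def inject(self, packets):        # packets arriving in a round
        self.queue.extend(packets)

    def next_packet(self):            # next packet to transmit, if any
        return self.queue.popleft() if self.queue else None

s = Station(0)
s.inject(["p1", "p2"])
assert s.next_packet() == "p1"        # FIFO order
assert s.next_packet() == "p2"
assert s.next_packet() is None
```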
A message transmitted by a station consists of either a packet along with control bits or only control bits. We use the slotted model, in which time is partitioned into rounds. The size of a message and the duration of a round are calibrated such that a transmission of a message takes one round. Multiple access channel. A message successfully transmitted on the communication medium is delivered to each station; we say that the message is heard by each station. A round when no message is heard on the channel is called void. A round may either be jammed or not; a round is clear when it is not jammed. A broadcast system is said to be a multiple-access channel without jamming when the channel is never jammed and a message transmitted by a station is heard, instantaneously and by all the stations, if and only if the transmission does not overlap in time with any other transmissions. In the slotted model that we assume, a message is heard by all the stations in a round of transmission when exactly one station transmits in this round. A broadcast system is said to be a multiple-access channel with jamming if a message transmitted by a station is heard, instantaneously and by all the stations, if and only if the transmission does not overlap in time with any other transmissions and the channel is not jammed during the transmission. In the slotted model of channels that we assume, a message is heard by all the stations in a round of transmission when exactly one station transmits in this round and the round is clear. In each round, all the stations receive the same feedback from the channel. When a message is heard on the channel, then the message itself is such feedback. A round with no transmissions is said to be silent; in such a round, all the stations receive from the channel the feedback we call silence. Multiple transmissions in the same round result in a conflict for access to the channel, which is called a collision.
When a round is jammed then all the stations receive in this round the same feedback from the channel as in a round of collision. Now we recapitulate the categorizations of rounds. When a round is void, that is, when no message is heard, then this is because of one of the following three possibilities. One possibility is that the round is silent, which means there is no transmission. Another possibility is that the round is jammed, and then it does not matter whether there is any transmission or not. Finally, there may be a collision caused by multiple simultaneous transmissions. Stations cannot distinguish between a collision, caused by multiple simultaneous transmissions, and the channel being jammed in the round, in the sense that the channel is sensed in exactly the same manner in both cases. We say that collision detection is available when stations can distinguish between a silence and either a collision or jamming by the feedback they receive from the channel; otherwise the channel is without collision detection. Next we consider in detail the four possible cases of a channel being either with jamming or not, and, independently, being either with collision detection or not. A channel is without jamming and without collision detection: a void round is perceived as silence, even when caused by a collision. A channel is without jamming and with collision detection: a void round is perceived either as silence, when there is no transmission, or as a collision, when there are multiple simultaneous transmissions. A channel is with jamming and without collision detection: a void round is perceived as silence, and the stations cannot distinguish which of the following possible causes makes the round void, that is, either no message is transmitted or there is a collision or the round is jammed.
A channel is with jamming and with collision detection: a void round is either perceived as silence, which means there is no transmission, or as a collision, which means that either there are multiple simultaneous transmissions or the round is jammed. Adversarial model of packet injection. We use the general leaky-bucket adversarial model, as considered in [9,23]. An adversary is determined by injection rate and burstiness. Let a real number ρ satisfy 0 < ρ ≤ 1, and let b be a non-negative integer; the leaky-bucket adversary of type (ρ, b) may inject at most ρt + b packets into any set of stations in every contiguous segment of t > 0 rounds. An adversary is said to be of injection rate ρ when it is of type (ρ, b), for some b. Burstiness means the maximum number of packets that can be injected in one round. The adversary of type (ρ, b) has burstiness ⌊ρ + b⌋. Knowledge. A property of a system is said to be known when it can be referred to explicitly in a code of a communication algorithm. We assume throughout that the number of stations n is known to the stations. Each station has a unique integer name in [0, n − 1], which it knows. When a station needs to be distinguished in a communication algorithm, for example to be the first one to transmit in an execution, then it is always the station with name 0. Definition of deterministic distributed broadcast algorithms. Broadcast algorithms control the timings of transmissions by individual stations. All the stations start simultaneously at the same round. The local queues at stations are under the FIFO discipline, which minimizes packet latency. A packet is never dropped by a station, unless it has been heard on the channel. A state of a station is determined by the private values of variables occurring in the code of the algorithm and the number of outstanding packets in its queue that still need to be transmitted. A state transition is a change of state in one round.
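The leaky-bucket constraint on packet injection can be checked mechanically. The following Python sketch, whose function name and interface are ours and purely illustrative, verifies that an injection pattern conforms to type (ρ, b) by testing every contiguous segment of rounds.

```python
def conforms(injections, rho, b):
    """Check the leaky-bucket constraint: at most rho*t + b packets
    injected in every contiguous segment of t > 0 rounds.
    injections[i] is the number of packets injected in round i."""
    n = len(injections)
    for start in range(n):
        total = 0
        for t in range(1, n - start + 1):
            total += injections[start + t - 1]
            if total > rho * t + b:
                return False          # some segment violates the bound
    return True

assert conforms([1, 0, 1, 0], rho=0.5, b=1)
assert not conforms([3, 3], rho=0.5, b=1)   # 3 packets exceed 0.5*1 + 1
```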
An execution of an algorithm is a sequence of events occurring at consecutive rounds. An event in a round comprises the following actions at any station, in the given order: a) a station either transmits a message or pauses, according to its state, b) a station receives feedback from the channel, in the form of either hearing a message or a collision signal or silence, then c) new packets are injected into a station, if any, and finally d) a state transition occurs in a station. A state transition depends on the state at the end of the previous round, the feedback from the channel in this round, and the packets injected in this round; it occurs as follows. If packets have been injected in this round then they are enqueued into the local queue. If the station has just transmitted successfully in this round, then the transmitted packet is discarded and a new pending packet is obtained by dequeuing the local queue, unless it is empty. Finally, a message for the next round is prepared, if any transmission is to be attempted. We categorize broadcast algorithms according to the terminology used in [23,24]. All the algorithms considered in this paper are full sensing, in that nontrivial state transitions can occur at a station in any round, even when the station does not have pending packets to transmit. Algorithms that use control bits piggybacked on packets or send messages consisting of only control bits are called adaptive. When we refer to an algorithm simply as full sensing then this means that it is not adaptive. A channel with jamming does not produce any special "interference" signal indicating that the round is jammed. It follows that a communication algorithm for channels without jamming can be executed, without any changes in its code, on channels with jamming. Jamming adversaries. For channels with jamming, we consider adversaries that control both packet injections and jamming.
Given real numbers ρ and λ in [0, 1] and an integer b > 0, the leaky-bucket jamming adversary of type (ρ, λ, b) can inject at most ρ|τ| + b packets and, independently, it can jam at most λ|τ| + b rounds, in any contiguous segment τ of |τ| > 0 rounds. For this adversary, we refer to ρ as the injection rate and to λ as the jamming rate. We can observe that the non-jamming adversary of type (ρ, b) is formally the same as the jamming adversary of type (ρ, 0, b). If λ = 1 then every round could be jammed, making the channel dysfunctional. Therefore we assume that the jamming rate λ satisfies λ < 1. Stability is not achievable against a jamming adversary with injection rate ρ and jamming rate λ satisfying ρ + λ > 1. To see this, observe that the inequality is equivalent to ρ > 1 − λ, so when the adversary jams at maximum capacity, then the bandwidth remaining for transmissions is 1 − λ, while the injection rate is greater than 1 − λ. It is possible to achieve stability in the case ρ + λ = 1, by adapting the approach for ρ = 1 (and λ = 0) in [23], but packet latency is then inherently unbounded. We assume throughout that ρ + λ < 1. The number of packets that a jamming adversary can inject in one round is called its injection burstiness. This parameter equals ⌊ρ + b⌋ for a leaky-bucket adversary. The maximum number of consecutive rounds that an adversary can jam is called its jamming burstiness. The leaky-bucket jamming adversary of type (ρ, λ, b) can jam at most ⌊b/(1 − λ)⌋ consecutive rounds, as the inequality λx + b ≥ x needs to hold for any such number x of rounds. The type of an adversary is not assumed to be known. The only exception to this rule in this paper occurs for a full-sensing algorithm that has an upper bound J on the jamming burstiness of an adversary as part of its code; this algorithm attains the claimed packet latency when the adversary's jamming burstiness is at most J. Performance of broadcast algorithms.
The basic quality of a communication algorithm in a given adversarial environment is stability, understood in the sense that the number of packets in the queues at stations stays bounded at all times. For a stable algorithm in a communication environment, an upper bound on the number of packets waiting in queues is a natural performance metric, see [23,24]. An algorithm is universal when it is stable for any injection rate smaller than 1. A sharper performance metric is that of packet latency; it denotes an upper bound on the time spent by a packet waiting in a queue, counting from the round of injection through the round when the packet is heard on the channel. For each algorithm that we consider in this paper, we give upper bounds on packet latency as functions of the number of stations n and the type (ρ, λ, b) of a leaky-bucket adversary.

Specific Broadcast Algorithms

In this section we summarize the specifications of the deterministic broadcast algorithms whose packet latency is analyzed in detail later. Some of these algorithms have already been considered in the literature and some are new. Three broadcast algorithms. We start with three deterministic distributed algorithms for channels without jamming that are already known in the literature. These algorithms can be described as follows. Algorithm Round-Robin-Withholding (RRW) is a full-sensing (non-adaptive) algorithm for channels without collision detection. It operates in a round-robin fashion, in that the stations gain access to the channel in the cyclic order of their names. Once a station gets access to the channel by transmitting successfully, it withholds the channel to unload all the packets in its queue. A silent round is a signal to the next station, in the cyclic order of names, to take over. Algorithm RRW was introduced in [24] and shown to be universal, that is, stable for injection rates smaller than 1.
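As an illustration, the token discipline of algorithm RRW can be sketched in Python as follows. This toy simulator, whose names are ours, assumes a channel without jamming and a static set of queued packets, with no injections during the run.

```python
def rrw_schedule(queues, rounds):
    """Sketch of Round-Robin-Withholding: the token visits stations
    cyclically; a station withholds the channel until its queue is
    empty, and a silent round passes the token to the next station.
    queues: per-station packet counts; returns the transmission log."""
    n = len(queues)
    token, log = 0, []
    for _ in range(rounds):
        if queues[token] > 0:
            queues[token] -= 1
            log.append(token)        # a packet from station `token` is heard
        else:
            log.append(None)         # silence: the token moves on
            token = (token + 1) % n
    return log

# Station 0 unloads both of its packets, a silent round passes the
# token, then station 1 transmits.
assert rrw_schedule([2, 1, 0], 4) == [0, 0, None, 1]
```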
Algorithm Search-Round-Robin (SRR) is a full-sensing (non-adaptive) algorithm for channels with collision detection. Its execution proceeds as a systematic sweep across all the stations with the goal of identifying those with packets, with the search performed in the cyclic order. When a station with a packet is identified, the station unloads all its packets one by one. A silent round triggers the sweep to be resumed. We apply binary search to identify the next station. The binary search is implemented using collision detection. When we inquire about a segment of stations, then all the stations with packets that are in the segment transmit in the round. A search is completed when a packet is heard. A silence indicates that the segment is empty of stations with packets. A collision indicates that multiple stations are in the segment: this results in having the segment partitioned into two halves, with one segment processed next immediately while the other one is pushed on a stack to wait. A transition to the next segment occurs when the stack gets empty. Algorithm SRR was introduced in [24] and shown to be universal. Algorithm Move-Big-To-Front (MBTF) is an adaptive algorithm for channels without collision detection. Each station maintains a list of all stations in its private memory. A list is initialized to be sorted in the increasing order of names of stations. The lists are manipulated in the same way by all the stations, so their order is the same. The algorithm schedules exactly one station to transmit in a round, so collisions never occur. This is implemented by having a conceptual token assigned to stations, which is initially assigned to the first station on the list. A station with the token broadcasts a packet, if it has any, otherwise the round is silent. A station considers itself big in a round when it has at least n packets; such a station attaches a control bit to all packets it transmits to indicate this status.
A big station is moved to the front of the list and it keeps the token for the next round. When a station that is not big transmits, or when it pauses due to a lack of packets while holding the token, the token is passed to the next station in the list, ordered in a cyclic fashion. Algorithm MBTF was introduced in [23] and shown to be stable for injection rate 1. The "older-go-first" paradigm. We obtain new algorithms by modifying RRW and SRR so that packets are categorized into "old" and "new." Packets categorized as "new" become eligible for transmissions only after the packets categorized as "old" have been heard. Formally, an execution is structured as a sequence of conceptual phases, which are contiguous segments of rounds of dynamic length, and the notions of old versus new packets are implemented through them. A phase is defined as a full cycle made by the conceptual token to visit all stations. No additional communication is needed to mark a transition to a new phase. The "older-go-first" principle is implemented through phases by having packets injected in a given phase transmitted in the next phase. Similarly, the definition of old versus new packets is implemented by phases. For a given phase, packets are old when they have been injected in the previous phase, and the packets injected in the current phase are considered new for the duration of the phase. When a new phase begins, old packets have already been heard on the channel and new ones immediately graduate to old. Algorithm Older-First-Round-Robin-Withholding (OF-RRW) operates similarly to RRW, except that when a station gets access to the channel by transmitting successfully, then the station unloads all the old packets while the new packets stay in the queue. Algorithm Older-First-Search-Round-Robin (OF-SRR) operates similarly to SRR, except that the search is for old packets only. When a new phase begins, all old packets have been processed already.
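As an illustration of the list discipline of algorithm MBTF, the following Python sketch, whose names are ours, performs one scheduling decision: the token holder transmits, a big station is moved to the front and keeps the token, and otherwise the token passes on cyclically.

```python
def mbtf_step(lst, token, queues, n):
    """One round of Move-Big-To-Front (sketch). lst is the common list
    of station names, token an index into lst, queues maps names to
    packet counts. A station holding at least n packets is big; a big
    station moves to the front and keeps the token, otherwise the
    token passes to the next station cyclically. Returns the new token."""
    station = lst[token]
    big = queues[station] >= n           # status announced via a control bit
    if queues[station] > 0:
        queues[station] -= 1             # the token holder transmits a packet
    if big:
        lst.remove(station)              # big station moves to the front...
        lst.insert(0, station)
        return 0                         # ...and keeps the token
    return (token + 1) % len(lst)        # otherwise pass the token on

lst, queues = [0, 1, 2], {0: 1, 1: 3, 2: 0}
token = mbtf_step(lst, 0, queues, n=3)      # station 0 is small: token moves
assert (lst, token) == ([0, 1, 2], 1)
token = mbtf_step(lst, token, queues, n=3)  # station 1 is big
assert (lst, token) == ([1, 0, 2], 0)
```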
Algorithms for channels with jamming. We introduce a new algorithm Jamming-Round-Robin-Withholding(J), abbreviated as JRRW(J), designed for channels with jamming. The design of the algorithm is similar to that of RRW; the difference is in how the token is transferred from a station to the next one, in the cyclic order among stations. Just one void round should not trigger a transfer of the token, as is the case in RRW, because not hearing a message may be caused by jamming. The algorithm has a parameter J interpreted as an upper bound on the jamming burstiness of the adversary. This parameter is used to facilitate the transfer of control from a station to the next one by way of forwarding the token. The token is moved after hearing silence for precisely J + 1 contiguous rounds, counting from either hearing a packet or moving the token; the former indicates that the transmitting station exhausted its queue while the latter indicates that the queue is empty. More precisely, every station maintains a private counter of void rounds. The counters show the same value across the system, as they are updated in exactly the same way, determined only by the feedback from the channel. A void round results in incrementing the counter by 1. The token is moved to the next station when the counter reaches J + 1. When either a packet is heard or the token is moved then the counter is zeroed. Algorithm Older-First-Jamming-Round-Robin-Withholding(J), abbreviated OF-JRRW(J), is obtained from JRRW(J) similarly as OF-RRW is obtained from RRW. An execution is structured into phases, and packets are categorized into old and new, with the same rule to graduate packets from new to old. When a token visits a station, then only the old packets are transmitted, while the new ones will be transmitted during the next visit by the token. Structural properties of algorithms.
We say that a communication algorithm designed for a channel without jamming is a token algorithm if it uses a virtual token to avoid collisions. Such a token is always held by some station and only the station that holds the token can transmit. Algorithms RRW, OF-RRW, JRRW, OF-JRRW, and MBTF are token ones. We can take any token algorithm for channels without jamming and adapt it to the model with jamming in the following manner. If there is a packet to be transmitted by a station in the original algorithm, then the modified algorithm has a packet transmitted as well, otherwise just a control bit is transmitted. A round in which only a control bit is transmitted by a modified token algorithm is called a control round, otherwise it is a packet round. The effect of sending control bits in control rounds is that if a round is not jammed then a message is heard in this round; this message is either just a control bit or it includes a packet. This approach creates virtual collisions in jammed rounds, so when a void round occurs then this round is jammed, as otherwise a message would have been heard. Once a communication algorithm can identify jammed rounds, we may ignore their impact on the flow of control. The resulting algorithm is adaptive. We will consider such modified versions of the full-sensing algorithms RRW and OF-RRW, denoting them by C-RRW and OFC-RRW, respectively. Note that algorithm MBTF works by having the station with the token send a message even if the station does not have a packet, so enforcing additional control rounds is not needed to convert this algorithm into one creating virtual collisions. Algorithms with executions structured into phases are referred to as phase algorithms. These include RRW, OF-RRW, JRRW, OF-JRRW, C-RRW, and OFC-RRW. When the older-go-first paradigm is used in such a communication algorithm then it is called the older-go-first version of the algorithm, otherwise it is the regular version of the algorithm.
In particular, RRW, JRRW and C-RRW are regular phase algorithms, while OF-RRW, OF-JRRW and OFC-RRW are older-go-first phase algorithms.

Full Sensing Algorithms with Jamming

We show that a bounded worst-case packet latency is achievable by full sensing algorithms against adversaries whose jamming burstiness is at most a given bound J, where J is a part of the code. On the other hand, we will demonstrate later that adaptive algorithms can achieve bounded packet latency without restricting the jamming burstiness of adversaries in their code. The full sensing algorithms we consider are OF-JRRW(J) and JRRW(J). Algorithms OF-JRRW(J) and JRRW(J) include the parameter J as a part of their code, but the value of J does not occur in the upper bounds on packet latency in Theorems 1 and 2.

Lemma 1 Consider an execution of algorithm OF-JRRW(J) against a leaky-bucket adversary of jamming rate λ, burstiness b, and jamming burstiness at most J. If there are x old packets in the queues in a round, then at least x packets are transmitted within the next (x + n(J + 1) + b)/(1 − λ) rounds.

Proof: It takes n intervals of J + 1 void rounds each for the token to make a full cycle and so to visit every station with old packets. It is advantageous for the adversary not to jam the channel during these rounds. Therefore, at most n(J + 1) + x clear rounds are needed to hear the x packets. Consider a contiguous time segment of z rounds in which some x packets are heard. At most zλ + b of these z rounds can be jammed. Therefore, the following inequality holds: z ≤ n(J + 1) + x + zλ + b. Solving for z, we obtain z ≤ (x + n(J + 1) + b)/(1 − λ) as the bound on the length of a contiguous time interval in which at least x packets are heard.

Theorem 1 The packet latency of algorithm OF-JRRW(J) is O(bn/((1 − λ)(1 − ρ − λ))), when executed against a jamming adversary of type (ρ, λ, b) such that its jamming burstiness is at most J.
Proof: Let t_i be the duration of phase i and q_i be the number of old packets in the beginning of phase i, for i ≥ 1. The following two estimates lead to a recurrence for the numbers t_i. One is

q_{i+1} ≤ ρ t_i + b ,   (1)

which follows from the definitions of old packets and of the type (ρ, λ, b) of the adversary. The other estimate is

t_{i+1} ≤ (n(J + 1) + q_{i+1} + b)/(1 − λ) ,   (2)

which follows from Lemma 1. Let us denote a = n(J + 1). Substitute (1) into (2) to obtain

t_{i+1} ≤ (a + q_{i+1} + b)/(1 − λ) ≤ a/(1 − λ) + b/(1 − λ) + (ρ t_i + b)/(1 − λ) = a/(1 − λ) + 2b/(1 − λ) + (ρ/(1 − λ)) t_i ≤ c + d t_i ,

for c = (a + 2b)/(1 − λ) and d = ρ/(1 − λ). Note that d < 1, as ρ < 1 − λ. We will find an upper bound on the duration of a phase by iterating the recurrence t_{i+1} ≤ c + d t_i. To this end, it is sufficient to inspect the sequence of consecutive bounds on the lengths of the initial phases, t_1 ≤ c, t_2 ≤ c + dc, t_3 ≤ c + dc + d²c, . . . , to discover the general pattern

t_{i+1} ≤ c + dc + d²c + . . . + dⁱc ≤ c/(1 − d) .   (3)

After substituting c = (a + 2b)/(1 − λ) and d = ρ/(1 − λ) into (3), we obtain

t_i ≤ ((a + 2b)/(1 − λ)) · 1/(1 − ρ/(1 − λ)) ≤ ((a + 2b)/(1 − λ)) · ((1 − λ)/(1 − ρ − λ)) ≤ (a + 2b)/(1 − ρ − λ) .   (4)

Now, replace a by n(J + 1) in (4) to expand it into the following inequality:

t_i ≤ (n(J + 1) + 2b)/(1 − ρ − λ) .   (5)

Apply the estimate J ≤ b/(1 − λ) to (5) to obtain that t_i is at most

t_i ≤ (n(b/(1 − λ) + 1) + 2b)/(1 − ρ − λ) ≤ 2(bn + (n + b)(1 − λ))/((1 − λ)(1 − ρ − λ)) ,   (6)

which is a bound on the duration of a phase that depends only on the type of the adversary, without involving J. The bound on packet latency we seek is twice that in (6), as a packet stays queued for at most two consecutive phases.

Next, we give an upper bound on the packet latency of algorithm JRRW(J).

Theorem 2 The packet latency of algorithm JRRW(J) is O(bn/((1 − λ)(1 − ρ − λ)²)), when executed against a jamming adversary of type (ρ, λ, b) such that its jamming burstiness is at most J.

The proof of Theorem 2 is obtained by comparing the packet latency of algorithm JRRW(J) with that of algorithm OF-JRRW(J) as given in Theorem 1. Consider executions of both algorithms for the same injection and jamming pattern of the adversary, and let s_i be a bound on the length of phase i of algorithm OF-JRRW(J). When algorithm JRRW(J) is executed, its phase i may take up to the following number of rounds:

s_i + s_i(ρ + λ) + s_i(ρ + λ)² + . . .
= s_i/(1 − (ρ + λ)) .   (7)

We obtain that the phase length of algorithm JRRW(J) differs from that of algorithm OF-JRRW(J) by at most a factor of 1/(1 − ρ − λ). Algorithms JRRW(J) and OF-JRRW(J) share the property that a packet is transmitted within at most two consecutive phases, the first of which is determined by the injection of the packet. The bound on packet latency given in Theorem 1 is twice the length of a phase of algorithm OF-JRRW(J). Similarly, twice the length of a phase of algorithm JRRW(J) is a bound on its packet latency. It follows that a bound on the packet latency of algorithm JRRW(J) can be obtained by multiplying the bound given in Theorem 1 by 1/(1 − ρ − λ).

The upper bounds on packet latency given in Theorems 1 and 2 differ by the multiplicity of the factor 1 − ρ − λ occurring in the denominator. This difference between the two bounds reflects the benefit of the paradigm "older-go-first" applied in the design of algorithm OF-JRRW(J), as compared to algorithm JRRW(J). We want to show next that the bound of Theorem 2 is tight. To this end, consider algorithm JRRW(J) in a channel with jamming. The adversary injects packets and jams the channel at full power, subject to the constraints imposed by its type. The number of clear void rounds in each phase is a = n(J + 1). The number of packets injected due to these void rounds is aρ, and it takes a(ρ + λ) rounds to transmit them. In the first phase, aρ packets are transmitted in a(ρ + λ) rounds and a queue of a(ρ + λ)ρ packets builds up in station n − 1. In the second phase, there are aρ and a(ρ + λ)ρ packets transmitted by stations n and n − 1, respectively; this takes time a(ρ + λ)² and results in a(ρ + λ)²ρ packets queued at station n − 2. This process continues until phase n − 2, in which the adversary's behavior changes. The difference is that the adversary injects just one packet into station 2 after the token has passed through that station.
What follows is phase n − 1, in which the queue at station 1 is Ω((a + b)/(1 − ρ − λ)). Let the adversary inject packets at full power only into station 1 in this phase. Consider the latency of the packet in station 2. The packets in station 1 are unloaded by withholding the channel while the adversary keeps injecting only into the transmitting station. Let us assign a suitably large J to the adversary; for example, J ≥ b/(2(1 − λ)) will do. The latency of the only packet in station 2 is estimated as being

Ω((n(J + 1) + b)/(1 − ρ − λ)²) = Ω((n(b/(2(1 − λ)) + 1) + b)/(1 − ρ − λ)²) = Ω(bn/((1 − λ)(1 − ρ − λ)²)) .

This shows that the upper bound given in Theorem 2 is tight. Regarding the bound of Theorem 1, one possible way to argue its tightness is to examine its proof and observe that the derivation can be mimicked by the adversary's actions. Another approach is to observe that the bound in Theorem 2 was obtained from the bound in Theorem 1 by multiplying it by the factor 1/(1 − ρ − λ), so that any improvement of the bound in Theorem 1 would be reflected in an improvement of the bound in Theorem 2, which has been shown to be tight.

Knowledge of jamming burstiness. We have shown that a full sensing algorithm achieves bounded packet latency for ρ + λ < 1 when an upper bound on jamming burstiness is a part of the code. Next, we hypothesize that this is unavoidable and reflects the utmost power of full sensing (non-adaptive) algorithms.

Conjecture 1 No full sensing (non-adaptive) algorithm can be stable against all jamming adversaries with injection rate ρ and jamming rate λ satisfying ρ + λ < 1.

Adaptive Algorithms with Jamming

We give upper bounds on packet latency for the following three adaptive algorithms: C-RRW, OFC-RRW, and MBTF. The bounds are similar to those obtained for their full sensing counterparts in Theorems 1 and 2.
The apparent relative strength of adaptive algorithms is reflected in their bounds shedding the factor 1 − λ in the denominators. Each of these adaptive algorithms is stable for any jamming burstiness, unlike the full sensing algorithms we considered, which have in their codes a bound on the jamming burstiness they can handle in a stable manner.

Lemma 2 Consider an execution of algorithm OFC-RRW against a leaky-bucket adversary of some type (ρ, λ, b). If there are x old packets in the system in some round, then at least x packets are transmitted within the next (x + n + b)/(1 − λ) rounds.

Proof: It takes n control rounds for the token to pass through all n stations. Consider a contiguous time segment of y rounds in which some x packets are heard. At most yλ + b of these y rounds can be jammed. It follows that the inequality y ≤ n + x + yλ + b holds. Solving for y yields the upper bound y ≤ (x + n + b)/(1 − λ) on the length of a contiguous time interval in which at least x packets are heard.

Theorem 3 The packet latency of algorithm OFC-RRW is O((n + b)/(1 − ρ − λ)) when executed against the jamming adversary of type (ρ, λ, b).

Proof: Let t_i denote the duration of phase i and q_i be the number of old packets in the beginning of phase i, for i ≥ 1. We use the following two estimates to derive a recurrence for the numbers t_i. One is

q_{i+1} ≤ ρ t_i + b ,   (8)

which follows from the definition of old packets and of the adversary of type (ρ, λ, b). The other is

t_{i+1} ≤ (q_{i+1} + n + b)/(1 − λ) ,   (9)

which follows from Lemma 2. Using the abbreviations c = (n + 2b)/(1 − λ) and d = ρ/(1 − λ), we substitute (8) into (9) to obtain

t_{i+1} ≤ (n + b)/(1 − λ) + (ρ t_i + b)/(1 − λ) = (n + 2b)/(1 − λ) + (ρ/(1 − λ)) t_i ≤ c + d t_i .

To find an upper bound on the duration of a phase, we iterate the recurrence t_{i+1} ≤ c + d t_i, which produces t_{i+1} ≤ c + dc + d²c + . . .
+ dⁱc ≤ c/(1 − d) .   (10)

After substituting c = (n + 2b)/(1 − λ) and d = ρ/(1 − λ) into (10), we obtain, by algebraic manipulations, the estimate

t_i ≤ ((n + 2b)/(1 − λ)) · 1/(1 − ρ/(1 − λ)) ≤ ((n + 2b)/(1 − λ)) · ((1 − λ)/(1 − ρ − λ)) ≤ 2(n + b)/(1 − ρ − λ) .   (11)

The bound on packet latency we seek is twice that in (11), because a packet is queued for at most two consecutive phases.

Theorem 4 The packet latency of algorithm C-RRW is O((n + b)/(1 − ρ − λ)²), when executed against the jamming adversary of type (ρ, λ, b).

Proof: We compare the packet latency of algorithm C-RRW to that of algorithm OFC-RRW. Consider executions of algorithms C-RRW and OFC-RRW for the same injection and jamming pattern of the adversary. Let s_i and t_i be the bounds on the length of phase i of algorithms OFC-RRW and C-RRW, respectively. Phase i of OFC-RRW takes s_i rounds. When algorithm C-RRW is executed, its phase i may take up to the following number of rounds:

s_i + s_i(ρ + λ) + s_i(ρ + λ)² + . . . = s_i/(1 − (ρ + λ)) .

So the phase length of algorithm C-RRW differs from that of algorithm OFC-RRW by a factor of at most 1/(1 − ρ − λ). An injected packet is transmitted within at most two phases of an execution, for each of the two algorithms considered. The bound of Theorem 3 is twice the bound on the duration of a phase. It follows that a bound on the packet latency of algorithm C-RRW can be obtained by multiplying the bound given in Theorem 3 by 1/(1 − ρ − λ).

The tightness of the bound on packet latency given in Theorem 4 can be established similarly as that for Theorem 2. To this end, replace n(J + 1) by n in the tightness argument for the bound of Theorem 2 given in Section 4 to obtain Ω((n + b)/(1 − ρ − λ)²) as a bound. The tightness of the upper bound of Theorem 3 follows from the property that the derivation of the upper bound can be closely mimicked by the adversary. Alternatively, we observe that an improvement of the bound in Theorem 3 would lead to an improvement of the bound in Theorem 4, which has already been shown to be tight.
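Two ingredients recur in the proofs above: the recurrence t_{i+1} ≤ c + d·t_i with d < 1, and the geometric series s(1 + (ρ + λ) + (ρ + λ)² + ...). Both can be checked numerically; the following Python sketch, with names and sample values of our choosing, does so.

```python
def iterate_phase_bound(c, d, iterations):
    """Iterate t_{i+1} = c + d * t_i from t_1 = c, as in the proofs of
    Theorems 1 and 3; for d < 1 every iterate stays below c / (1 - d)."""
    t = c
    for _ in range(iterations):
        t = c + d * t
    return t

def regular_phase_factor(s, r, terms=200):
    """Partial sum of s + s*r + s*r**2 + ..., the series bounding a
    phase of a regular algorithm in terms of a phase of length s of its
    older-go-first version (r = rho + lam), as in Theorems 2 and 4."""
    return sum(s * r**k for k in range(terms))

c, d = 5.0, 0.6            # e.g. c = (n + 2b)/(1 - lam), d = rho/(1 - lam)
t = iterate_phase_bound(c, d, 100)
assert t <= c / (1 - d) + 1e-9        # iterates stay below the closed form

s, r = 10.0, 0.5           # r = rho + lam < 1
assert abs(regular_phase_factor(s, r) - s / (1 - r)) < 1e-6
```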
Algorithm Move-Big-To-Front. Next we estimate the packet latency of algorithm MBTF against jamming. The analysis resorts to estimates of the number of packets stored in the queues at all times.

Let a traversal of the token, starting at the front end of the list and ending again at the front station of the list, be called a pass of the token. A pass is concluded by either discovering a new big station or traversing the whole list. We monitor the number of packets in the queues at the end of a pass, to see whether the pass contributed to an increase of the number of packets stored in the queues. When the number of packets in the queues at the end of a pass is bigger than at the end of the previous pass, then such a pass is called increasing, and otherwise it is non-increasing. Additionally, we partition passes into two categories, depending on whether a big station is discovered in the pass or not. When a big station is discovered in a pass, then such a pass is called big, and otherwise it is called small. To make this terminology precise, we clarify how actions of a newly discovered big station are categorized with respect to two consecutive passes. A discovery of a big station concludes a pass, but the big station just found does not transmit in this pass, because we consider the pass as already finished. The next pass begins by a transmission of the newly discovered big station, just after it has been moved up to the front position in the list.

Proof: We first investigate how many packets can be accumulated in the queues when small passes occur. It is sufficient to consider increasing small passes only, as otherwise the previous passes contribute bigger counts of packets. A pass that is not jammed takes n rounds, but with jamming it may take longer, as the jammed rounds slow the algorithm down. The number of rounds in a small pass is at most

b + n + nλ + nλ^2 + nλ^3 + ... ≤ n/(1 − λ) + b.

Therefore at most (n/(1 − λ) + b)ρ + b packets get injected during a small pass.
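The geometric bound on the length of a small pass, and the resulting bound on injections during it, can be checked numerically. The parameter values below are arbitrary samples chosen for illustration only.

```python
# Check the bound on the length of a small pass:
# b + n + n*lam + n*lam^2 + ... <= n/(1 - lam) + b,
# and the number of packets the adversary can inject during such a pass:
# at most rho * (n/(1 - lam) + b) + b.
def small_pass_bounds(n, b, rho, lam, terms=10_000):
    assert 0 <= lam < 1 and 0 <= rho < 1
    length = b + sum(n * lam**j for j in range(terms))  # truncated series
    assert length <= n / (1 - lam) + b + 1e-9
    injected = (n / (1 - lam) + b) * rho + b
    return length, injected

length, injected = small_pass_bounds(n=40, b=8, rho=0.25, lam=0.2)
```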
This is also an upper bound on the number of stations with packets during an increasing small pass, because if there were more such stations, then each of them would transmit a packet during the pass. Each such station has at most n − 1 packets, because the pass is small. It follows that a small pass is either increasing or the number of packets in the queues, when the pass is over, is at most

((n/(1 − λ) + b)ρ + b)(n − 1) = ρn(n + b)/(1 − λ) + O(bn). (12)

Next we consider the question of how much above the upper bound (12) the queues can grow during big passes. Consider a time interval T in which a maximum number of packets is accrued while some k stations are discovered at least once as big. To obtain estimates from above on the total number of packets in the queues, we conservatively assume the following: (i) when a small station is passed by the token, then there are no packets in the queue of the station and the resulting round is a control one, and (ii) when a big station is discovered and moved up to the front of the list, then only one packet is transmitted by the station and then the token immediately moves on to the next station.

Consider a series of consecutive big passes. The number of control rounds in any big pass is at most n − 1, followed by a big station moved to the front of the list. In the first pass, there are at most n − 1 control rounds. In the second pass, the station discovered big in the first pass transmits a packet, which is next followed by at most n − 1 control rounds until a new big station is discovered. In the third pass, the two big stations discovered so far transmit at least one packet each, as they are at the front of the list, which is followed by at most n − 2 control rounds before the third big station is discovered. This pattern continues until the last pass, which begins with all the k big stations residing in the initial segment of the list, so that each of them transmits one packet in this pass, followed by at most n − k control rounds.
Let t be the last round in which the last of these at most n − k control rounds occurs. When the next pass begins in round t + 1, then either the pass is small or a big station among the first k stations in the list is discovered, as these are the stations discovered as big in the time interval T. In the former case, the upper bound (12) on the number of packets in the queues applies. In the latter case, there are no control rounds during the pass. It follows that an upper bound is obtained by estimating from above the increase of packets in the time interval T by round t, and next adding to it the bound (12) as a possible starting point of the process of increasing queues in T.

Next we estimate the increase of packets in T by round t. There are at most (k + 1)(n − 1) control rounds and 1 + 2 + 3 + ... + k = k(k + 1)/2 packets transmitted in packet rounds. At the same time, the adversary may be injecting packets and jamming rounds, with a net increase contributed by control and jammed rounds. The increase is at most

ρ/(1 − λ) · ((k + 1)(n − 1) + k(k + 1)/2 + b) − k(k + 1)/2 = ρ/(1 − λ) · kn − (1/2)(1 − ρ/(1 − λ))k^2 + O(n + b).

We seek to maximize the function

f(k) = ρ/(1 − λ) · kn − (1/2)(1 − ρ/(1 − λ))k^2.

The maximum occurs at the argument

k_max = ρn/(1 − ρ − λ),

which can be justified by the standard maximum-finding procedure based on differentiation. By algebraic manipulation, we obtain that the values f(k) are at most

f(k_max) ≤ ρn^2/(2(1 − λ)(1 − ρ − λ)),

and the increase of packets in T by round t is at most this number plus O(n + b). We combine this estimate with bound (12) to obtain the estimate

ρn(n + b)/(1 − λ) + ρn^2/(2(1 − λ)(1 − ρ − λ)) + O(bn) ≤ 2ρn(n + b)/((1 − λ)(1 − ρ − λ)) + O(bn)

as a bound on the total number of queued packets. These estimates are with respect to the numbers of packets at the ends of passes.
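The maximization of f(k) can be verified numerically. The sketch below, with arbitrary sample values of n, ρ and λ, compares the closed-form maximizer k_max = ρn/(1 − ρ − λ) against a grid of candidates and checks the stated upper bound on f(k_max).

```python
# f(k) = (rho/(1-lam)) * k * n - (1/2) * (1 - rho/(1-lam)) * k^2 is a
# downward parabola; its vertex is k_max = rho * n / (1 - rho - lam),
# and f(k_max) <= rho * n^2 / (2 * (1-lam) * (1-rho-lam)).
def check_maximum(n, rho, lam):
    a = rho / (1 - lam)
    f = lambda k: a * k * n - 0.5 * (1 - a) * k * k
    k_max = rho * n / (1 - rho - lam)
    for k in range(0, 10 * n):               # grid of integer candidates
        assert f(k) <= f(k_max) + 1e-6
    assert f(k_max) <= rho * n**2 / (2 * (1 - lam) * (1 - rho - lam)) + 1e-6
    return k_max

k_star = check_maximum(n=100, rho=0.3, lam=0.2)
```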
Possible fluctuations of the number of packets in the queues during a pass cannot exceed n + b, which is within the magnitude of the asymptotic component O(bn) of the bound.

When big stations get discovered, the regular round-robin pattern of token traversal is disturbed. Suppose a station i is discovered as big, which results in moving the station to the front. The clear rounds that follow, before the token either moves to position i + 1 or a new big station is discovered, whichever occurs first, are called delay rounds. These rounds are spent, first, on transmissions by the new first station, to bring the number of packets in its queue down to n − 1, and next, the token needs i additional clear rounds to move to the station at position i + 1.

Given a round t in which a packet p is injected, let us take a snapshot of the queues in this round. Let q_i be the number of packets in a station i in this snapshot, for 1 ≤ i ≤ n. We associate credit of the amount max{0, q_i − (n − i)} with station i at this round. A station i has positive credit when the inequality q_i ≥ n − i + 1 holds. In particular, a big station i has credit q_i + i − n ≥ i. Let C(n, t) denote the sum of all the credits of the stations in round t. We consider credit only with respect to the packets already in the queues in round t, unless stated otherwise.

Lemma 4. If discovering a big station in round t delays the token by some x rounds, excluding jammed rounds, then the amounts of credit satisfy C(n, t + x) = C(n, t) − x.

Proof: We refer to stations by their positions just before the shift, unless stated otherwise. When a big station i is moved up to the front of the list, its credit gets decreased first by i − 1, by the change of position in the list, and next by the amount equal to the number of packets transmitted by the new first station.
More precisely, q_i − (n − 1) packets are transmitted in these many rounds to decrease the credit of the new first station from q_i + 1 − n to zero, a unit of credit being spent in each round. The decrease of the amount of credit by i − 1 is to pay for the travel of the token to position i + 1. There remains the question of what happens to the credit at the traversed stations that changed their positions. The stations at the original positions 1 through i − 1 get shifted by one position down the list to occupy the positions 2 through i. Consider such a station j, for 1 ≤ j ≤ i − 1, when it is shifted one position down the list. Its credit stays equal to 0 if q_j < n − j, but it gets incremented by 1 if q_j ≥ n − j, as it then equals q_j + (j + 1) − n ≥ 1. When a packet is transmitted by a station that does not hold any credit, then this does not affect the total amount of credit, as the credit contributed by this station stays equal to zero. A packet transmitted by a station with positive credit decrements the credit by 1, which restores the amount of credit held by the station before the shift down. This means that the total decrease of the amount of credit equals the number of rounds in the time period of delay.

Theorem 5. The packet latency of algorithm MBTF is at most 3n(n + b)/((1 − λ)(1 − ρ − λ)) + O(bn/(1 − λ)) when executed against the jamming adversary of type (ρ, λ, b).

Proof: Let a packet p be injected in some round t into station i. Let S(n, t) be an upper bound on the number of rounds that packet p spends waiting in the queue at i to be heard when no big station is discovered by the time packet p is eventually transmitted. Some additional waiting time for packet p is contributed by discoveries of big stations and the resulting delays; let us denote by T(n, t) the number of rounds by which p is delayed this way. The total delay of p is at most S(n, t) + T(n, t). First, we find upper bounds on the expression S(n, t) + T(n, t) that depend not on t but only on n.
The inequality

S(n, t) ≤ n(n − 1)/(1 − λ) (13)

holds because there are at most n − 1 packets in the queue of i and each pass of the token takes at most n/(1 − λ) rounds. The following inequality

T(n, t) ≤ C(n, t)/(1 − λ) (14)

holds because of Lemma 4 and the fact that iterated delays due to jamming contribute the factor 1/(1 − λ). Observe that C(n, t) is upper bounded by the number of packets in the queues in round t, by the definition of credit. Therefore, we obtain the following inequality by Lemma 3:

C(n, t) ≤ 2ρn(n + b)/((1 − λ)(1 − ρ − λ)) + O(bn).

This, along with the estimate (14), implies the following bound:

T(n, t) ≤ 2ρn(n + b)/((1 − λ)^2(1 − ρ − λ)) + O(bn/(1 − λ)). (15)

Combine (13) and (15) to obtain that a packet waits at most

n(n − 1)/(1 − λ) + 2ρn(n + b)/((1 − λ)^2(1 − ρ − λ)) + O(bn/(1 − λ)) ≤ 3n(n + b)/((1 − λ)(1 − ρ − λ)) + O(bn/(1 − λ))

rounds, where we used the inequalities ρ < 1 − λ and 1 − ρ − λ < 1.

The tightness of the bound given in Theorem 5 can be established as follows. Let (ρ, λ, b) be the type of an adversary, where we assume 0 < ρ < 1 to be sufficiently close to 1 to simplify the argument. Consider first the case of no jamming, that is, when λ = 0. The adversary begins an execution by building a queue of n − 1 packets at the last station in the list. Next the adversary works to block the token from reaching the last station, by building big stations whose discovery makes the token go back to the front of the list. Let us call a station critical when it is the station into which the adversary is currently injecting packets to make it big. Once a critical station stores n packets, the adversary switches to the immediately preceding station to make it big by injecting packets into it; this station thereby becomes critical. Each new big station delays the token getting to the last station in the list by n − 1 rounds. Next we calculate how many big stations can be built in a sequence without the token reaching the last station.
Suppose the token is at the first station when the adversary begins working on a new critical station. When the token reaches this station, it has about nρ packets in its queue, with (1 − ρ)n packets missing to become big. During the next token traversal the critical station becomes big, and when the token reaches it, the previous station misses 2(1 − ρ)n packets to also be big. This pattern keeps repeating; for example, during the next token traversal the critical station becomes big, and when the token reaches it, the previous station needs only 3(1 − ρ)n packets to become big. This continues about n/((1 − ρ)n) = 1/(1 − ρ) times until the token manages to reach the last station in the list. The queue at the last station has n − 1 packets, so the last of them waits for the preceding n − 2 packets to be heard. Each such packet is delayed by about 1/(1 − ρ) token traversals, each taking n − 1 rounds. The delay of some packet is therefore Ω(n^2/(1 − ρ)).

Next, let us adapt this construction to the case when the adversary has jamming power. When the adversary applies the strategy of jamming a round whenever possible, this slows down the token traversal by a factor of 1/(1 − λ). The first critical station stores ρn/(1 − λ) packets, rather than ρn without jamming, when the token passes it for the first time; this makes it about

n(1 − ρ/(1 − λ)) = n(1 − λ − ρ)/(1 − λ)

packets short of becoming big. New big stations are discovered about

(n/(1 − λ)) ÷ (n(1 − λ − ρ)/(1 − λ)) = 1/(1 − λ − ρ)

times before the token reaches the last station. We obtain that the delay of some packet is Ω(n^2/((1 − λ)(1 − λ − ρ))).

Channels without Jamming

In this section, we consider deterministic distributed algorithms for channels with no jamming. We study the performance of full sensing and adaptive algorithms against leaky-bucket adversaries with injection rates ρ < 1. For each of the algorithms we consider, we give upper bounds on packet latency as functions of the number of stations n and the type (ρ, b) of a leaky-bucket adversary.
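The token-based round-robin schemes studied in this section can be illustrated with a toy simulation. The sketch below is a simplified abstraction, not the paper's exact RRW (the withholding and older-first mechanisms are omitted): a station transmits one packet per round while holding the virtual token, and passing the token costs one silent round.

```python
from collections import deque

def simulate_round_robin(n, injections, rounds):
    """Toy token-passing round-robin broadcast on a channel without
    jamming. `injections` maps a round number to the list of station
    indices that receive one packet in that round. Returns the worst
    observed packet latency among transmitted packets."""
    queues = [deque() for _ in range(n)]
    token, worst = 0, 0
    for t in range(rounds):
        for station in injections.get(t, []):
            queues[station].append(t)       # remember the injection round
        if queues[token]:
            injected_at = queues[token].popleft()  # transmit one packet
            worst = max(worst, t - injected_at)
        else:
            token = (token + 1) % n         # silent round: pass the token
    return worst

# One packet injected at round 0 into station 3 of a 4-station channel:
# the token walks through three empty stations before the transmission.
worst = simulate_round_robin(4, {0: [3]}, 10)
```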
Full sensing algorithms without collision detection. We begin with estimating packet latency for the full sensing algorithms RRW and OF-RRW. They operate similarly to the adaptive algorithms C-RRW and OFC-RRW for a channel with jamming, respectively. The difference is in how the virtual token moves from station to station and in what happens in jammed rounds. In the adaptive algorithms for channels with jamming, the virtual token is moved when the station with the token transmits a control bit. In the non-adaptive algorithms, the token is moved when the station holding the token pauses. In the adaptive algorithms for channels with jamming, a jammed round is perceived as silence and just ignored. This facilitates translating the bounds for a jammed channel to a channel without jamming, for the respective algorithms. The following results are obtained by such translations. These bounds are tight: the arguments for tightness of the bounds for algorithms for channels with jamming hold for the full sensing algorithms for channels without jamming when the jamming rate is λ = 0.

Full sensing algorithms with collision detection. We continue with estimates of the packet latency of algorithms Search-Round-Robin (SRR) and Older-First-Search-Round-Robin (OF-SRR), both of which use collision detection. Executions are partitioned into phases, similarly as when considering any older-go-first type of algorithm. In the case of algorithm Search-Round-Robin, a phase denotes one full sweep of searches through the range of names of stations. We begin with a technical estimate that will be used in proving bounds on packet latency. Let lg x denote ⌈log₂ x⌉.

Lemma 5. If there are x ≥ 1 packets in the system in a round then, for each 1 ≤ y ≤ x, algorithms SRR and OF-SRR make y packets heard in the next min(y lg n, 2n + y) rounds.

Proof: We argue first that each of the considered algorithms takes no more than 2n + y consecutive rounds to transmit successfully at least y packets.
It takes at most n rounds with no transmission to identify stations that store at least y ≤ x packets for SRR, because the token needs to make at most one full cycle along the list of stations. Similarly, it takes at most 2n rounds with no transmission to identify stations that store at least y ≤ x packets for OF-SRR. This is because two phases are needed in the worst case, when the first phase results in discovering too few old packets. Uploading these y packets takes y rounds. This completes the first part of the argument. Next, we prove that these algorithms need no more than y lg n consecutive rounds to upload y packets. Indeed, after a packet has been transmitted, it takes time at most the height of a conceptual search tree to identify another station with an eligible packet. This height is at most lg n, so a packet is heard at least once in any contiguous segment of lg n rounds.

Theorem 6. The packet latency of algorithm OF-SRR executed against the adversary of type (ρ, b) is at most 4 min(b lg n, n + b) for ρ ≤ 1/(2 lg n), and it is at most (4n + 2b)/(1 − ρ) for ρ > 1/(2 lg n).

Proof: A packet is transmitted successfully by the end of the phase following the one in which it was injected. So the maximum packet latency is upper bounded by twice the maximum length of a phase. At most ρ|τ| + b packets get injected in a time interval τ of length |τ|. The maximum number of packets in the system is upper bounded by the maximum number of packets at the end of an interval plus the number of packets injected within this interval.

Let us begin with the case ρ ≤ 1/(2 lg n). We first argue that at the end of each round there are at most ρ · min(2b lg n, 2n + 2b) + b packets in the system. Suppose it is otherwise, and consider the first round t at the end of which the number of packets exceeds ρ · min(2b lg n, 2n + 2b) + b.
Note that t > min(2b lg n, 2n + 2b), as at most ρ min(2b lg n, 2n + 2b) + b packets are injected into the system in the time interval [1, min(2b lg n, 2n + 2b)]. This follows from the assumption that, at the end of round t − min(2b lg n, 2n + 2b), there were at most ρ · min(2b lg n, 2n + 2b) + b packets in the system. From that round until round t, at least this number of packets have been successfully transmitted, by Lemma 5 and the estimate

ρ · min(2b lg n, 2n + 2b) + b ≤ (2b lg n)/(2 lg n) + b = 2b,

while at most ρ · min(2b lg n, 2n + 2b) + b new packets have been injected. Consequently, the number of packets in the system at the end of round t would be at most ρ · min(2b lg n, 2n + 2b) + b, which is a contradiction. This completes the proof that the number of packets in the system is at most ρ · min(2b lg n, 2n + 2b) + b. This bound is also at most 2b. By Lemma 5, a phase does not take longer than min(2b lg n, 2n + 2b) rounds, as the number of old packets in the system never surpasses 2b. Hence, the packet latency is at most 2 · min(2b lg n, 2n + 2b) ≤ 4 min(b lg n, n + b).

The next case is when ρ > 1/(2 lg n). Let x_i be the number of packets in the system in the beginning of phase i, and let t_i be the length of this phase. By Lemma 5, the inequalities x_1 ≤ b and t_i ≤ min(x_i lg n, 2n + x_i) hold, for every i ≥ 1. This implies the inequality

x_{i+1} ≤ ρ t_i + b ≤ ρ min(x_i lg n, 2n + x_i) + b.

Iterating this formula yields

x_{i+1} ≤ b + ρ min(x_i lg n, 2n + x_i) ≤ b + ρ(2n + (b + ρ min(x_{i−1} lg n, 2n + x_{i−1}))) = b + ρ(2n + b) + ρ^2 min(x_{i−1} lg n, 2n + x_{i−1}),

which in turn leads to the following inequalities:

x_{i+1} ≤ b + (2n + b)(ρ + ρ^2 + ... + ρ^{i−1}) ≤ (ρ/(1 − ρ))(2n + b/ρ) = (2nρ + b)/(1 − ρ).

The packet latency is the maximum of t_i + t_{i+1}, over all i. By Lemma 5, these numbers are at most

min(x_i lg n, 2n + x_i) + min(x_{i+1} lg n, 2n + x_{i+1}).

Each of them is upper-bounded by

4n + 2 · (2nρ + b)/(1 − ρ) ≤ (4n + 2b)/(1 − ρ),

which completes the proof.
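The iteration bounding x_i in the proof above can be replayed numerically. The values of n, b and ρ below are arbitrary samples; the sketch checks that every iterate of the recurrence stays below the closed form (2nρ + b)/(1 − ρ).

```python
# Iterate x_{i+1} = rho * (2n + x_i) + b starting from x_1 = b, the
# pessimistic form of the recurrence from the proof of Theorem 6, and
# check that every iterate is at most (2*n*rho + b)/(1 - rho).
def iterate_queue_bound(n, b, rho, steps=200):
    assert 0 < rho < 1
    bound = (2 * n * rho + b) / (1 - rho)
    x = b
    for _ in range(steps):
        x = rho * (2 * n + x) + b
        assert x <= bound + 1e-9    # the closed-form bound is never exceeded
    return x

x_final = iterate_queue_bound(n=64, b=5, rho=0.4)
```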
Theorem 7. The packet latency of algorithm SRR executed against the adversary of type (ρ, b) is at most 6b lg n for ρ ≤ 1/(2 lg n), and at most 4(n + b)/(1 − ρ)^2 for ρ > 1/(2 lg n).

Proof: Let x_i be the number of packets in the system in the beginning of phase i, and let t_i be the length of this phase. We first consider the case of ρ ≤ 1/(2 lg n). By a similar argument as in the proof of Theorem 6, the number of pending packets is never more than

ρ · min(2b lg n, 2n + 2b) + b ≤ 2b.

This argument relies only on Lemma 5. Next, observe that the following inequality holds:

(x_i + ρ t_i + b) lg n ≥ t_i.

This is because the number of all packets in the system in phase i, multiplied by the maximum time lg n of a void period in phase i, is an upper bound on t_i. There are at most ρ t_i + b newly arrived packets in this phase. Combine this with the estimate t_i/(2 lg n) ≥ ρ t_i to obtain (x_i + b) lg n ≥ t_i. By the upper bound 2b on the number of packets in the system, in the case of ρ ≤ 1/(2 lg n), we obtain that the following bound holds:

t_i ≤ (x_i + b) lg n ≤ 3b lg n.

Every packet arriving in phase i is transmitted by the end of phase i + 1. Hence the maximum packet latency is at most

max_i (t_i + t_{i+1}) ≤ 2 · 3b lg n ≤ 6b lg n.

In order to estimate packet latency in the case of ρ > 1/(2 lg n), observe that the following inequality holds:

x_i + (ρ t_i + b) ≥ t_i − n.

This is because the number of all packets that are in the system in phase i, plus the maximum total number n of void rounds in phase i, is an upper bound on t_i. There are at most ρ t_i + b newly arrived packets in this phase, so (x_i + n + b)/(1 − ρ) ≥ t_i. Lemma 5 for algorithm SRR implies that (2ρn + b)/(1 − ρ) is an upper bound on the number of packets in the system in a round. It follows that the following bounds hold:

Table 1: Upper bounds on packet latency for a channel with n stations with jamming, when the injection rate ρ ≥ 0 and jamming rate λ ≥ 0 satisfy ρ + λ < 1.
The inequality λ ≤ J is assumed for algorithms OF-JRRW(J) and JRRW(J). Algorithms OF-JRRW(J) and JRRW(J) are non-adaptive, and the remaining algorithms are adaptive.

t_i ≤ (x_i + n + b)/(1 − ρ) ≤ (n(1 + ρ) + b(2 − ρ))/(1 − ρ)^2.

A packet arriving in phase i is transmitted by the end of phase i + 1, hence the maximum packet latency is upper bounded by

max_i (t_i + t_{i+1}) ≤ 2 · (n(1 + ρ) + b(2 − ρ))/(1 − ρ)^2 ≤ 4(n + b)/(1 − ρ)^2,

which completes the proof.

Algorithm Move-Big-To-Front. We conclude with the packet latency of algorithm Move-Big-To-Front (MBTF), which is an adaptive algorithm for channels without collision detection. This algorithm was originally designed for channels without jamming, but using control bits in messages without packets allows the token to be transferred without unnecessary delay even when a channel is subject to jamming. We may consider either of these two versions of the algorithm when there is no jamming in a channel. A translation of the bound on packet latency applies, in a manner similar to the case of full sensing algorithms for channels without collision detection.

Table 2: Upper bounds on packet latency for a channel with n stations without jamming, depending on injection rate ρ ≥ 0. Algorithm MBTF is adaptive, and the remaining algorithms are non-adaptive.

Conclusion

We study worst-case packet latency of deterministic distributed broadcast algorithms in adversarial multiple-access channels, in which an adversary controls packet injection and, optionally, jamming. We derived asymptotic upper bounds on packet latency expressed in terms of the quantitative constraints defining adversaries. The upper bounds on packet latency are summarized in Tables 1 and 2. When the channel is with jamming, there is a non-adaptive algorithm with bounded packet latency for any injection rate smaller than 1, but this algorithm operates only for a known range of values of jamming rates.
It is an open problem whether there is a non-adaptive algorithm that is stable against all jamming adversaries with the injection rate ρ and jamming rate λ satisfying ρ + λ < 1, as already discussed at the end of Section 4. When the channel is without jamming, algorithms OF-SRR and SRR have packet latencies that are o(n), for suitably small injection rates as functions of n, when burstiness is treated as a constant. It is unknown whether such algorithms exist for channels with jamming. Algorithm MBTF is the only one we discussed with the property that its packet latency is Θ(n^2), when burstiness b, injection rate ρ and jamming rate λ are all treated as constants and ρ + λ < 1. It is also the only algorithm among those we considered for which there is an upper bound on the number of packets queued in the system that is independent of injection and jamming rates, because this bound holds even for ρ + λ = 1, see [23]. It is unknown whether there is an algorithm that is stable when ρ + λ = 1 and such that its packet latency is o(n^2) when ρ + λ < 1 and when burstiness b, injection rate ρ and jamming rate λ are all treated as constants.

This paper considers the performance of deterministic distributed broadcast protocols in terms of their worst-case packet latency. Although theoretically appealing, this measure of performance could be considered less relevant to real-world applications than average packet latency. Proposing adversarial models for packet injection to study average packet latency of deterministic algorithms would be an interesting direction of future work. The model of channels we consider is idealized and simplified to a considerable extent; in particular, we assume that the set of stations attached to the channel is fixed and time is slotted into rounds. It is an interesting area of future work to consider deterministic broadcasting algorithms in less restricted models of multiple access channels; paper [5] indicates one such possible direction.
Theorem 2. The packet latency of algorithm JRRW(J) is O(bn/((1 − λ)(1 − ρ − λ)^2)), when executed against a jamming adversary of type (ρ, λ, b) such that its jamming burstiness is at most J.

Proof: We relate executions of algorithms JRRW(J) and OF-JRRW(J) determined by some injection and jamming pattern of the adversary. Let s_i and t_i be bounds on the length of phase i of algorithms OF-JRRW(J) and JRRW(J), respectively, when run against the considered adversarial pattern of injections and jamming. A phase i of algorithm OF-JRRW(J) takes s_i rounds. When algorithm JRRW(J) is executed, the total number of rounds t_i in phase i is at most

Lemma 3. When algorithm MBTF is executed by n stations against the jamming adversary of type (ρ, λ, b), then the number of packets stored in queues in each round is at most 2ρn(n + b)/((1 − λ)(1 − ρ − λ)) + O(bn).

Corollary 1. The packet latency of algorithm OF-RRW is O((n + b)/(1 − ρ)) when executed against the adversary of type (ρ, b).

Proof: The upper bound on packet latency given in Theorem 3 becomes O((n + b)/(1 − ρ)) for λ = 0.

Corollary 2. The packet latency of algorithm RRW is O((n + b)/(1 − ρ)^2) when executed against the adversary of type (ρ, b).

Proof: The upper bound on packet latency given in Theorem 4 becomes O((n + b)/(1 − ρ)^2) for λ = 0.

Corollary 3. There are at most 2ρn(n + b)/(1 − ρ) + O(bn) packets in queues in any round when algorithm MBTF is executed against the adversary of type (ρ, b).

Proof: Follows from Lemma 3 when λ = 0.

Corollary 4. The packet latency of algorithm MBTF is O(n(n + b)/(1 − ρ)) when executed against the adversary of type (ρ, b).

Proof: The bound on packet latency given in Theorem 5 becomes O(n(n + b)/(1 − ρ)) for λ = 0.

References

[1] W. Aiello, E. Kushilevitz, R. Ostrovsky, and A. Rosén. Adaptive packet routing for bursty adversarial traffic. Journal of Computer and System Sciences, 60(3):482-509, 2000.

[2] H. Al-Ammal, L. A. Goldberg, and P. D. MacKenzie. An improved stability bound for binary exponential backoff. Theory of Computing Systems, 34(3):229-244, 2001.

[3] C. Àlvarez, M. J. Blesa, J. Díaz, M. J. Serna, and A. Fernández. Adversarial models for priority-based networks. Networks, 45(1):23-35, 2005.

[4] C. Àlvarez, M. J. Blesa, and M. J. Serna. The impact of failure management on the stability of communication networks. In Proceedings of the 10th International Conference on Parallel and Distributed Systems (ICPADS), pages 153-160, 2004.

[5] L. Anantharamu and B. S. Chlebus. Broadcasting in ad hoc multiple access channels. Theoretical Computer Science, 584:155-176, 2015.

[6] L. Anantharamu, B. S. Chlebus, D. R. Kowalski, and M. A. Rokicki. Deterministic broadcast on multiple access channels. In Proceedings of the 29th IEEE International Conference on Computer Communications (INFOCOM), pages 1-5, 2010.

[7] L. Anantharamu, B. S. Chlebus, D. R. Kowalski, and M. A. Rokicki. Medium access control for adversarial channels with jamming. In Proceedings of the 18th International Colloquium on Structural Information and Communication Complexity (SIROCCO), Lecture Notes in Computer Science, vol. 6796, pages 89-100. Springer, 2011.

[8] L. Anantharamu, B. S. Chlebus, and M. A. Rokicki. Adversarial multiple access channels with individual injection rates. Theory of Computing Systems, 2017. Published online in 2016 at doi:10.1007/s00224-016-9725-x.

[9] M. Andrews, B. Awerbuch, A. Fernández, F. T. Leighton, Z. Liu, and J. M. Kleinberg. Universal-stability results and performance bounds for greedy contention-resolution protocols. Journal of the ACM, 48(1):39-69, 2001.

[10] M. Andrews, A. Fernández, A. Goel, and L. Zhang. Source routing and scheduling in packet networks. Journal of the ACM, 52(4):582-601, 2005.

[11] M. Andrews and L. Zhang. Achieving stability in networks of input-queued switches. IEEE/ACM Transactions on Networking, 11(5):848-857, 2003.

[12] M. Andrews and L. Zhang. Routing and scheduling in multihop wireless networks with time-varying channels. ACM Transactions on Algorithms, 3(3):33, 2007.

[13] A. F. Anta, M. A. Mosteiro, and J. R. Muñoz. Unbounded contention resolution in multiple-access channels. Algorithmica, 67(3):295-314, 2013.

[14] B. Awerbuch, A. W. Richa, and C. Scheideler. A jamming-resistant MAC protocol for single-hop wireless networks. In Proceedings of the 27th ACM Symposium on Principles of Distributed Computing (PODC), pages 45-54, 2008.

[15] M. A. Bender, M. Farach-Colton, S. He, B. C. Kuszmaul, and C. E. Leiserson. Adversarial contention resolution for simple channels. In Proceedings of the 17th Annual ACM Symposium on Parallel Algorithms (SPAA), pages 325-332, 2005.

[16] M. A. Bender, J. T. Fineman, S. Gilbert, and M. Young. How to scale exponential backoff: Constant throughput, polylog access attempts, and robustness. In Proceedings of the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 636-654, 2016.

[17] M. A. Bender, T. Kopelowitz, S. Pettie, and M. Young. Contention resolution with log-logstar channel accesses. In Proceedings of the 48th ACM Symposium on Theory of Computing (STOC), pages 499-508, 2016.

[18] M. Bieńkowski, T. Jurdziński, M. Korzeniowski, and D. R. Kowalski. Distributed online and stochastic queuing on a multiple access channel. In Proceedings of the 26th International Symposium on Distributed Computing (DISC), Lecture Notes in Computer Science, vol. 7611, pages 121-135. Springer, 2012.

[19] A. Borodin, J. M. Kleinberg, P. Raghavan, M. Sudan, and D. P. Williamson. Adversarial queuing theory. Journal of the ACM, 48(1):13-38, 2001.

[20] A. Z. Broder, A. M. Frieze, and E. Upfal. A general approach to dynamic packet routing with bounded buffers. Journal of the ACM, 48(2):324-349, 2001.

[21] B. S. Chlebus. Randomized communication in radio networks. In P. M. Pardalos, S. Rajasekaran, J. H. Reif, and J. D. P. Rolim, editors, Handbook of Randomized Computing, volume I, pages 401-456. Kluwer Academic Publishers, 2001.

[22] B. S. Chlebus, G. De Marco, and D. R. Kowalski. Scalable wake-up of multi-channel single-hop radio networks. Theoretical Computer Science, 615:23-44, 2016.

[23] B. S. Chlebus, D. R. Kowalski, and M. A. Rokicki. Maximum throughput of multiple access channels in adversarial environments. Distributed Computing, 22(2):93-116, 2009.

[24] B. S. Chlebus, D. R. Kowalski, and M. A. Rokicki. Adversarial queuing on the multiple access channel. ACM Transactions on Algorithms, 8(1):5:1-5:31, 2012.

[25] G. De Marco and D. R. Kowalski. Fast nonadaptive deterministic algorithm for conflict resolution in a dynamic multiple-access channel. SIAM Journal on Computing, 44(3):868-888, 2015.

[26] S. Gilbert, R. Guerraoui, D. R. Kowalski, and C. Newport. Interference-resilient information exchange. In Proceedings of the 28th IEEE International Conference on Computer Communications (INFOCOM), pages 2249-2257, 2009.

[27] S. Gilbert, R. Guerraoui, and C. C. Newport. Of malicious motes and suspicious sensors: On the efficiency of malicious interference in wireless networks. Theoretical Computer Science, 410(6-7):546-569, 2009.

[28] (Near) optimal resource-competitive broadcast with jamming.
S Gilbert, V King, S Pettie, E Porat, J Saia, M Young, Proceedings of the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)S. Gilbert, V. King, S. Pettie, E. Porat, J. Saia, and M. Young. (Near) optimal resource- competitive broadcast with jamming. In Proceedings of the 26th ACM Symposium on Paral- lelism in Algorithms and Architectures (SPAA), pages 257-266, 2014. A bound on the capacity of backoff and acknowledgment-based protocols. L A Goldberg, M Jerrum, S Kannan, M Paterson, SIAM Journal on Computing. 332L. A. Goldberg, M. Jerrum, S. Kannan, and M. Paterson. A bound on the capacity of backoff and acknowledgment-based protocols. SIAM Journal on Computing, 33(2):313-331, 2004. Contention resolution with constant expected delay. L A Goldberg, P D Mackenzie, M Paterson, A Srinivasan, Journal of the ACM. 476L. A. Goldberg, P. D. MacKenzie, M. Paterson, and A. Srinivasan. Contention resolution with constant expected delay. Journal of the ACM, 47(6):1048-1096, 2000. Analysis of backoff protocols for multiple access channels. J Håstad, F T Leighton, B Rogoff, SIAM Journal on Computing. 254J. Håstad, F. T. Leighton, and B. Rogoff. Analysis of backoff protocols for multiple access channels. SIAM Journal on Computing, 25(4):740-774, 1996. On selection problem in radio networks. D R Kowalski, Proceedings of the 24th ACM Symposium on Principles of Distributed Computing (PODC). the 24th ACM Symposium on Principles of Distributed Computing (PODC)D. R. Kowalski. On selection problem in radio networks. In Proceedings of the 24th ACM Symposium on Principles of Distributed Computing (PODC), pages 158-166, 2005. Speed dating despite jammers. D Meier, Y A Pignolet, S Schmid, R Wattenhofer, Proceedings of the 5th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS). 
the 5th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS)5516D. Meier, Y. A. Pignolet, S. Schmid, and R. Wattenhofer. Speed dating despite jammers. In Proceedings of the 5th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), Lecture Notes in Computer Science vol. 5516, pages 1-14, 2009. Stochastic contention resolution with short delays. P Raghavan, E , SIAM Journal on Computing. 282P. Raghavan and E. Upfal. Stochastic contention resolution with short delays. SIAM Journal on Computing, 28(2):709-719, 1998. Competitive throughput in multi-hop wireless networks despite adaptive jamming. A W Richa, C Scheideler, S Schmid, J Zhang, Distributed Computing. 263A. W. Richa, C. Scheideler, S. Schmid, and J. Zhang. Competitive throughput in multi-hop wireless networks despite adaptive jamming. Distributed Computing, 26(3):159-171, 2013. On delivery times in packet networks under adversarial traffic. Theory of Computing Systems. A Rosén, M S Tsirkin, 39A. Rosén and M. S. Tsirkin. On delivery times in packet networks under adversarial traffic. Theory of Computing Systems, 39(6):805-827, 2006. Universal continuous routing strategies. Theory of Computing Systems. C Scheideler, B Vöcking, 31C. Scheideler and B. Vöcking. Universal continuous routing strategies. Theory of Computing Systems, 31(4):425-449, 1998.
[]
Spatial BCS-BEC crossover in superconducting p-n junctions

A. Niroula, G. Rai, S. Haas, S. Kettemann

School of Engineering and Science, Jacobs University, Campus Ring 1, 28759 Bremen, Germany
Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-0484, USA
Division of Advanced Materials Science, Pohang University of Science and Technology (POSTECH), San 31, Hyoja-dong, Nam-gu, Pohang 790-784, South Korea

arXiv:1912.09699, doi:10.1103/PhysRevB.101.094514

Abstract: We present a theory of superconducting p-n junctions. To this end, we consider a two-band model of doped bulk semiconductors with attractive interactions between the charge carriers and derive the superconducting order parameter, the quasiparticle density of states and the chemical potential as functions of the semiconductor gap $\Delta_0$ and the doping level $\varepsilon$. We verify previous results for the quantum phase diagram of a system with a constant density of states in the conduction and valence bands, which shows BCS-superconductor to Bose-Einstein-condensation (BEC) and BEC to insulator transitions as functions of the doping level and the size of the band gap. We then extend this formalism to a density of states which is more realistic for 3D systems and derive the corresponding quantum phase diagram, where we find that a BEC phase can only exist for small band gaps $\Delta_0 < \Delta_0^*$. For larger band gaps, we find instead a direct transition from an insulator to a BCS phase. Next, we apply this theory to study the properties of superconducting p-n junctions. We derive the spatial variation of the superconducting order parameter along the p-n junction. As the potential difference across the junction leads to energy band bending, we find a spatial crossover between a BCS and a BEC condensate as the density of charge carriers changes across the p-n junction. For the 2D system, we find two possible regimes when the bulk is in a BCS phase: a BCS-BEC-BCS junction with a single BEC layer in the space charge region, and a BCS-BEC-I-BEC-BCS junction with two layers of BEC condensates separated by an insulating layer. In 3D we find that there can also be a conventional BCS-I-BCS junction for semiconductors with band gaps exceeding $\Delta_0^*$. Thus, BEC layers can exist in the well-controlled setting of doped semiconductors, where the doping level can be varied to change and control the thickness of the BEC and insulator layers, thereby making Bose-Einstein condensates possibly accessible to experimental transport and optical studies in solid state materials.
(Dated: December 23, 2019)

PACS numbers: 73.40.Lq, 74.20.-z, 74.90.+n

I. INTRODUCTION

The existence of a superconducting state below a critical temperature $T_c$ is not restricted to materials which are typical metals at higher temperatures, but can also occur in materials that are known to be semiconductors [1,2]. For example, superconductivity has been observed at doping concentrations as small as $4\times10^{17}\,\mathrm{cm}^{-3}$ in SrTiO$_3$, with a critical temperature of $T_c = 0.1$ K [3], and in a wide range of doped semiconductors, such as B-doped diamond [4-6] and doped silicon under high pressure [7], with critical temperatures up to $T_c = 10$ K. The BCS theory of superconductivity [8-10] can be extended and applied to such materials. Eagles [11] solved the BCS equations within a single-band semiconductor model and found a crossover to a BEC condensate as the doping concentration is lowered: the charge carriers form local pairs which condense into a Bose-Einstein condensate at low temperatures.
Nozieres and Pistolesi [12] extended this theory to a two-band semiconductor model and studied the superconductor-insulator transition as a function of the semiconductor energy gap, assuming a constant density of states in each band.

Junctions between p- and n-doped semiconductors form the basic element of semiconductor devices, whose rectifying behavior is based on the energy band bending and on the different majority charge carriers, holes and electrons, on either side of the junction. As superconductivity has been observed in both p- and n-doped semiconductors, intriguing questions arise about the physical properties of superconducting p-n junctions [14]: How does the superconducting order parameter vary spatially across the junction? Does a p-n junction form a Josephson contact, and how large is the supercurrent across the p-n junction? Such questions have been explored for YBa$_2$Cu$_3$O$_7$/Nd$_{1-x}$Ce$_x$Cu$_2$O$_4$ junctions [13], with an estimated depletion width of less than 1 nm [14], for junctions of the p-type superconductor YBa$_2$Cu$_3$O$_7$ (YBCO) on the n-type superconducting cuprate Pr$_{2-x}$Ce$_x$CuO$_4$ (PCCO) [16], as well as for iron pnictide p-n junctions, where the redistribution of charges could possibly lead to the suppression of the local superconducting order parameter near the interface for both single crystals, which may play a role in the junction formation itself [15].

Here, we study superconducting p-n junctions within a two-band model, based on a self-consistent solution of the BCS equations, the Poisson equation and particle number conservation. In the next section, we first review the two-band theory of superconductivity for a constant density of states. Then we generalize it to a more realistic three-dimensional density of states. We derive the pairing amplitude, the chemical potential, the quasiparticle density of states and the coherence length $\xi$ as functions of the semiconductor band gap $\Delta_0$ and the doping level $\varepsilon$.
We identify the crossover between superconductivity (SC) and Bose-Einstein condensation (BEC) and derive the corresponding phase diagram in the $\varepsilon$-$\Delta_0$ parameter space. Based on this model, in Section III we derive the properties of a superconducting p-n homojunction (with the same parent material on both sides of the junction), in particular the spatial dependence of the order parameter, the quasiparticle excitation energy and the pairing coherence length across the p-n junction.

II. TWO-BAND THEORY OF SUPERCONDUCTIVITY

In order to derive the superconducting order parameter $\Delta$ and the chemical potential $\mu$, we need to solve the BCS self-consistency equation together with the equation for the conservation of the particle number $N$. Particle number conservation at $T = 0$ gives [12]

$$2\int d\xi_k\,\rho(\xi_k)\,\frac{1}{2}\left(1-\frac{\xi_k-\mu}{E(\xi_k)}\right) = N = 2\int^{\varepsilon_F} d\xi_k\,\rho(\xi_k), \qquad(1)$$

where $\rho(\xi_k)$ is the density of states with electron energy dispersion $\xi_k$, $E(\xi_k)=\sqrt{(\xi_k-\mu)^2+\Delta^2}$ is the quasiparticle energy, and $\varepsilon_F$ is the Fermi energy at $T=0$.

At $T=0$ there are no thermally excited charge carriers; doping introduces additional electrons or holes. In the dilute doping limit, however, electrons and holes are trapped at low temperature by the donor and acceptor atoms, respectively. As the concentration of donor atoms $N_D$ or acceptor atoms $N_A$ increases, their eigenstates hybridize and eventually delocalize into impurity bands, which at larger doping concentrations merge with the conduction or valence band, respectively. Here, we model the doping in a simplified way by a continuous variation of the Fermi energy, for donor doping $\varepsilon_F = E_C + \varepsilon_n$ and for acceptor doping $\varepsilon_F = E_V - \varepsilon_p$ (see Fig. 1). These doping parameters are related to the donor concentration $N_D$ and the acceptor concentration $N_A$: for the 2D DOS, $N_D = 2\rho_0\varepsilon_n$ and $N_A = 2\rho_0\varepsilon_p$, where the factor 2 accounts for the spin degeneracy; for the 3D DOS, one finds $N_D = 2\cdot\frac{2}{3}\rho_0\varepsilon_n^{3/2}$ and $N_A = 2\cdot\frac{2}{3}\rho_0\varepsilon_p^{3/2}$.
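The carrier-density relations above follow directly from integrating the DOS up to the doping level; the sketch below checks the 3D relation against its defining integral. The DOS prefactor $\rho_0 = 1$ and the function names are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.integrate import quad

RHO0 = 1.0  # illustrative DOS prefactor (arbitrary units)

def carriers_2d(eps):
    """Donor concentration for the constant (2D) DOS: N_D = 2 rho0 eps.
    The factor 2 counts the two spin directions."""
    return 2.0 * RHO0 * eps

def carriers_3d(eps):
    """Donor concentration for the 3D DOS rho(xi) = rho0 sqrt(xi):
    N_D = 2 * (2/3) rho0 eps^(3/2), i.e. 2 * integral of rho up to eps."""
    return 2.0 * (2.0 / 3.0) * RHO0 * eps ** 1.5
```

The closed forms agree with the spin-degenerate integral $N = 2\int_0^{\varepsilon}\rho(\xi)\,d\xi$ for both DOS choices.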
The BCS weak-coupling theory gives at $T = 0$ the self-consistency equation for the order parameter $\Delta$,

$$1 = \frac{U}{2}\int_{\mu-\omega_D}^{\mu+\omega_D} d\xi_k\,\frac{\rho(\xi_k)}{\sqrt{(\xi_k-\mu)^2+\Delta^2}}, \qquad(2)$$

where $U$ is the attractive interaction strength and $2\omega_D$ is the size of the typical energy window around the chemical potential $\mu$ in which the effective interaction is attractive.

Quasiparticle density of states. The quasiparticle density of states is defined by

$$N(E) = -\frac{1}{\pi}\,\mathrm{Tr}\,\mathrm{Im}\,\hat G_E, \qquad(3)$$

where $E$ is the quasiparticle excitation energy relative to the chemical potential $\mu$ and $\hat G_E$ is the quasiparticle propagator. Noting that in the presence of the pairing gap $\Delta$ the propagator is given by [19] $G_E(\xi_k) = (E+i\delta+\xi_k-\mu)/((E+i\delta)^2-\Delta^2-(\xi_k-\mu)^2)$, we obtain via complex integration

$$N(E) = \mathrm{Re}\left[\rho\!\left(\mu+\sqrt{E^2-\Delta^2}\right)\frac{|E|}{\sqrt{E^2-\Delta^2}}\right]. \qquad(4)$$

When the chemical potential is within a band, e.g. in the conduction band, $\Delta_0 < \mu < D$, the quasiparticle density of states $N(E)$ diverges at $E = \pm\Delta$, the coherence peak, and vanishes for smaller energies, so that $\Delta$ is the quasiparticle gap (see Fig. 2, top and center). Remarkably, when the chemical potential lies in the semiconductor gap, $-\Delta_0 < \mu < \Delta_0$, the quasiparticle density of states $N(E)$ does not diverge for any $E$ (see Fig. 2, bottom), but it is still peaked. This is an indication that the system is in a Bose-Einstein condensate, as we discuss below. Moreover, the quasiparticle gap is then enhanced to

$$\tilde\Delta = \sqrt{\Delta^2+(\Delta_0-|\mu|)^2} > \Delta, \qquad(5)$$

exceeding the pairing order parameter $\Delta$.

BCS-BEC crossover. There is a crossover from BCS superconductivity to Bose-Einstein condensation (BEC) as the concentration of charge carriers is lowered by decreasing the doping level $\varepsilon$ [12]. Let us study this BCS-BEC crossover in more detail. One way to distinguish between BCS and BEC is to measure the coherence length $\xi$ of the condensate pairs.
When $\xi > \lambda_F$, where $\lambda_F$ is the Fermi wavelength, many electron pairs overlap with each other, which is typical for a superconducting condensate. When $\xi < \lambda_F$, however, the electron pairs do not overlap; instead they form well-defined bosons which condense below the transition temperature $T_c$. Therefore, let us next calculate $\xi$ in the two-band model. $\xi$ can be derived by calculating the expectation value of the distance between two electrons with opposite spin in the ground state,

$$\xi^2 = \int d\mathbf{r}\, r^2 g(r) \Big/ \int d\mathbf{r}\, g(r).$$

Here, $g(r)$ is the pair correlation function in the ground state, defined by

$$g(r) = \left|\langle\psi|\psi_+^+(\mathbf{r})\psi_-^+(0)|\psi\rangle\right|^2/n^2, \qquad(6)$$

where $|\psi\rangle$ is the BCS trial ground state,

$$|\psi\rangle = \prod_k \left(u_k + v_k\, c^+_{k+}c^+_{-k-}\right)|0\rangle. \qquad(7)$$

Here $c^+_{k\alpha}$ are the fermion creation operators in a state with momentum $k$ and spin $\alpha = \pm$, $|0\rangle$ is the vacuum state, and $2u_kv_k = \Delta/\sqrt{(\xi_k-\mu)^2+\Delta^2}$. The electron field operators are given by $\psi^+_\alpha(\mathbf{r}) = \sum_k e^{i k r} c^+_{k\alpha}$. Thereby we find

$$\xi^2 = -\sum_k u_kv_k \nabla_k^2 (u_kv_k) \Big/ \sum_k u_k^2v_k^2. \qquad(8)$$

We calculate $\xi$ explicitly for the two-band model below.

A. Two-band model with 2D DOS

Let us first review the theory for the two-band model with a constant density of states $\rho$ in both the valence and the conduction band, separated by an energy gap $2\Delta_0$, as shown in Fig. 1. This corresponds to a two-dimensional system, as considered in Ref. [12]. The BCS self-consistency equation becomes

$$1 = \frac{\rho U}{2}\left(\int_{\Delta_0}^{\omega_D} + \int_{-\omega_D}^{-\Delta_0}\right) d\xi_k\, \frac{1}{\sqrt{(\xi_k-\mu)^2+\Delta^2}}, \qquad(9)$$

which gives

$$\frac{2}{\rho U} = \ln\left[\frac{\omega_D-\mu+\sqrt{(\omega_D-\mu)^2+\Delta^2}}{\Delta_0-\mu+\sqrt{(\Delta_0-\mu)^2+\Delta^2}}\cdot\frac{-\Delta_0-\mu+\sqrt{(\Delta_0+\mu)^2+\Delta^2}}{-\omega_D-\mu+\sqrt{(\omega_D+\mu)^2+\Delta^2}}\right]. \qquad(10)$$

For the limiting case of a gapless, metallic system, i.e. $\Delta_0 = 0$, we can express the interaction parameter $\rho U$ in terms of the superconducting order parameter $\Delta_m$ via

$$\Delta_m \equiv \Delta(\Delta_0=0) = 2\omega_D\exp\!\left(\frac{-1}{\rho U}\right), \qquad(11)$$

and rewrite Eq. (10) as

$$\Delta_m^2 = \left[(\Delta_0-\mu)+\sqrt{(\Delta_0-\mu)^2+\Delta^2}\right]\cdot\left[(\Delta_0+\mu)+\sqrt{(\Delta_0+\mu)^2+\Delta^2}\right]. \qquad(12)$$

Particle conservation in the doped semiconductor implies that the number of particles does not change as superconductivity sets in. Therefore, we need to ensure the equality between the number of particles in the normal and in the superconducting state. For an n-doped semiconductor, electrons are released into the conduction band by donor atoms. We model this by adding an extra number of electrons $\delta N$, which for a constant density of states in the conduction band can be written as $\delta N = 2\rho\varepsilon$. Here, $\varepsilon = \varepsilon_F - \Delta_0$ is the Fermi energy measured from the conduction band edge $\Delta_0$. Particle conservation then reads

$$\underbrace{2\rho\varepsilon}_{\delta N} + 2\rho\int_{-D}^{-\Delta_0} d\xi_k = \rho\left(\int_{-D}^{-\Delta_0}+\int_{\Delta_0}^{D}\right) d\xi_k\left(1-\frac{\xi_k-\mu}{E(\xi_k)}\right). \qquad(13)$$

Here, $2D$ represents the total bandwidth of the semiconductor. For $D+\mu \gg \Delta$ and $D-\mu \gg \Delta$, integration gives

$$2\varepsilon = 2\mu - \sqrt{(\Delta_0+\mu)^2+\Delta^2} + \sqrt{(\Delta_0-\mu)^2+\Delta^2}. \qquad(14)$$

Eqs. (12) and (14) are the set of equations that describes the BCS superconducting state of semiconductors with a constant density of states. By solving these equations numerically we obtain the superconducting order parameter $\Delta$ (Fig. 3, top) and the chemical potential $\mu$ (Fig. 3, bottom) as functions of the semiconductor gap $\Delta_0$. We thereby reproduce the results of Ref. [12]: in the undoped semiconductor there is a sharp superconductor-insulator transition at a critical $\Delta_{0c}$, which occurs at half of the superconducting order parameter of a metallic superconductor, $\Delta_{0c} = \Delta_m/2$. Here $\Delta_m$ parametrizes the strength of the attractive interaction via Eq. (11). At finite doping, the pairing amplitude $\Delta$ is finite for any value of the semiconducting gap $\Delta_0$, since there are always charge carriers present which can be paired. As mentioned above, there is a crossover to BEC at low concentration of charge carriers.
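Equations (12) and (14) form a closed system for $(\Delta, \mu)$ at given $(\Delta_0, \varepsilon)$ and can be solved with a standard root finder. The following is a minimal sketch, not the authors' code; the choice of units $\Delta_m = 1$ and the initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_gap(delta0, eps, delta_m=1.0):
    """Solve Eqs. (12) and (14) of the 2D two-band model for (Delta, mu).
    All energies are measured in units of delta_m (illustrative choice)."""
    def equations(v):
        delta, mu = v
        e_plus = np.sqrt((delta0 + mu) ** 2 + delta ** 2)
        e_minus = np.sqrt((delta0 - mu) ** 2 + delta ** 2)
        # Eq. (12): Delta_m^2 = [(D0-mu)+e_minus]*[(D0+mu)+e_plus]
        eq12 = ((delta0 - mu) + e_minus) * ((delta0 + mu) + e_plus) - delta_m ** 2
        # Eq. (14): 2*eps = 2*mu - e_plus + e_minus
        eq14 = 2.0 * mu - e_plus + e_minus - 2.0 * eps
        return [eq12, eq14]
    delta, mu = fsolve(equations, x0=[0.5 * delta_m, delta0 + eps])
    return delta, mu
```

In the metallic limit $\Delta_0 = 0$ the solver should reproduce $\Delta = \Delta_m$ and $\mu = \varepsilon$, which serves as a simple consistency check.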
This can be seen from the fact that as the pairing sets in, the chemical potential $\mu$ drops below the conduction band edge even when it was located in the conduction band before (see Fig. 3, bottom). For $D \gg \Delta_0$ we obtain the coherence length for the 2D density of states,

$$\xi^2 = \frac{1}{4m\Delta}\, h\!\left(s=\frac{\mu}{\Delta},\, t=\frac{\Delta_0}{\Delta}\right), \qquad(15)$$

where

$$h(s,t) = \left(\pi-\arctan(t-s)-\arctan(t+s)\right)^{-1}\left[2-\pi t+\sum_{\alpha=\pm1}\left((t-\alpha s)\arctan(t-\alpha s)+\frac{1}{(t-\alpha s)^2+1}\right)\right]. \qquad(16)$$

For $\mu > \Delta_0$ we recover the BCS coherence length, $\xi^2 = (\mu-\Delta_0)/(4m\Delta^2)$ [18]. When the chemical potential is at the band edge, $\mu = \Delta_0$, we find $\xi^2 = 1/(\pi m\Delta)$, which is the size of a single bound electron pair with pairing energy $\Delta$. For the undoped semiconductor with symmetric bands, $\mu = 0$, the coherence length is given by $\xi^2 = 1/(3m\Delta_0)$ for $\Delta\to 0$, which coincides with the size of a single bound electron pair with binding energy $\Delta_0$. For $|\mu| < \Delta_0$ and $\Delta\to 0$ one finds $\xi^2 = \frac{1}{3m\Delta_0}\,(\Delta_0^2+\mu^2)/(\Delta_0^2-\mu^2)$. We note that while this defines the smallest size of the bound pair in this simple two-band model with band gap $2\Delta_0$, the actual size of the bound electron pair is modified by the fact that the states in the band tails of a doped semiconductor are localized with a finite localization length $L_c$, which in the dilute dopant limit becomes the effective Bohr radius of the ground state of the dopant levels. Thus, as the doping is reduced, a metal-insulator transition to Anderson-localized states occurs, which has to be implemented in the pairing theory to obtain a more realistic description of the BCS-BEC crossover and may result in a localization transition to localized bosons [20].

We conclude that there is a crossover from superconductivity to dilute bound electron pairs when the chemical potential is at one of the band edges, $\mu = \pm\Delta_0$. Inserting this condition into Eqs. (12) and (14) we find $\varepsilon = \Delta_0 + \Delta/2 - \sqrt{\Delta_0^2+\Delta^2/4}$, where $\Delta$ is the positive solution of the quartic equation $\Delta^4/\Delta_m^4 + 4\Delta\Delta_0/\Delta_m^2 - 1 = 0$.
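The crossover line $\mu = \Delta_0$ can thus be traced by solving the quartic for $\Delta$ and inserting it into the expression for $\varepsilon$. A minimal sketch (units of $\Delta_m$ are an illustrative assumption, and `numpy.roots` is used in place of a closed-form quartic solution):

```python
import numpy as np

def bcs_bec_boundary(delta0, delta_m=1.0):
    """Doping level eps at which mu = Delta_0, i.e. the BCS-BEC crossover
    line of the 2D two-band model, in units of delta_m (illustrative)."""
    # Positive root of Delta^4/Dm^4 + 4*Delta*Delta0/Dm^2 - 1 = 0
    coeffs = [1.0 / delta_m ** 4, 0.0, 0.0, 4.0 * delta0 / delta_m ** 2, -1.0]
    roots = np.roots(coeffs)
    delta = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    # eps = Delta0 + Delta/2 - sqrt(Delta0^2 + Delta^2/4)
    eps = delta0 + delta / 2.0 - np.sqrt(delta0 ** 2 + delta ** 2 / 4.0)
    return eps, delta
```

For $\Delta_0 \gg \Delta_m$ this reproduces the asymptotic boundary $\varepsilon/\Delta_m \approx \Delta_m/(8\Delta_0)$ quoted in the text, and for $\Delta_0 \to 0$ the boundary closes at $\varepsilon \to 0$ with $\Delta \to \Delta_m$.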
Thereby we obtain the quantum phase diagram in the parameter space of doping $\varepsilon$ versus $\Delta_0$, shown in Fig. 4, which exhibits a parameter regime where Bose-Einstein condensation occurs below a critical temperature $T_c$. This diagram has already been obtained for the two-band model with a 2D density of states in Ref. [12]. For $\Delta_0 \gg \Delta$, the BCS-BEC crossover occurs at $\varepsilon/\Delta_m = 1/(8\Delta_0/\Delta_m)$ (dashed line in Fig. 4). For $\Delta_0 \ll \Delta$, it occurs at $\varepsilon/\Delta_m = \Delta_0/\Delta_m$ (dotted line in Fig. 4).

B. Two-band model with 3D DOS

In three dimensions, the density of states in the conduction band is $\rho(\xi_k) = \rho_{0c}\sqrt{\xi_k-E_c}$ for $E_c < \xi_k < D$, whereas in the valence band $\rho(\xi_k) = \rho_{0v}\sqrt{-\xi_k+E_v}$ for $-D < \xi_k < E_v$, and $\rho(\xi_k) = 0$ in the band gap, $E_v < \xi_k < E_c$. Here, $\rho_{0c/v} = \sqrt{2m_{c/v}^3}/(\hbar^3\pi^2)$, where $m_{c/v}$ is the effective mass in the conduction/valence band, respectively. We assume $m_c = m_v$ in the following. Following an approach similar to Eagles [11], who solved these equations for a single-band semiconductor, we rearrange the particle conservation equation, Eq. (1), of the two-band model to get

$$\frac{2}{3}\left(\frac{\varepsilon}{\Delta}\right)^{3/2} = Q(\lambda_1) - Q(\lambda_2), \qquad(17)$$

where $Q(\lambda_i)=\int_0^\infty x^2\,dx\left(1-(x^2-\lambda_i)/\sqrt{1+(x^2-\lambda_i)^2}\right)$ for $i = 1, 2$, with $\lambda_1 \equiv (\mu-\Delta_0)/\Delta$ and $\lambda_2 \equiv (-\Delta_0-\mu)/\Delta$. We can rearrange the BCS gap equation to get

$$\frac{1}{\rho_0 U\,\Delta^{1/2}} = P(\lambda_1)+P(\lambda_2)+\sqrt{\frac{4\omega_D}{\Delta}}, \qquad(18)$$

where $P(\lambda_i)=\int_0^\infty dx\left(x^2/\sqrt{1+(x^2-\lambda_i)^2}-1\right)$, $i = 1, 2$. We follow the approach of Pistolesi [17] and rewrite Eqs. (17) and (18) in terms of elliptic integrals, obtaining for the particle conservation equation

$$\frac{2}{3}\left(\frac{\varepsilon}{\Delta}\right)^{3/2} = \sum_{i=1,2}\sigma_i\,\lambda_i\,(1+\lambda_i^2)^{1/4}\,E\!\left(\frac{\pi}{2},k_i\right) + \sum_{i=1,2}\sigma_i\,\frac{(1+\lambda_i^2)^{1/4}}{2\left(\lambda_i+\sqrt{1+\lambda_i^2}\right)}\,F\!\left(\frac{\pi}{2},k_i\right), \qquad(19)$$

where $\sigma_1 = 1$, $\sigma_2 = -1$ and $k_i^2 = \frac{\sqrt{1+\lambda_i^2}+\lambda_i}{2\sqrt{1+\lambda_i^2}}$ for $i = 1, 2$. Here, $F(\varphi,k)$ and $E(\varphi,k)$ are the incomplete elliptic integrals of the first and second kind, respectively.
The pairing equation becomes

$$1 = 2\rho_0 U\sqrt{\Delta}\left\{\sum_{i=1,2}\left[-(1+\lambda_i^2)^{1/4}\,E\!\left(\frac{\pi}{2},k_i\right)+\frac{F\!\left(\frac{\pi}{2},k_i\right)}{2(1+\lambda_i^2)^{1/4}}\left(\lambda_i+\frac{1}{\sqrt{1+\lambda_i^2}+\lambda_i}\right)\right]+\sqrt{\frac{\omega_D}{\Delta}}\right\}. \qquad(20)$$

We denote the metallic limit by $\Delta(\Delta_0=0) = \Delta_m$, as defined by Eq. (20) when substituting $\Delta_0 = 0$, $\Delta \to \Delta_m$ and $\lambda_i \to \tilde\lambda_i = \lambda_i|_{\Delta_0=0,\,\Delta=\Delta_m}$, with $\tilde k_i^2 = \frac{\sqrt{1+\tilde\lambda_i^2}+\tilde\lambda_i}{2\sqrt{1+\tilde\lambda_i^2}}$ for $i = 1, 2$. Since we assume that the local attraction $U$ between the fermions does not depend on $\Delta_0$, we can equate the right-hand side of Eq. (20) at finite $\Delta_0$ to the one obtained in the metallic limit. Together with Eq. (19), this gives the set of equations that models the three-dimensional BCS superconducting semiconductor. We solve these equations numerically to obtain the superconducting order parameter $\Delta$ (Fig. 5, top) and the chemical potential $\mu$ (Fig. 5, bottom) as functions of the semiconductor gap $\Delta_0$, the attractive interaction $U$ via $\Delta_m = 2\omega_D\exp(-1/(\rho U))$, and the doping parameter $\varepsilon/\Delta_m$.

Without doping, $\varepsilon = 0$, the superconducting order parameter $\Delta$ drops to zero when the semiconductor gap reaches the critical value $\Delta_{0c} = 0.29\Delta_m$. This superconductor-insulator transition thus occurs at a smaller semiconductor band gap than for the step-function DOS, as expected, since the density of states is smaller when approaching the band edges compared to the 2D case, so that fewer quasiparticles are available to pair and participate in the condensate. For finite doping $\varepsilon$, the order parameter $\Delta$ persists for all values of the semiconductor gap $\Delta_0$, but is, for the same values of $(\Delta_0, \varepsilon)$, substantially smaller than for the step-function DOS. As discussed in the previous section, the condition $\mu = \pm\Delta_0$ gives the BCS-BEC crossover line in the parameter space spanned by the doping parameter $\varepsilon$ and the semiconductor gap $\Delta_0$. In Fig. 6 we plot the resulting phase diagram, as obtained by a numerical solution of the above equations for the 3D DOS with $\mu = \Delta_0$.
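As an independent check of such a numerical solution, the dimensionless integrals $Q(\lambda)$ of Eq. (17) can also be evaluated by direct quadrature instead of the elliptic-integral representation. The sketch below is an illustrative alternative, not the authors' implementation; the chosen test values are assumptions.

```python
import numpy as np
from scipy.integrate import quad

def Q(lam):
    """Q(lambda) = int_0^inf x^2 [1 - (x^2-lam)/sqrt(1+(x^2-lam)^2)] dx,
    evaluated by direct numerical quadrature."""
    f = lambda x: x ** 2 * (1.0 - (x ** 2 - lam) / np.sqrt(1.0 + (x ** 2 - lam) ** 2))
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

def carrier_number_residual(delta, mu, delta0, eps):
    """Residual of Eq. (17): (2/3)(eps/Delta)^(3/2) - [Q(l1) - Q(l2)],
    which vanishes on a self-consistent solution."""
    l1 = (mu - delta0) / delta
    l2 = (-delta0 - mu) / delta
    return (2.0 / 3.0) * (eps / delta) ** 1.5 - (Q(l1) - Q(l2))
```

In the deep BCS limit $\lambda_1 \gg 1$ one has $Q(\lambda)\approx\frac{2}{3}\lambda^{3/2}$, so the residual becomes small when $\mu \approx \Delta_0 + \varepsilon$ and $\Delta$ is small, a useful sanity check of the quadrature.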
Remarkably, we find that for large semiconductor band gaps, $\Delta_0 > \Delta_0^* = 0.5\Delta_m$, there is no solution with $\mu = \Delta_0$ at finite doping $\varepsilon > 0$, within a numerical accuracy of at least $10^{-4}$. Hence no BEC exists in this regime; instead there is a direct transition to the BCS superconducting phase, as shown in Fig. 6.

III. SPATIAL VARIATION ALONG A SUPERCONDUCTING P-N JUNCTION

Having derived the superconducting order parameter $\Delta$ and the chemical potential $\mu$ as functions of the semiconductor gap $\Delta_0$ and the doping level $\varepsilon$, we can study the effect of pairing on the properties of p-n junctions in the presence of an attractive interaction $U$. For doping levels $\varepsilon_{n,p}$ the potential drop across a conventional p-n junction is given by

$$e\,\Delta\phi = \varepsilon_n + \varepsilon_p + 2\Delta_0. \qquad(21)$$

The charge density drops in the depletion region, which has width $d_n$ on the n-side and width $d_p$ on the p-side. Using Poisson's equation, $d^2\phi/dx^2 = \varrho(x)/\epsilon$, where $\varrho(x)$ is the charge density and $\epsilon$ is the dielectric constant, one finds in the depletion approximation, which assumes that there are no charge carriers in the depletion region when solving the Poisson equation,

$$-e\phi(x) = \frac{e\Delta\phi}{N_D+N_A}\cdot\begin{cases}-N_A, & x>d_n,\\[2pt] -N_A\left(1-\left(\frac{x}{d_n}-1\right)^2\right), & d_n>x>0,\\[2pt] N_D\left(1-\left(\frac{x}{d_p}+1\right)^2\right), & -d_p<x<0,\\[2pt] N_D, & x<-d_p.\end{cases} \qquad(22)$$

Here, the depletion lengths in the n and p regions are respectively given by $d_{n/p} = \left(\epsilon\,\frac{N_{A/D}}{N_{D/A}}\,\frac{\Delta\phi}{2\pi e(N_D+N_A)}\right)^{1/2}$, where $\epsilon$ is the bulk dielectric constant of the semiconductor. For a given energy gap of the semiconductor, $\Delta_0$, we thus obtain the spatial variation of the conduction and valence band edges across the p-n junction, as plotted in Fig. 7 (black lines). The electrochemical potential is given by $\mu_{em}(x) = \mu + e\phi(x)$. For simplicity we assume that the n- and p-sides are equally doped, $\varepsilon_n = \varepsilon_p = \varepsilon$, $N_A = N_D$, and $d_n = d_p = d = (\epsilon\,\Delta\phi/(4\pi eN))^{1/2}$.
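The depletion-length expressions above can be checked against the charge-neutrality condition $d_n N_D = d_p N_A$, which follows directly from them. A minimal sketch in Gaussian-style units; all prefactors and parameter values are illustrative assumptions:

```python
import numpy as np

def depletion_widths(nd, na, dphi, eps_r=1.0, e=1.0):
    """Depletion widths d_n, d_p from the depletion approximation,
    d_{n/p} = sqrt(eps * (N_{A/D}/N_{D/A}) * dphi / (2 pi e (N_D + N_A)))."""
    pref = eps_r * dphi / (2.0 * np.pi * e * (nd + na))
    dn = np.sqrt(pref * na / nd)
    dp = np.sqrt(pref * nd / na)
    return dn, dp
```

Charge neutrality of the space charge region, $d_n N_D = d_p N_A$, holds exactly for these expressions, and for $N_A = N_D$ the junction is symmetric, $d_n = d_p$.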
As we consider the p-n junction without an external bias, the chemical potential $\mu$ remains independent of the position $x$ across the junction; we set $\mu = 0$ (blue line in Fig. 7). Turning on superconductivity takes charge carriers into the condensate, which then can no longer contribute to the potential drop across the junction. The potential changes accordingly to $\phi_S(x)$, resulting in a new spatial variation of the band edges,

$$E_C(x) = -e\phi(x)+\Delta_0, \qquad E_V(x) = -e\phi(x)-\Delta_0, \qquad(23)$$
$$E_{CS}(x) = -e\phi_S(x)+\Delta_0, \qquad E_{VS}(x) = -e\phi_S(x)-\Delta_0, \qquad(24)$$

where

$$-e\phi_S(x) = \frac{e\Delta\phi_S}{2}\cdot\begin{cases}-1, & x>d_s,\\[2pt] -1+\left(\frac{x}{d_s}-1\right)^2, & d_s>x>0,\\[2pt] 1-\left(\frac{x}{d_s}+1\right)^2, & -d_s<x<0,\\[2pt] 1, & x<-d_s,\end{cases} \qquad(25)$$

and $d_s = (\epsilon\,\Delta\phi_S/(4\pi eN))^{1/2}$. The potential energy across the p-n junction in the presence of superconductivity is reduced to

$$e\,\Delta\phi_S = e\phi_S(x\ll -d_p) - e\phi_S(x\gg d_n) = 2\mu_{em}(\Delta_0,\varepsilon,\Delta_m), \qquad(26)$$

where the parameters $\Delta_0$, $\varepsilon$, $\Delta_m$ are the semiconductor band gap, the doping level, and the superconducting order parameter in the metallic limit, as defined above. Thus, we can obtain the spatial variation of $\Delta(x)$ from Eqs. (19) and (20). While the actual chemical potential $\mu$ is constant in the p-n junction without external bias, the chemical potential entering Eqs. (19) and (20) is the electrochemical potential $\mu_{em}(x)$, measured relative to the middle of the semiconductor gap at the respective position $x$, which for $\mu = 0$ is given by

$$\mu^s_{em}(x) = \mu_{em}(\Delta_0,\varepsilon,\Delta_m)\cdot\begin{cases}1, & x>d_s,\\[2pt] 1-\left(\frac{x}{d_s}-1\right)^2, & d_s>x>0,\\[2pt] -1+\left(\frac{x}{d_s}+1\right)^2, & -d_s<x<0,\\[2pt] -1, & x<-d_s.\end{cases} \qquad(27)$$

Next, we obtain the superconducting order parameter $\Delta(x)$ along the p-n junction for different values of $\Delta_0$ and $\varepsilon$ by inserting $\mu^s_{em}(x)$, as given by Eq. (27), for $\mu$ into Eq. (12) for the 2D system and into Eq. (20) for the 3D system, and solving for $\Delta(x)$ at every position $x$.
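The piecewise profile of Eq. (27) is straightforward to code; a minimal sketch follows, with the bulk value $\mu_{em}$, $d_s$ and the test values chosen as illustrative assumptions. The helper `gap_crossing` simply inverts the parabolic branch of Eq. (27) to find the position where the electrochemical potential crosses a given energy, e.g. the band edge $\Delta_0$.

```python
import numpy as np

def mu_em_profile(x, mu_em, d_s):
    """Electrochemical potential mu^s_em(x) of Eq. (27), measured from the
    middle of the local semiconductor gap; x and d_s in the same units."""
    x = np.asarray(x, dtype=float)
    return mu_em * np.where(x >= d_s, 1.0,
                   np.where(x >= 0.0, 1.0 - (x / d_s - 1.0) ** 2,
                   np.where(x >= -d_s, -1.0 + (x / d_s + 1.0) ** 2,
                            -1.0)))

def gap_crossing(delta0, mu_em, d_s):
    """Position x > 0 where mu^s_em(x) = delta0, obtained by inverting
    the parabolic branch of Eq. (27)."""
    return d_s * (1.0 - np.sqrt(1.0 - delta0 / mu_em))
```

By the antisymmetry of Eq. (27), the crossing on the p-side sits at $-x$ with $\mu^s_{em}(-x) = -\Delta_0$, so the region between the two crossings is where the chemical potential lies inside the gap.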
We find that, even when the bulk system is in the BCS superconducting phase, a BEC layer emerges at the p-n junction. For the 2D system we thereby find two different kinds of superconducting p-n junctions when the bulk is in the BCS phase:

1. BCS-BEC-BCS junction: For ∆0 < ∆m/2 the order parameter ∆(x) decreases in the space-charge region but remains finite, with a minimum in the middle of the p-n junction, as shown in Fig. 8 (Top). Thus, as the chemical potential moves into the band gap at the p-n junction, a BEC condensate appears which extends throughout the junction over a region of width d_BEC = 2 d_s (1 − √(1 − ∆0/µ_em)), as obtained from the condition µ_em^s(x = ±d_BEC/2) = ±∆0. The quasiparticle excitation gap ∆̃(x) remains finite throughout the p-n junction, first decreasing as the order parameter ∆(x) decreases, reaching a minimum, and increasing again as the chemical potential moves into the middle of the semiconductor band gap. Interestingly, the coherence length, which in the BCS phase grows as ∆(x) decreases, converges to a finite value in the BEC phase and decreases to a minimum in the middle of the p-n junction.

2. BCS-BEC-I-BEC-BCS junction: For ∆0 > ∆m/2 the order parameter ∆(x) decreases to zero in the space-charge region, as shown in Fig. 8 (Bottom), with a finite layer of an insulating phase in the middle of the junction. Thus, as the chemical potential moves into the band gap at the p-n junction, a BEC condensate forms on each of the two sides of the junction, each of finite width d_BEC = d_s [(1 − (∆0/µ_em) √(1 − ∆m²/(4∆0²)))^{1/2} − (1 − ∆0/µ_em)^{1/2}], separated by an insulating layer where ∆ = 0.
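The quoted width of the BEC layer follows from inverting the parabolic profile of Eq. (27). A small hedged check (illustrative names, symmetric junction assumed):

```python
import math

def mu_profile(x, d_s, mu_em):
    """Electrochemical potential mu_em^s(x) of Eq. (27), branch 0 < x < d_s."""
    return mu_em * (1.0 - (x / d_s - 1.0) ** 2)

def d_bec_type1(d_s, mu_em, delta0):
    """BEC-layer width of the BCS-BEC-BCS junction:
    d_BEC = 2 d_s (1 - sqrt(1 - Delta0/mu_em)),
    obtained from mu_em^s(x = ±d_BEC/2) = ±Delta0."""
    return 2.0 * d_s * (1.0 - math.sqrt(1.0 - delta0 / mu_em))
```

Evaluating the profile at x = d_BEC/2 indeed returns ∆0, confirming that the layer boundary sits exactly where the electrochemical potential crosses the band edge.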
The quasiparticle excitation gap ∆̃(x) remains finite throughout the p-n junction, first decreasing as the order parameter ∆(x) decreases, reaching a minimum at the boundary between the BEC and the insulating phase, and increasing again in the insulating layer as the chemical potential moves into the middle of the semiconductor band gap. Interestingly, the coherence length, which in the BCS phase grows as ∆(x) decreases, converges to a finite value at the boundary between the BEC and the insulating phase, where the order parameter vanishes.

In the 3D system we find that, when the bulk is in the BCS phase, a BEC layer at the p-n junction occurs only for sufficiently small semiconductor gaps ∆0 < ∆0*. Thus, in 3D we find three different kinds of superconducting p-n junctions when the bulk is in the BCS phase:

1. BCS-BEC-BCS junction: For small semiconductor gaps ∆0 < ∆0c = 0.29∆m the order parameter ∆(x) decreases in the space-charge region but remains finite, with a minimum in the middle of the p-n junction, as shown in Fig. 9 (Top). Thus, as the chemical potential moves into the band gap at the p-n junction, a BEC condensate appears where the chemical potential lies outside the band edges, extending throughout the junction over a region of width d_BEC, as obtained from the condition µ_em(x = ±d_BEC/2) = ±∆0. The quasiparticle excitation gap ∆̃(x) remains finite throughout the p-n junction, first decreasing as the order parameter ∆(x) decreases, reaching a minimum, and increasing again as the chemical potential moves into the middle of the semiconductor band gap.

2. BCS-BEC-I-BEC-BCS junction: For larger semiconductor band gaps ∆0 > ∆0c = 0.29∆m the order parameter ∆(x) decreases to zero in the space-charge region, as shown in Fig. 9 (Center), with a finite layer of an insulating phase in the middle of the junction.
Thus, as the chemical potential moves into the band gap at the p-n junction, a BEC condensate forms on each of the two sides of the junction, each of finite width d_BEC, separated by an insulating layer where ∆ = 0. The quasiparticle excitation gap ∆̃(x) remains finite throughout the p-n junction, first decreasing as the order parameter ∆(x) decreases, reaching a minimum at the boundary between the BEC and the insulating phase, and increasing again in the insulating layer as the chemical potential moves into the middle of the semiconductor band gap.

3. BCS-I-BCS junction: For still larger semiconductor band gaps, ∆0 > ∆0* = 0.5∆m, there is no BEC layer anymore; the order parameter ∆(x) decreases to zero as the chemical potential reaches the band edge, as shown in Fig. 9 (Bottom), so that an insulating phase is reached directly as the chemical potential moves into the band gap at the p-n junction. Remarkably, the quasiparticle excitation gap ∆̃(x) vanishes at the boundary of the space-charge region, first decreasing to zero together with the order parameter ∆(x) and increasing again in the insulating layer as the chemical potential moves into the middle of the semiconductor band gap. Thus, gapless quasiparticle excitations appear at the boundary of the space-charge region.

We have thus shown that in superconducting p-n junctions layers of BEC condensate can appear even when the bulk is in the BCS state. This opens the possibility to create BEC layers and study their properties in the well-controlled setting of doped semiconductors, where the doping level can be varied to change and control the thickness of the BEC and insulating layers. The BEC condensate can be detected by scanning tunneling microscopy: instead of the sharp coherence peaks of the BCS phase, a maximum in the tunneling density of states is expected in the band closest to the chemical potential, as plotted in Fig. 3 (Bottom).
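The three 3D regimes can be summarized in a small helper that maps the ratio ∆0/∆m onto the junction type, using the thresholds ∆0c = 0.29∆m and ∆0* = 0.5∆m quoted above (a hypothetical convenience function, not part of the original analysis):

```python
def junction_type_3d(delta0_over_dm, d0c=0.29, d0star=0.5):
    """Classify the 3D superconducting p-n junction when the bulk is BCS,
    per the thresholds quoted in the text."""
    if delta0_over_dm < d0c:
        return "BCS-BEC-BCS"
    if delta0_over_dm < d0star:
        return "BCS-BEC-I-BEC-BCS"
    return "BCS-I-BCS"
```

With the Fig. 9 parameters, ∆0 = 0.285∆m, 0.3∆m, and 0.52∆m fall into the three classes in order.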
Also, the fact that the quasiparticle excitation gap remains finite throughout the p-n junction when there is a BEC layer, while there are gapless excitations in a conventional BCS-I-BCS junction, might be amenable to experimental detection. Moreover, by attaching sufficiently small leads in the lateral direction, the superconducting p-n junction may enable one to study the transport properties of the BEC layers directly. As qualitatively outlined in Ref. 14, the superconductor critical current I_c is expected to be dominated by the bulk superconducting order parameter ∆ and the normal small-voltage resistance R_n of the p-n junction, yielding for identical ∆ on both sides of the junction I_c R_n = π∆/(2e). The presence of a BEC layer might modify that product due to the spatial variation of the order parameter and of the quasiparticle excitation gap; see Figs. 8 and 9. We leave the derivation as a task for further studies.

For a conventional semiconductor with ϵ = 10, ∆Φ = 1 V, and N_D = N_A = 10^18 cm^−3, the depletion width is d ≈ 50 nm,14 whereas in p-n junctions of cuprate semiconductors ∆Φ can be several volts and N_D = N_A = 5 × 10^21 cm^−3, yielding only d ≈ 1 nm, which is of the same order as the thickness of the oxide barriers in typical Josephson junctions. Indeed, cuprate semiconductors with a superconducting phase for both hole and electron doping have been found (see Ref. 21 for a review), which may therefore be realizations of homogeneous p-n junctions, where we can expect BEC layers of a thickness of the order of d_BEC ≈ 1 nm. The theory can be extended to hetero-junctions with two different host materials, with different band gaps on the n- and p-doped sides of the junction, resulting in band discontinuities at the junction; it remains to be studied what effect this has on the existence of a BEC layer. The I(V) characteristics of superconducting p-n junctions have been discussed qualitatively in Ref. 14.
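The order-of-magnitude estimates can be checked in SI units, where the symmetric Gaussian-unit expression for the per-side depletion length converts to d = √(ε_r ε0 ∆Φ/(eN)) (a hedged conversion; the cuprate numbers assume ∆Φ = 3 V and ε_r = 10 for illustration, since the text only says "several volts"):

```python
# SI-unit estimate of the total depletion width 2d of a symmetric p-n junction.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
E_CH = 1.602e-19   # elementary charge, C

def total_depletion_width(eps_r, dphi_volts, n_per_m3):
    d = (eps_r * EPS0 * dphi_volts / (E_CH * n_per_m3)) ** 0.5
    return 2.0 * d

# Conventional semiconductor: eps_r = 10, dPhi = 1 V, N = 1e18 cm^-3 = 1e24 m^-3
w_semi = total_depletion_width(10.0, 1.0, 1e24)      # tens of nanometers
# Cuprate-like values (assumed): dPhi = 3 V, N = 5e21 cm^-3 = 5e27 m^-3
w_cuprate = total_depletion_width(10.0, 3.0, 5e27)   # about a nanometer
```

The first case comes out near 50 nm and the second near 1 nm, consistent with the scales quoted in the text.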
We leave it for future work to extend our theory to include a potential difference, thereby allowing a quantitative derivation of the current-voltage characteristics, and to study what consequences BEC layers have for the I(V) characteristics. Recently, Josephson junctions in the BCS-BEC crossover regime have been reviewed in Ref. 22. These authors did not discuss the appearance of a BEC layer at the junction when the bulk is in the BCS phase. However, since the carrier concentration is reduced in the vicinity of an oxide layer, we expect that a BEC layer may also appear at such BCS Josephson junctions with an oxide layer, a question we leave for future research.

IV. CONCLUSIONS

Motivated by Ref. 12, we have analyzed the superconductor-insulator transition at T = 0 K, both for a constant density of states and for a more realistic three-dimensional density of states, as a function of doping level and band gap. The chemical potential was found as a function of the semiconductor band gap ∆0 and the doping level ε. We observe that the chemical potential can drop below the band edge, which indicates the transition to a BEC phase. While in 2D we find direct transitions between the BEC and BCS phases when increasing the doping level for all band gaps ∆0, in 3D we find no BEC phase for large band gaps ∆0 > ∆0*, where there is rather a direct insulator-to-BCS transition when increasing the doping level ε. In superconducting p-n junctions, we find that layers of BEC condensate can appear even when the bulk semiconductor is in the BCS phase. For 2D systems, we thus have two kinds of junctions when the bulk is in the BCS phase: a BCS-BEC-BCS junction, with a layer of Bose-Einstein condensate in the space-charge region, and a BCS-BEC-I-BEC-BCS junction, with two finite layers of Bose-Einstein condensate on the two sides of the space-charge region, separated by an insulator. In both types of junctions the quasiparticle excitation gap remains finite throughout the junction.
For the 3D density of states, we instead identify three kinds of junctions: a BCS-BEC-BCS junction; a BCS-BEC-I-BEC-BCS junction; and, for large semiconductor band gaps exceeding ∆0*, a conventional BCS-I-BCS junction, in which the quasiparticle excitations are gapless at the boundary of the space-charge region, where the BCS order parameter vanishes. The local BEC condensate could be detected experimentally by measuring the local spectral density by means of STM: when tunneling into a BEC condensate, the quasiparticle density of states N(E) has the shape shown in Fig. 3 (Bottom), with a small peak in the conduction band on the n-doped side and in the valence band on the p-doped side. Furthermore, when there is a BEC condensate, the quasiparticle excitations have a gap throughout the p-n junction. The BEC layer might also be accessible with local contacts for direct transport studies in the lateral direction of the p-n junction. In our study we have assumed zero temperature, T = 0 K, and it remains to be extended to finite temperatures. Furthermore, by using the mean-field approximation of the many-body physics we did not take into account quantum fluctuations, such as Kosterlitz-Thouless fluctuations of the order parameter. The disorder introduced by the dopants will furthermore lead to Anderson localization of charge carriers and accordingly may result in disorder-localized bosons at the p-n junction, rather than an extended BEC layer. These issues will be the subject of future research.

FIG. 1: Left: two-band model with valence-band edge E_V = −∆0 and conduction-band edge E_C = ∆0, semiconductor band gap 2∆0, and total band width 2D. The attractive pairing acts in an energy range 2ω_D around the Fermi energy ε_F = E_c + ε, with doping level ε. Right: model densities of states as a function of energy: 2D DOS (blue) and 3D DOS (green).

FIG. 2: 2D (Top) and 3D (Center) quasiparticle density of states, Eq.
(4), as a function of the quasiparticle energy E, when the chemical potential is in the conduction band. (Bottom) Quasiparticle density of states as a function of quasiparticle energy E for the 3D DOS, when the chemical potential is in the semiconductor band gap.

FIG. 3: Top: superconducting order parameter ∆; Bottom: chemical potential µ, as functions of the semiconductor band gap ∆0 at different doping levels ε, for T = 0 K and the 2D constant density of states. ∆m is the superconducting order parameter of the metallic (∆0 = 0) case, Eq. (5). The dashed line is the conduction-band edge E_c. When µ crosses below E_c, a crossover from BCS to BEC occurs.

FIG. 4: BCS-BEC crossover diagram: doping parameter ε versus semiconductor band gap ∆0 in units of ∆m, as obtained from the crossover condition µ = ∆0 for the 2D DOS.

FIG. 5: Top: superconducting order parameter ∆; Bottom: chemical potential µ, as functions of the semiconductor band gap ∆0 at different doping levels ε, for T = 0 K and the 3D density of states. ∆m is the superconducting order parameter of the metallic (∆0 = 0) case, Eq. (5). The dashed line marks the position of the conduction-band edge E_c. When µ crosses below E_c, a crossover from BCS to BEC occurs.

FIG. 6: BCS-BEC crossover diagram: doping parameter ε versus semiconductor band gap ∆0 in units of ∆m, as obtained from the crossover condition µ = ∆0 for the 3D DOS.

B. Two-band model with 3D DOS

Next, we consider a density of states (DOS) which is more realistic for three-dimensional semiconductors, shown in Fig. 1 (right, green): the DOS has a square-root dependence on the energy in the conduction band, ρ(

FIG. 7: Energy-band diagram of the p-n junction with the spatial variation of the band edges E_C(x), E_V(x) (black). Superconductivity caused by the attractive interaction shifts the band edges to E_CS(x), E_VS(x) (red). The chemical potential remains constant without external bias (blue).
FIG. 8: Order parameter ∆(x), quasiparticle gap ∆̃(x), and coherence length ξ(x) across two types of 2D p-n junctions. (Top) BCS-BEC-BCS junction with semiconductor gap ∆0 = 0.48∆m and doping ε = 0.26∆m. (Bottom) BCS-BEC-I-BEC-BCS junction with semiconductor gap ∆0 = 0.6∆m and doping ε = 0.26∆m.

FIG. 9: Spatial variation of the order parameter ∆(x) and the quasiparticle gap ∆̃(x) across the 3D p-n junction. (Top) BCS-BEC-BCS junction with semiconductor gap ∆0 = 0.285∆m < ∆0c and doping ε = 0.26∆m. (Center) BCS-BEC-I-BEC-BCS junction with semiconductor gap ∆0 = 0.3∆m > ∆0c and doping ε = 0.26∆m. (Bottom) BCS-I-BCS junction with semiconductor gap ∆0 = 0.52∆m and doping ε = 0.26∆m, showing the appearance of gapless quasiparticle excitations at the boundary between the BCS and the insulator phase.

References

[1] M. L. Cohen, Rev. Mod. Phys. 36, 240 (1964).
[2] W. Hanke and M. J. Kelly, Phys. Rev. Lett. 45, 1203 (1980).
[3] X. Lin, Z. Zhu, B. Fauque, and K. Behnia, Phys. Rev. X 3, 021002 (2013).
[4] E. A. Ekimov, V. A. Sidorov, E. D. Bauer, N. N. Melnik, N. J. Curro, J. D. Thompson, and S. M. Stishov, Nature 428, 542 (2004).
[5] X. Blase, Ch. Adessi, and D. Connetable, Phys. Rev. Lett. 93, 237004 (2004).
[6] E. Bustarret, J. Kamarik, C. Marcenat, E. Gheeraert, C. Cytermann, J. Marcus, and T. Klein, Phys. Rev. Lett. 93, 237005 (2004).
[7] E. Bustarret et al., Nature 444, 465 (2006); R. Skrotzki et al., Appl. Phys. Lett. 97, 192505 (2010).
[8] L. N. Cooper, Phys. Rev. 104, 1189 (1956).
[9] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 106, 162 (1957).
[10] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).
[11] D. M. Eagles, Phys. Rev. 186, 456 (1969).
[12] P. Nozieres and F. Pistolesi, Eur. Phys. J. B 10, 649 (1999).
[13] I. Takeuchi, S. N. Mao, X. X. Xi, K. Petersen, C. L. Lobb, and T. Venkatesan, Appl. Phys. Lett. 67, 2872 (1995).
[14] J. Mannhart, A. Kleinsasser, J. Ströbel, and A. Baratoff, Physica C 216, 401 (1993).
[15] X. Zhang, S. Saha, N. P. Butch, K. Kirshenbaum, J. Paglione, and R. L. Greene, Appl. Phys. Lett. 95, 062510 (2009).
[16] C. Wu, M.-J. Wang, and M.-K. Wu, Physica C 460-462, 424 (2007).
[17] M. Marini, F. Pistolesi, and G. C. Strinati, Eur. Phys. J. B 1, 151 (1998).
[18] F. Pistolesi and G. C. Strinati, Phys. Rev. B 49, 6356 (1994).
[19] J. R. Schrieffer, Theory of Superconductivity (Perseus Books, 1999).
[20] A. Ghosal, M. Randeria, and N. Trivedi, Phys. Rev. Lett. 81, 3940 (1998); A. Ghosal, M. Randeria, and N. Trivedi, Phys. Rev. B 63, 020505 (2000); K. Bouadim, Y. L. Loh, M. Randeria, and N. Trivedi, Nature Physics 7 (2011); M. V. Feigelman, L. B. Ioffe, V. E. Kravtsov, and E. A. Yuzbashyan, Phys. Rev. Lett. 98, 027001 (2007); M. Feigelman, L. Ioffe, V. Kravtsov, and E. Cuevas, Annals of Physics 365, 1368 (2010); I. Burmistrov, I. Gornyi, and A. Mirlin, Phys. Rev. Lett. 108, 017002 (2012); A. M. Finkelstein, Physica B 197, 636 (1994); B. Sacepe, T. Dubouchet, C. Chapelier, M. Sanquer, M. Ovadia, D. Shahar, M. Feigelman, and L. Ioffe, Nat. Phys. 7, 239 (2011).
[21] M. R. Norman and C. Pépin, Rep. Prog. Phys. 66, 1547 (2003).
[22] A. Spuntarelli, P. Pieri, and G. C. Strinati, Physics Reports 488, 111 (2010).
DOI: 10.1038/s41598-018-37058-9 · arXiv: 1805.10775
Asymmetry-enriched electronic and optical properties of bilayer graphene

Bor-Luen Huang ([email protected])¹,², Chih-Piao Chuu², and Ming-Fa Lin³,⁴

¹ Department of Physics, National Cheng Kung University, Tainan 701, Taiwan
² Physics Division, National Center for Theoretical Sciences, Hsinchu 300, Taiwan
³ Hierarchical Green-Energy Materials Research Center, National Cheng Kung University, Tainan 701, Taiwan
⁴ Quantum Topology Center, National Cheng Kung University, Tainan 701, Taiwan

Scientific Reports 9, 859 (2019). DOI: 10.1038/s41598-018-37058-9. Received: 1 October 2018; Accepted: 29 November 2018. Correspondence and requests for materials should be addressed to B.-L.H.

The electronic and optical response of Bernal-stacked bilayer graphene with geometry modulation and gate voltage is studied. The broken symmetry in the sublattices, the one-dimensional periodicity perpendicular to the domain wall, and the out-of-plane axis introduce substantial changes in the wavefunctions, such as gapless topologically protected states, standing waves with bonding and antibonding characteristics, and rich structures in the density of states and optical spectra. The wavefunctions form well-behaved standing waves in the pure system and complicated node structures in the geometry-modulated system. The optical absorption spectra show forbidden optical excitation channels, prominent asymmetric absorption peaks, and dramatic variations in the absorption structures. These results suggest that the geometry-modulated structure with a tunable gate voltage could be used for electronic and optical manipulation in future graphene-based devices.

The layered-structure materials have stimulated enormous studies in the condensed-matter and high-energy physics communities.
Since the discovery of single-layer graphene in 2004¹, various 2D materials with stable planar or buckled structures have been realized, e.g., mono-element group-IV systems including silicene², germanene³, and tinene⁴. One of the exciting findings in graphene-related systems is a new playground for quantum electrodynamics. For both fundamental studies and applications, understanding how to manipulate 2D materials has become urgent⁵,⁶. For few-layer graphene, the stacking configuration and external fields play critical roles in diversifying the essential physical properties⁷,⁸. Bilayer graphenes (BLGs) with AB or AA stacking possess electronic structures⁹, absorption spectra¹⁰, and Coulomb excitations¹¹ different from each other. In general, there are different kinds of stacking, including AB, AA, sliding, twisting, and geometry-modulated BLGs. Moreover, AB-stacked BLG develops a gap when an external electric field is applied¹².

Recently, extensive interest has focused on the topological characteristics of materials¹³⁻¹⁵, which host gapless topologically protected states at boundaries and geometric defects. The domain wall (DW) formed between two oppositely biased regions of BLG was first proposed by Martin et al.¹⁶ to host one-dimensional topological states, which are distinct in nature from edge states¹⁷. It is believed that this kink-type topological defect hosts symmetry-protected gapless modes because of a change in the Chern number¹⁸,¹⁹. However, the experimental realization of such electric-field-defined walls is extremely challenging. An alternative approach is to exploit different stacking orders, which creates diverse new physics in multilayer systems¹⁸. Such crystalline topological line defects exist naturally in Bernal-stacked BLGs grown by chemical vapour deposition (CVD)²⁰⁻²² and in BLGs exfoliated from graphite²³⁻²⁵. This kind of DW in BLGs can be moved by mechanical stress exerted through an atomic force microscope (AFM) tip²⁶.
It is also possible to manipulate DWs with designed structures by controlling the movement of the AFM tip.

The electronic structures and optical absorption show various pictures for different systems. According to their low-lying electronic structures, AA, AA′ (BLG with a special sliding vector), and AB stackings possess, respectively, vertical two-Dirac-cone structures, non-vertical two-Dirac-cone structures, and two pairs of parabolic bands²⁷. The density of states (DOS) and the optical absorption spectra exhibit 2D van Hove singularities (vHSs), whose special structures strongly depend on the character of the relevant dimension. Similar 2D phenomena appear in sliding and twisted systems⁹,²⁸,²⁹, although important differences are revealed between them. BLG with sliding configurations shows highly distorted energy dispersions with an eye-shaped stateless region accompanied by saddle points. In twisted BLGs, the moiré superlattice enlarges the primitive unit cell and thus creates a large number of 2D energy subbands. The geometry-modulated BLG is expected to generate many characteristics different from the 2D bulk properties. Many works have studied the band structures of AB-BA BLG using effective models¹⁹,²⁸,³⁰ or tight-binding models with a sharp DW¹⁸,³¹. A finite but short DW width has been studied in Ref. 18 by first-principles calculation. However, the width of each domain should be large enough to avoid interactions between the states of different DWs. Reference 32 has studied DW states for large domain and DW widths.

In this work, we investigate the electronic and optical properties of geometry- and gate-modulated BLGs within a tight-binding model. We present energy subbands, tight-binding functions on distinct sublattices, and the DOS obtained by exact diagonalization. Optical absorption spectra are calculated within the gradient approximation. We study the effects caused by geometry modulation and gate voltage.
The band structures show diverse 1D phenomena, including 1D energy subbands with various band-edge states, band splitting induced by the geometry modulation, and metallic behavior in the geometry-modulated system under a gate voltage. The wavefunctions form well-behaved standing waves in the system without geometry modulation, while their node structures become complicated when geometry modulation and/or a gate voltage is applied. We discuss the wavefunctions of the topologically protected DW states. In the optical absorption spectra, we find excitation channels that are forbidden under specific linear relations between layer-dependent sublattices, prominent asymmetric absorption peaks in the absence of a selection rule, and DW- and gate-induced dramatic variations in the absorption structures. We also discuss the connection to experiments; our predictions could be verified by experimental measurements.

Figure 1 shows a schematic diagram of stacking DWs, also known as stacking solitons, between AB and BA domains in Bernal-stacked BLG. Such topological line defects involve both tensile and shear tilt boundaries, as a consequence of the competition between the strain energy and the misalignment energy cost within the DW, and span a few to ten nanometers in space²⁰. In this paper, we consider the tensile soliton, which has a zigzag boundary along the DW, created by uniformly stretching the C-C bonds of one layer along the armchair direction by one atomic bond distance. We assume the system has translational invariance along the tangential (ŷ) direction. For pristine Bernal-stacked BLG, all carbon atoms in a primitive unit cell are labeled by top- and bottom-layer indices and by A- and B-sublattice indices, (A₁, B₁, A₂, B₂). The positions of these atoms can further be classified as even or odd for each sublattice.
Because the wavefunctions with even and odd position indices only differ by a π phase, the considered bilayer system will be described by the components of the wavefunctions with odd position indices. In our notation, A₁ and A₂ (B₁ and B₂) are on top of each other in the AB (BA) domain, as shown in Fig. 1(b).

Methods

The low-energy Hamiltonian is described by the single-orbital p_z tight-binding model,

H = −Σ_{r,r′} t(r, r′) [c⁺(r′) c(r) + h.c.],    (1)

where c⁺(r) and c(r) are creation and annihilation operators at position r, respectively. The hopping amplitude is characterized by the empirical formula of Eq. (2)⁹,²⁸,³³, where d = |r − r′| is the distance connecting two lattice points, b₀ = 1.42 Å the in-plane C-C bond length, d₀ = 3.35 Å the interlayer distance, and ρ = 0.184 b₀ the characteristic decay length. γ₀ = −2.7 eV is the intralayer nearest-neighbor hopping integral and γ₁ = 0.48 eV is the interlayer interaction. We consider a system of 100 unit cells for each domain with various widths of the DWs (N_dw strained unit cells with a modified lattice constant a′ = a(3N_dw + 4)/(3N_dw + 2) for the geometry-modulated layer in each DW region, where a = √3 b₀ is the lattice constant of single-layer graphene). The cut-off distance for the hopping amplitude is set to d∥ ≤ 2a. Because of the translational invariance along the zigzag direction, the quasi-1D Hamiltonian is a real symmetric matrix, which possesses a real eigenfunction for each eigenvalue. (If Hψ = Eψ with real E and real symmetric H, then Hψ* = Eψ*. When H has no degeneracy, ψ and ψ* describe the same state, so ψ can be chosen real. When H has degeneracies, two of the eigenfunctions can be chosen as ψ + ψ* and i(ψ − ψ*), which are real. Therefore, all eigenfunctions of H can be chosen real.) Opposite on-site energies on the two layers are included to describe the layer-dependent Coulomb potential, simulating the effect of a gate voltage³⁴.
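The remark that a real symmetric Hamiltonian always admits real eigenfunctions can be illustrated numerically; `numpy.linalg.eigh` already returns real eigenvectors for real symmetric input. A hedged illustration (requires NumPy; not the paper's code, a random matrix stands in for the quasi-1D Hamiltonian):

```python
import numpy as np

# A random real symmetric matrix stands in for the quasi-1D tight-binding Hamiltonian.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
H = 0.5 * (A + A.T)

# eigh diagonalizes real symmetric matrices; its eigenvectors come out real.
E, V = np.linalg.eigh(H)

assert np.isrealobj(V)            # eigenfunctions can be chosen real
assert np.allclose(H @ V, V * E)  # H v_n = E_n v_n, column-wise
```

The same property is what allows the tight-binding amplitudes on each sublattice to be plotted as real standing waves in the Results section.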
The hopping amplitude in Eq. (1) takes the form

t(r, r′) = γ0 e^{−(d − b0)/ρ} [1 − (d_z/d)²] + γ1 e^{−(d − d0)/ρ} (d_z/d)²,  (2)

where d_z is the out-of-plane component of r − r′.

When BLG is subject to an electromagnetic field at zero temperature, occupied valence states are excited to unoccupied conduction ones by the incident photons. In general, only vertical transitions can occur because of the negligible momenta carried by photons. Based on the Kubo formula for the linear response to an electromagnetic field, the optical absorption function is given by 35-39

A(ω) ∝ Σ_{c,v} ∫ dk |⟨Ψ^c(k)| Ê·P/m_e |Ψ^v(k)⟩|² × Im[ 1/(E^c(k) − E^v(k) − ω − iΓ) ],  (3)

where P is the momentum operator, Ê the unit vector of the electric polarization along ŷ, m_e the bare electron mass, and Γ the broadening factor due to various de-excitation mechanisms. For a clean sample, physical properties can be observed at sufficiently low temperature, and Γ (= 2 meV) is small enough for observing the fine structures. The superscripts c and v represent the indices of conduction and valence subbands, respectively. The velocity matrix element, ⟨Ψ^c(k)| Ê·P/m_e |Ψ^v(k)⟩, for the optical properties of carbon-related sp² bonding is evaluated within the gradient approximation 37,38. The velocity operator is calculated as ∂H(k)/∂k_y. The γ0-dependent velocity matrix elements dominate the optical excitations. Whether a vertical excitation channel survives is mainly determined by the relation between the eigenfunction of the A-sublattice of the initial state and that of the B-sublattice of the final state within each layer, or vice versa. E^c(k) − E^v(k) is the excitation energy. The joint density of states (JDOS), defined by setting the velocity matrix elements in Eq. (3) equal to one, determines the available number of vertical optical excitation channels. Since the JDOS is directly linked to the band-edge states, it exhibits prominent structures.

Results and Discussion

Without geometry modulation.
Low-lying band structures of pure and geometry-modulated BLGs exhibit unusual features. For pristine AB stacking graphene, the first conduction and valence bands are 2D parabolic bands that slightly overlap with each other around K = (±2π/3a, ±2π/3a). We first study the energy bands of AB stacking graphene without geometry modulation around K (k_y a = 2π/3) in Fig. 2(a). The 1D parabolic dispersions are due to zone-folding effects. Because of translational invariance along the x direction, each band in Fig. 2(a), except the first conduction and valence bands, involves two degenerate states, which can be referred to ±k_x of the pristine AB stacking graphene. (The first conduction and valence bands correspond to the high-symmetry line and are therefore non-degenerate.) The band indices are thus closely related to the discrete k_x obtained by a Fourier transformation along the x direction. The energy dispersion along k_x is almost negligible owing to the sufficiently large unit cell. The particle-hole symmetry is broken because of non-vanishing hopping integrals between atoms separated by more than the nearest-neighbor spacing. According to the state energies measured from E_F, the first valence and conduction bands (v1, c1) touch each other around K and have a very weak overlap. As shown in Fig. 3(a), these two states are degenerate at K and can have any ratio between the weights of the B1- and B2-sublattices through linear combination. Therefore, the wavefunctions at K are localized only at the B1- and B2-sublattices, which are on top of each other. Notice that, because of the armchair shape in the x direction, the positions of atoms can further be classified by even or odd indices. Within the same sublattice, the eigenfunctions with even and odd position indices only have a π phase difference. For clear presentation, we only show the components with odd position indices.
With increasing subband index, the wavefunctions become well-behaved standing waves instead of uniform spatial distributions. For (v2, c2) or (v′2, c′2), the states have dominant components on the B1- and B2-sublattices and minor weights on the A1- and A2-sublattices, which means that a finite momentum away from K smears the wavefunction through finite weights on the sublattices with dangling atoms. The tight-binding functions show standing-wave behavior with two nodes for each component in the spatial distributions, owing to the linear combination between ±k_x mentioned before and the constraint that the eigenfunctions of a real symmetric matrix can be chosen real (see Methods). The π/2 phase shift for each sublattice between v2 and v′2, or between c2 and c′2, is consistent with the uniform distribution of the bulk system. Our results also show that the eigenfunctions of the B1- and B2-sublattices are almost in-phase for conduction bands and out-of-phase for valence bands, while those of the A1- and A2-sublattices are almost out-of-phase for conduction bands and in-phase for valence bands. The slight phase shift between A1 and A2, or between B1 and B2, is intrinsic and depends on the material parameters. The phase shift becomes less obvious for modes with larger band indices. For the next band index, the tight-binding functions exhibit four-node standing waves (as shown in Fig. 3(g and h)). The number of nodes, which reflects the quantization in the x direction, is fixed for all sublattices of each wavefunction and grows continuously with the subband index.

Geometry modulation.

Dramatic changes in the electronic structure appear in the geometry-modulated BLGs (Fig. 1). Because of the more complicated interlayer hopping integrals, the asymmetry of the energy spectrum about the Fermi energy is greatly enhanced by the geometric modulation, as clearly displayed in Fig. 2(b).
The overlap of the valence and conduction bands becomes larger, and so do the free electron and hole densities. The destruction of the inversion symmetry leads to the splitting of the doubly degenerate states, creating more pairs of neighboring energy subbands. The split electronic states open more excitation channels and additional structures in the optical absorption spectrum. Most of the energy bands have parabolic dispersions, while (v1, c1) and (v2, c2) around half filling exhibit oscillating and crossing behaviors. (The indices of valence and conduction bands start from the Fermi energy, as representatively shown in Fig. 2(b).) In general, the band-edge states in the various energy subbands deviate considerably from K. They are responsible for the vHSs in the DOS and thus for the number of vertical optical transition channels. The spatial distributions of the wavefunctions correspond to unusual standing waves. Symmetric and antisymmetric standing waves for each tight-binding function thoroughly disappear under the modulation of the stacking configuration, as clearly indicated in Fig. 4. The tight-binding functions on the four sublattices roughly obey a linear superposition relationship within the AB and BA domains: when the weight is large in one domain, it becomes small in the other. Since the modulated interlayer hopping amplitudes and the relaxed intralayer hopping amplitudes act as scattering centers for the extended solutions of the infinite system, the weight within the DWs is small compared with that in the domains. Each component of a wavefunction still has a fixed number of nodes, which grows with the 1D subband index, e.g., two and four nodes in the (v1/v′1, c1/c′1) and (v2/v′2, c2/c′2) subbands, respectively. A wavefunction with zero nodes disappears because of the lack of translational invariance in the x direction. The positions of the nodes can be located within the domains or the DWs.
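The node counting used throughout this section can be made concrete: for a real tight-binding function sampled on the lattice, an interior node is simply a sign change between neighboring samples. A minimal sketch, using assumed sine-wave profiles rather than the actual eigenfunctions:

```python
import numpy as np

def count_nodes(psi, tol=1e-10):
    """Count interior nodes (sign changes) of a sampled real wavefunction."""
    s = np.sign(psi)
    s = s[np.abs(psi) > tol]          # drop samples sitting exactly on a node
    return int(np.sum(s[:-1] * s[1:] < 0))

# Standing waves on an open interval: the m-th profile has m interior nodes,
# mimicking the growth of the node number with subband index in Figs 3 and 4.
x = np.linspace(0.0, 1.0, 201)[1:-1]
for m in (1, 2, 3):
    psi = np.sin(np.pi * (m + 1) * x)
    assert count_nodes(psi) == m
```

The same counter applies unchanged to the irregular standing waves of the modulated system, where the node positions may fall inside either the domains or the DWs.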
The left (right) domain, which corresponds to BA (AB) stacking BLG, contains the major weight from the eigenfunctions of the A1- and A2- (B1- and B2-) sublattices, consistent with the case without geometry modulation. For conduction (valence) bands, the eigenfunctions with major weight are in-phase (out-of-phase) and those with minor weight are out-of-phase (in-phase). On the other hand, for conduction (valence) bands, the eigenfunctions of A1 and B1 are symmetric (anti-symmetric) and anti-symmetric (symmetric) for c1 and c2 (v1 and v2), respectively. Similar behaviors of the eigenfunctions are found for larger energy subband indices. Because the major weight is confined within one domain, the phase shift becomes less obvious, similar to the cases with more than two nodes in the uniform system.

Gate voltage.

Electronic energy spectra are greatly affected by an external electric field. For pristine AB stacking BLG, the gate voltage (V_z) creates a band gap 8. Roughly speaking, for AB-BA BLG, each domain still prefers to open a gap when a gate voltage is applied. However, the characteristics of the wavefunctions in the AB domain differ from those in the BA domain, as discussed in the previous section. To connect these two subsystems, the geometry-modulated BLG with a gate voltage must host metallic states within the gap. The reason is likely that the two subsystems carry different valley Chern numbers 18. Figure 2(c) shows rich and unique energy dispersions around the Fermi level. Most energy levels are repelled away from the Fermi energy by V_z, where the energy dispersions become weak within a certain range of k_y. Moreover, two Dirac dispersions around K lead to a finite contribution to the DOS and provide metallic behavior near the Fermi energy, in great contrast with the semiconducting behavior of gated AB stacking BLGs. The valence and conduction wavefunctions become highly complicated, as displayed in Fig.
5, when the system sustains both geometry modulation and a gate voltage. In AB stacking BLG with a positive gate voltage along the z direction, which breaks inversion symmetry, the tight-binding functions of the B2-sublattice are mainly responsible for the valence band states and those of the B1-sublattice for the conduction band states. This differs from the un-gated case, where the components are distributed evenly between the top and bottom layers, as shown in Fig. 4. In other words, electron states are driven to the top layer and hole states stay at the bottom layer; the BLG is therefore polarized under the electric field. This property of the applied gate voltage is maintained in the geometry-modulated BLGs. The left (right) column of the plots in Fig. 5, corresponding to conduction (valence) bands, shows a major contribution from the bottom (top) layer, except for the first row of plots. We shall discuss these two states in more detail later. As shown in Fig. 5(c-h), the abnormal standing waves show irregular features in their oscillatory forms, amplitudes, numbers of nodes, and relationships among the four sublattices. In general, no analytic sine or cosine waves are suitable for the spatial distributions of the wavefunctions. Because the gated AB BLG has band crossovers in both the conduction and valence bands, the number of nodes does not grow monotonically with the state energy. Furthermore, it might be identical or different between the top and bottom layers. For example, the v1 (v2) subband at K exhibits 4- and 4-zero-point (4- and 6-zero-point) subenvelope functions on the top and bottom layers, respectively (Fig. 5(a and c)). If we examine the wavefunctions from higher to lower energy (from Fig. 5(h to b) of the conduction bands through (a) up to (g) of the valence bands) at fixed k_y, we find that the major weights of the wavefunctions in the corresponding domains seem to inject into the opposite domains through both DWs, with some rebounds.
Similar behavior is found in the opposite direction. As a result, the simple linear combinations of the (A1, A2)- and (B1, B2)-related tight-binding functions disappear. These features of the wavefunctions near the Fermi energy are expected to induce very complex optical excitation channels at small frequencies. To compromise between the different characteristics in the different DWs, the major components of the wavefunctions leak into the other sublattices at energies outside the corresponding quasi-1D bulk bands. Such a non-uniform distribution can only be created near the DWs. A pair of special wavefunctions, with significantly enhanced weight within the DWs, is shown in Fig. 5(a,b). These DW states are robust once the system has a gate voltage. The localization in Fig. 5(b) is weak, since the energy level at K is close to the band edges created by the gate voltage. When k_y is set near the crossing points (K_L and K_R), the DW states become apparent, as shown in Fig. 6. In Fig. 6, instead of tight-binding functions, we present the probability distribution of each component in real space, which can be directly measured in experiments (discussed later). Note that, even though we call these localized states DW states, the weight outside the DW is still large, and the corresponding decay length is around several nanometers for V_z = 0.1 eV. The decay length becomes smaller as V_z increases. Therefore, the large domain size considered in this paper is crucial for having well-defined DW states when the applied gate voltage is limited. Two DW states, localized at each DW, belong to two different linearly dispersive bands but have group velocities of the same sign. The DW states localized at the first DW have negative group velocities along ŷ and dominant weights on the top (bottom) layer when k is around K_L (K_R), as shown in Fig. 6(a,d,e,h).
On the other hand, the DW states localized at the second DW have positive group velocities along ŷ and dominant weights on the bottom (top) layer when k is around K_L (K_R), as shown in Fig. 6(b,c,f,g). The dominant component of these DW states can differ from that of the other states, which may be regarded as bulk states of this gated geometry-modulated system. For example, the probability distribution of the A1 component of the DW states in Fig. 6(f and g) appears in the right (AB) domain with countable weight in the DW, while that of the bulk states is usually located in the left (BA) domain. These unusual properties of the DW states, presented in the real-space wavefunctions, might be used to distinguish topologically protected states from the others.

Density of states.

The main features of the electronic structures are directly reflected in the DOS, as shown in Fig. 7. The pristine system has a finite DOS at E_F (the dotted line in Fig. 7(a)), indicating the gapless property. The weak overlap between the conduction and valence bands leads to a very small shoulder structure at E_F. Apart from a pair of shoulder structures around γ1, which come from the valence and conduction bands of the second group, the absence of special structures near E_F is consistent with first-principles calculations 40. In this work, we focus on the fine structure within a small energy region and do not show the shoulder structures around γ1 in Fig. 7. For a finite system without geometry modulation, Fig. 7(a) presents many asymmetric DOS peaks divergent in square-root form, due to the 1D parabolic energy subbands shown in Fig. 2(a). On the other hand, the geometry modulation creates additional peaks (Fig. 7(b-c)), corresponding to the splitting of the doubly degenerate states (Fig. 2(b-c)). A finite DOS at E_F in the geometry-modulated system shows metallic behavior, combined with a pair of rather strong peak structures around ±0.01 eV.
The latter are related to the weakly dispersive energy bands near E_F. With a gate voltage, the DOS near E_F becomes a plateau structure, created by the linear energy dispersions across E_F shown in Fig. 2(c). Two very prominent peaks appear around ±V_z, mainly due to the weak dispersions generated by the gate voltage. There are also some sub-structures for |E| < V_z, which correspond to band-edge states with major weights around the DWs. These accidental sub-structures come from the large size of the DWs.

Optical absorption.

The geometry- and gate-modulated BLG exhibits rich and unique optical properties. The JDOS gives the weight of vertical optical excitations between different bands. For pristine AB stacking BLG, the JDOS increases monotonically within the considered frequency region 41, for the same reason that the DOS increases monotonically in the small energy region away from E_F. The inset of Fig. 8 shows peak structures in the JDOS of BLG without and with geometry modulation (the dashed curves), indicating many channels between different 1D subbands. The peaks in the JDOS are strong when the valence and conduction band-edge states have the same wave vector, e.g., (v1, c1, v2, c2) around K and (v2, c3) at k_y a ≈ 2.06 for BLG with DWs, as shown in Fig. 2(b). There are also some weak but observable structures due to non-vertical relations between different subbands. The peak structures in the JDOS are greatly reduced when the system is under a gate voltage, as shown in the inset of Fig. 9, because most states with |E| < V_z are repelled away from the Fermi energy (Fig. 2(c)). For the system without geometry modulation, the threshold for vertical channels is around V_z = 0.1 eV. However, some weak structures appear in the geometry-modulated BLG for ω < V_z because of the linear dispersions and the band-edge states with major weights around the DWs (Fig. 2(c)). This is one of the major differences in the optical absorption when geometry modulation is included.
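The broadened DOS and JDOS used above can be sketched numerically. The subbands below are toy 1D parabolas, not the computed band structure of the modulated BLG; the Lorentzian of width Γ plays the role of Im[1/(E − ω − iΓ)] in Eq. (3).

```python
import numpy as np

Gamma = 0.002  # broadening (eV), the value quoted in the text

def lorentzian(x, Gamma=Gamma):
    return (Gamma / np.pi) / (x**2 + Gamma**2)

def dos(E_grid, bands):
    """Broadened density of states; bands has shape (n_bands, n_k)."""
    return lorentzian(E_grid[:, None] - bands.ravel()[None, :]).sum(axis=1) / bands.shape[1]

def jdos(w_grid, Ev, Ec):
    """Joint DOS: Eq. (3) with all velocity matrix elements set to one."""
    out = np.zeros_like(w_grid)
    for v in Ev:                       # vertical transitions only: same k
        for c in Ec:
            out += lorentzian(w_grid[:, None] - (c - v)[None, :]).sum(axis=1)
    return out / Ev.shape[1]

# Toy 1D parabolic subbands with band edges at +-0.01 eV and +-0.05 eV
k = np.linspace(-0.5, 0.5, 401)
Ec = np.array([0.01 + 0.5 * k**2, 0.05 + 0.5 * k**2])
Ev = -Ec

E_grid = np.linspace(-0.25, 0.25, 1001)
g = dos(E_grid, np.vstack([Ev, Ec]))   # square-root peaks at the band edges
```

The 1D band edges produce the asymmetric square-root peaks discussed for Fig. 7, and the lowest JDOS peak sits at the band-edge transition energy (0.02 eV in this toy example).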
The structures in the optical absorption spectrum are determined by the available optical transition channels and the velocity matrix elements between them. From Fig. 8, we find that the optical gap and the energy gap are identical and equal to zero. The optical absorption spectrum also shows 1D absorption peaks. However, many peaks in the JDOS do not appear in the optical absorption spectrum, as shown in the insets of Figs 8 and 9. The relations between the wavefunctions of the valence and conduction bands strongly affect the appearance of absorption peaks. The v_n to c_n vertical excitations, between the same pair of valence and conduction bands, are forbidden. The main mechanism is the symmetric or anti-symmetric linear superposition of the (A1, A2) and (B1, B2) sublattices, as discussed for Fig. 3. When the geometry modulation is introduced, the peaks become weaker compared with the uniform system, and the optical absorption curve becomes smoother. The intensity, frequency, and structure of the special absorption features are very sensitive to changes in the width of the DW (Fig. 8) because of the shifting of the band-edge states of the valence and conduction subbands. We find red-shift phenomena when increasing the DW width and suppressed strength at large ω. The first effect is mainly due to more atoms being involved in the calculation and, as a result, more energy subbands. The second effect is related to the larger band splitting for a larger DW width. Also, the first absorption structure might be replaced by another excitation channel as the DW width varies. The changes in the structure of the optical absorption spectra become clear when various gate voltages are applied, as shown in Fig. 9. A reduced intensity and an enhanced number of absorption structures occur as V_z increases. This directly reflects the gradual changes of the energy dispersions, band-edge states, and wavefunctions with the gate voltage.
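The suppression of the v_n to c_n channels can be illustrated with a hypothetical two-sublattice toy model (not the paper's Hamiltonian): when the eigenstates are symmetric/anti-symmetric combinations of the two components and the velocity operator ∂H/∂k only couples them off-diagonally, the interband matrix element vanishes, while a symmetry-breaking on-site term (playing the role of a gate) re-opens the channel.

```python
import numpy as np

def H(k, delta=0.0, t=1.0):
    """Toy 2x2 Bloch Hamiltonian; delta is a gate-like sublattice asymmetry."""
    off = t * np.cos(k)
    return np.array([[delta, off], [off, -delta]])

def velocity(k, delta=0.0, dk=1e-6):
    """Velocity operator as the finite-difference dH/dk."""
    return (H(k + dk, delta) - H(k - dk, delta)) / (2.0 * dk)

k = 0.3

# Symmetric case: eigenstates are the (anti-)symmetric combinations
# (1, 1)/sqrt(2) and (1, -1)/sqrt(2); dH/dk has the same off-diagonal
# structure as H, so the interband matrix element vanishes (forbidden).
E, V = np.linalg.eigh(H(k))
M_forbidden = V[:, 1] @ velocity(k) @ V[:, 0]

# Broken symmetry: the eigenstates mix the two components unequally
# and the same matrix element becomes finite (allowed channel).
E, V = np.linalg.eigh(H(k, delta=0.4))
M_allowed = V[:, 1] @ velocity(k, delta=0.4) @ V[:, 0]
```

This mirrors the statement above that absorption peaks survive only when the symmetric or anti-symmetric superposition of the (A1, A2) and (B1, B2) sublattices is destroyed.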
The first absorption peak, corresponding to (v2, c3) at k_y a = 2.06, appears at 0.049 eV for V_z = 0. The first few absorption peaks are indicated by arrows above the curves, with the corresponding absorption channels, in Fig. 9. By carefully comparing the band structures and the corresponding optical absorption spectra, we find that the peaks involving v2 (whose band edges are sensitive to V_z) shift to higher frequency and become weaker as the gate voltage grows. We find strong peaks contributed by multiple optical excitation channels. Some weak absorption structures, corresponding to new optical excitation channels, emerge in between, as indicated by arrows below the curves in Fig. 9. New shoulder structures at frequencies around 2V_z become clearer for larger V_z because of the band edges induced by the gate voltage. We note that the optical absorption is greatly enhanced when the system has geometry modulation, as shown in the inset of Fig. 9. Most of the contribution comes from the linear dispersions and the band-edge states with major weights around the DWs (Fig. 2(c)).

Connection to experiments.

The periodic boundary condition in these asymmetry-enriched BLGs is responsible for the 1D electronic and optical properties. The energy dispersions may take linear, parabolic, and oscillatory forms, the first of which are the well-known vertical/non-vertical Dirac cones in the AA/AA′ stacking 9,27,42. The band-edge states, which are the critical points of the energy dispersion, correspond to extreme points, saddle points, and effective 1D constant-energy loops. The DOS presents various structures, including V-shaped structures from linear energy dispersions, shoulder structures from 2D band edges, logarithmically symmetric peaks from the vHSs, and square-root asymmetric peaks from 1D band edges. These four kinds of special structures can further be revealed in the optical absorption spectra 43.
An electric field creates optical and energy gaps in most BLG systems 7,8,12. Experimental measurements can verify the predicted band structures, DOS, wavefunctions, and optical absorption spectra. High-resolution angle-resolved photoemission spectroscopy (ARPES) can directly examine the energy dispersion; the measured results have confirmed the feature-rich band structures of carbon-related sp²-bonding systems. Graphene nanoribbons are identified to possess 1D parabolic energy subbands centered at the high-symmetry point, accompanied by an energy gap and non-uniform energy spacings 44. Recently, many ARPES measurements have been conducted on few-layer graphenes, covering the linear Dirac cone in the monolayer system 45-47, two pairs of parabolic bands in AB stacking BLG 45,48, the coexisting linear and parabolic dispersions in symmetry-broken bilayer systems 49, one linear and two parabolic bands in tri-layer ABA stacking 45,50, and the linear, partially flat, and sombrero-shaped bands in tri-layer ABC stacking 50. Bernal-stacked graphite possesses a 3D band structure, with bilayer- and monolayer-like energy dispersions at the K and H points of the first Brillouin zone, respectively. ARPES examinations of the geometry-modulated BLGs could reveal the unusual band structures, such as the splitting of electronic states, the diverse energy dispersions near E_F, the band-edge states, and the metallic properties. These directly reflect the strong competition between stacking symmetries, interlayer hopping integrals, and the Coulomb potential. Scanning tunneling spectroscopy (STS) measurements, in which the differential conductance (dI/dV) is proportional to the DOS, are very powerful for exploring the vHSs due to band-edge states and the metallic/semiconducting/semi-metallic behaviors.
They have successfully identified diverse electronic properties in graphene nanoribbons 51-53, carbon nanotubes 54,55, few-layer graphenes 56-62, and graphite 63,64. For graphene nanoribbons, the width- and edge-dominated energy gaps and the asymmetric peaks due to 1D parabolic bands are confirmed on precisely defined crystal structures 51-55,65. Similar prominent peaks appear in carbon nanotubes, where they present chirality- and radius-dependent band gaps and energy spacings between neighboring subbands 54,55. Plenty of STS measurements on few-layer graphenes clearly reveal diverse low-lying DOS, including a V-shaped dependence initiated from the Dirac point in the monolayer system 64, peak structures closely related to saddle points in asymmetric BLGs 57-59, an electric-field-induced gap in bilayer AB stacking and tri-layer ABC stacking 60,61, a pronounced peak at E_F due to partially flat bands in tri-layer and penta-layer ABC stackings 54,55, and a dip structure at E_F accompanied by a pair of asymmetric peaks arising from constant-energy loops in tri-layer AAB stacking 54. The measured DOS of AB-stacked graphite is finite near E_F, characteristic of the semi-metallic property 64, and exhibits the split π and π* strong peaks at deeper/higher energy 63. The focus of STS examinations of the geometry-modulated 22 and gated BLGs should be the square-root asymmetric peaks, the single- or double-peak structures, the finite DOS at E_F, and the strong valence and conduction peaks caused by the gate voltage. STS can also measure the two-dimensional structure of individual wavefunctions in metallic single-walled carbon nanotubes 66,67. Therefore, the wavefunctions found in this paper can be verified by STS measurements.
To date, four kinds of optical spectroscopies, namely absorption, transmission, reflection, and Raman scattering spectroscopies, are frequently utilized to accurately explore vertical optical excitations 43. For AB-stacked BLG, the experiments have confirmed the 0.3-0.4 eV shoulder structure under zero field, the V_z-created semimetal-semiconductor transition with two low-frequency asymmetric peaks, the two strong π-electronic absorption peaks at middle frequency, the specific magneto-optical selection rule for the first group of Landau levels (LLs), and the linear magnetic-field-strength dependence of the inter-LL excitation energies. Similar verifications performed on tri-layer ABA stacking cover one shoulder around 0.5 eV, the gapless behavior unaffected by a gate voltage, the V_z-induced low-frequency multi-peak structures, several π-electronic absorption peaks, and the monolayer- and bilayer-like inter-LL absorption frequencies. Moreover, the identified spectral features in tri-layer ABC stacking are two low-frequency characteristic peaks and a gap opening under an electric field. The above-mentioned optical spectroscopies are available for examining the vanishing optical gaps of any metallic system, the prominent asymmetric absorption peaks, the absence of a selection rule, the forbidden optical excitations associated with linear relations in the (A1, A2) and (B1, B2) sublattices, and the variations in absorption structures due to the modulation of the DW width and the gate voltage.

Concluding Remarks

We have studied the electronic and optical properties of AB stacking BLG with geometry modulation and a gate voltage within the tight-binding model. We present the energy subbands, the tight-binding functions on distinct sublattices, and the DOS obtained by exact diagonalization. Effects of geometry modulation in the presence or absence of a gate voltage are discussed. The metallic system exhibits a wealth of 1D energy subbands, accompanied by well-behaved or irregular standing waves.
Specifically, the layer-dependent Coulomb potential destroys the simple relation between the (A1, A2) and (B1, B2) subenvelope functions and gives complicated node structures. With a gate voltage, the system remains metallic owing to the existence of the DW states. The wavefunctions of the topologically protected DW states present unusual spatial distributions. The DOS shows various vHSs, including single- and double-peak structures with square-root divergent forms, a pair of prominent peaks caused by the gate voltage, and a plateau structure across E_F. The optical absorption spectra are calculated within the gradient approximation. We find forbidden optical excitation channels under specific linear relations between layer-dependent sublattices, prominent asymmetric absorption peaks in the absence of a selection rule, and DW- and gate-induced dramatic variations in the optical absorption structures. In the geometry-modulated systems, observable absorption peaks can survive only when the symmetric or anti-symmetric linear superposition of the (A1, A2) and (B1, B2) sublattices is destroyed. A simple dependence of the absorption structures on the modulated DW width is absent. However, a reduced intensity and an enhanced number of structures are regularly revealed as V_z increases. The frequency, number, form, and intensity of the optical absorption peaks strongly depend on the modulation period and the electric-field strength. Our predicted results could be verified by experimental measurements. This geometry-modulated system is suitable for studying various physical phenomena. For example, magneto-electronic properties in a uniform perpendicular magnetic field might be dominated by 1D Landau subbands, in sharp contrast with the Landau levels of 2D systems. This problem is under current investigation.
How to generalize the manipulations of geometric structures to emergent layered materials is a near-future focus, e.g., studies on the essential properties of bilayer silicene 68, germanene 69, phosphorene 70, and bismuthene 71.

Figure 1. The structure of geometry-modulated (AB-BA) BLG: (a) top view, and (b) side view. Blue boxes in (a) indicate the regions for DWs.

Figure 2. Low-lying energy bands near K for (a) AB stacking BLG, (b) the stacking-modulated system with 20 strained unit cells for each DW, and (c) the system with the same arrangement as (b) at finite gate voltage (0.1 eV).

Scientific Reports | (2019) 9:859 | https://doi.org/10.1038/s41598-018-37058-9

The weak overlap of (v1, c1) is consistent with the case in the pristine system. Their electronic states are non-degenerate except for the spin degrees of freedom. The other pairs of valence and conduction bands are doubly degenerate and denoted as (v2, c2) and (v′2, c′2), (v3, c3) and (v′3, c′3), and so on. All band-edge states in the distinct energy bands are almost situated at the K point. Figure 2 also shows the second groups of 1D parabolic dispersions, corresponding to the second valence and conduction bands of pristine BLG, split away from zero energy by an energy of the order of the interlayer coupling γ1 40. The wavefunctions at K, shown in Fig. 3, illustrate the main features of the spatial distributions for the various energy subbands. The eigenfunctions, also known as tight-binding functions, on the four sublattices (A1, B1, A2, B2) are responsible for the four components of the wavefunction. The v1 and c1 states (Fig. 3(a and b)), which have constant values for each component, are anti-symmetric and symmetric superpositions of the B1- and B2-dependent tight-binding functions, respectively. Though v1 and c1 have different dominant components in Fig. 3(a), they are degenerate at K.

Figure 3. The tight-binding functions at K (k_y a = 2π/3) for distinct energy subbands in BLG without geometry modulation.
For the first valence (a) and conduction (b) bands, the tight-binding functions are non-vanishing constant values on the B1- and B2-sublattices. (c-f) show the tight-binding functions for the second valence and conduction bands, each with two nodes. (g,h) show four nodes for each tight-binding function at the next larger subband index.

The number of nodes, which exhibits the quantization behavior in the x direction, is fixed for all sublattices of each wavefunction and continuously grows with the subband index. The details of the wavefunctions help in understanding the fine structure of the optical absorption spectrum.

Figure 4. The tight-binding functions at K for distinct energy subbands in the geometry-modulated BLGs. N_dw = 20. The subband index increases from top to bottom. Each tight-binding function in (a-d) shows two nodes, while that in (e-h) has four nodes.

Figure 5. The tight-binding functions at K for distinct energy subbands in the geometry-modulated BLG with a gate voltage, V_z = 0.1 eV. N_dw = 20. The subband index increases from top to bottom. The sequence of the number of nodes becomes complicated in this case.

Figure 6. The probability distribution of DW states near K_L and K_R in the geometry-modulated BLG with a gate voltage (0.1 eV). N_dw = 20. The real position (along x) of each component, instead of the position index, is shown in the plots. The orange (green) lines of the schematic diagrams below K_L and K_R represent the linear dispersions around the new Dirac points with negative (positive) group velocities for the DW states in the middle (other) DW.
78Density of states for AB stacking BLG (a) without geometry modulation, (b) with geometry modulation (N dw = 20.), and (c) the later with finite gate voltage (V z = 0.1 eV). Γ = 2 meV is small enough for observing the fine structures. The optical absorption under the various DW widths (marked by N dw ). Also shown in the inset are JDOS and A(ω) in BLG without and with the geometry modulation (N dw = 20). Scientific RepoRts | (2019) 9:859 | https://doi.org/10.1038/s41598-018-37058-9 Figure 9 . 9The optical absorption for a specific geometry-manipulated BLG in presence of distinct gate voltages. N dw = 20. The arrows indicate the special channels for the detailed absorption structures. The inset shows JDOS and A(ω) in BLG without and with the geometry modulation for V z = 0.1 eV.Scientific RepoRts | (2019) 9:859 | https://doi.org/10.1038/s41598-018-37058-9 Scientific RepoRts | (2019) 9:859 | https://doi.org/10.1038/s41598-018-37058-9 © The Author(s) 2019 AcknowledgementsAuthor ContributionsAll authors conceived this research. B.L.H. performed the calculations. All authors discussed the results and composed the manuscript.Additional InformationCompeting Interests: The authors declare no competing interests.Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Novoselov, K. S. et al. Electric Field Effect in Atomically Thin Carbon Films. Science 306, 666 (2004).
2. Houssa, M., Dimoulas, A. & Molle, A. Silicene: a review of recent experimental and theoretical investigations. J. Phys.: Condens. Matter 27, 253002 (2015).
3. Dávila, M. E., Xian, L., Cahangirov, S., Rubio, A. & Le Lay, G. Germanene: a novel two-dimensional germanium allotrope akin to graphene and silicene. New J. Phys. 15, 095002 (2014).
4. Cai, B. et al. Tinene: a two-dimensional Dirac material with a 72 meV band gap. Phys. Chem. Chem. Phys. 17, 12634 (2015).
5. Castro Neto, A. H., Guinea, F., Peres, N. M. R., Novoselov, K. S. & Geim, A. K. The electronic properties of graphene. Rev. Mod. Phys. 81, 109 (2009).
6. Butler, S. Z. et al. Progress, Challenges, and Opportunities in Two-Dimensional Materials Beyond Graphene. ACS Nano 7, 2898 (2013).
7. Castro, E. V., Peres, N. M. R. & Lopes dos Santos, J. M. B. Localized states at zigzag edges of multilayer graphene and graphite steps. EPL 84, 17001 (2008).
8. Castro, E. V. et al. Electronic properties of a biased graphene bilayer. J. Phys. Condens. Matter 22, 175503 (2010).
9. Huang, Y.-K., Chen, S.-C., Ho, Y.-H., Lin, C.-Y. & Lin, M.-F. Feature-Rich Magnetic Quantization in Sliding Bilayer Graphenes. Sci. Rep. 4, 7509 (2014).
10. Chiu, C. W., Shyu, F. L., Chang, C. P., Chen, R. B. & Lin, M. F. Optical spectra of AB- and AA-stacked nanographite ribbons. J. Phys. Soc. Jpn. 72, 170 (2003).
11. Ho, J. H., Lu, C. L., Hwang, C. C., Chang, C. P. & Lin, M. F. Coulomb excitations in AA- and AB-stacked bilayer graphites. Phys. Rev. B 74, 085406 (2006).
12. Min, H., Sahu, B., Banerjee, S. K. & MacDonald, A. H. Ab initio theory of gate induced gaps in graphene bilayers. Phys. Rev. B 75, 155115 (2007).
13. Hasan, M. Z. & Kane, C. L. Topological insulators. Rev. Mod. Phys. 82, 3045 (2010).
14. Qi, X.-L. & Zhang, S.-C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057 (2011).
15. Ando, Y. Topological Insulator Materials. J. Phys. Soc. Jpn. 82, 102001 (2013).
16. Martin, I., Blanter, Y. M. & Morpurgo, A. F. Topological confinement in bilayer graphene. Phys. Rev. Lett. 100, 036804 (2008).
17. Castro, E. V., Peres, N. M. R., Lopes dos Santos, J. M. B., Castro Neto, A. H. & Guinea, F. Localized States at Zigzag Edges of Bilayer Graphene. Phys. Rev. Lett. 100, 026802 (2008).
18. Vaezi, A., Liang, Y., Ngai, D. H., Yang, L. & Kim, E.-A. Topological Edge States at a Tilt Boundary in Gated Multilayer Graphene. Phys. Rev. X 3, 021018 (2013).
19. Zhang, F., MacDonald, A. H. & Mele, E. J. Valley Chern numbers and boundary modes in gapped bilayer graphene. Proc. Natl. Acad. Sci. USA 110, 10546 (2013).
20. Alden, J. S. et al. Strain solitons and topological defects in bilayer graphene. Proc. Natl. Acad. Sci. USA 110, 11256 (2013).
21. Butz, B. et al. Dislocations in bilayer graphene. Nature 505, 533 (2014).
22. Li, S.-Y. et al. Corrugation induced stacking solitons with topologically confined states in gapped bilayer graphene. arXiv:1609.03313 (2016).
23. Ju, L. et al. Topological valley transport at bilayer graphene domain walls. Nature 520, 650 (2015).
24. Yin, L.-J., Jiang, H., Qiao, J.-B. & He, L. Direct imaging of topological edge states at a bilayer graphene domain wall. Nat. Commun. 7, 11760 (2016).
25. Yin, L.-J. et al. Observation of chirality transition of quasiparticles at stacking solitons in trilayer graphene. Phys. Rev. B 95, 081402(R) (2017).
26. Jiang, L. et al. Manipulation of domain-wall solitons in bi- and trilayer graphene. Nat. Nanotech. 13, 204 (2018).
27. Lin, C. Y., Do, T. N., Huang, Y. K. & Lin, M. F. Optical properties of graphene in magnetic and electric fields. IOP e-Book, ISBN 978-0-7503-1566-1 (Nov. 2017).
28. Koshino, M. Electronic transmission through AB-BA domain boundary in bilayer graphene. Phys. Rev. B 88, 115409 (2013).
29. Moon, P. & Koshino, M. Optical absorption in twisted bilayer graphene. Phys. Rev. B 87, 205404 (2013).
30. Jiang, B.-Y. et al. Plasmon Reflections by Topological Electronic Boundaries in Bilayer Graphene. Nano Lett. 17, 7080 (2017).
31. Jaskólski, W., Pelc, M., Bryant, G. W., Chico, L. & Ayuela, A. Controlling the layer localization of gapless states in bilayer graphene with a gate voltage. 2D Mater. 5, 025006 (2018).
32. Lee, C., Kim, G., Jung, J. & Min, H. Zero-line modes at stacking faulted domain walls in multilayer graphene. Phys. Rev. B 94, 125438 (2016).
33. Slater, J. & Koster, G. Simplified LCAO method for the periodic potential problem. Phys. Rev. 94, 1498 (1954).
34. Kuzmenko, A. B., Crassee, I., van der Marel, D., Blake, P. & Novoselov, K. S. Determination of the gate-tunable band gap and tight-binding parameters in bilayer graphene using infrared spectroscopy. Phys. Rev. B 80, 165406 (2009).
35. Lin, M. F. & Shung, K. W.-K. Plasmons and optical properties of carbon nanotubes. Phys. Rev. B 50, 17744 (1994).
36. Chang, C. P. et al. Electronic and optical properties of a nanographite ribbon in an electric field. Carbon 44, 508 (2006).
37. Pedersen, T. G., Pedersen, K. & Kriestensen, T. B. Optical matrix elements in tight-binding calculations. Phys. Rev. B 63, 201101(R) (2001).
38. Johnson, L. G. & Dresselhaus, G. Optical Properties of Graphite. Phys. Rev. B 7, 2275 (1973).
39. Ehrenreich, H. & Cohen, M. H. Self-Consistent Field Approach to the Many-Electron Problem. Phys. Rev. 115, 786 (1959).
40. McCann, E. & Koshino, M. The electronic properties of bilayer graphene. Rep. Prog. Phys. 76, 056503 (2013).
41. Lu, C. L., Chang, C. P., Huang, Y. C., Chen, R. B. & Lin, M. L. Influence of an electric field on the optical properties of few-layer graphene with AB stacking. Phys. Rev. B 73, 144427 (2006).
42. Tran, N. T. T., Lin, S.-Y., Glukhova, O. E. & Lin, M. L. Configuration-induced Rich Electronic Properties of Bilayer Graphene. J. Phys. Chem. C 119, 10623 (2015).
43. Lin, C. Y., Chen, R. B., Ho, Y. H. & Lin, M. F. Electronic and optical properties of graphite-related systems. CRC Press, ISBN 9781138571068 (Jan. 2018).
44. Ruffieux, P. et al. Electronic structure of atomically precise graphene nanoribbons. ACS Nano 6, 6930 (2012).
45. Ohta, T. et al. Interlayer interaction and electronic screening in multilayer graphene investigated with angle-resolved photoemission spectroscopy. Phys. Rev. Lett. 98, 206802 (2007).
46. Siegel, D. A., Regan, W., Fedorov, A. V., Zettl, A. & Lanzara, A. Charge-carrier screening in single-layer graphene. Phys. Rev. Lett. 110, 146802 (2013).
47. Bostwick, A., Ohta, T., Seyller, T., Horn, K. & Rotenberg, E. Quasiparticle dynamics in graphene. Nat. Phys. 3, 36 (2007).
48. Ohta, T., Bostwick, A., Seyller, T., Horn, K. & Rotenberg, E. Controlling the electronic structure of bilayer graphene. Science 313, 951 (2006).
49. Kim, K. S. et al. Coexisting massive and massless Dirac fermions in symmetry-broken bilayer graphene. Nat. Mater. 12, 887 (2013).
50. Coletti, C. et al. Revealing the electronic band structure of trilayer graphene on SiC: An angle-resolved photoemission study. Phys. Rev. B 88, 155439 (2013).
51. Huang, H. et al. Spatially resolved electronic structures of atomically precise armchair graphene nanoribbons. Sci. Rep. 2, 983 (2012).
52. Söde, H. et al. Electronic band dispersion of graphene nanoribbons via Fourier-transformed scanning tunneling spectroscopy. Phys. Rev. B 91, 045429 (2015).
53. Chen, Y.-C. et al. Tuning the band gap of graphene nanoribbons synthesized from molecular precursors. ACS Nano 7, 6123 (2013).
54. Wildöer, J. W., Venema, L. C., Rinzler, A. G., Smalley, R. E. & Dekker, C. Electronic structure of atomically resolved carbon nanotubes. Nature 391, 59 (1998).
55. Odom, T. W., Huang, J.-L., Kim, P. & Lieber, C. M. Atomic structure and electronic properties of single-walled carbon nanotubes. Nature 391, 62 (1998).
56. Que, Y. et al. Stacking-dependent electronic property of trilayer graphene epitaxially grown on Ru (0001). App. Phys. Lett. 107, 263101 (2015).
57. Luican, A. et al. Single-layer behavior and its breakdown in twisted graphene layers. Phys. Rev. Lett. 106, 126802 (2011).
58. Li, G. et al. Observation of Van Hove singularities in twisted graphene layers. Nat. Phys. 6, 109 (2010).
59. Cherkez, V., Trambly de Laissardière, G., Mallet, P. & Veuillen, J.-Y. Van Hove singularities in doped twisted graphene bilayers studied by scanning tunneling spectroscopy. Phys. Rev. B 91, 155428 (2015).
60. Lauffer, P. et al. Atomic and electronic structure of few-layer graphene on SiC (0001) studied with scanning tunneling microscopy and spectroscopy. Phys. Rev. B 77, 155426 (2008).
61. Yankowitz, M., Wang, F., Lau, C. N. & LeRoy, B. J. Local spectroscopy of the electrically tunable band gap in trilayer graphene. Phys. Rev. B 87, 165102 (2013).
62. Pierucci, D. et al. Evidence for flat bands near the Fermi level in epitaxial rhombohedral multilayer graphene. ACS Nano 9, 5432 (2015).
63. Klusek, Z. Investigations of splitting of the π bands in graphite by scanning tunneling spectroscopy. App. Surf. Sci. 151, 251 (1999).
64. Li, G., Luican, A. & Andrei, E. Y. Scanning tunneling spectroscopy of graphene on graphite. Phys. Rev. Lett. 102, 176804 (2009).
65. Magda, G. Z. et al. Room-temperature magnetic order on zigzag edges of narrow graphene nanoribbons. Nature 514, 608 (2014).
66. Venema, L. C. et al. Imaging Electron Wave Functions of Quantized Energy Levels in Carbon Nanotubes. Science 283, 52 (1999).
67. Lemay, S. G. et al. Two-dimensional imaging of electronic wavefunctions in carbon nanotubes. Nature 412, 617 (2001).
68. Padilha, J. E. & Pontes, R. B. Free-Standing Bilayer Silicene: The Effect of Stacking Order on the Structural, Electronic, and Transport Properties. J. Phys. Chem. C 119, 3818 (2015).
69. Dávila, M. E. & Le Lay, G. Few layer epitaxial germanene: a novel two-dimensional Dirac material. Sci. Rep. 6, 20714 (2016).
70. Dai, J. & Zeng, X. C. Bilayer Phosphorene: Effect of Stacking Order on Bandgap and Its Potential Applications in Thin-Film Solar Cells. J. Phys. Chem. Lett. 5, 1289 (2014).
71. Ast, C. R. & Höchst, H. Electronic structure of a bismuth bilayer. Phys. Rev. B 67, 113102 (2003).
Millimeter-Wave Polarimeters Using Kinetic Inductance Detectors for TolTEC and Beyond

J. E. Austermann, J. A. Beall, S. A. Bryan, B. Dober, J. Gao, G. Hilton, J. Hubmayr, P. Mauskopf, C. M. McKenney, S. M. Simon, J. N. Ullom, M. R. Vissers, G. W. Wilson

DOI: 10.1007/s10909-018-1949-5 | arXiv:1803.03280

Abstract Microwave Kinetic Inductance Detectors (MKIDs) provide a compelling path forward to the large-format polarimeter, imaging, and spectrometer arrays needed for next-generation experiments in millimeter-wave cosmology and astronomy. We describe the development of feedhorn-coupled MKID detectors for the TolTEC millimeter-wave imaging polarimeter being constructed for the 50-meter Large Millimeter Telescope (LMT). Observations with TolTEC are planned to begin in early 2019. TolTEC will comprise ∼7,000 polarization-sensitive MKIDs and will represent the first MKID arrays fabricated and deployed on monolithic 150 mm diameter silicon wafers - a critical step towards future large-scale experiments with over 10^5 detectors. TolTEC will operate in observational bands at 1.1, 1.4, and 2.0 mm and will use dichroic filters to define a physically independent focal plane for each passband, thus allowing the polarimeters to use simple, direct-absorption inductive structures that are impedance matched to incident radiation. This work is part of a larger program at NIST-Boulder to develop MKID-based detector technologies for use over a wide range of photon energies spanning millimeter-waves to X-rays. We present the detailed pixel layout and describe the methods, tools, and flexible design parameters that allow this solution to be optimized for use anywhere in the millimeter and sub-millimeter bands. We also present measurements of prototype devices operating in the 1.1 mm band and compare the observed optical performance to that predicted from models and simulations.
Journal of Low Temperature Physics manuscript No. (will be inserted by the editor)

Keywords: KID · MKID · Millimeter · Sub-mm · THz · Polarimetry · TolTEC · LMT

Introduction

Since being described by Day et al. in 2003 [1], Microwave Kinetic Inductance Detectors (MKIDs) have rapidly advanced so that they are now a viable and often compelling technological choice for the principal sensor in broadband imagers [2,3], polarimeters (e.g. BLAST-TNG [4,5]), and spectrometers (e.g. [6,7]) over a wide range of the electromagnetic spectrum. The chief attraction of KIDs is the ease with which large sensor arrays can be assembled and read out using GHz frequency-division multiplexing with minimal interconnects. This allows many hundreds to several thousands of channels to be measured on a single pair of coaxial cables (e.g. [8,9]) with simple assembly and integration. For some KID architectures, such as that outlined here, fabrication is also simpler and faster than in competing approaches.

In this report, we describe the design and performance of millimeter-wave, feedhorn-coupled, direct-absorption, polarization-sensitive pixels using microwave kinetic inductance detectors (MKIDs). This research is a direct extension of the NIST-Boulder sub-millimeter MKID polarimeters developed for BLAST-TNG [4,5]. These millimeter-wave versions are being developed for TolTEC, a new imaging polarimeter for the 50-meter Large Millimeter Telescope (LMT), and other future experiments in astronomy and cosmology. TolTEC is a three-color millimeter-wave polarimeter designed to perform a series of large legacy surveys that will address many of the fundamental questions related to the formation and evolution of structure on scales from stars to clusters of galaxies. In total, TolTEC will consist of approximately 7,000 polarization-sensitive MKIDs across three focal planes. In this work, we describe a scalable polarimeter design for operation at millimeter and sub-mm wavelengths.
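The multiplexing advantage quoted above (hundreds to thousands of channels per coaxial pair) follows directly from how many resonator linewidths fit in the readout band. A rough back-of-the-envelope sketch; the loaded Q and spacing factor below are illustrative assumptions, not TolTEC specifications:

```python
# Illustrative channel-budget estimate for GHz frequency-division
# multiplexing. All numeric inputs are assumed for illustration.

def n_channels(f_lo_hz, f_hi_hz, q_total, spacing_linewidths):
    """Rough count of resonators fitting in a readout band when each
    channel is separated by `spacing_linewidths` resonator linewidths
    (linewidth ~ f / Q_total, evaluated at the band center)."""
    f_center = 0.5 * (f_lo_hz + f_hi_hz)
    linewidth = f_center / q_total           # resonator FWHM in Hz
    spacing = spacing_linewidths * linewidth
    return int((f_hi_hz - f_lo_hz) / spacing)

# A 0.5-1.0 GHz readout band with an assumed loaded Q of 5e4 and
# 10-linewidth channel spacing:
print(n_channels(0.5e9, 1.0e9, 5e4, 10))  # -> 3333
```

Even with conservative spacing, thousands of channels fit on one line, consistent with the scaling argument in the text.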
In particular, we present results for a dual-polarization pixel operating in a band centered at ∼1.1 mm and compare the measured optical performance to that expected from detailed simulations and models.

Polarimeter Design

TolTEC will use dichroic filters [10] to define a physically independent focal plane for each of three passbands centered at 1.1 mm, 1.4 mm, and 2.0 mm. The single-band nature of each focal plane allows the polarimeters to use simple, direct-absorption resistive structures that are impedance matched to incident radiation through a feedhorn-coupled waveguide. The detector bandpass is primarily defined through a combination of metal-mesh, free-space, low-pass filters [10] and high-pass dichroics, as well as the cutoff frequency of a section of circular waveguide at the detector end of each feedhorn. As with the BLAST detectors, the detector arrays are front-side illuminated, and reflective quarter-wave backshorts are created by depositing the detectors on a silicon-on-insulator (SOI) substrate with a quarter-wave device-layer thickness and depositing an aluminum ground plane on the wafer backside (see [4]). This approach allows for a simple photolithographic fabrication process with a single wiring layer and no electrical crossovers.

TolTEC pixels comprise two lumped-element MKIDs that are sensitive to orthogonal linear polarization states of incident radiation. Each detector forms a resonant microwave circuit consisting of a 4 µm wide inductive strip and an interdigitated capacitor (IDC) designed to resonate at a unique frequency (see Fig. 1). When producing large networks of resonators, we typically design the inductor to be identical in every pixel, while each IDC is trimmed to a unique capacitance using the stepper lithography techniques described in [11,12].
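The frequency placement described here follows from the lumped-element relation f = 1/(2π√(LC)): with a fixed inductor, trimming each IDC sets each channel's readout frequency. A minimal sketch; the 30 nH total inductance and the target frequencies are invented values purely for illustration, not TolTEC design values:

```python
import math

def f_resonance(l_henry, c_farad):
    """Resonance frequency of a lumped-element LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def trim_capacitance(l_henry, f_target_hz):
    """Capacitance that places a resonator of inductance L at f_target."""
    return 1.0 / (l_henry * (2.0 * math.pi * f_target_hz) ** 2)

L = 30e-9                            # assumed total inductance, 30 nH
targets = [0.50e9, 0.51e9, 0.52e9]   # three channels in the readout band
caps = [trim_capacitance(L, f) for f in targets]
for f, c in zip(targets, caps):
    # round-trip check: the trimmed capacitance reproduces the target
    print(f"{f/1e9:.2f} GHz -> C = {c*1e12:.3f} pF, "
          f"check {f_resonance(L, c)/1e9:.3f} GHz")
```

In practice the achievable frequency precision is set by the lithographic trimming resolution of the IDC fingers rather than by this idealized formula.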
The inductive strips are made of proximitized TiN/Ti multilayers, which have a tunable transition temperature, T_c, that is highly uniform across an array when compared to substoichiometric TiN [13]. This material also has a tunable sheet resistance, R_s, set through the choice of TiN and Ti layer thicknesses and the number of layers. For the 1.1 mm prototype devices discussed here, we have used a TiN/Ti/TiN trilayer with thicknesses of 4/10/4 nm, respectively, resulting in T_c = 1.4 K, R_s ∼ 80 Ohm/sq, and a sheet inductance of L_s ∼ 90 pH/sq. We have found that a TiN-silicon substrate interface exhibits low two-level-system (TLS) noise [13,14], allowing TolTEC to utilize the compact IDCs necessary to achieve the instrument design of 1 fλ pixel spacing (∼2.75 mm pixel-to-pixel spacing for the 1.1 mm band) with resonance frequencies in the readout band of 0.5-1.0 GHz.

The TiN/Ti/TiN trilayer serves both as the inductor of the resonant circuit and as the absorber that couples to radiation from the waveguide. To accomplish both tasks with optimal performance, the multilayer geometric design must balance the following factors: (a) an appropriate transition temperature for the observed photon energies (determined by TiN thickness); (b) polarization efficiency (a function of absorber trace width); (c) detector responsivity and saturation power (a function of multilayer volume); and (d) optimal impedance matching to the incident radiation (accomplished through tuning of R_s and inductor geometry).

Fig. 3. (Left): E-plane, H-plane, and X-pol (cross-polarization) beam measurements are compared to simulation. Differences between measurement and simulation at high angles in the H-plane are due to an occulting mounting structure inherent in the measurement setup. The predicted X-pol dip at the center of the beam is typically difficult to measure within the alignment tolerance of the experimental setup. (Right): Responsivity of the X- and Y-oriented detectors of three pixel design variations when coupled to a temperature-controlled black body varied between 3 K and 20 K. The small systematic difference between the X and Y channels of each pixel could be due to several factors, including inductor and capacitor geometric differences in design, differences in optical efficiency, and waveguide misalignment and/or ellipticity.

An optimal solution to these four factors requires an additional design parameter beyond the basic three geometric dimensions of the absorber. We accomplish this by separating the duties of impedance matching and TiN geometry: a 100 nm thick layer of aluminum is deposited over portions of the inductor/absorber strip (Fig. 1). With significantly lower R_s and L_s than the TiN multilayer, the Al acts as a short that adds minimal impedance and inductance to the overall circuit. This allows the absorbing strip to be designed with almost any inductor volume while maintaining optimal coupling to the waveguide with high polarization efficiency. In practice, the inductor and aluminum patch geometries are optimized over the observation band using finite-element simulations of a parameterized model while holding the desired multilayer volume and inductance constant. Simulations suggest that ∼5% of incident radiation is directly absorbed by the low-inductance aluminum shorts in these prototypes. This represents a small reduction in optical efficiency that can be improved in future designs through the use of shorting materials and/or geometries with lower impedance. The shorts can also affect detector responsivity through the diffusion of quasiparticles from the higher-gap-energy trilayer (T_c ∼ 1.4 K) into the lower-gap aluminum (T_c ∼ 1.2 K). We have measured devices with varying lengths of exposed absorber, which show that trilayer patch lengths of 15 µm retain 50% of the responsivity expected from a continuous absorber of equivalent volume and kinetic inductance fraction.
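The quoted trilayer numbers are mutually consistent with the standard thin-film BCS estimate of kinetic sheet inductance, L_k ≈ ħR_s/(πΔ) with Δ ≈ 1.76 k_B T_c (valid well below T_c and at frequencies far below the gap). A quick consistency check, not a statement of the authors' own calculation:

```python
import math

# Thin-film BCS estimate: L_k = hbar * R_s / (pi * Delta),
# with the weak-coupling gap Delta ~ 1.76 * kB * Tc.

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def gap_bcs(tc_kelvin):
    """Weak-coupling BCS gap energy in joules."""
    return 1.76 * KB * tc_kelvin

def kinetic_inductance_per_sq(r_s_ohm_per_sq, tc_kelvin):
    """Kinetic sheet inductance (H/sq) of a thin superconducting film."""
    return HBAR * r_s_ohm_per_sq / (math.pi * gap_bcs(tc_kelvin))

# Quoted trilayer values: Tc = 1.4 K, R_s ~ 80 Ohm/sq
L_k = kinetic_inductance_per_sq(80.0, 1.4)
print(f"L_k ~ {L_k*1e12:.0f} pH/sq")  # ~79 pH/sq, near the quoted ~90 pH/sq
```

The ∼10% difference from the quoted 90 pH/sq is unsurprising, since the proximitized multilayer is not a simple single-gap BCS film.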
For TolTEC, the nominal trilayer patch is 25 µm long and each detector has a total active trilayer volume of 26 µm³, resulting in a responsivity (Sec. 3) that allows the detector to operate well within the photon-limited regime under the expected photon load; quasiparticle diffusion will thus have a negligible effect on detector sensitivity. For future applications that may require significantly shorter trilayer patch lengths, we are exploring tuning the T_c values of the absorber and/or shorting materials such that the shorts have a higher gap energy, i.e. ∆_short > ∆_abs, which would lead to significantly reduced quasiparticle diffusion rates.

Optical Performance

Prototype devices for the 1.1 mm band have been fabricated as 10-pixel arrays (see Fig. 1). Several pixel geometries are included on each array in order to measure the performance of varying designs. The resonators have been designed to operate within the nominal TolTEC readout band (0.5-1.0 GHz) and are measured to have an optically dark internal quality factor, Q_i, of 2-3 × 10^5 at bath temperatures T_bath ≲ 200 mK (TolTEC operates at ∼100 mK). Five of the pixels on the array are optically coupled through aluminum conical feedhorns (Fig. 2) that are oversized in order to match the laboratory optics used in the measurements described in this section. The deployed version of TolTEC pixels will be coupled using spline-profiled silicon platelet feedhorns (Fig. 2) that have been numerically optimized to maximize aperture efficiency while maintaining beam symmetry and polarization performance [15]. We have fabricated a prototype 5-pixel array of these silicon horns and measured the beam response using a room-temperature vector network analyzer. The beam measurements of the prototypes are well matched to the simulations (e.g. Fig. 3), validating our model and allowing us to proceed with the full production of the final feedhorn array, which consists of ∼40 etched 150 mm diameter wafers.
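The "photon-limited regime" claim can be made concrete. Assuming single-moded, single-polarization coupling and an idealized top-hat 245-310 GHz band (rough stand-ins for the measured passband discussed below), the sketch converts a black-body temperature to incident power and evaluates the standard photon-noise NEP at the Table 1 loading; band center and width are approximations, not measured values.

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

def band_power(T, nu_lo, nu_hi, eta=1.0, steps=2000):
    """Power (W) coupled from a black body at temperature T into a single-moded,
    single-polarization detector with a flat (top-hat) passband of efficiency eta:
    P = integral of eta * h*nu / (exp(h*nu/(k*T)) - 1) dnu."""
    dnu = (nu_hi - nu_lo) / steps
    total = 0.0
    for i in range(steps):
        nu = nu_lo + (i + 0.5) * dnu
        total += eta * h * nu / math.expm1(h * nu / (k * T)) * dnu
    return total

def photon_nep(P, nu, delta_nu):
    """Photon-noise NEP (W/sqrt(Hz)) for a single-moded detector:
    shot term 2*h*nu*P plus wave (bunching) term 2*P^2/delta_nu."""
    return math.sqrt(2.0 * h * nu * P + 2.0 * P**2 / delta_nu)

# Illustrative numbers: a 20 K black body fills this band with ~10 pW,
# comparable to the 10.7 pW median 1.1 mm loading expected at the LMT (Table 1).
P_20K = band_power(20.0, 245e9, 310e9)
nep = photon_nep(10.7e-12, nu=277.5e9, delta_nu=65e9)   # of order 1e-16 W/sqrt(Hz)
```

Under these assumptions the photon NEP is roughly 1e-16 W/√Hz at the expected loading; a detector whose intrinsic (generation-recombination plus TLS) noise sits well below this is photon-noise limited.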
The detector beams will be truncated with a -3 dB edge taper in the final TolTEC instrument using a 4 K Lyot stop. We measure the detector response to incident radiation (Fig. 3, right) by coupling the radiation from a temperature-controlled black body through feedhorn-coupled waveguide (Fig. 2). Measurements are made at various black-body temperatures in the range of 3-20 K. Black-body temperature is converted to incident power on the detector using a passband model outlined in Fig. 4. Measurements of detector noise are used to estimate the optical efficiency as outlined in [16,4]. All three pixel design variations display photon-noise-limited performance at the expected loading (∼10 pW), with measured optical efficiency in the range 71%-80%, which is consistent with both the simulation prediction of 80% and the efficiency of BLAST-TNG devices of similar design [5]. Additional detector characterization is conducted by optically coupling the detector to room-temperature experiments. This is accomplished through a series of optical elements within the cryostat, including a Zotefoam PPA30 vacuum window, a series of metal-mesh low-pass filters [10], and a free-space optical attenuator made of Eccosorb MF-110 with a thickness that results in an average of ∼5% transmission in the 1.1 mm band. Passbands of X and Y detectors were measured using a Fourier-transform spectrometer (FTS) and are shown in Fig. 4. The X and Y channels appear to be well matched to each other and are consistent with the designed band edges. The passband shape is primarily dictated by the high-pass cutoff frequency of the waveguide and the free-space low-pass filters used along the line of sight. We note that the expected passband differs slightly from what is designed for the final TolTEC experiment (roughly 245-310 GHz) in a few ways. First, these laboratory measurements used low-pass filters with a different cutoff frequency due to filter availability at the time of measurement.
Second, the low-frequency edge of the TolTEC band will be primarily defined by a dichroic filter rather than by the waveguide. The waveguide cutoff frequency is designed to be below the nominal band edge in order to be sensitive to the full range of the dichroic's relatively shallow transition from reflective to transmissive as a function of frequency. We also perform polarization calibration by measuring the detector response to a polarized source (a chopped black body behind a rotating wire grid) as a function of grid polarization angle (Fig. 4). Model fits to these data show the X and Y detectors to be sensitive to orthogonal linear polarization states with total (detector + optics) cross-polar leakage of 2.3 ± 0.1% and 3.6 ± 0.1% (statistical uncertainties only) for the X and Y channels, respectively. These measurements are roughly consistent with the simulated prediction of ∼2% cross-polar leakage from the detectors.

Ongoing Development

TolTEC is currently scheduled to achieve first light at the 50-meter LMT in early 2019 with the operating parameters listed in Table 1. To this end, we continue to characterize and optimize the 1.1 mm pixels, and are concurrently developing and optimizing designs for the 1.4 mm and 2.0 mm bands. We are also moving towards the production of the large-format detector arrays on 150 mm diameter substrates that are required for the final instrument. We have now produced an array of over 2000 resonators on a 150 mm substrate in order to characterize device uniformity over these large scales. For the deployed arrays, we aim to implement a recently developed LED mapper and post-measurement lithographic capacitor correction technique in order to achieve highly uniform frequency spacing between resonators and detector+readout yield near 100% (see [11]).
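A toy version of the polarization-angle fit described above can be sketched as a cos² co-polar lobe plus a constant cross-polar floor; the full model also fits the co-polar angle and a 180-degree source asymmetry, and the amplitudes below are arbitrary illustrative choices rather than measured values.

```python
import math

def polarized_response(theta_deg, co_amp, cross_amp, theta0_deg=0.0):
    """Detector response to a polarized source at grid angle theta:
    co-polar cos^2 lobe of amplitude co_amp plus a constant cross-polar floor."""
    t = math.radians(theta_deg - theta0_deg)
    return co_amp * math.cos(t) ** 2 + cross_amp

def crosspol_leakage(responses):
    """Estimate cross-polar leakage as min/max of the sampled angular response."""
    return min(responses) / max(responses)

# Synthetic scan with 2.3% leakage built in (the measured X-channel value);
# the recovery below simply reads it back off the response extrema.
angles = range(0, 360, 5)
data = [polarized_response(a, co_amp=1.0 - 0.023, cross_amp=0.023) for a in angles]
leak = crosspol_leakage(data)   # recovers ~0.023
```

In practice a least-squares fit over all angles, rather than the min/max shortcut, is what separates the cross-polar amplitude from the source-asymmetry term.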
The measurements presented here, together with results from similar BLAST-TNG devices optimized for sub-mm wavelengths [4,5], show that materials and pixel designs are now available that meet the needs of various experiments in the sub-mm and mm-wave regime. The successful predictive power of our models and simulations allows for rapid design optimization. Together, these capabilities help to make large KID arrays an attractive and realistic sensor choice over a wide range of observation wavelengths.

(Table 1 note: Per-detector loading is calculated using an atmospheric model for the LMT site at median opacity during the observing season, as well as estimates of emitted and scattered light from all optical elements.)

Affiliations:
1: Quantum Sensors Group, National Institute of Standards and Technology; Boulder, CO 80305, USA
2: Arizona State University, Tempe, AZ
3: University of Michigan, Ann Arbor, MI
4: University of Massachusetts, Amherst, MA
E-mail: [email protected]

Fig. 1. Left: Photograph of a 15 mm × 15 mm prototype array of 10 pixels for the 1.1 mm band. Center: Photograph depicting the layout of a single TolTEC dual-polarization pixel. Right: Expanded view of the central region of the inductors/absorbers highlighting the use of aluminum shorting patches.

Fig. 2. (a): Experimental mount with prototype sub-array of 1.1 mm polarimeters. (b): Matching metal feedhorn array for lab measurement of the central 5 pixels. (c): Silicon platelet feedhorn sub-array prototype. (d): Silicon feedhorn radial profile. Vertical lines represent a change in hole radius at that position. Individual wafers of thickness 500 µm or 333 µm (optionally double-etched for an effective 167 µm step size) are used to create the profile.

Fig. 3. (Left): Feedhorn-only measurements of the angular response (beam) of the 1.1 mm band silicon platelet feedhorn stack at 255 GHz.
Fig. 4. (Left): Measured passband of the X and Y polarization detectors of a single pixel compared to a transmission model constructed from the simulated waveguide-coupled detector efficiency and the expected transmission of the low-pass filters used for this measurement. Measurements are normalized to have the same band-integrated power as the model. (Right): Normalized response as a function of position angle of a linearly polarized source. The data have been fit to a model (solid lines) consisting of detector co-polar angle and amplitude, cross-polar amplitude, and a 180-degree source asymmetry (e.g. misalignment of the rotating source).

Table 1. Targeted operational parameters for the final TolTEC detector arrays, with two detectors per pixel.

Notional Band Center        | 1.1 mm      | 1.4 mm      | 2.0 mm
Target Frequency Band       | 245-310 GHz | 195-245 GHz | 128-170 GHz
Approx. Number of Detectors | 3800        | 1900        | 950
Expected Loading (median)   | 10.7 pW     | 7.2 pW      | 4.8 pW

Footnotes:
1. http://zotefoams.com
2. http://www.eccosorb.com/

Acknowledgements. This work is supported by the NSF of the United States through grant MSIP-1636621 and NASA through grant APRA13-0083.

References
[1] P. K. Day, H. G. LeDuc, B. A. Mazin, A. Vayonakis, and J. Zmuidzinas, Nature 425, 817-821 (2003).
[2] S. R. Golwala et al., "Status of MUSIC, the MUltiwavelength Sub/millimeter Inductance Camera," in Proc. SPIE, 2012, vol. 8452.
[3] M. Calvo et al., Journal of Low Temperature Physics 184, 816-823 (2016).
[4] J. Hubmayr et al., Applied Physics Letters 106, 073505 (2015).
[5] B. Dober et al., Journal of Low Temperature Physics 184, 173-179 (2016).
[6] E. Shirokoff et al., Journal of Low Temperature Physics 176, 657-662 (2014).
[7] B. Mazin et al., Publications of the Astronomical Society of the Pacific 125, 1348 (2013).
[8] O. Bourrion et al., Journal of Instrumentation 11, P11001 (2016).
[9] N. Galitzki et al., Journal of Astronomical Instrumentation 3, 1440001 (2014).
[10] P. A. R. Ade, G. Pisano, C. Tucker, and S. Weaver, "A review of metal mesh filters," in Proc. SPIE, 2006, vol. 6275, p. 62750U.
[11] X. Liu et al., Applied Physics Letters 111, 252601 (2017).
[12] C. M. McKenney et al., submitted (2018).
[13] M. R. Vissers, J. Gao, M. Sandberg, S. M. Duff, D. S. Wisbey, K. D. Irwin, and D. P. Pappas, Applied Physics Letters 102, 232603 (2013).
[14] M. R. Vissers, J. Gao, D. S. Wisbey, D. A. Hite, C. C. Tsuei, A. D. Corcoles, M. Steffen, and D. P. Pappas, Applied Physics Letters 97, 232509 (2010).
[15] S. M. Simon et al., "The design and characterization of wideband spline-profiled feedhorns for Advanced ACTPol," in Proc. SPIE, 2016, vol. 9914, p. 991416.
[16] S. Yates, J. Baselmans, A. Endo, R. Janssen, L. Ferrari, P. Diener, and A. Baryshev, Applied Physics Letters 99, 073505 (2011).
ON SIMPLICITY AND STABILITY OF TANGENT BUNDLES OF RATIONAL HOMOGENEOUS VARIETIES

Ada Boralevi
Abstract. Given a rational homogeneous variety G/P, where G is complex simple and of type ADE, we prove that all tangent bundles T_{G/P} are simple, meaning that their only endomorphisms are scalar multiples of the identity. This result combined with the Hitchin-Kobayashi correspondence implies stability of these tangent bundles with respect to the anticanonical polarization. Our main tool is the equivalence of categories between homogeneous vector bundles on G/P and finite dimensional representations of a given quiver with relations.

[By putting the appropriate] relations one gets the aforementioned equivalence of categories between G-homogeneous vector bundles on G/P and finite dimensional representations of the quiver. The relations were later refined by Hille in [Hil94]. In [ACGP03] Álvarez-Cónsul and García-Prada gave an equivalent construction, while in [OR06] Ottaviani and Rubei used the quiver for computing cohomology, obtaining a generalization of the well-known Borel-Weil-Bott theorem holding on Hermitian symmetric varieties of ADE type. We describe both equivalences of categories and give details on the quiver, its relations and its representations in Sections 2 and 3.

Sections 4 and 5 contain results on simplicity and stability. We use the quiver to prove that homogeneous vector bundles whose associated quiver representation has a particular configuration (we call such bundles multiplicity free) are weakly simple, which means that their only G-invariant endomorphisms are scalar multiples of the identity. Our result holds on any G/P, where G is a simple group of type ADE:

Proposition A. Let E be a multiplicity free homogeneous vector bundle of rank r on G/P. Let k be the number of connected components of the quiver Q|_E. Then H^0(End E)^G = C^k. In particular if Q|_E is connected, then E is weakly simple.

It turns out that all tangent bundles T_{G/P} are multiplicity free and connected, and that moreover the isotypical component H^0(End T_{G/P})^G coincides with the whole space H^0(End T_{G/P}); in other words, these bundles are simple.

Theorem B. Let T_{G/P} be the tangent bundle on a rational homogeneous variety G/P, where G is a complex simple Lie group of type ADE and P one of its parabolic subgroups. Then T_{G/P} is simple.

If algebraic geometry, representation theory and quiver representations give us simplicity, for stability differential geometry also joins the team. A homogeneous variety G/P is in fact also a Kähler manifold, and as such it admits a Hermite-Einstein structure. In virtue of the Hitchin-Kobayashi correspondence this is equivalent to the polystability of its tangent bundle. This together with Theorem B gives:

Theorem C. Let T_{G/P} be the tangent bundle on a rational homogeneous variety G/P, where G is a complex simple Lie group of type ADE, and P one of its parabolic subgroups. Then T_{G/P} is stable with respect to the anticanonical polarization −K_{G/P} induced by the Hermite-Einstein structure.
arXiv:0901.2350
15 Jan 2009

Introduction

In [Ram67] Ramanan proved that irreducible homogeneous bundles on rational homogeneous varieties are stable, and hence in particular simple. If the underlying variety is Hermitian symmetric then this result applies to tangent bundles. For the general case, the Hitchin-Kobayashi correspondence gives a weaker result for the tangent bundles, polystability. In this paper we show that in fact tangent bundles of any G/P are simple, where G is complex, simple and of type ADE. Simplicity and polystability combined give stability. Our main tool is the equivalence of categories between homogeneous bundles on G/P and finite dimensional representations of a given quiver with relations.
Once the machinery of this equivalence of categories is set up, the simplicity of tangent bundles turns out to be an immediate and surprisingly easy consequence of it. Indeed one only needs to look at endomorphisms of the bundle as endomorphisms of the associated quiver representation. Homogeneous vector bundles have been classically studied using another equivalence of categories, namely that between homogeneous bundles on G/P and finite dimensional representations of the parabolic subgroup P. In [BK90] Bondal and Kapranov had the idea of associating to any rational homogeneous variety a quiver with relations; by putting the appropriate relations one gets the equivalence of categories between G-homogeneous vector bundles on G/P and finite dimensional representations of the quiver.

In the case where G/P is the flag manifold of point-hyperplane pairs in P^n, we obtained a complete understanding of the stability of the tangent bundle with respect to different polarizations:

Proposition D. Let F = F(0, n−1, n) be the flag manifold of type SL_{n+1}/P(α_1, α_n), and set:

    m(n) = (−n + n√(4n² + 4n − 3)) / (2(n² + n − 1)).

Then the tangent bundle [...]. We also show similar computations for SL_4/B.

In the last Section 6 we deal with moduli spaces. We quote and generalize the results from [OR06], where the authors showed that King's notion of semistability [Kin94] for a representation [E] of the quiver Q_{G/P} is in fact equivalent to the Mumford-Takemoto semistability of the associated bundle E on G/P, when the latter is a Hermitian symmetric variety. We can thus construct moduli spaces of G-homogeneous semistable bundles with fixed gr E on any homogeneous variety G/P of ADE type.

Acknowledgements. This paper is part of my PhD thesis. I am very grateful to my advisor Professor Giorgio Ottaviani for the patience with which he followed this work very closely and for always transmitting me lots of encouragement and mathematical enthusiasm. I would also like to thank Professor Jean-Pierre Demailly for inviting me to Grenoble and for his warm hospitality.

2. Preliminaries

2.1. Notations and first fundamental equivalence of categories.
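Since Proposition D turns on the threshold m(n), a quick numerical evaluation may help; the function below simply implements the displayed formula, and the sample values are purely illustrative (the full statement of the proposition is truncated in this copy).

```python
import math

def m(n):
    """The threshold of Proposition D:
    m(n) = (-n + n*sqrt(4n^2 + 4n - 3)) / (2*(n^2 + n - 1))."""
    return (-n + n * math.sqrt(4 * n**2 + 4 * n - 3)) / (2 * (n**2 + n - 1))

# m(n) increases with n and approaches 1 from below:
values = {n: m(n) for n in (2, 3, 10, 100)}
```

For instance m(2) ≈ 0.717 and m(100) ≈ 0.990, consistent with m(n) → 1 as n → ∞.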
Let G be a complex semisimple Lie group. We make a choice ∆ = {α_1, ..., α_n} of simple roots of g = Lie G, and we call Φ+ (respectively Φ−) the set of positive (negative) roots. We denote by h ⊂ g the Cartan subalgebra, so that g decomposes as:

    g = h ⊕ (⊕_{α∈Φ+} g_α) ⊕ (⊕_{α∈Φ−} g_α).

A parabolic subgroup P ≤ G is a subgroup P = P(Σ), where:

    Lie(P(Σ)) = h ⊕ (⊕_{α∈Φ+} g_α) ⊕ (⊕_{α∈Φ−(Σ)} g_α),

for a subset Σ ⊂ ∆ that induces Φ−(Σ) = {α ∈ Φ− | α = Σ_{α_i ∉ Σ} p_i α_i}. If Σ = ∆, then P(∆) = B is the Borel subgroup. A rational homogeneous variety is a quotient G/P.

A vector bundle E on G/P is called (G-)homogeneous if there is an action of G on E such that the following diagram commutes:

    G × E   →   E
      ↓         ↓
    G × G/P →  G/P

where the bottom row is just the natural action of G on the cosets G/P. Note that the tangent bundle T_{G/P} on any rational homogeneous variety G/P is obviously a G-homogeneous bundle.

The category of G-homogeneous vector bundles on G/P is equivalent to the category P-mod of representations of P, and also to the category p-mod, where p = Lie P; see for example [BK90]. More in detail, the group G is a principal bundle over G/P with fiber P. Any G-homogeneous vector bundle E of rank r is induced by this principal bundle via a representation ρ : P → GL(r). We denote E = E_ρ. Indeed, E of rank r over G/P is homogeneous if and only if there exists a representation ρ : P → GL(r) such that E ≃ E_ρ, and this entails the aforementioned equivalence of categories.

For any weight λ we denote by E_λ the homogeneous bundle corresponding to the irreducible representation of P with maximal weight λ. Here λ belongs to the fundamental Weyl chamber of the reductive part of P. Indeed, P decomposes as P = R · N into a reductive part R and a nilpotent part N. At the level of Lie algebras this decomposition entails a splitting p = r ⊕ n, with the obvious notation r = Lie R and n = Lie N.
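As a worked example (assumed here for illustration, not spelled out in the text), take g = sl_3 with simple roots α_1 = e_1 − e_2 and α_2 = e_2 − e_3, so that Φ+ = {α_1, α_2, α_1 + α_2}:

```latex
% Illustrative example for g = sl_3 (assumed, not from the paper).
\mathfrak{sl}_3 \;=\; \mathfrak{h}
  \,\oplus\, \bigoplus_{\alpha \in \Phi^+} \mathfrak{g}_\alpha
  \,\oplus\, \bigoplus_{\alpha \in \Phi^-} \mathfrak{g}_\alpha ,
\qquad \Phi^+ = \{\alpha_1,\ \alpha_2,\ \alpha_1+\alpha_2\}.
% Choosing \Sigma = \{\alpha_1\} gives \Phi^-(\Sigma) = \{-\alpha_2\}, hence
\mathrm{Lie}\,P(\Sigma) \;=\; \mathfrak{h}
  \,\oplus\, \mathfrak{g}_{\alpha_1} \oplus \mathfrak{g}_{\alpha_2}
  \oplus \mathfrak{g}_{\alpha_1+\alpha_2} \oplus \mathfrak{g}_{-\alpha_2}.
```

Concretely, Lie P(Σ) is the algebra of block upper-triangular matrices with blocks of sizes (1, 2), i.e. the stabilizer of a line in C³, and the quotient SL_3/P(Σ) is the projective plane P².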
Moreover from a result by Ise [Ise60] we learn that a representation of p is completely reducible if and only if it is trivial on n; hence it is completely determined by its restriction to r. The well-known Borel-Weil-Bott theorem [Bot57] computes the cohomology of such E_λ's by using purely Lie algebra tools, namely the action of the Weyl group on the weight λ. In particular the theorem states that if λ is dominant then H^0(E_λ) = Σ_λ (the irreducible representation of highest weight λ) and all higher cohomology vanishes.

3. The quiver Q_{G/P}

3.1. Definition of the quiver Q_{G/P} and its representations. Other than looking at homogeneous bundles as P-modules, it is useful to try a different point of view and look at these same bundles as representations of a given quiver with relations. For basics on quiver theory we refer the reader to [DW05] or [Kin94]. To any rational homogeneous variety G/P we can associate a quiver with relations, which we denote by Q_{G/P}. The idea is to exploit all the information given by the choice of the parabolic subgroup P, with its decomposition P = R · N. Let Λ be the fundamental Weyl chamber of G, and let Λ+ be the Weyl chamber of the reductive part R. Then we can give the following:

Definition 3.1. Let G/P be any rational homogeneous variety. The quiver Q_{G/P} is constructed as follows. The set of vertices is:

    Q_0 = Λ+ = {λ | λ dominant weight for R}.

There is an arrow connecting the vertices λ and µ if and only if the vector space Hom(n ⊗ E_λ, E_µ)^G is non-zero.

Remark 3.1. Definition 3.1 is precisely the original one of Bondal and Kapranov [BK90], later used also by Álvarez-Cónsul and García-Prada [ACGP03]. Arrows correspond to weights of the nilpotent algebra n, considered as an r-module with the adjoint action. In fact one could obtain an equivalent theory by considering the same vertices with a smaller number of arrows, i.e. by taking only weights of the quotient n/[n, n].
This is for example the choice made by Hille (see [Hil98] and [Hil96]). Note that the vertices λ of Q_{G/P} correspond to irreducible homogeneous bundles E_λ on G/P. The relations on the quiver Q_{G/P} will be defined in Section 3.2.

Let now E be a homogeneous vector bundle over G/P: we want to associate to it a representation [E] of the quiver Q_{G/P}. The bundle E comes with a filtration:

    (3.1)    0 ⊂ E_1 ⊂ E_2 ⊂ ... ⊂ E_k = E,

where each E_i/E_{i−1} is completely reducible. Of course the filtration does not split in general. We define gr E = ⊕_i E_i/E_{i−1} for any filtration (3.1). The graded bundle gr E does not depend on the filtration: in fact it is given by looking at our p-module E as a module over r, so that it decomposes as a direct sum of irreducibles:

    (3.2)    gr E = ⊕_λ E_λ ⊗ V_λ,

with multiplicity spaces V_λ ≃ C^k, where k ∈ Z≥0 is the number of times E_λ occurs. The representation [E] associates to the vertex λ of the quiver Q_{G/P} precisely the multiplicity space V_λ in the decomposition (3.2).

Going on with the definition of the representation [E], given any λ, µ ∈ Q_0 such that there is an arrow λ → µ in Q_1, we now need to define a linear map V_λ → V_µ. This information is given by the nilpotent part n. More precisely, it is given by the natural action of n on gr E, both viewed as r-modules:

    θ : n ⊗ gr E → gr E.

The morphism θ encodes all the information we need, including that on the relations of the quiver. Obviously if we have a vector bundle E, we then have the graded bundle gr E and the morphism θ. Vice versa, if we have an r-module and a morphism that behaves "just like θ", we can reconstruct a p-module and hence a vector bundle. More in detail, let us state and prove the following generalization of [OR06, Theorem 3.1]:

Theorem 3.1. Consider n as an r-module with the adjoint action.

(1) Given a p-module E on X, the action of n on E induces a morphism of r-modules:

    θ : n ⊗ gr E → gr E.
The morphism θ ∧ θ : ∧²n ⊗ gr E → gr E defined by

    θ ∧ θ((n_1 ∧ n_2) ⊗ f) := n_1 · (n_2 · f) − n_2 · (n_1 · f)

satisfies the equality θ ∧ θ = θϕ in Hom(∧²n ⊗ gr E, gr E), where ϕ is given by:

    ϕ : ∧²n ⊗ gr E → n ⊗ gr E,    (n_1 ∧ n_2) ⊗ e ↦ [n_1, n_2] ⊗ e.

(2) Conversely, given an r-module F on X and a morphism of r-modules θ : n ⊗ F → F such that θ ∧ θ = θϕ, we have that θ extends uniquely to an action of p on F, giving a bundle E such that gr E = F.

Proof. (1) Obviously θ is r-equivariant, almost by definition. Take n_1 and n_2 in n. We have that:

    θ ∧ θ((n_1 ∧ n_2) ⊗ f) = n_1 · (n_2 · f) − n_2 · (n_1 · f) = [n_1, n_2] · f,

which means exactly that θ ∧ θ = θϕ.

(2) For any f ∈ F and any r + n ∈ p = r ⊕ n we set:

    (3.3)    (r + n) · f := r · f + θ(n ⊗ f).

We need to show that given any p_1, p_2 ∈ p, the action (3.3) respects the bracket, i.e. that for every f ∈ F:

    [p_1, p_2] · f = p_1 · (p_2 · f) − p_2 · (p_1 · f).

Now if both p_1, p_2 ∈ r, then there is nothing to prove. If both p_1, p_2 ∈ n, then from the equality θ ∧ θ = θϕ we get:

    [p_1, p_2] · f = θ([p_1, p_2] ⊗ f) = θϕ((p_1 ∧ p_2) ⊗ f) = θ ∧ θ((p_1 ∧ p_2) ⊗ f) = p_1 · (p_2 · f) − p_2 · (p_1 · f).

Finally, in case p_1 ∈ r and p_2 ∈ n, then [p_1, p_2] ∈ n and we have:

    p_1 · (p_2 · f) = θ(p_1 · (p_2 ⊗ f)) = θ([p_1, p_2] ⊗ f + p_2 ⊗ (p_1 · f)) = [p_1, p_2] · f + θ(p_2 ⊗ (p_1 · f)) = [p_1, p_2] · f + p_2 · (p_1 · f).

Remark that (3.2) entails that we have a decomposition:

    (3.4)    θ ∈ Hom(n ⊗ gr E, gr E) = ⊕_{λ,µ∈Q_0} Hom(V_λ, V_µ) ⊗ Hom(n ⊗ E_λ, E_µ).

Lemma 3.2. [BK90, Proposition 2] In the ADE case dim Hom(n ⊗ E_λ, E_µ)^G is either 0 or 1 for every λ, µ ∈ Λ+.

Remark 3.2. From now on G will thus always denote a complex Lie group of ADE type. Nevertheless, the construction of the quiver with its relations can be done for any type of Lie group, as in [ACGP03]. We can now conclude the construction of the representation of the quiver [E] associated to the bundle E.
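The bracket compatibility θ ∧ θ = θϕ used in the proof of Theorem 3.1 can be sanity-checked on a toy example: take n to be the strictly upper-triangular matrices in sl_3 acting on F = C³ by ordinary matrix multiplication. This is an illustration only (the particular matrices and vector are arbitrary choices), not the general argument.

```python
def mat_vec(A, v):
    """Matrix-vector product over plain Python lists."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def mat_mul(A, B):
    """Matrix-matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# n1 = E_12, n2 = E_23 in the strictly upper-triangular algebra n of sl_3
n1 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
n2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
f = [1, 2, 3]

theta = mat_vec  # here the action theta : n (x) F -> F is just matrix action

# (theta ^ theta)((n1 ^ n2) (x) f) = n1.(n2.f) - n2.(n1.f)
lhs = [a - b for a, b in zip(theta(n1, theta(n2, f)), theta(n2, theta(n1, f)))]
# theta(phi((n1 ^ n2) (x) f)) = theta([n1, n2] (x) f)
rhs = theta(mat_sub(mat_mul(n1, n2), mat_mul(n2, n1)), f)
```

Here both sides come out equal ([n1, n2] = E_13, so both give the vector [3, 0, 0]), as the associativity of matrix multiplication guarantees.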
For any r-dominant weight λ fix a maximal vector v λ (it is unique up to constants). For any weight α of n, fix an eigenvector n α ∈ n. Now suppose that there is an arrow λ → µ in the quiver. Then the vector space Hom(n ⊗E λ , E µ ) G is non-zero, and in particular is 1-dimensional. Notice that by definition, being given by the action of n, the arrow will send the weight λ into a weight µ = λ + α, for some root α ∈ Φ + of n (for we have g α · W λ ⊆ W λ+α ). Then fix the generator f λµ of Hom(n ⊗E λ , E µ ) G that takes n α ⊗ v λ → v µ . Once all the generators are fixed, from (3.4) we write the map θ uniquely as: (3.5) θ = Σ λ,µ g λµ f λµ , and thus we can associate to the arrow λ → µ (labelled f λµ ) exactly the element g λµ in Hom(V λ , V µ ). All in all: Definition 3.2. To any homogeneous vector bundle E on G/P we associate a representation [E] of the quiver Q G/P as follows. To any vertex λ ∈ Q 0 we associate the vector space V λ from the decomposition (3.2). To any arrow λ → µ we associate the element g λµ ∈ Hom(V λ , V µ ) from the decomposition (3.4). Notice that a different choice of generators would have led to an equivalent construction. 3.2. Second fundamental equivalence of categories. We introduce here the equivalence of categories between homogeneous vector bundles on G/P and finite dimensional representations of the quiver Q G/P . From Theorem 3.1 it is clear that by putting the appropriate relations on the quiver, namely the equality θ ∧ θ = θϕ, we can get the following: Theorem 3.3. [BK90, Hil94, ACGP03] Let G/P be a rational homogeneous variety of ADE type. The category of finite dimensional representations of the Lie algebra p is equivalent to the category of finite dimensional representations of the quiver Q G/P with certain relations R, and it is equivalent to the category of G-homogeneous bundles on G/P . We show here how one can derive the relations. For details, see [ACGP03]. Let λ, µ, ν ∈ Q 0 .
We start by defining the morphism φ λµν : Hom(n ⊗E λ , E µ ) ⊗ Hom(n ⊗E µ , E ν ) φ λµν − −− → Hom(∧ 2 n ⊗E λ , E ν ) by setting φ λµν (a ⊗ a ′ ) : (n ∧ n ′ ) ⊗ x → z, where a : n ⊗ x → y and a ′ : n ′ ⊗ y → z. Obviously then the image of the G-invariant part: φ λµν (Hom(n ⊗E λ , E µ ) G ⊗ Hom(n ⊗E µ , E ν ) G ) ⊆ Hom(∧ 2 n ⊗E λ , E ν ) G . In particular recall once the choice of constants has been made, there are fixed generators f λµ , where f λµ : n α ⊗ v λ → v µ , and α = λ − µ. Then if we set β = µ − ν, all in all: φ λµν (f λµ ⊗ f µν ) : (n α ∧ n β ) ⊗ v λ → v ν . Now consider the natural morphism ∧ 2 n → n sending n ∧ n ′ → [n, n ′ ]. It induces a morphism φ λν : Hom(n ⊗E λ , E ν ) φ λν − − → Hom(∧ 2 n ⊗E λ , E ν ). Once again it is clear that the invariant part φ λν (Hom(n ⊗E λ , E ν ) G ) is contained in Hom(∧ 2 n ⊗E λ , E ν ) G . Theorem 3.1 together with the splitting (3.5) entail that we have an equality in Hom(∧ 2 n ⊗E λ , E ν ) G : (3.6) λ,ν µ φ λµν (f λµ ⊗ f µν )(g λµ g µν ) + φ λν ([f λµ , f µν ])g λν = 0. Finally, let {c k λν } be a basis of the vector space Hom(∧ 2 n ⊗E λ , E ν ) G , for k = 1, . . . , m λν , m λν = dim(Hom(∧ 2 n ⊗E λ , E ν ) G ). Expand: φ λµν (f λµ ⊗ f µν ) = m λν k=1 x k λµν c k λν φ λν ([f λµ , f µν ]) = m λν k=1 y k λν c k λν , Then for every couple of vertices λ, ν, equality (3.6) gives us a system of m λν equations that the maps g γδ satisfy: (3.7) R λν k : µ∈Q 0 x k λµν g λµ g µν + y k λν g λν = 0 8 Definition 3.3. We define the relations R on the quiver Q G/P as the ideal generated by all the equations (that with a slight abuse of notation we keep calling R k ): R λν k : µ∈Q 0 x k λµν f λµ f µν + y k λν f λν = 0, for k = 1, . . . , m λν and for any couple of weights λ, ν ∈ Q 0 . 4. Simplicity 4.1. Simplicity of multiplicity free bundles. In this section we work on rational homogeneous varieties G/P , where G is complex, simple of type ADE. 
Let E be a rank r homogeneous vector bundle on G/P , and let [E] be the associated representation of the quiver Q G/P . Denote by Q| E the subquiver of Q G/P given by all vertices where [E] is non-zero and all arrows connecting any two such vertices. Clearly the support of Q| E has at most r vertices. Notice also that the representation [E] of Q G/P induces a representation of the subquiver Q| E . Call the vertices of Q| E {λ 1 , . . . , λ n }, with n ≤ r. Then the usual decomposition for the graded of E can be written: (4.1) gr E = ⊕ n i=1 E λ i ⊗ V i . Definition 4.1. A homogeneous vector bundle E of rank r is multiplicity free if dim V i = 1 for every i = 1, . . . , n in (4.1). So let now E be multiplicity free. Just by looking at the associated quiver representation [E], we will show that the only G-invariant endomorphisms that such a bundle can have are scalar multiples of the identity, i.e. that the isotypical component H 0 (End E) C = C. If this holds we call the bundle weakly simple. Proposition 4.1. Let E be a multiplicity free homogeneous vector bundle of rank r on G/P . Let k be the number of connected components of the quiver Q| E . Then H 0 (End E) C = C k . In particular if Q| E is connected, then E is weakly simple. Proof. Suppose first that k = 1, so that the subquiver Q| E is connected. Any element ϕ ∈ H 0 (End E) C is a G-invariant endomorphism ϕ : E → E. In particular we can look at ϕ as a morphism [E] → [E] between representations of the same quiver. This means that we can look at ϕ as a family of morphisms {ϕ i : V i → V i | i = 1, . . . , n}. The hypothesis that E is multiplicity free entails that in particular each ϕ i = k i Id and hence ϕ = (k 1 , . . . , k n ) is in fact just an element of C n . By definition of morphism of quiver representations, every time that there is an arrow λ i → λ j in the quiver Q| E , there is a commutative square whose horizontal arrows are the (same) map V i → V j and whose vertical arrows are multiplication by k i and k j .
This means that if we fix the first constant k 1 of ϕ then all ϕ is completely determined thanks to connectedness. So this proves that H 0 (End E) C ≤ C. On the other hand, notice we have homotheties, hence C ⊆ H 0 (End E) C , and the thesis follows for the case k = 1. The same argument applies for each connected component, and this completes the proof. 4.2. Simplicity of tangent bundles. The results of the previous section can be applied to a "special" multiplicity free homogeneous bundle: the tangent bundle T G/P , with G simple of ADE type. Let T G/P be the tangent bundle on G/P . Recall that any parabolic subgroup P = P (Σ) is given by a subset Σ ⊆ ∆ of simple roots. Define also the subset of negative roots Φ − P = Φ − \ Φ − (Σ). Notice that when P = B is the Borel, Σ = ∆ and Φ − B = Φ − . The bundle T G/P is a homogeneous bundle of rank r = dim G/P = |Φ − P |, whose weights are exactly the elements of Φ − P . Remark 4.1. It is convenient for us to take into account all the weights of the tangent bundle, and not only the highest weights. Obviously in the case of the Borel it doesn't make any difference. If P is any other parabolic subgroup of G, this means that instead of the tangent bundle T G/P we are considering its pull-back π * T G/P via the (flat!) projection π : G/B → G/P. The obvious vanishing R i π * O = 0 for i > 0, together with the Projection formula (see [Har77], II.5) guarantee that: H 0 (G/B, End(π * T G/P )) = H 0 (G/B, π * (End T G/P )) = H 0 (G/P, End T G/P ). Hence we are allowed to work on π * T G/P instead of T G/P . To simplify the notation we write T G/P := π * T G/P . Notice that the representation associated to any T G/P , P ⊇ B, is a subrepresentation of that associated to T G/B = T G/B . 
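Since the weights of T G/P are exactly the elements of Φ − P , multiplicity-freeness can be verified directly in small cases. The following brute-force check (plain Python, illustrative only, not from the paper) does it for G = SL 4 and P = B, where the weights are the six negative roots L i − L j , i > j:

```python
N = 4  # G = SL_4, so dim G/B = number of negative roots = N(N-1)/2 = 6

def root(i, j):  # L_i - L_j as an integer vector in Z^N
    v = [0] * N
    v[i], v[j] = 1, -1
    return tuple(v)

# Weights of the tangent bundle T_{SL4/B}: the negative roots L_i - L_j, i > j.
weights = [root(i, j) for i in range(N) for j in range(N) if i > j]

assert len(weights) == N * (N - 1) // 2 == 6  # = dim SL4/B = rank of T_{G/B}
assert len(set(weights)) == len(weights)      # all weights distinct: each occurs
                                              # once, so T_{G/B} is multiplicity free
```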
Now we make the easy but fundamental remark that for every homogeneous variety G/P , the rank T G/P (that is, the dimension of G/P ) coincides with the number of weights of the associated representation and with the number of vertices in the quiver representation [T G/P ]. Hence they must all have multiplicity one. Moreover, notice that in the ADE case whenever α = −β we have [g α , g β ] = g α+β . All in all: Theorem 4.2. For every homogeneous variety G/P the bundle T G/P is multiplicity free and connected. We will now show that in the case of a tangent bundle T G/P all endomorphisms are G-invariant endomorphisms, or in other words that H 0 (End(T G/P )) C = H 0 (End(T G/P )). Proof. This is a direct computation. We use here the notation for positive and negative roots Φ + and Φ − used for example in [FH91], and we call {L i } the standard basis of h * . In the A N −1 case (G = SL N ): Φ + = {L i − L j } 1≤i<j≤N and Φ − = {L i − L j } 1≤j<i≤N , and the fundamental Weyl chamber associated is the set: (4.2) Λ = { a i L i | a 1 ≥ a 2 ≥ . . . ≥ a N }. For the D N case (G = SO 2N ): Φ + = ({L i − L j } ∪ {L i + L j }) i<j and Φ − = ({L i − L j } ∪ {−L i − L j }) i>j , and the fundamental Weyl chamber associated: (4.3) Λ = { a i L i | a 1 ≥ . . . ≥ a N −1 ≥ |a N |}. By the Borel-Weil-Bott theorem [Bot57], an irreducible bundle E ν has nonzero H 0 (E ν ) exactly if and only if ν lies in the fundamental Weyl chamber. For the case of A N −1 : let ν be an irreducible summand of End T SL N /P . Hence ν is of the form α + β with α ∈ Φ + and β ∈ Φ − . Practically speaking, α is a vector in Z N having 1 at the i−th place, −1 at the j−th place and 0 everywhere else, with 1 ≤ i < j ≤ N ; similarly β has a −1 at the h−th place, a 1 at the k−th place and 0 everywhere else, with again 1 ≤ h < k ≤ N . What does a sum ν = α + β look like? Fix α. 
Then only six possibilities can occur for β, namely either: h < k ≤ i < j, or: h ≤ i ≤ k ≤ j, or: h ≤ i < j ≤ k, or: i ≤ h < k ≤ j, or: i ≤ h ≤ j ≤ k, or else: i < j ≤ h < k. One can check directly that the condition (4.2) is satisfied exactly when β = −α. In this case ν = 0 and H 0 (E 0 ) = C, which is what we wanted. Let us now move to the case D N . Here the situation is complicated by the fact that we deal with more roots, and thus with more possible combinations for the sum ν = α + β. Nevertheless the condition (4.3) for ν to belong to the fundamental Weyl chamber is stronger than that for A N −1 (4.2), thus making our life easier. A direct check shows that the thesis holds true for N = 4. So let us suppose we are in the case D N , with N ≥ 5. The root ν is a vector of Z N having at most 4 non-zero elements a i ∈ {±1, ±2} (thus since N ≥ 5 there is at least one coordinate equal to 0). Look at the last coordinate a N : if a N = 0, then there is no way for condition (4.3) to be satisfied, and we are done. If instead a N = 0, we look at the other coordinates: if all the other elements are non-zero, then it means that we are necessarily in the case D 5 , and such elements are alternating 1's and -1's, hence we are done again. If there is another a i = 0 with i = N − 1, we are also done. Finally, if we are in the case a N −1 = a N = 0, we repeat the argument above. We can go on until we either get to a 1 = a 2 = . . . = a N = 0, or we encounter an element that cannot satisfy (4.3), and this concludes the proof for the D-case. The proof for the three exceptional cases E 6 , E 7 and E 8 is nothing but a brute force check that one can do using any computer algebra system. Theorem 4.4. Let T G/P the tangent bundle on a flag manifold G/P , where G is a complex simple Lie group of type ADE, and P one of its parabolic subgroups. Then T G/P is simple. Proof. 
Theorem 4.2 together with Remark 4.1 imply that the isotypical component H 0 (End(T G/P )) C = H 0 (End(T G/P )) C = C. But from Lemma 4.3 we get that H 0 (End(T G/P )) = H 0 (End(T G/P )) C , and we are done. Stability 5.1. Simplicity and stability. Let us start this section with some basic definitions. Definition 5.1. Let H be an ample line bundle on a projective variety X of dimension d. For any coherent sheaf E on X define the slope µ H (E) as: µ H (E) = c 1 (E) · H d−1 rk E . E is called H-stable (respectively H-semistable) if for every coherent subsheaf F ⊂ E such that E/F is torsion free and 0 < rk F < rk E we have: µ H (F ) < µ H (E) (respectively ≤). This notion of stability is known as Mumford-Takemoto stability. Definition 5.2. In the same setting as above, E is called H-polystable if it decomposes as a direct sum of H-stable vector bundles with the same slope. It is a well-known fact that for vector bundles stability implies polystability and the latter implies semistability, see for example [Kob87]. Also stability implies simplicity, and the viceversa is not true in general (see [OSS80], or [Fai06] for a homogeneous counterexample). We now want to look at our homogeneous vector bundles from the point of view of differential geometry. A homogeneous variety G/P is in particular a homogeneous Kähler manifold. For an exhaustive introduction on Kähler-Einstein manifolds we refer the reader to [Bes87]. Here we content ourselves with quoting the results on Kähler-Einstein and Hermite-Einstein structures that we need. The following holds: Theorem 5.1 above implies that the tangent bundle T G/P admits a Kähler-Einstein structure, and hence in particular an Hermite-Einstein structure. If X is a compact Kähler manifold and E a holomorphic bundle over X, a Hermitian metric on E determines a canonical unitary connection whose curvature is a (1, 1)-form F with values in End E. The inner product of F with the Kähler form is then an endomorphism of E. 
Metrics which give rise to connections such that the endomorphism is a multiple of the identity are called Hermite-Einstein metrics. Indeed, the notion of an Hermite-Einstein connection originated in physics. Hitchin and Kobayashi made a very precise conjecture connecting this notion to that of Mumford-Takemoto stability, which is known as the Hitchin-Kobayashi correspondence. Uhlenbeck and Yau showed in [UY86] that the conjecture holds true for compact Kähler manifolds. Theorem 5.2. [UY86] A holomorphic vector bundle over a compact Kähler manifold admits an Hermite-Einstein structure if and only if it is polystable. As an immediate consequence we get that: Corollary 5.3. Let G/P a rational homogeneous variety of type ADE. Then the tangent bundle T G/P is polystable with respect to the anticanonical polarization −K G/P induced by the Hermite-Einstein structure. Recall now that a simple bundle is in particular indecomposable. Thus the polystability of the tangent bundles T G/P combined with their simplicity implies that the direct sum of stable bundles in which they decompose is reduced in reality to only one summand, or in other words that: Theorem 5.4. Let G/P a rational homogeneous variety of type ADE. Then the tangent bundle T G/P is stable with respect to the anticanonical polarization −K G/P induced by the Hermite-Einstein structure. 5.2. Some bounds on stability and polarizations in the A n case. A natural question arising from Theorem 5.4 is whether or not there are other polarizations having the same property of the anticanonical one, and in case we get a positive answer, can we describe them? This section contains an answer to these questions in some specific cases: in particular here we assume G = SL n+1 . 13 We start with flag manifolds of type F(0, n − 1, n): these are homogeneous varieties of dimension 2n − 1 and of the form SL n+1 /P , where P = P (Σ) is the parabolic obtained removing only the first and the last simple root of the Lie algebra, i.e. 
Σ = {α 1 , α n }. Proposition 5.5. Let F = F(0, n − 1, n) be the flag manifold of the form SL n+1 /P (α 1 , α n ), and set: m(n) = (−n + n √(4n 2 + 4n − 3)) / (2(n 2 + n − 1)). Then the tangent bundle T F is stable with respect to the polarization O F (a, b) if and only if it is semistable, if and only if: m(n)a ≤ b ≤ m(n) −1 a. Proof. Start by noticing that: F(0, n − 1, n) = P(Q P n ), meaning that we can look at our varieties as the projectivization F = P(Q P n ) of the quotient bundle Q P n ≃ T P n (−1) on P n . Hence we get the two projections α : F → P n and β : F → P n∨ . Hence Pic(F(0, n − 1, n)) = Z 2 is spanned by F = β * O P n∨ (1) = O F (1, 0) and G = α * O P n (1) = O F (0, 1). Recalling that the elements of F are couples (p, H) = (point, hyperplane) such that p ∈ H ⊂ P n , we also get the identification: F = {(p, H) | p ∈ H, p 0 ∈ H} and G = {(p, H) | p ∈ H, p ∈ H 0 }, for a fixed point p 0 and a fixed hyperplane H 0 . Moreover, we have two short exact sequences: (5.1) 0 → O F → π * Q * ⊗ O(1) rel → T rel → 0, (5.2) 0 → T rel → T F → π * T P n → 0. All in all, the (quiver associated to the) tangent bundle T F of these varieties has the simple shape displayed in (5.3): three vertices, of ranks n − 1, 1 and n − 1, with one arrow from the rank 1 vertex to each of the two rank n − 1 vertices. So gr T F has 3 irreducible summands, all with multiplicity 1 and with the ranks just described. Now that we have understood the tangent bundle, we can look at its subbundles. Of course there are more than two subbundles. Yet it is enough to check the stability condition only on the two homogeneous subbundles, in virtue of a criterion given by Rohmfeld in his paper [Roh91], and later refined by Faini in [Fai06]: Theorem 5.6 (Rohmfeld-Faini). Let H be an ample line bundle. If a homogeneous bundle E = E ρ is not H-semistable then there exists a homogeneous subbundle F induced by a subrepresentation of ρ such that µ H (F ) > µ H (E).
Since our tangent bundles have the particular configuration shown in (5.3), all we need to do is just look at the polarizations H = aF + bG such that: µ H (E ′ ) < µ H (T F ) and µ H (E ′′ ) < µ H (T F ), where E ′ and E ′′ are the two irreducible rank n − 1 subbundles corresponding to the two rank n − 1 vertices of (5.3). Knowing all the weights of the representation associated to our bundles, we easily compute their first Chern classes, so that the two inequalities above read: (5.4) (−λ 1 + nλ 2 ) · (aλ 1 + bλ 2 ) 2n−2 / (n − 1) < (nλ 1 + nλ 2 ) · (aλ 1 + bλ 2 ) 2n−2 / (2n − 1) and (nλ 1 − λ 2 ) · (aλ 1 + bλ 2 ) 2n−2 / (n − 1) < (nλ 1 + nλ 2 ) · (aλ 1 + bλ 2 ) 2n−2 / (2n − 1). Intersection theory is also easy to understand in this particular case; out of all products F i G j , i + j = 2n − 1, the only non-vanishing ones are (recall that we are pulling back from a P n !): F n−1 G n = F n G n−1 = 1, and F i G j = 0 for all other i + j = 2n − 1. We stress the fact that there is a complete symmetry P n ↔ P n∨ , and thus F ↔ G. Simplifying, (5.4) becomes the condition: T F is stable if and only if (n 2 + n − 1)b 2 + nab − n 2 a 2 > 0 and −n 2 b 2 + nab + (n 2 + n − 1)a 2 > 0. And from these two inequalities one easily gets that T F is stable with respect to H = aF + bG if and only if m(n)a ≤ b ≤ m(n) −1 a, where m(n) is defined as in the statement of the Proposition. The only thing left to check is the equivalence "stable ⇔ semistable", but this simply follows from the fact that the conditions for semistability are just the conditions (5.4) where we substitute the sign < with a ≤. But in reality equality never holds, for m(n) is an irrational coefficient (because 4n 2 + 4n − 3 = (2n + 1) 2 − 4 is never a perfect square), while we need (a, b) ∈ Z 2 . Remark 5.1. An interesting observation is that as n grows bigger m(n) approaches 1. Hence the cone of polarizations with respect to which T F is stable collapses to the line a = b, which is the one corresponding to the anticanonical.
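The behavior described in Remark 5.1 is easy to check numerically. The sketch below (plain Python, not from the paper) evaluates m(n), confirms 0 < m(n) < 1 and that m(n) increases toward 1, and verifies the irrationality observation by checking that 4n 2 + 4n − 3 is never a perfect square:

```python
from math import sqrt, isqrt

def m(n):
    # m(n) from Proposition 5.5
    return (-n + n * sqrt(4 * n**2 + 4 * n - 3)) / (2 * (n**2 + n - 1))

vals = {n: m(n) for n in (2, 20, 1000)}
# 0 < m(n) < 1 and m(n) -> 1: the cone m(n)a <= b <= m(n)^{-1} a of stable
# polarizations collapses onto the anticanonical line a = b.
assert all(0 < v < 1 for v in vals.values())
assert vals[2] < vals[20] < vals[1000]
# 4n^2 + 4n - 3 = (2n+1)^2 - 4 lies strictly between (2n)^2 and (2n+1)^2,
# so it is never a perfect square and m(n) is irrational.
assert all(isqrt(4 * n * n + 4 * n - 3) ** 2 != 4 * n * n + 4 * n - 3
           for n in range(2, 200))
```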
The collapsing process is illustrated in Figure 1 below, where we have drawn the cone for, respectively, n = 2 and n = 20 in the space of polarizations (a, b). The dotted line is the anticanonical polarization {a = b}. We wish to obtain the same type of characterization as Proposition 5.5 for other homogeneous varieties. The next simplest case after the flags with Picard group Z 2 is the full flag manifold F = SL 4 /B, with Pic(F) = Z 3 and dimension 6. The weights of the tangent bundle are the 6 positive roots of SL 4 . Before we can show what the quiver representation [T F ] looks like, we need to explain here how the relations on the quiver work. In the Borel case, Definition 3.3 can be made more explicit. For each root α, let e α ∈ g α be the corresponding Chevalley generator, and define the Chevalley coefficients N αβ by [e α , e β ] = N αβ e α+β , if α + β ∈ Φ + , and N αβ = 0 otherwise. Proposition 5.7. [ACGP03, Proposition 1.21] The relations R on the quiver Q G/P are the ideal generated by all the equations: R (α,β) = f λµ f µν − f λµ ′ f µ ′ ν − N αβ f λν for α < β ∈ Φ + and for any couple of weights λ, ν ∈ Q 0 , where α + β = λ − ν, µ = λ + α and µ ′ = λ + β. Let now G = SL N . For the sake of simplicity we indicate with e ij the element with weight L i − L j . Take a couple of roots α < β ∈ Φ + , α = L i − L j and β = L h − L k (with i < j). Note that in this case the non-zero coefficients N αβ are all ±1. The only possibility for the coefficient N αβ to be non-zero is if either j = h
All in all the relations that we need to put on the quiver Q F for a full flag manifold F = SL N /B are nothing but the Serre relations: R (L i −L h ,L h −L k ) = [e ih , e hk ] = e ik , (5.5) R (L i −L j ,L h −L i ) = [e ij , e hi ] = −e hj , (5.6) R (L i −L j ,L h −L k ) = e ij e hk − e hk e ij = 0, for i = k, j = h. (5.7) For N = 4, we will have in particular that [e 12 , e 34 ] = 0, so the corresponding arrows commute. For SL 4 /B the quiver representation [T F ] looks like in (5.8). We have indicated to which element of n correspond the arrows. (5.8) T F = • • e 23 O O e 12 / / / o / o / o • • e 34 O O e 12 / / / o / o / o • e 34 O O e 23 / / _ _ _ • The relations tell us that the square below is commutative: • e 12 / / / o / o / o • • e 34 O O e 12 / / / o / o / o • e 34 O O Now let's go back to stability computations. Again by Theorem 5.6 all we need to do is identify all the homogenous subbundles F of the tangent bundle T F , and then impose the stability condition µ H (F ) < µ H (T F ); this will give us necessary and sufficient condition for the polarization H to be such that T F is H-stable. All the homogenous subbundles we need to analyze are thus the following six: E 1 = • • • • • • E 2 = • • • • • • E 3 = • • • • • • E 4 = • • O O / / / o / o / o • • • • E 5 = • • • • • O O / / _ _ _ • E 6 = • • O O / / / o / o / o • • • O O / / _ _ _ • Now we need to compute the first Chern class of all these bundles. The elements of F are triples (p, ℓ, π)=(point, line, plane) such that p ∈ ℓ ⊂ π ⊂ P 3 . A basis for the Picard group is given by:    F = {(p, ℓ, π) | p ∈ π 0 , π 0 fixed } G = {(p, ℓ, π) | ℓ ∩ ℓ 0 = ∅, ℓ 0 fixed } H = {(p, ℓ, π) | p 0 ∈ π, p 0 fixed } They correspond to the pull-back (via the standard projection) of the tautological bundle O(1) from respectively P 3 (↔ F ), G(1, 3) (↔ G) and P 3 ∨ (↔ H). We underline the symmetry between F and H. Since we are looking for all polarizations O F (a, b, c) = aF + bG + cH such that for all i = 1, . . . 
, 6: (5.9) c 1 (E i )(aF + bG + cH) 5 rk E i < c 1 (T F )(aF + bG + cH) 5 6 , we are interested in intersections F i G j H k , where i + j + k = 6. Intersection theory brings us to:    F G 4 H = HG 4 F = F 2 G 2 H 2 = F G 3 H 2 = F 2 G 3 H = 2 F 3 G 2 H = F G 2 H 3 = F 3 GH 2 = F 2 GH 3 = 1 F i G j H k = 0 for all other i, j, k s.t. i + j + k = 6 With the help of a computer algebra system and the intersections above we see that the six inequalities (5.9) define a cone around the line {a = b = c} that corresponds to the anticanonical polarization −K F = O F (2, 2, 2). Figure 2 below shows a section of this cone cut by the plane {a+b+c = 3} orthogonal to the "anticanonical line". It is somewhat unexpected that the region that we obtain is not convex. In fact from general theory we learn that the area would be convex if we were considering all possible characters in the definition of stability, and not just the ones arising from geometric polarizations given by ampla line bundles like in our case. In the next section we will explain with some more detail the question of characters, stability and moduli spaces. Moduli and stability There is a notion of semistability of representations of quivers introduced by King in [Kin94], which is suitable to construct moduli spaces according to the Geometric Invariant Theory (GIT from now on). In their paper [OR06] Ottaviani and Rubei showed that King's notion of semistability for a representation [E] of the quiver Q G/P is in fact equivalent to the Mumford-Takemoto semistability of the associated bundle E on X = G/P , when the latter is a Hermitian symmetric variety. They thus obtain moduli spaces of G-homogeneous semistable bundles with fixed gr E. In this section we recall some of these results and show how they can be extended to our more general setting where X is any -not necessarily Hermitian symmetric-homogeneous variety. 
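King's notion of slope (semi)stability recalled in this section can be illustrated on the smallest possible quiver. In the sketch below (plain Python, not from the paper; the normalization µ σ (α) = σ · α / Σα is our simplified stand-in for the slope used in the text) a representation of the A 2 quiver v 1 → v 2 with dimension vector (1, 1) is semistable precisely when every subrepresentation has slope at most that of the whole:

```python
# Toy example of slope (semi)stability for quiver representations, in the
# spirit of [Kin94]: the A2 quiver v1 -> v2, dimension vector alpha = (1, 1),
# character sigma = (1, -1). Conventions (mu = sigma.alpha / sum(alpha)) are
# a simplified stand-in for those in the text.

def mu(sigma, alpha):
    assert sum(alpha) > 0
    return sum(s * a for s, a in zip(sigma, alpha)) / sum(alpha)

def sub_dimension_vectors(phi_is_zero):
    # Proper non-zero subrepresentations of a rep of v1 -> v2 with dim (1, 1):
    # (0, 1) is always a subrep (the sink); (1, 0) only if the map phi is zero,
    # since a subrep must satisfy phi(V1') ⊆ V2'.
    subs = [(0, 1)]
    if phi_is_zero:
        subs.append((1, 0))
    return subs

def semistable(sigma, phi_is_zero):
    total = mu(sigma, (1, 1))
    return all(mu(sigma, s) <= total for s in sub_dimension_vectors(phi_is_zero))

sigma = (1, -1)
assert semistable(sigma, phi_is_zero=False)     # phi != 0: only (0,1), mu = -1 <= 0
assert not semistable(sigma, phi_is_zero=True)  # phi == 0: (1,0) has mu = 1 > 0
```

The point of the example: semistability depends on the map, not just on the dimension vector, exactly as the moduli construction below depends on the orbit of the representation and not only on gr E.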
Lemma 4.3. Let G/P be any homogeneous variety of type ADE. For any G-module W ≠ C, H 0 (gr End(T G/P )) W = 0.
Figure 1. The cone for n = 2 (left) and n = 20 (right).
Figure 2. Polarizations for SL 4 /B.
Consider the moduli problem of homogeneous vector bundles E on X with the same gr E and thus with the same dimension vector α = (α λ ) ∈ Z |Q 0 | . Once we have made the choice of vector spaces V λ with dimension α λ , the isomorphism classes of representations of the quiver Q X with the same dimension vector α are in natural 1-1 correspondence with the orbits of the group GL(α) := Π λ∈Q 0 GL(V λ ) acting over R(Q X , α) := ⊕ a∈Q 1 Hom(V ta , V ha ) by (g · φ) a = g ha φ a g −1 ta , and in particular over the closed subvariety V X (α) ⊆ R(Q X , α) defined by the relations in our quiver. The affine quotient Spec(C[V X (α)] GL(α) ) is a single point, represented by gr E itself, and it thus has no interest for our purposes. Following [Kin94], we call a character of the category CQ − mod an additive function σ : K 0 (CQ − mod) → R on the Grothendieck group. (For the sake of simplicity we denote by CQ − mod the category of left modules on the path algebra (CQ, R) of the quiver Q with relations R: by writing only CQ it is understood that we are modding out the path algebra by the ideal of relations.) When σ takes integer values, there is an associated character χ σ for GL(α) acting on R(Q X , α).
More precisely, King shows that the characters χ σ : GL(α) → C * of GL(α) are given by: χ σ (g) = Π λ∈Q 0 det(g λ ) σ λ . We stress the fact that σ ∈ Hom(CQ − mod, Z) can be simply seen as a homomorphism that applied to E λ gives σ λ . A function f on V X (α) is relatively invariant of weight χ σ if f (g · x) = χ σ (g)f (x) for every g ∈ GL(α), and the space of such relatively invariant functions is denoted by C[V X (α)] GL(α),χ σ . So once we have fixed the dimension vector α and a character σ, we can define the moduli space M X (α, σ) by: M X (α, σ) := Proj(⊕ n≥0 C[V X (α)] GL(α),χ σ n ), which is projective over Spec(C[V X (α)] GL(α) ), hence it is a projective variety. In fact M X (α, σ) has a more geometrical description as the GIT quotient of the open set V X (α) ss of χ σ -semistable points. Fix an ample line bundle H (a polarization). Every ample line bundle H defines a character σ H by: σ H (E λ ) := c 1 (E λ ) · H n−1 , where n is the dimension of the underlying variety. Notice that given an F ∈ CQ − mod with dimension vector α and given a fixed character σ, we can define the slope of F with respect to σ (or slope of α w.r.t. σ): µ σ (F ) := (Σ λ σ λ α λ ) / (Σ λ α λ rk E λ ). An object is then called µ σ -(semi)stable if and only if it is σ-(semi)stable. Recall now from Theorem 3.1 that a homogeneous bundle E is determined by θ ∈ Hom(gr E, gr E ⊗ T X ) such that θ ∧ θ = ϕθ. Theorem 6.1. Let E be a homogeneous vector bundle on X, let H be a polarization and σ = σ H . The following facts are equivalent: (i) for every G-invariant subbundle K, we have µ σ (K) ≤ µ σ (E) (equivariant semistability); (ii) for every subbundle K such that θ E (gr K) ⊂ gr K ⊗ T X , we have µ σ (K) ≤ µ σ (E) (Higgs semistability); (iii) the representation [E] of Q X is σ-semistable, according to [Kin94] (quiver semistability); (iv) E is a χ σ -semistable point in V X (α) for the action of GL(α) [Kin94] (GIT semistability); (v) for every subsheaf K, we have µ H (K) ≤ µ H (E) (Mumford-Takemoto semistability). Proof. The equivalence (i) ⇔ (ii) follows from the fact that a G-invariant subbundle is exactly a subbundle K with θ E (gr K) ⊂ gr K ⊗ T X : this is just a rephrasing of the second fundamental equivalence of categories. The equivalence (iii) ⇔ (iv) is proved in [Kin94, Proposition 3.1]. In fact this equivalence holds true even for those characters σ that do not have a geometric interpretation as the one induced by a polarization that we chose.
Finally, the equivalence (i) ⇔ (v) is proved for example in [Mig96]. With the same reasoning one can prove that: Theorem 6.2. Let E be a homogeneous vector bundle on X, let H be a polarization and σ = σ H . The following facts are equivalent: (i) for every proper G-invariant subbundle K, we have µ σ (K) < µ σ (E) (equivariant stability); (ii) for every proper subbundle K such that θ E (gr K) ⊂ gr K ⊗ T X , we have µ σ (K) < µ σ (E) (Higgs stability); (iii) the representation [E] of Q X is σ-stable (quiver stability); (iv) E is a χ σ -stable point in V X (α) for the action of GL(α) [Kin94] (GIT stability); (v) E ≃ W ⊗ E ′ where W is an irreducible G-module, and for every proper subsheaf K ⊂ E ′ , we have µ H (K) < µ H (E ′ ) (Mumford-Takemoto stability). Proof. The proof is the same of Theorem 6.1. The equivalence (i) ⇔ (v) is proved in [Fai06]. We remark that case (v) in Theorem 6.2 is more involved with respect to the naive expectation coming from (v) in Theorem 6.1.

References
[ACGP03] L. Álvarez Cónsul and O. García-Prada, Dimensional reduction and quiver bundles, J. Reine Angew. Math. 556 (2003), 1-46. MR 1971137
[Bes87] A. L. Besse, Einstein manifolds, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], no. 10, Springer-Verlag, Berlin, 1987. MR 0867684
[BK90] A. I. Bondal and M. M. Kapranov, Homogeneous bundles, London Math. Soc. Lecture Note Ser., no. 148, Cambridge Univ. Press, 1990, pp. 45-55. MR 1074782
[Bot57] R. Bott, Homogeneous vector bundles, Ann. of Math. 66 (1957), no. 2, 203-248. MR 0089473
[Don87] S. K. Donaldson, Infinite determinants, stable bundles and curvature, Duke Math. J. 54 (1987), no. 1, 231-247. MR 0885784
[DW05] H. Derksen and J. Weyman, Quiver representations, Notices Amer. Math. Soc. 52 (2005), no. 2, 200-206. MR 2110070
[Fai06] S. Faini, On simple and stable homogeneous bundles, Boll. Unione Mat. Ital. Sez. B Artic. Ric. Mat. (8) 9 (2006), no. 1, 51-67. MR 2204900
[FH91] W. Fulton and J. Harris, Representation theory. A first course, Graduate Texts in Mathematics, no. 129, Springer-Verlag, New York, 1991. MR 1153249
[Har77] R. Hartshorne, Algebraic geometry, Graduate Texts in Mathematics, no. 52, Springer-Verlag, New York-Heidelberg, 1977. MR 0463157
[Hil94] L. Hille, Small homogeneous vector bundles, Ph.D. thesis, Universität Bielefeld, 1994.
[Hil96] L. Hille, Examples of distinguished tilting sequences on homogeneous varieties, Representation theory of algebras (Cocoyoc, 1994), CMS Conf. Proc., no. 18, Amer. Math. Soc., 1996, pp. 317-342. MR 1388058
[Hil98] L. Hille, Homogeneous vector bundles and Koszul algebras, Math. Nachr. 191 (1998), 189-195. MR 1621314
[Ise60] M. Ise, Some properties of complex analytic vector bundles over compact complex homogeneous spaces, Osaka Math. J. 12 (1960), 217-252. MR 0124919
[Kin94] A. D. King, Moduli of representations of finite-dimensional algebras, Quart. J. Math. Oxford Ser. (2) 45 (1994), no. 180, 515-530. MR 1315461
[Kob87] S. Kobayashi, Differential geometry of complex vector bundles, Publications of the Mathematical Society of Japan, no. 15, Princeton University Press, Princeton, NJ, and Iwanami Shoten, Tokyo, 1987. MR 0909698
[Mig96] L. Migliorini, Stability of homogeneous vector bundles, Boll. Un. Mat. Ital. B (7) 10 (1996), no. 4, 963-990. MR 1430162
[NS65] M. S. Narasimhan and C. S. Seshadri, Stable and unitary vector bundles on a compact Riemann surface, Ann. of Math. 82 (1965), no. 2, 540-567. MR 0184252
[OR06] G. Ottaviani and E. Rubei, Quivers and the cohomology of homogeneous vector bundles, Duke Math. J. 132 (2006), no. 3, 459-508. MR 2219264
[OSS80] C. Okonek, M. Schneider, and H. Spindler, Vector bundles on complex projective spaces, Progress in Mathematics, no. 3, Birkhäuser, Boston, 1980. MR 0561910
[Ram67] S. Ramanan, Holomorphic vector bundles on homogeneous spaces, Topology 5 (1967), 159-167.
[Roh91] R. Rohmfeld, Stability of homogeneous vector bundles on CP n , Geom. Dedicata 38 (1991), no. 2, 159-166. MR 1104341
[UY86] K. Uhlenbeck and S. T. Yau, On the existence of Hermitian-Yang-Mills connections in stable vector bundles, Comm. Pure Appl. Math. 39 (1986), no. S, 257-293. MR 0861491

Department of Mathematics, Mailstop 3368, Texas A&M University, College Station, TX 77843-3368, USA
E-mail address: [email protected]
Tunable Polymer/Air Bragg Optical Microcavity Configurations for Controllable Light-Matter Interaction Scenarios

Chirag Chandrakant Palekar and Arash Rahimi-Iman
Faculty of Physics, Materials Sciences Center, Philipps-Universität Marburg, D-35032 Marburg, Germany
Palekar et al., Tunable Polymer/Air Bragg Microcavity Light-Matter Interfaces, Manuscript 2021

Abstract: Complex optical systems such as high-quality microcavities enabled by advanced lithography and processing techniques paved the way to various light-matter interaction (LMI) studies. Without lattice-matching constraints in epitaxy, coating techniques or shaky open-cavity constructions, submicrometer-precise lithographic development of a polymer photoresist paves the way to polymer microcavity structures for various spectral regions based on the material's transparency and the geometrical sizes. We introduce a new approach based on 3D nanowriting in photoresist, which can be employed to achieve microscopic photonic Fabry-Pérot cavity structures with mechanically tunable resonator modes and polymer/air Bragg mirrors, directly on a chip or device substrate. We demonstrate by transfer-matrix calculations and computer-assisted modelling that open microcavities with up to two "air-Bragg" reflectors comprising alternating polymer/air mirror-pair layers enable compression-induced mode tuning that can benefit many LMI experiments, such as with 2D materials, nanoparticles and molecules.

DOI: 10.1002/pssr.202100182
arXiv: 2103.16548 (https://arxiv.org/pdf/2103.16548v1.pdf)
Corpus ID: 232417190
Introduction

Optical microcavities play an important role in the investigation of a wide range of research areas, such as light-matter interaction (LMI) [1][2][3][4], nonlinear optics [5][6], and quantum information processing [1][7][8]. Various high-quality microcavities [1][9][10][11] have been explored for decades and enabled the hunt for ultralow-threshold nanolasers [5][12][13], opened up fundamental cavity-QED experiments [14][15][16], and the study of Bose-Einstein-like condensation (BEC) of polaritons in solids [17][18][19][20]. Microcavities have reached popularity not only for conventional (more energy-efficient) lasers, but also for polariton physics [3][21], as well as the novel field of polariton chemistry [22], and became indispensable for optical quantum technologies [23][24][25]. In fact, the list of abundant research directions with confined light fields cannot be projected adequately in this short summary.

Recently, a variety of new and practical configurations of optical microresonators were used for the investigation of LMI with ultrathin van-der-Waals materials, such as two-dimensional (2D) transition-metal dichalcogenides (TMDC), colloidal quantum dots, fluorescent dyes, III-V semiconductors and many other materials [3][11][26]. For the investigation of LMI, be it (quantum) optoelectronic or optomechanic coupling, the most commonly used microcavities are (monolithic) planar Fabry-Pérot (FP) microcavities [14][27], open tunable fiber-based concave-mirror microcavities [28][29], (total-internal-reflection) whispering-gallery-mode resonators [30][31], (low-mode-volume) photonic crystal nanocavities [32][33][34], and plasmonic metal cavities (with possible ohmic losses) [3][35], as well as air-gap-type microresonators [36][37][38]. Among the various designs, the highly versatile, spectrally tunable and relatively simple open-cavity configurations have already covered a large spectrum of applications ranging from cavity quantum electrodynamics (CQED) [7][39] to (fiber-coupled) optoelectronic or optomechanical devices [29][40][41] as well as sensors [42][43][44]. Many different and unique approaches have already been demonstrated to improve the confinement of light and the functionality of cavity systems [1][3]. Some approaches such as photonic crystal membranes and whispering gallery modes do provide significant confinement of light in all the three

resonators. To achieve this, suitable 3D-printable micromirror structures need to be designed and developed for the incorporation of the desired active material prior to or after completion of the cavity structure, as appropriate for the given experiment. While mirror production can easily rely on metallization of surfaces, wavelength-tunable on-demand deposition of (top-)mirrors can be best addressed with layered dielectric mirrors based on the distributed Bragg reflector (DBR) concept, which can conveniently be formed by polymer/air layer pairs offering considerable refractive index contrast. In case the rigidity and elasticity allow for pressure-induced thickness changes of layered polymer/air-gap structures, even mechanically tunable optical microcavities can be envisioned.
Thus, research platforms for tunable Rabi splitting, BEC studies, polariton chemistry, Purcell-enhanced single-photon sources and field-enhancement-benefitting nonlinear optics can arise, considering the estimated reasonably high Q factors, wavelength or structure design flexibility, and stability of such a printed photonic microstructure. Here in this work, two promising printed cavity system configurations are discussed utilizing simulations of the optical properties of the microcavities with and without active materials based on the transfer matrix method (TMM). The here presented simulation study for different selected layer thicknesses in the polymer/air ("air-Bragg") microcavity, i.e. of air and the polymer material, is supported by additional stress analysis using a finite element analysis (FEA) for the examination of the mechanical stability of the designed structures. Thereby, we demonstrate the practicality and conceptual feasibility of mode-tunable strong-coupling experiments with a van-der-Waals semiconductor monolayer incorporated theoretically in the polymer/air microcavity. In the future, laser nanoprinting of (integrated) photonic structures directly on the chip with similar and more advanced polymer optical microreflectors and microresonators promises great flexibility for optoelectronic, optomechanic, nanophotonic and nonlinear-optics applications.

The Optical Microcavity Configurations

The optical mirrors in the form of DBRs, i.e. alternating refractive-index layers with thicknesses of m·λ0/(4n), where λ0 is the wavelength in vacuum, n is the refractive index of the material used and m = 1, 3, 5 and so forth (odd integer numbers), solely rely on the resist material (and air) and are designed to work well with WS2 monolayer excitonic resonances. In the following, λ = λ0/n denotes the wavelength in the respective medium. The first configuration (Fig.
1a) is composed of a conventional dielectric mirror, comprising 6 pairs of SiO2 and Ti3O5, as the bottom mirror as well as substrate for the active medium, and the polymer/air-Bragg structure as the top mirror (in the following, "air-Bragg" reflector). The second configuration (Fig. 1b) form. This is a unique approach as far as spectrally tunable microcavities are concerned. The refractive index of the IP-DIP photoresist in the visible spectral range is 1.52 at room temperature, which exhibits minor changes at low temperature [46]. This allows one to obtain a reasonable refractive index contrast of 1.52/1 between the two materials of the dielectric mirror. The refractive index contrast is not very high, but compared to typical III/V DBRs made of GaAs/AlAs with an index contrast in the range of 3.5/3, the dielectric mirror with air and polymer still yields an improvement. Nonetheless, with a large number of mirror pairs, an overall high reflectivity is achievable. For the TMM calculations, considering the active material to be thin layered semiconducting materials (TMDCs), the design wavelength of the air-Bragg reflector is targeted to be in the range of 600 nm to 800 nm (for the most popular TMDCs with their A-excitons in that range). Note that longer wavelengths, e.g. for near-infrared to infrared intralayer or interlayer excitonic species in TMDC monolayers or 2D heterostructures, respectively, provide more favorable printing conditions than for the here chosen WS2 A-exciton resonance. For detailed explanations regarding the TMM, we refer to the supplementary information of our previous work by Wall et al. [42]. A large number of mirror pairs typically results in a high reflectivity, as evidenced in the calculated reflectivity spectra for all four configurations (Fig. 2e). In a fully air-Bragg-based cavity system, the bottom reflector consists of 7 or 8 pairs and the top one of 6 pairs for better out-coupling.
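The quarter-wave design and pair-number dependence described above can be reproduced with a minimal normal-incidence transfer-matrix calculation. The sketch below is not the TMM code of Wall et al. [42] but an illustrative characteristic-matrix implementation with assumed values (design wavelength 620 nm, IP-DIP index 1.52, a glass-like substrate index of 1.5):

```python
import numpy as np

def layer_matrix(n, d, lam):
    # Characteristic matrix of one homogeneous layer at normal incidence.
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_reflectivity(layers, lam, n_in=1.0, n_sub=1.5):
    # Multiply the layer matrices and convert to a reflection coefficient.
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    (m11, m12), (m21, m22) = M
    r = (n_in * m11 + n_in * n_sub * m12 - m21 - n_sub * m22) / \
        (n_in * m11 + n_in * n_sub * m12 + m21 + n_sub * m22)
    return abs(r) ** 2

lam0 = 620e-9                # assumed design wavelength (WS2 A-exciton region)
n_poly, n_air = 1.52, 1.0    # IP-DIP and air refractive indices
pairs = 8
# quarter-wave thicknesses d = lam0 / (4 n) for each material
stack = [(n_poly, lam0 / (4 * n_poly)), (n_air, lam0 / (4 * n_air))] * pairs

print(stack_reflectivity(stack, lam0))   # close to 1 at the Bragg wavelength
```

Despite the moderate 1.52/1 index contrast, eight polymer/air pairs already push the peak reflectivity above 99%, consistent with the trend discussed in the text; changing `pairs` reproduces the pair-number dependence of Fig. 2e.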
The air-Braggs with 7 and 6 pairs possess maximum reflectivities of 98.5% and 95%, respectively (Fig. 2e). Next, the influence of the layer thickness on the stopband width is briefly summarized. It can be clearly seen that the stopband width changes drastically as a function of the layer thickness. The stopband width for the λ/4, 3λ/4, 5λ/4 and 7λ/4 air-Bragg reflectors amounts to approximately 100 nm, 60 nm, 40 nm and 25 nm, respectively (Fig. 2f).

Stress Analysis of Tunable Polymer/air-Bragg Microcavities

The crucial stress analysis based on FEA was performed on the aforementioned polymer/air-Bragg reflector designs, using a computer-aided design (CAD) tool (see Methods section). Here, Autodesk Inventor allows simulating the practical pressure-affected cavity configurations, which employ air-Bragg reflectors towards their applications in tunable open-microresonator devices. This is possible due to material-specific mechanical properties allocated to the structure's CAD model. The physical and mechanical properties of the photoresist IP-DIP are summarized in Tab. 1. The CAD of air-Braggs for various layer thickness configurations is shown in Fig. 3 along with the stress analysis simulation. The air-Bragg reflectors consist of 8 layers with m = 1, 3, 5, 7 quarter-wavelength layer thickness (Fig. 3a, b, c, Our simulation-based mechanical analysis indicates that the λ/4 air-Bragg structure is unstable, as can be evidenced by the strong bending and bunching of the photoresist layers. Besides, the air-Braggs with layer thickness 3λ/4 (Fig. 3b), 5λ/4 (Fig. 3c) and 7λ/4 (Fig. 3d) exhibit a clearly more stable behavior under the influence of gravity and external pressure due to the improved rigidity. As can be deduced from TMM calculations based on extracted structural information, the effective deformation (in terms of thickness reduction) of the air layers in air-Bragg structures influences the overall stopband of the microcavity.
The stopband of the individual air-Bragg reflector experiences an increasing blue-shift in its spectral position when the external pressure is gradually increased as applied on the upper surface of the structure (indicated by double arrows in Fig. 3). The stress analysis method used in this work can also be utilized to study the behavior of the cavity structures upon application of mechanical pressure. To address the targeted effect of cavity-mode tunability by external pressure, two different examples of (printable) cavity configurations under a variation of the applied pressure are discussed, namely (a) the air-Bragg/DBR microcavity and (b) the air-Bragg/air-Bragg microcavity (see Fig. 1a and b, respectively). To begin with, Fig. 4a demonstrates the tunability induced by applied pressure in the range of 0 to 50 MPa (corresponding to a uniform force of maximally 0.1 N on the two side bars) on the 5λ/4 air-Bragg/DBR microcavity, which causes the quality factor Q = E/ΔE (that is, mode energy over linewidth, i.e. full width at half maximum, FWHM) to change drastically. The cavity length of 2.6 µm, with λ being 620 nm, allows us to determine the cavity mode (C) as q = 8. The cavity mode q = 8 clearly changes its spectral position upon application of pressure (Fig. 4b), which compresses the overall multilayered air-Bragg structure. In the waterfall diagram of Fig. 4b, the TMM-calculated reflectivities for the microcavity with an arbitrary increment of pressure in steps of 10 MPa are displayed. The resonance of the cavity mode was adjusted to be resonant with the 617 nm A-exciton mode in WS2 at room temperature by fine-tuning the cavity spacer thickness and performing the TMM calculations for this configuration in the absence of external pressure.
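The quality factor Q = E/ΔE quoted above can be extracted numerically from any calculated reflectivity spectrum. The following sketch (again not the authors' code) estimates Q from the FWHM of a synthetic Lorentzian cavity dip; center energy and linewidth are illustrative assumptions:

```python
import numpy as np

def q_factor(energy, reflectivity):
    # Estimate Q = E0 / FWHM for a single reflectivity dip sampled on `energy`.
    i0 = np.argmin(reflectivity)
    half = 0.5 * (reflectivity.max() + reflectivity.min())
    inside = energy[reflectivity < half]      # points inside the dip
    return energy[i0] / (inside.max() - inside.min())

# synthetic Lorentzian dip: E0 = 2.0 eV, FWHM = 1 meV  ->  Q near 2000
E = np.linspace(1.99, 2.01, 20001)
gamma = 0.001                                 # assumed linewidth (eV)
R = 1 - 1 / (1 + ((E - 2.0) / (gamma / 2)) ** 2)
print(q_factor(E, R))
```

Applied to the TMM spectra at each pressure step, such a routine yields the Q-versus-pressure trend of Fig. 4a; note it assumes a single, well-resolved dip within the scanned window.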
This ensures that one directly obtains resonance conditions at elevated temperatures around 300 K and can also reach a red photon-exciton detuning at low temperature, which is beneficial for pressure-based cavity mode tuning in strong-coupling measurements in cryogenic experiments. An example of the calculated strong-coupling situation in a WS2-air-Bragg microcavity for different pressure levels is displayed in Fig. 5. structure visually delivered by the FEA simulation. On the other hand, the expected blue-shift of the mode is attributed to the compressed structure with reduced air-gap thicknesses, which is seemingly sufficient in order to obtain the tuning effect. Nonetheless, owing to the increased thickness disproportionality in the polymer/air configuration for increased pressure levels, the resonance conditions in the cavity suffer and a gradually reduced Q factor is obtained. In principle, the SiO2/Ti3O5 DBR stopband width (150 nm) is much larger than that of the 5λ/4 air-Bragg (40 nm), which allows one to practically shift the stopband of the air-Bragg over a wider spectral region by external pressure. However, a large structural compression may lead to a relatively strong deformation of the air-Bragg configuration from the actual design, which influences not only the spectral position of the cavity mode and stopband, but also the Q factor of the cavity mode (as indicated in Fig. 4). The second approach pursued utilizes a fully air-Bragg-based microcavity design (Fig. 1b), and, in this study, the homogeneous external pressure is applied on the whole cavity structure (on its side bars). This particular scenario provides the unique opportunity to investigate light-matter coupling scenarios due to the flexibility with respect to mode detunings, i.e. the energy difference between cavity and emitter modes, in a substrate-independent fashion. Moreover, it can be used to insert suspended 2D-material sheets with support structures, if appropriately designed, as an additional tool to tweak the LMI by changing the position of the ML with respect to the standing electromagnetic (EM) fields inside the cavity. As a consequence of suspension, also the substrate-induced impurities and all other consequent effects get automatically eliminated. In this configuration, the cavity spacer is, similar to the first example, proportionally reduced together with the air gaps in the Bragg sections. Again, a gradual reduction of the Q factor is obtained (Fig. 6a) and the overall cavity stopband experiences a comparable shift. Figure 6b demonstrates
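The anticrossing behavior targeted by these strong-coupling calculations follows the standard two-coupled-oscillator picture. As a hedged illustration (not the authors' TMM model), the sketch below diagonalizes the 2x2 exciton-photon Hamiltonian for the WS2 A-exciton at 2.01 eV, with the coupling chosen so that 2g reproduces the 29 meV Rabi splitting quoted for Fig. 5; sweeping the cavity energy stands in for the pressure tuning:

```python
import numpy as np

E_x = 2.01    # eV, WS2 A-exciton energy used in the text
g = 0.0145    # eV, assumed coupling; 2g = 29 meV matches the quoted Rabi splitting

def polariton_branches(e_cav):
    # Eigenvalues of the coupled-oscillator Hamiltonian [[E_c, g], [g, E_x]].
    delta = e_cav - E_x
    mean = 0.5 * (e_cav + E_x)
    half_split = 0.5 * np.sqrt(4 * g**2 + delta**2)
    return mean - half_split, mean + half_split   # lower, upper polariton

# pressure tuning sweeps the cavity mode through the exciton resonance
e_cav = np.linspace(1.95, 2.07, 121)
lp, up = polariton_branches(e_cav)
print((up - lp).min())   # minimum splitting = 2g at zero detuning (anticrossing)
```

Far from resonance the branches approach the bare photon and exciton lines, while at zero detuning they remain separated by 2g — the clear anticrossing that distinguishes strong from weak coupling in the calculated reflection maps.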
Moreover, it can be used to insert suspended 2Dmaterial sheets with support structures, if appropriately designed, as an additional tool to tweak the LMI by changing the position of the ML with respect to the standing electromagnetic (EM) fields inside the cavity. As a consequence of suspension, also the substrate-induced impurities and all other consequent effects get automatically eliminated. In this configuration, the cavity spacer is similar to the first example proportionally reduced together with air gaps in the Bragg sections. Again, a gradual reduction of the Q factor is obtained (Fig. 6a) and the overall cavity stopband experiences a comparable shift. Figure 6b demonstrates Conclusion To summarize, two spectrally-tunable microcavity configurations based on polymer/air Bragg This implies that different light-matter coupling scenarios with flexibility in the cavity-emitter resonance detuning, even after production, can be realized with the help of such polymer/air-based optical microcavities. In the future, precise laser nanoprinting of (integrated) photonic structures directly on the chip will be a common practice, and an approach to achieve optical microreflectors and microresonators such as the here presented one can be very useful in many specific scenarios, including for LEDs and photovoltaics, nanolasers and nonclassical light sources, optical filters or couplers and nonlinear optical elements, as well as optical sensing through LMI. Methods Stress analysis of CAD structures: The stress analysis function provided by Autodesk Inventor Professional 2019 was used to determine the impact of vertical external pressure (force per area) on different polymer/air-Bragg structures using CAD models. 
The stress analysis function based on a finite element analysis (FEA) allows one to assign the applied force on the surface of the CAD model and to obtain mechanical deformation for every setting of externally applied force (represented by the double arrow in the shown figure) and under influence of gravity (single arrow). The hereby obtained prediction of the overall structural response was later used to model reflectivity spectra (using manually extracted position changes of the layered structure for given pressure level). Calculation of spectra: The targeted microreflector and microresonator systems were modeled using the transfer matrix method (TMM) regarding their standing-wave light-field profile and reflectivity spectra. Based on theoretical data and considerations with chosen design parameters, reflectivity spectra towards strong light-matter coupling with a virtual 2D semiconductor inside the cavity were equally obtained. The optical properties such as reflection, transmission and angle-resolved spectra of multi-layered open cavity structures were calculated using the TMM-based simulation code as used in Wall et. al [42]. Step 2 Supplementary Information The air-Bragg-reflector microcavity with inserted 2D membrane Air-Bragg-reflector microcavity Step 3 a Substrate Air-Bragg (7 pairs) spatial dimensions, whereas open cavity configurations with an air-gapped microresonator such as fiber-based FP cavities provide intrinsic tunability in both the spectral and spatial domains [10][29][41][42][45] as flexible light-matter interfaces. Often, a top-down nanotechnology approach is used to define precise (monolithic) resonator structures, or movable reflectors are combined to form tunable open resonators. 
In contrast, 3D (nano-)printing offers on-demand bottom-up production of cavity configurations nearly arbitrarily on various substrates and in combination with different active materials, around quantum emitters, in combination with fluids and gases, and even in a disposable fashion. The tunable nature of open (air-gap) FP microcavities is a key advantage which provides in-situ control over LMI through longitudinal mode tuning, lateral mode positioning, compatibility with different (nano-)materials and access to the intracavity space. However, open microcavity configurations are typically susceptible to vibrations and unstable at relatively large cavity length. In contrast, nanoprinting of FP microcavities around a target material could open up a new path to tailor-made optical Figure 1 : 1Schematic of the possible microcavity configurations incorporating the printed polymer/air-Bragg reflectors. a) "Air-Bragg"-Ti 3 O 5 / SiO 2 DBR microcavity. b) Polymer/air-Bragg-reflector microcavity. Figure 2a - 2ad shows the stopband of an air-Bragg reflector with different layer pair numbers, for four different layer thickness configurations, considering the design wavelength ( ) to be around 620 nm in air (2.0 eV). The air-Bragg reflector with /4 layer thickness exhibits a reflectivity close to 1 over a 100 nm range when composed of 8 mirror pairs (of air and IP-DIP). The reflectivity as a function of the layer thickness does not change significantly, but it expectedly relies on the number of pairs in the air-Bragg d, respectively) and cover each an area of 50 x 50 μm 2 . The gradual increment or decrement in the applied pressure leads to the deformation of the polymer parts of the air-Bragg structures. The pressure is solely applied on the (here two opposing) side pillars/bars which mechanically support the quasi-free-standing sub-micrometer-thick polymer layers in the vertical microstructure. 
The upper surface area of each pillar is 20 x 50 μm 2 .To fulfill their purpose, the deformation of these pillars causes a noticeable change in polymer layer separation, which gives the overall microresonator platform the intended functionality to tune/detune cavity modes by deforming the air-Bragg structure. The application of external pressure on such a small structure modifies the thickness of all air layers considerably while leaving the polymer layers basically unchanged. Thereby, the structural modification results in the desired shift of the reflectivity spectrum and ultimately the spectral position of the optical cavity modes in a complete microresonator. However, the tunability of the air-Bragg reflector is limited, on the one hand, by the pressure-bearing capacity (i.e. maximum compression and expansion) of the photoresist IP-DIP, and on the other hand, by the pressure degree, at which the deformation still provides a functioning DBR based on the pressure-affected effective layer thickness ratio between air and polymer layers. Figure 3 :) 4 ⁄ b) 3 4 ⁄ c) 5 4 ⁄ and d) 7 4 ⁄ 344Finite element analysis (FEA) of the polymer/air-Bragg reflector with different layer thickness configurations. FEA simulations for air-Braggs with alayer thickness demonstrate a gradual deformation of that structure upon application of a uniform pressure of 100 MPa on the side bars of the structure (double arrows). The central arrow indicates the force of gravity applied to the overall structure. The area shaded blue experiences less induced deformation and less pressure (densification) compared to the green and particularly the red colored areas. Here, the floating layers remain unaltered. Note that the thin black lines in the background indicate the initial shape of the structure in the absence of external pressure. In fact, gravity leads to the pronounced bending of the thinnest and, thus, least rigid layers. 
Figure 4 : 4a) Theoretical Q factor of the cavity mode (q = 8) as a function of applied pressure based on calculated reflectivity spectra shown in (b) as waterfall diagram with constant vertical offset of 1. The thickness reduction of air layers in the polymer/air-Bragg structures alters the spectral position and Q factor of the cavity mode q = 8 due to the material structure deformation obtained through the uniformly applied pressure. Figure 5 . 5The theoretical reflectivity spectrum for each pressure setting is obtained by manually reading out the positions of the individual layers of the (compressed) structure from the FEA simulation results. Thus, the altered air-Bragg structure (with pressure-dependent layer thicknesses) in 1D representation can be fed into the TMM model. Accordingly, the Q factors of the mode q = 8 at 30 and 40 MPa exhibit an unnatural trend, which results from the read-out uncertainties for the respective underlying layer Calculated angled-resolved reflection for air-Bragg/ 2 /planar-DBR microcavity. a)Calculated angleresolved reflection spectrum (false-color contour diagram) for the 3λ 4 ⁄ air-Bragg/DBR microcavity illustrating a Rabi splitting of 29 meV obtained with a cavity length of 0.77 µm. Upper (UP) and lower polariton (LP) branches labeled with exciton resonance (black horizontal line) of WS 2 at 2.01 eV featuring a hypothetical linewidth of 30 meV, for the bare cavity mode (q = 3). b) Calculated reflection spectra at various exciton-photon resonance detunings demonstrating tunable control over light-matter coupling with clear anticrossing behaviour. The mode detuning is an effect of the gradual application of pressure upon the microcavity structure. In the model, the WS2 monolayer is directly placed on the dielectric mirror's top facet, on which the air-Bragg including a cavity spacer is placed (see Fig. 1a). 
Figure 6 6: a) The influence of the applied pressure (indicated by blue arrows) on the calculated Q factor with schematic of the polymer/air Bragg-reflector microcavity design (inset). b) Plot of calculated reflectivity spectra with constant vertical offset of 1, showing the tunability of the polymer/air Bragg-reflector microcavity when a homogeneous pressure is applied (varied between 0 and 50 MPa). the tuning of modes in the calculated reflectivity spectra of the air-Bragg microcavity system upon application of external pressure ranging between 0 and 50 MPa. The step-wise reduction of air layer thicknesses causes a controllable blue-shift of the cavity mode's spectral positions (q = 3 and 4), which is accompanied by a reduction of the Q factor with increasing pressure level. The crucial stress analysis of the air-Bragg structures for tunability of cavity modes with sufficiently high Q factors let us conclude that the targeted strong light-matter coupling scenarios can be theoretically realized. Incorporating quantum dots, florescent dyes or even TMDC monolayers in air-Bragg/DBR and all-air-Bragg microcavities can open the path to flexible resonator-emitter systems for various LMI experiments. Additionally, this approach delivers in-situ control over LMI by an applied vertical force on the surface of the device (i.e. mechanical pressure), which provides the necessary mode tunability by compression of the microcavity structures. Inspired by these theoretical results, work is ongoing towards the optimization of the printing parameters for such air-Bragg structures to obtain the overall optical quality and mechanical stability along with the investigation of various approaches to systematically induce the necessary thickness-reducing deformations in polymer/airlayers in a controlled and controllable fashion for larger and, foremost, predictable tunability of highquality resonator modes. Figure S2 . S2Numerical calculations for an all-air-Bragg-reflector microcavity. 
a) Schematic of air-Braggreflector microcavity with suspended WS 2 monolayer (red color slab), indicating the step-wise assembly from left to right in three steps. b) Field distribution inside the air-Braggs microcavity with WS 2 ML at field maximum. 1 and 2 are the separation between monolayer and the upper and lower air-Bragg reflector, respectively. c) Calculated angle-resolved reflection (false-color contour plot) of the closed cavity consisting of air-Bragg reflectors with 5 4 ⁄ layer thickness which exhibits a Rabi splitting of 13 ± 3 meV at a total cavity length of 3.4 . The upper (UP) and lower polariton (LP) branches are labeled with the exciton resonance of WS 2 at 2.01 eV (black horizontal line), for the bare cavity mode (q = 11). Inset: line spectrum of the microcavity reflection at normal incidence angle, corresponding to almost zero detuning between exciton and photon mode (spectral resonance). aims at directly printing two air-Braggs on top of one-another with an appropriate distance(cavity spacer) between them to form a microresonator incorporating the active medium, which needs to be (in most cases) inserted in an intermediate step. For layered materials, this is sketched in the Supplementary Information. Later on, the desired tunability of cavity modes can be introduced by application of mechanical pressure on the polymer/air microstructure. If appropriately constructed (with a stable enough configuration), in principle, one can compress mirrors independently, i.e. only the underlying mirror, only the top mirror, or on demand both simultaneously. In the simple case, both mirrors will be addressed by an external (here vertical) pressure simultaneously. This air-Bragg- reflector cavity also allows one to have active material placed on the bottom DBR in contact manner, provided that the terminating layer features an adjusted thickness, or incorporated in suspended (air and polymer) layer thickness. 
Figure 2: Stopband and reflectivity dependencies of the polymer/air Bragg reflector for different layer thicknesses. a)-d) Calculated reflectivity spectra for λ/4, 3λ/4, 5λ/4 and 7λ/4 material-specific layer thicknesses, respectively (normalized reflectivity versus energy/wavelength, for 4 to 8 mirror pairs). e) Maximum reflectivity, given at the Bragg wavelength, as a function of the mirror-pair number; here, the calculated maximum reflectivity exceeds 99% for 8 mirror pairs. f) Extracted stopband width (for 8 pairs) as a function of the layer thickness. Accordingly, for a λ/4 layer thickness the total stopband width is 100 nm, whereas for 7λ/4 layers it becomes 25 nm.

Table 1: Key physical and mechanical properties of IP-DIP resin. (If not indicated by asterisks, the values are from Nanoscribe GmbH [47]; *Ref. [48], **Ref. [49].)

Resin  | Density (liq.) (g/cm³) | Density (s) (g/cm³) | Young's modulus (GPa) | Hardness (MPa) | Poisson's ratio | Refractive index
IP-DIP | 1.14-1.19              | 1.2*                | 4.5                   | 152            | 0.35**          | 1.52

Polymer/air Bragg reflectors (air-Braggs) with an accessible open cavity spacer were discussed for weak and strong LMI, such as with various types of nanomaterials, molecules as well as quantum dots.
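The pair-number scaling quoted above (maximum reflectivity exceeding 99% for 8 mirror pairs) can be reproduced with a standard transfer-matrix calculation for a quarter-wave polymer/air stack. The sketch below is not the authors' TMM code; it is a minimal normal-incidence characteristic-matrix implementation in which the polymer index n = 1.52 is taken from Table 1, while the Bragg wavelength (632 nm), the glass substrate (n = 1.5) and incidence from air are illustrative assumptions.

```python
import numpy as np

def dbr_reflectivity(wavelength, n_pairs, n_hi=1.52, n_lo=1.0,
                     bragg=0.632, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectivity of a quarter-wave (n_hi/n_lo)^N stack,
    computed with the transfer-matrix method (characteristic matrices).
    Wavelengths in micrometres; illustrative parameter values only."""
    M = np.eye(2, dtype=complex)
    for _ in range(n_pairs):
        for n in (n_hi, n_lo):
            d = bragg / (4.0 * n)                    # quarter-wave thickness
            delta = 2.0 * np.pi * n * d / wavelength  # phase thickness
            L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ L
    B, C = M @ np.array([1.0, n_sub])                # boundary fields
    r = (n_in * B - C) / (n_in * B + C)              # amplitude reflectance
    return float(abs(r) ** 2)
```

Sweeping the wavelength instead of the pair number reproduces the stopband behaviour shown in Figure 2.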
Building upon the capability of 3D nanoprinting these polymer/air layered structures with sub-micrometer precision, this study explored the concept of mechanical-pressure-induced cavity mode shifts through air-Bragg structure compression. Our modelling work indicates that, by using state-of-the-art technology, one can in principle realize optical microcavity systems on demand for various spectral regions, provided that the target wavelength is compatible with the printing precision and minimal feature size (voxel size) of the 3D nanoprinting technique. The crucial stress analysis based on FEA simulations reveals the tunable nature of the polymer/air Bragg structures upon pressure application, leading to a controllable energy shift of the microresonator modes. Such a versatile light-matter interface, which can be conveniently deposited on different surfaces at the position of interest, consists at minimum of one such "air-Bragg" reflector and can, for the discussed designs, theoretically exhibit Q factors up to 5000.

Acknowledgement
The authors would like to thank F. Wall for providing her optimized TMM frame for 2D-materials cavity simulations, and Dr. Henning for helpful discussions and insights on 3D nanoprinting. Access to the clean room of the MiNa Lab at Justus Liebig University (JLU), Giessen, through the subscription of the Semiconductor Photonics Group, Marburg, is acknowledged. The authors are furthermore grateful to the (former) members of their 2D Materials team, Marburg, for fruitful discussions.

Author contributions
ARI initiated the study and conceived the concepts for printed tunable microcavities for LMI with 2D materials and nanoparticles. CCP performed the theoretical optical analysis using the TMM and simulated mechanical properties. CCP and ARI designed the structures, outlined the simulations and evaluated the data.
The results were summarized in a manuscript by both authors.

Corresponding author: [email protected]

Authors' statement/Competing interests
The authors declare no conflict of interest.

Additional information
Supplementary Information accompanies this paper.

References
[1] K. Vahala, Optical Microcavities. World Scientific, 2004.
[2] I. Carusotto and C. Ciuti, "Quantum fluids of light," Rev. Mod. Phys., vol. 85, no. 1, pp. 299-366, Feb. 2013, doi: 10.1103/RevModPhys.85.299.
[3] A. V. Kavokin, J. J. Baumberg, G. Malpuech, and F. P. Laussy, Microcavities, vol. 1. Oxford University Press, 2017.
[4] B. Kolaric, B. Maes, K. Clays, T. Durt, and Y. Caudano, "Strong light-matter coupling as a new tool for molecular and material engineering: Quantum approach," Adv. Quantum Technol., vol. 1, no. 3, p. 1800001, Dec. 2018, doi: 10.1002/qute.201800001.
[5] D. S. Dovzhenko, S. V. Ryabchuk, Y. P. Rakovich, and I. R. Nabiev, "Light-matter interaction in the strong coupling regime: Configurations, conditions, and applications," Nanoscale, vol. 10, no. 8, pp. 3589-3605, Feb. 2018, doi: 10.1039/c7nr06917k.
[6] X. C. Zhang, A. Shkurinov, and Y. Zhang, "Extreme terahertz science," Nat. Photonics, vol. 11, no. 1, pp. 16-18, Jan. 2017, doi: 10.1038/nphoton.2016.249.
[7] J. Raimond and G. Rempe, "Cavity quantum electrodynamics: Quantum information processing with atoms and photons," in Quantum Information, Wiley, 2016, pp. 669-689.
[8] S. Slussarenko and G. J. Pryde, "Photonic quantum information processing: A concise review," Appl. Phys. Rev., vol. 6, p. 041303, 2019, doi: 10.1063/1.5115814.
[9] A. M. Flatae et al., "Optically controlled elastic microcavities," Light Sci. Appl., vol. 4, no. 4, p. 282, Apr. 2015, doi: 10.1038/lsa.2015.55.
[10] F. Li, Y. Li, Y. Cai, P. Li, H. Tang, and Y. Zhang, "Tunable open-access microcavities for solid-state quantum photonics and polaritonics," Adv. Quantum Technol., vol. 2, no. 10, p. 1900060, Oct. 2019, doi: 10.1002/qute.201900060.
[11] S. Reitzenstein and A. Forchel, "Quantum dot micropillars," J. Phys. D: Appl. Phys., vol. 43, no. 3, p. 033001, 2010, doi: 10.1088/0022-3727/43/3/033001.
[12] S. Wu et al., "Ultra-low threshold monolayer semiconductor nanocavity lasers," arXiv:1502.01973.
[13] Y. J. Lu et al., "All-color plasmonic nanolasers with ultralow thresholds: Autotuning mechanism for single-mode lasing," Nano Lett., vol. 14, no. 8, pp. 4381-4388, Aug. 2014, doi: 10.1021/nl501273u.
[14] C. Weisbuch, M. Nishioka, A. Ishikawa, and Y. Arakawa, "Observation of the coupled exciton-photon mode splitting in a semiconductor quantum microcavity," Phys. Rev. Lett., vol. 69, no. 23, pp. 3314-3317, Dec. 1992, doi: 10.1103/PhysRevLett.69.3314.
[15] J. P. Reithmaier et al., "Strong coupling in a single quantum dot-semiconductor microcavity system," Nature, vol. 432, no. 7014, pp. 197-200, Nov. 2004, doi: 10.1038/nature02969.
[16] T. Yoshie et al., "Vacuum Rabi splitting with a single quantum dot in a photonic crystal nanocavity," Nature, vol. 432, no. 7014, pp. 200-203, Nov. 2004, doi: 10.1038/nature03119.
[17] H. Deng, G. Weihs, C. Santori, J. Bloch, and Y. Yamamoto, "Condensation of semiconductor microcavity exciton polaritons," Science, vol. 298, no. 5591, pp. 199-202, Oct. 2002, doi: 10.1126/science.1074464.
[18] J. Kasprzak et al., "Bose-Einstein condensation of exciton polaritons," Nature, vol. 443, no. 7110, pp. 409-414, Sep. 2006, doi: 10.1038/nature05131.
[19] J. D. Plumhof, T. Stöferle, L. Mai, U. Scherf, and R. F. Mahrt, "Room-temperature Bose-Einstein condensation of cavity exciton-polaritons in a polymer," Nat. Mater., vol. 13, no. 3, pp. 247-252, Mar. 2014, doi: 10.1038/nmat3825.
[20] G. Lerario et al., "Room-temperature superfluidity in a polariton condensate," Nat. Phys., vol. 13, no. 9, pp. 837-841, Sep. 2017, doi: 10.1038/nphys4147.
[21] A. Rahimi-Iman, Polariton Physics, vol. 229. Springer International Publishing, 2020.
[22] R. F. Ribeiro, L. A. Martínez-Martínez, M. Du, J. Campos-Gonzalez-Angulo, and J. Yuen-Zhou, "Polariton chemistry: Controlling molecular dynamics with optical cavities," Chem. Sci., vol. 9, no. 30, pp. 6325-6339, 2018; arXiv:1802.08681.
[23] J. L. O'Brien, A. Furusawa, and J. Vučković, "Photonic quantum technologies," Nat. Photonics, vol. 3, no. 12, pp. 687-695, Dec. 2009, doi: 10.1038/nphoton.2009.229.
[24] N. Y. Kim and Y. Yamamoto, "Exciton-polariton quantum simulators," Springer, Cham, 2017, pp. 91-121.
[25] P. Senellart, G. Solomon, and A. White, "High-performance semiconductor quantum-dot single-photon sources," Nat. Nanotechnol., vol. 12, 2017, doi: 10.1038/NNANO.2017.218.
[26] C. Schneider, M. M. Glazov, T. Korn, S. Höfling, and B. Urbaszek, "Two-dimensional semiconductors in the regime of strong light-matter coupling," Nat. Commun., vol. 9, no. 1, pp. 1-9, Dec. 2018, doi: 10.1038/s41467-018-04866-6.
[27] S. Tokito, K. Noda, and Y. Taga, "Strongly directed single mode emission from organic electroluminescent diode with a microcavity," Appl. Phys. Lett., vol. 68, no. 19, p. 2633, 1996, doi: 10.1063/1.116205.
[28] N. E. Flowers-Jacobs et al., "Fiber-cavity-based optomechanical device," Appl. Phys. Lett., vol. 101, no. 22, p. 221109, Nov. 2012, doi: 10.1063/1.4768779.
[29] P. Qing et al., "A simple approach to fiber-based tunable microcavity with high coupling efficiency," Appl. Phys. Lett., vol. 114, no. 2, p. 021106, Jan. 2019, doi: 10.1063/1.5083011.
[30] O. Salehzadeh, M. Djavid, H. Tran, I. Shih, and Z. Mi, "Optically pumped two-dimensional MoS2 lasers operating at room-temperature," Nano Lett., 2015, doi: 10.1021/acs.nanolett.5b01665.
[31] Q. Zhang, R. Su, X. Liu, J. Xing, T. C. Sum, and Q. Xiong, "High-quality whispering-gallery-mode lasing from cesium lead halide perovskite nanoplatelets," Adv. Funct. Mater., vol. 26, no. 34, pp. 6238-6245, Sep. 2016, doi: 10.1002/adfm.201601690.
[32] M. Nomura, N. Kumagai, S. Iwamoto, Y. Ota, and Y. Arakawa, "Laser oscillation in a strongly coupled single-quantum-dot-nanocavity system," Nat. Phys., vol. 6, no. 4, pp. 279-283, Feb. 2010, doi: 10.1038/nphys1518.
[33] Y. Yu, W. Xue, E. Semenova, K. Yvind, and J. Mork, "Demonstration of a self-pulsing photonic crystal Fano laser," Nat. Photonics, vol. 11, no. 2, pp. 81-84, Feb. 2017, doi: 10.1038/nphoton.2016.248.
[34] S. Wu et al., "Monolayer semiconductor nanocavity lasers with ultralow thresholds," Nature, vol. 520, no. 7545, pp. 69-72, Apr. 2015, doi: 10.1038/nature14290.
[35] S. Wang et al., "Coherent coupling of WS2 monolayers with metallic photonic nanostructures at room temperature," Nano Lett., vol. 16, no. 7, pp. 4368-4374, 2016, doi: 10.1021/acs.nanolett.6b01475.
[36] H. Wang et al., "Towards optimal single-photon sources from polarized microcavities," Nat. Photonics, vol. 13, no. 11, pp. 770-775, Nov. 2019, doi: 10.1038/s41566-019-0494-3.
[37] S. Dufferwiel et al., "Exciton-polaritons in van der Waals heterostructures embedded in tunable microcavities," Nat. Commun., vol. 6, p. 8579, Dec. 2015, doi: 10.1038/ncomms9579.
[38] L. C. Flatten et al., "Electrically tunable organic-inorganic hybrid polaritons with monolayer WS2," Nat. Commun., vol. 8, no. 1, pp. 1-5, Jan. 2017, doi: 10.1038/ncomms14097.
[39] M. Brekenfeld, D. Niemietz, J. D. Christesen, and G. Rempe, "A quantum network node with crossed optical fibre cavities," Nat. Phys., vol. 16, no. 6, pp. 647-651, Jun. 2020, doi: 10.1038/s41567-020-0855-3.
[40] J. D. Thompson, B. M. Zwickl, A. M. Jayich, F. Marquardt, S. M. Girvin, and J. G. E. Harris, "Strong dispersive coupling of a high-finesse cavity to a micromechanical membrane," Nature, vol. 452, no. 7183, pp. 72-75, Mar. 2008, doi: 10.1038/nature06715.
[41] D. Hunger, T. Steinmetz, Y. Colombe, C. Deutsch, T. W. Hänsch, and J. Reichel, "A fiber Fabry-Perot cavity with high finesse," New J. Phys., vol. 12, no. 6, p. 065038, Jun. 2010, doi: 10.1088/1367-2630/12/6/065038.
[42] F. Wall, O. Mey, L. M. Schneider, and A. Rahimi-Iman, "Continuously-tunable light-matter coupling in optical microcavities with 2D semiconductors," Sci. Rep., vol. 10, no. 1, pp. 1-8, Dec. 2020, doi: 10.1038/s41598-020-64909-1.
[43] W. Chen, Ş. K. Özdemir, G. Zhao, J. Wiersig, and L. Yang, "Exceptional points enhance sensing in an optical microcavity," Nature, vol. 548, no. 7666, pp. 192-195, Aug. 2017, doi: 10.1038/nature23281.
[44] F. Vollmer, D. Braun, A. Libchaber, M. Khoshsima, I. Teraoka, and S. Arnold, "Protein detection by optical shift of a resonant microcavity," Appl. Phys. Lett., vol. 80, no. 21, pp. 4057-4059, May 2002, doi: 10.1063/1.1482797.
[45] C. Saavedra, D. Pandey, W. Alt, H. Pfeifer, and D. Meschede, "Tunable fiber Fabry-Perot cavities with high passive stability," Opt. Express, vol. 29, no. 2, pp. 974-982, 2021, doi: 10.1364/oe.412273.
[46] T. Gissibl, S. Wagner, J. Sykora, M. Schmid, and H. Giessen, "Refractive index measurements of photo-resists for three-dimensional direct laser writing," Opt. Mater. Express, vol. 7, no. 7, p. 2293, Jul. 2017, doi: 10.1364/ome.7.002293.
[47] Nanoscribe GmbH, NanoScribe Photonic Professional User Manual. Germany, 2013.
[48] L. J. Jiang, J. H. Campbell, Y. F. Lu, T. Bernat, and N. Petta, "Direct writing target structures by two-photon polymerization," Fusion Sci. Technol., vol. 70, no. 2, pp. 295-309, Aug. 2016, doi: 10.13182/FST15-222.
[49] G. N. Greaves, A. L. Greer, R. S. Lakes, and T. Rouxel, "Poisson's ratio and modern materials," Nat. Mater., vol. 10, no. 11, pp. 823-837, 2011, doi: 10.1038/nmat3134.
[]
[ "Optical tomography on graphs", "Optical tomography on graphs" ]
[ "Francis J Chung \nDepartment of Mathematics\nUniversity of Kentucky\n40506LexingtonKY\n", "Anna C Gilbert \nDepartment of Mathematics\nUniversity of Michigan\n48109Ann ArborMI\n", "Jeremy G Hoskins \nDepartment of Mathematics\nUniversity of Michigan\n48109Ann ArborMI\n", "John C Schotland \nDepartment of Mathematics\nUniversity of Michigan\n48109Ann ArborMI\n\nDepartment of Physics\nUniversity of Michigan\n48109Ann ArborMI\n" ]
[ "Department of Mathematics\nUniversity of Kentucky\n40506LexingtonKY", "Department of Mathematics\nUniversity of Michigan\n48109Ann ArborMI", "Department of Mathematics\nUniversity of Michigan\n48109Ann ArborMI", "Department of Mathematics\nUniversity of Michigan\n48109Ann ArborMI", "Department of Physics\nUniversity of Michigan\n48109Ann ArborMI" ]
[]
We present an algorithm for solving inverse problems on graphs analogous to those arising in diffuse optical tomography for continuous media. In particular, we formulate and analyze a discrete version of the inverse Born series, proving estimates characterizing the domain of convergence, approximation errors, and stability of our approach. We also present a modification which allows additional information on the structure of the potential to be incorporated, facilitating recovery for a broader class of problems.
10.1088/1361-6420/aa66d1
[ "https://arxiv.org/pdf/1609.03041v1.pdf" ]
14,057,367
1609.03041
54ab681c7836db5d5165b1d26ebb43821859b6cc
Optical tomography on graphs

Francis J Chung, Department of Mathematics, University of Kentucky, Lexington, KY 40506
Anna C Gilbert, Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
Jeremy G Hoskins, Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
John C Schotland, Department of Mathematics and Department of Physics, University of Michigan, Ann Arbor, MI 48109

Keywords: graph algorithms, graphs and groups, graphs and matrices, discrete

We present an algorithm for solving inverse problems on graphs analogous to those arising in diffuse optical tomography for continuous media. In particular, we formulate and analyze a discrete version of the inverse Born series, proving estimates characterizing the domain of convergence, approximation errors, and stability of our approach. We also present a modification which allows additional information on the structure of the potential to be incorporated, facilitating recovery for a broader class of problems.

Introduction

Inverse problems arise in numerous settings within discrete mathematics, including graph tomography [41,27,28,24,18] and resistor networks [19,20,21,22,29,11,10]. In such problems, one is typically interested in reconstructing a function defined on edges of a fixed graph or, in some cases, the edges themselves. In this paper, we focus on recovering vertex properties of a graph from boundary measurements. The problem we consider is the discrete analog of optical tomography. Optical tomography is a biomedical imaging modality that uses scattered light as a probe of structural variations in the optical properties of tissue [4]. The inverse problem of optical tomography consists of recovering the coefficients of a Schrödinger operator from boundary measurements. Let G = (V, E) be a finite locally connected loop-free graph with vertex boundary δV.
We consider the time-independent diffusion equation [35]

(Lu)(x) + \alpha_0 [1 + \eta(x)] u(x) = f(x), \quad x \in V, \qquad (1)

t\, u(x) + \partial u(x) = g(x), \quad x \in \delta V, \qquad (2)

which, in the continuous setting, describes the transport of the energy density of an optical field in an absorbing medium. Here we assume that the absorption of the medium is nearly constant, with small absorbing inhomogeneities represented by an absorption coefficient, or vertex potential, η. In place of the Laplace-Beltrami operator, we introduce the combinatorial Laplacian L defined by

(Lu)(x) = \sum_{y \sim x} [u(x) - u(y)], \qquad (3)

where y ∼ x if the vertices x and y are adjacent. We make use of the graph analog of Robin boundary conditions, where

\partial u(x) = \sum_{y \in V,\, y \sim x} [u(x) - u(y)], \qquad (4)

and t is an arbitrary nonnegative parameter, which interpolates between Dirichlet and Neumann boundary conditions. If the vertex potential η is non-negative, then there exists a unique solution to the diffusion equation (1) satisfying the boundary condition (2); see [23] and the references therein. In [23] we presented an algorithm for solving the forward problem of determining u, given η. Our approach was a perturbative one, making use of known Green's functions for the time-independent diffusion equation (or Schrödinger equation) [3,8,9,7,12,13,14,15,40,42], with η identically zero. The corresponding inverse problem, which we refer to as graph optical tomography, is to recover the potential η from measurements of u on the boundary of the graph. More precisely, let G = (V, E) be a connected subgraph of a finite graph Γ, and let δV denote those vertices of Γ not in V that are adjacent to a vertex in V. In addition, let S, R denote fixed subsets of δV. We will refer to elements of S and R as sources and receivers, respectively.
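Since V and δV are finite, equations (1)-(4) amount to a square linear system indexed by V ∪ δV. The sketch below is a hypothetical illustration of that assembly, not code from the paper; the path-graph example in the usage and all parameter values are assumptions made for the demonstration.

```python
import numpy as np

def solve_diffusion(adj, interior, alpha0, t, eta, f, g):
    """Solve (Lu)(x) + alpha0*(1+eta(x))*u(x) = f(x) on interior vertices V,
    with Robin data t*u(x) + du(x) = g(x) on boundary vertices deltaV, where
    du(x) sums u(x)-u(y) over interior neighbours y of the boundary vertex x.

    adj: symmetric 0/1 adjacency matrix over all vertices V union deltaV.
    (Illustrative sketch; vertex indexing is an assumption of this example.)"""
    n = adj.shape[0]
    V = set(interior)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for x in range(n):
        nbrs = np.nonzero(adj[x])[0]
        if x in V:                      # equation (1), Laplacian as in (3)
            A[x, x] = len(nbrs) + alpha0 * (1.0 + eta[x])
            A[x, nbrs] = -1.0
            b[x] = f[x]
        else:                           # boundary condition (2), with (4)
            inbrs = [y for y in nbrs if y in V]
            A[x, x] = t + len(inbrs)
            A[x, inbrs] = -1.0
            b[x] = g[x]
    return np.linalg.solve(A, b)
```

For example, on the path 0-1-2-3 with interior {1, 2}, η ≡ 0, f ≡ 0 and a unit Robin source at vertex 0, the returned vector satisfies both (1) and (2) to machine precision.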
For a fixed potential η, source s ∈ S and receiver r ∈ R, let u(r, s; η) be the solution to (1) with vertex potential η and boundary condition (2), where

g(x) = \begin{cases} 1, & x = s, \\ 0, & x \neq s. \end{cases} \qquad (5)

We define the Robin-to-Dirichlet map Λ_η by Λ_η(s, r) = u(r, s; η). The inverse problem is to recover η from the Robin-to-Dirichlet map Λ_η. Eqs. (1) and (2) also arise when considering the Schrödinger equation on graphs and related inverse problems [38,30,2,11]. For circular planar graphs, or lattice graphs in two or more dimensions, these works outline an algorithm that can be used to recover the vertex potential. In particular, the first three employ special combinations of boundary sources which force the solution in the interior to be zero except on a small, controllable set of vertices. Using this approach, the potential at each vertex can be calculated. Then, starting at the boundary, the entire potential can be recovered. The resulting algorithm relies on the lattice structure of the graphs and is unstable for potentials with large support.
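For a concrete sense of the map, Λ_η can be tabulated by one linear solve per source: with g the point source (5) at s, the solution restricted to R gives one column of Λ_η. The sketch below hard-codes the operator of (1)-(2) for a hypothetical four-vertex path (interior {1, 2}, boundary {0, 3}, t = α_0 = 1, chosen only for illustration); since the assembled operator is symmetric, the resulting map satisfies the reciprocity Λ_η(r, s) = Λ_η(s, r).

```python
import numpy as np

def rtd_map(eta, alpha0=1.0, t=1.0):
    """Robin-to-Dirichlet map on the path 0-1-2-3 with S = R = {0, 3}.

    eta = (eta(1), eta(2)) is the interior vertex potential.  Column s of
    inv(A) is the solution u(., s; eta) for the unit Robin source (5) at s.
    (Illustrative sketch; graph and parameters are assumptions.)"""
    A = np.array([[t + 1.0, -1.0, 0.0, 0.0],
                  [-1.0, 2.0 + alpha0 * (1.0 + eta[0]), -1.0, 0.0],
                  [0.0, -1.0, 2.0 + alpha0 * (1.0 + eta[1]), -1.0],
                  [0.0, 0.0, -1.0, t + 1.0]])
    G = np.linalg.inv(A)
    bdry = [0, 3]
    return G[np.ix_(bdry, bdry)]       # Lambda_eta(r, s) = u(r, s; eta)
```

Perturbing η changes every entry of this 2 × 2 data matrix, which is what the inverse Born series exploits.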
Moreover, our approach can be easily modified to incorporate additional information on the structure of the potential, improving both the speed and accuracy of the algorithm. As an application of this idea, we show how to determine the potential η using data for multiple values of α 0 , assuming η is independent of α 0 . This allows us to apply our method to graphs whose structure makes exact potential recovery otherwise impossible. The remainder of this paper is organized as follows. In Section 2 we briefly review key results on the solvability of the forward problem and introduce the Born series. We obtain necessary conditions for the convergence of the inverse Born series depending on the measurement data and the graph. We also describe related stability and error estimates. In Section 3 we discuss the numerical implementation of the inverse series and present the results of numerical simulations. Finally, in Section 4 we extend our results to the case where measurements can be taken at multiple values of α 0 . Inverse Born series Forward Born series In this section we formulate the inverse Born series. We begin by reviewing some important properties of the Born series, based in part on [23,35]. We recall that the background Green's function [23] for (1) is the matrix G 0 whose i, jth entry is the solution to (1), with η ≡ 0, at the ith vertex for a unit source at the jth vertex. Under suitable restrictions this matrix can be used to construct the Robin-to-Dirichlet map Λ η giving the solution of (1) on R ⊂ δV to unit sources located in S ⊂ δV. To write a compact expression for Λ η in terms of G 0 , let D η denote the matrix with entries given by (D η ) i,j = η i if i = j, 0 else. Additionally, for any two sets U, W ⊂ V ∪ δV, let G U ;W 0 denote the submatrix of G 0 formed by taking the rows indexed by U and the columns indexed by W. 
For η sufficiently small we may write the Robin-to-Dirichlet map as a Neumann series

\Lambda_\eta(s, r) = G_0(r, s) - \sum_{j=1}^{\infty} K_j(\eta, \cdots, \eta)(r, s), \quad r \in R, \; s \in S, \qquad (7)

where K_j : \ell^p(V^j) \to \ell^p(R \times S) is defined by

K_j(\eta_1, \cdots, \eta_j)(r, s) = (-\alpha_0)^j \, G_0^{r;V} D_{\eta_1} G_0^{V;V} D_{\eta_2} \cdots G_0^{V;V} D_{\eta_j} G_0^{V;s}. \qquad (8)

We refer to the series (7) as the forward Born series. In order to establish the convergence and stability of (7), we seek appropriate bounds on the operators K_j : \ell^p(V \times \cdots \times V) \to \ell^p(\delta V \times \delta V). Note that if |V| and |δV| are finite then all norms are equivalent. However, since we are interested in the rate of convergence of the inverse series, it will prove useful to establish bounds for arbitrary ℓ^p norms.

Proposition 1. Let p, q ∈ [1, ∞] be such that 1/p + 1/q = 1 and define the constants ν_p and µ_p by

\nu_p = \alpha_0 \|G_0^{R;V}\|_{\ell^q(V) \times \ell^p(R)} \, \|G_0^{V;S}\|_{\ell^q(V) \times \ell^p(S)}, \quad \mu_p = \alpha_0 \, C_{G_0^{V;V},\,q}, \qquad (9)

where

C_{G_0^{V;V},\,q} = \max_{v \in V} \|G_0^{V;v}\|_{\ell^q(V)}. \qquad (10)

The forward Born series (7) converges if

\mu_p \|\eta\|_p < 1. \qquad (11)

Moreover, the N-term truncation error obeys the bound

\Big\| \Lambda_\eta - \Big( G_0 - \sum_{j=1}^{N} K_j(\eta, \cdots, \eta) \Big) \Big\|_{\ell^p(R \times S)} \le \nu_p \, \|\eta\|_p^{N+1} \, \mu_p^N \, \frac{1}{1 - \mu_p \|\eta\|_p}. \qquad (12)

Remark 2. The bounds we obtain are similar to those found in the continuous setting [35], though here we present a novel proof of ℓ²-boundedness and extend our results to include p ∈ [1, 2), a case not previously considered.

Before proving the proposition, we first establish the following useful identities.

Lemma 3. Let M be an n × n matrix, and let D_a, D_b be n × n diagonal matrices with diagonal entries given by vectors a and b, respectively. Let M(k) denote the kth row of M, and

C_{M,q} = \max_k \|M(k)\|_q, \qquad (13)

for 1 ≤ q ≤ ∞. Then for any vectors u and v, and p, q ∈ [1, ∞] such that 1/p + 1/q = 1,

|u^T D_a M D_b v| \le C_{M,q} \, \|u\|_q \, \|a\|_p \, \|b\|_p \, \|v\|_\infty. \qquad (14)

Proof. We begin by observing that if e_k is the kth canonical basis vector, then a = \sum_k D_a e_k and I = \sum_k e_k e_k^T, where I is the n × n identity matrix.
Hence

|u^T D_a M D_b v| \le \sum_k |u^T D_a e_k| \, \max_k |M(k) D_b v| \le \|u\|_q \|a\|_p \, \max_k \sum_j |M(k) D_b e_j| \, \max_j |e_j^T v| \le \|u\|_q \|a\|_p \|b\|_p \, \max_k \|M(k)\|_q \, \max_j |e_j^T v|. \qquad (15)

We can iterate the result of the Lemma to obtain the following corollary.

Corollary 4. Let M_1, \cdots, M_{j-1} be n × n matrices and D_{a_1}, \cdots, D_{a_j} be n × n diagonal matrices with diagonal elements given by the vectors a_1, \cdots, a_j. If M_i(k) and C_{M_i,q} are defined as in the previous Lemma, then for all u and v,

|u^T D_{a_1} M_1 D_{a_2} \cdots M_{j-1} D_{a_j} v| \le \|a_1\|_p \cdots \|a_j\|_p \, C_{M_1,q} \cdots C_{M_{j-1},q} \, \|u\|_q \|v\|_\infty, \qquad (16)

where once again p, q ∈ [1, ∞] and 1/p + 1/q = 1.

We now return to the proof of Proposition 1.

Proof. Since D_{\eta_i} is a diagonal matrix, D_{\eta_i} = \sum_{k \in V} \eta_i(k) e_k e_k^T, where η_i(k) is the kth component of the vector η_i and e_k is the canonical basis vector corresponding to the vertex k. From the definition of K_j, we see that

\|K_j(\eta_1, \cdots, \eta_j)\|_p \le \alpha_0^j \Big( \sum_{r \in R,\, s \in S} \big| G_0^{r;V} D_{\eta_1} G_0^{V;V} D_{\eta_2} \cdots G_0^{V;V} D_{\eta_j} G_0^{V;s} \big|^p \Big)^{1/p}. \qquad (17)

The previous Corollary implies that

\big| G_0^{r;V} D_{\eta_1} \cdots D_{\eta_j} G_0^{V;s} \big| \le \|\eta_1\|_p \cdots \|\eta_j\|_p \, C_{G_0^{V;V},\,q}^{\,j-1} \, \|G_0^{V;r}\|_q \, \|G_0^{V;s}\|_\infty. \qquad (18)

Thus

\|K_j\|_p \le \alpha_0^j \, \|G_0^{R;V}\|_{\ell^p(R) \times \ell^q(V)} \, \|G_0^{V;S}\|_{\ell^q(V) \times \ell^p(S)} \, C_{G_0^{V;V},\,q}^{\,j-1} \le \nu_p \, \mu_p^{j-1}, \qquad (19)

where \nu_p = \alpha_0 \|G_0^{R;V}\|_{\ell^q(V) \times \ell^p(R)} \|G_0^{V;S}\|_{\ell^q(V) \times \ell^p(S)} and \mu_p = \alpha_0 C_{G_0^{V;V},\,q}, from which the result follows immediately.

Inverse Born series

Proceeding as in [35], let φ ∈ ℓ²(R × S) denote the scattering data,

\varphi(r, s) = G_0(r, s) - \Lambda_\eta(r, s), \qquad (20)

corresponding to the difference between the measurements in the background medium and those in the medium with the potential present. Note that if the forward Born series converges, we have

\varphi(r, s) = \sum_{j=1}^{\infty} K_j(\eta, \ldots, \eta). \qquad (21)

Next, we introduce the ansatz

\eta = \mathcal{K}_1(\varphi) + \mathcal{K}_2(\varphi, \varphi) + \mathcal{K}_3(\varphi, \varphi, \varphi) + \cdots, \qquad (22)

where each \mathcal{K}_n is a multilinear operator.
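Relations (20)-(21) can be checked numerically on a toy graph by computing φ once exactly, as the difference of background and perturbed Green's data, and once as partial sums of the forward operators. The sketch below is an illustrative consistency check, not code from the paper; the path graph and all parameter values are assumptions, and the sign bookkeeping of (7)-(8) is folded into the recursion so that the partial sums converge to φ.

```python
import numpy as np

def path_operator(eta, alpha0=0.5, t=1.0):
    """Operator of (1)-(2) on the path 0-1-2-3 (interior {1,2}, boundary {0,3});
    eta is a length-4 potential vector vanishing on the boundary.  (Sketch.)"""
    return np.array([[t + 1.0, -1.0, 0.0, 0.0],
                     [-1.0, 2.0 + alpha0 * (1.0 + eta[1]), -1.0, 0.0],
                     [0.0, -1.0, 2.0 + alpha0 * (1.0 + eta[2]), -1.0],
                     [0.0, 0.0, -1.0, t + 1.0]])

def born_partial_sums(eta, N, alpha0=0.5, t=1.0):
    """Partial sums of the series (21) for phi on R x S = {0,3} x {0,3}."""
    G0 = np.linalg.inv(path_operator(np.zeros(4), alpha0, t))
    D = np.diag(eta)
    b = [0, 3]
    term = G0.copy()
    total = np.zeros_like(G0)
    sums = []
    for _ in range(N):
        term = -alpha0 * G0 @ D @ term     # (-alpha0)^j (G0 D_eta)^j G0
        total = total + term
        sums.append(-total[np.ix_(b, b)])  # partial sum of phi = G0 - Lambda_eta
    return sums
```

For small η the error of the partial sums decays geometrically, in line with the factor µ_p‖η‖_p per order in Proposition 1.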
Though φ can be thought of as an operator from 2 (R) to 2 (S), in (22) we treat it as a vector of length |R| · |S|. Similarly, though it is often convenient to think of η as a (diagonal) matrix, in (22) it should be thought of as a vector of length |V |. Treating η and φ as matrices results in a different inverse problem related to matrix completion [32]. With a slight abuse of notation, we also use K 1 to denote the |R||S| × |V | matrix mapping η (viewed as a vector) to K 1 η, once again thought of as a vector. To derive the inverse Born series, we substitute the ansatz (22) into the forward series (21) and equate tensor powers of φ. We thus obtain the following recursive expressions for the operators K j [35]: K 1 = K + 1 , K 2 = −K 1 K 2 K 1 ⊗ K 1 , K 3 = − (K 2 K 1 ⊗ K 2 + K 2 K 2 ⊗ K 1 + K 1 K 3 ) K 1 ⊗ K 1 ⊗ K 1 , K j = − ( j−1 m=1 K m i 1 +···+im=j K i 1 ⊗ · · · ⊗ K im ) K 1 ⊗ · · · ⊗ K 1 ,(23) where K + 1 denotes the (regularized) pseudoinverse of K 1 . The following result provides sufficient conditions for the convergence of the inverse Born series for graphs where |V | = |R × S|, corresponding to the case of a formally determined inverse problem. Theorem 5. Let |V | = |R × S| and p ∈ [1, ∞]. Suppose that the operator K 1 is invertible. Then the inverse Born series converges to the original potential, η, if φ p < r p . Here the radius of convergence r p is defined by r p = C p µ p 1 − 2 ν p C p 1 + C p ν p − 1 ,(24) where C p = min η p=1 K 1 (η) p(25) and ν p , µ p are defined in (9). The proof of Theorem 5 requires the following multi-dimensional version of Rouché's theorem. Theorem 6. [Theorem 2.5, 1] Let D be a domain in C n with a piecewise smooth boundary ∂D. Suppose that f, g : C n → C n are holomorphic on D. If for each point z ∈ ∂D there is at least one index j, j = 1, . . . , n, such that |g j (z)| < |f j (z)|, then f (z) and f (z) + g(z) have the same number of zeros in D, counting multiplicity. Proof of Theorem 5.
Put n = |V | = |R × S|. Let F : C n × C n → C n be the function defined by F (η, φ) = φ − ∞ j=1 K j (η, . . . , η).(26) Note that F has n components F 1 , . . . , F n , each of which is well-defined and holomorphic for all φ if η p < 1/µ p . Let C p = min η p=1 K 1 (η) p ,(27) which is non-zero for all p since K 1 is invertible. Then F (η, 0) p ≥ C p η p − ∞ j=2 K j (η, . . . , η) p , ≥ C p η p − ν p ∞ j=2 µ j−1 p η j p , ≥ C p η p − ν p µ p η 2 p 1 1 − µ p η p ,(28) where the second inequality follows from the bounds on the forward operators obtained in the proof of Proposition 1. For 0 < η p < 1/µ p , F (η, 0) p is non-vanishing if η p < 1 µ p C p C p + ν p .(29) Suppose λ ≥ 1. We then define R λ = 1 µ p C p C p + ν p λ ,(30) and let Ω 1,λ = {η ∈ C n | η p < R λ }. Next, we observe that F (η, φ) − F (η, 0) = φ and hence if φ p < F (η, 0) p ,(31) then F (η, φ) − F (η, 0) p < F (η, 0) p .(32) Note that F (η, 0) p ≥ C p η p − ν p µ p η 2 p 1 − µ p η p ,(33) and thus (31) holds if φ p < C p η p − ν p µ p η 2 p 1 − µ p η p .(34) If η ∈ ∂Ω 1,λ , (31) holds if φ p < R λ C p 1 − 1 λ ≡ r p,λ .(35) Defining Ω 2,λ = {φ ∈ C n | φ p < r p,λ }, we note the following: for all (η, φ) ∈ Ω 1,λ × Ω 2,λ , F (η, 0) = 0; and, for all (η, φ) ∈ ∂Ω 1,λ × Ω 2,λ , F (η, φ) − F (η, 0) p < F (η, 0) p . By Theorem 6, F (η, 0) and F (η, φ) have the same number of zeroes counting multiplicity on Ω 1,λ × Ω 2,λ , namely precisely one. Thus, for all φ ∈ Ω 2,λ there exists a unique η = ψ(φ) such that F (ψ(φ), φ) = 0. Since the unique zero must have multiplicity one, det {∂ η j F i (ψ(φ), φ)} n i,j=1 ≠ 0.(36) Consequently, by the analytic implicit function theorem [Theorem 3.1.3, 39], ψ is analytic in a neighborhood of each φ ∈ Ω 2,λ , which is sufficient to prove that ψ is analytic on all of Ω 2,λ . Hence ψ has a Taylor series converging absolutely for all φ ∈ Ω 2,λ . By construction, the terms in this series must match those of the inverse Born series.
It follows that the inverse Born series must also converge for all φ ∈ Ω 2,λ . Optimizing over λ ≥ 1, the inverse Born series converges for all φ ∈ C n , such that φ p < C p µ p 1 − 2 ν p C p 1 + C p ν p − 1 ,(37) which completes the proof. Remark 7. We note that Theorem 6 is closely related to the problem of determining the domain of biholomorphy of a function of several complex variables, where the radii of analyticity of the function and its inverse are referred to as Bloch radii or Bloch constants [26,25,17]. In the context of nonlinear optimization a related result was obtained in [16], which also made use of Rouché's theorem. Remark 8. The bound constructed in Theorem 5 is only a lower bound for the radius of convergence. In practice, the series converges well outside this range, as the example in the next section confirms. Additionally, if in the proof of Theorem 5 we instead define F (η, φ) by F (η, φ) = K 1 φ − ∞ j=1 K 1 K j (φ, . . . , φ),(38) then it can easily be shown that the inverse series converges if K 1 φ p < 1 µ p 1 − 2 ν p C p 1 + C p ν p − 1 .(39) Though the right-hand side is slightly more complicated, it is often easily computed and gives a better bound. Figure 1 shows a plot of the bound on the radius of convergence, r p = C p µ p 1 − 2 ν p C p 1 + C p ν p − 1 for various values of C p /ν p . For large graphs we expect the determinant of K 1 to be small, corresponding to a small value of C p . In this regime we observe that the first term in the asymptotic expansion of (37) is r p = C 2 p 4ν p µ p + O C 3 p .(40) We now consider the stability of the limit of the inverse scattering series under perturbations in the scattering data. The following stability estimate follows immediately from Theorem 5. Figure 1: The bound on the radius of convergence of the inverse Born series as a function of C p /ν p . The radius r p (multiplied by ν p /µ p ) is shown in blue. The red curve is the asymptotic estimate given in (40). Proposition 9. 
Let D be a compact subset of Ω p = {φ ∈ C n | φ p < r p } , where r p is defined in (24) and p ∈ [1, ∞]. Let φ 1 and φ 2 be scattering data belonging to D and ψ 1 and ψ 2 denote the corresponding limits of the inverse Born series. Then the following stability estimate holds: ψ 1 − ψ 2 p ≤ M φ 1 − φ 2 p , where M = M (D, p) is a constant which is otherwise independent of φ 1 and φ 2 . Proof. In the proof of Theorem 5 it was shown that ψ is analytic on Ω p . In particular, it follows that there exists an M < ∞ such that Dψ p ≤ M,(41) for all φ ∈ Ω. Here Dψ is the differential of ψ and · p is its induced matrix p-norm. By the mean value theorem, ψ 1 − ψ 2 p ≤ M φ 1 − φ 2 p ,(42) for all φ 1 , φ 2 ∈ Ω. Theorem 5 guarantees convergence of the inverse Born series, but does not provide an estimate of the approximation error. Such an estimate is provided in the next theorem. Theorem 10. Suppose that the hypotheses of Theorem 5 hold and φ p < τ r p , where τ < 1. If η is the true vertex potential corresponding to the scattering data φ, then η − N m=1 K m (φ, . . . , φ) ∞ < M 1 1 − τ n φ p τ r p N 1 1 − φ p τ rp . Proof. The proof follows a similar argument as the one used to show uniform convergence of analytic functions on polydiscs, see [Lemma 1.5.8 and Corollary 1.5.9, 39] for example. By Theorem 5, since φ p < r p , the inverse Born series converges. Moreover, the value to which it converges is precisely the unique potential η corresponding to the scattering data φ. Let ψ j be the jth component of the sum of the inverse Born series, which is of the form ψ j = ∞ |α|=0 c (j) α φ α ,(43) for suitable c (j) α , consistent with (22). Here we have used the following notational convention: if α = (α 1 , . . . , α n ) then φ α ≡ φ α 1 1 . . . φ αn n . Additionally, for a given multi-index α we define |α| = α 1 + · · · + α n . Note that each α in the sum has exactly n elements, though any number of them may be zero. 
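As a brief numerical aside (our sketch, not part of the proofs): the convergence radius of (24)/(37) and its small-C_p asymptote (40) are easy to compare directly, confirming that the relative discrepancy shrinks like O(C_p).

```python
import math

def r_exact(C, nu, mu):
    """Radius-of-convergence bound, Eq. (24)/(37)."""
    return (C / mu) * (1.0 - 2.0 * (nu / C) * (math.sqrt(1.0 + C / nu) - 1.0))

def r_asym(C, nu, mu):
    """Leading-order behaviour for small C, Eq. (40): r_p ~ C^2 / (4 nu mu)."""
    return C * C / (4.0 * nu * mu)

# Relative discrepancy at nu = mu = 1 for decreasing C: expect O(C) decay.
rel_err = [abs(r_exact(C, 1.0, 1.0) - r_asym(C, 1.0, 1.0)) / r_asym(C, 1.0, 1.0)
           for C in (1e-1, 1e-2, 1e-3)]
```

A short expansion of the square root shows the relative error is C_p/2 + O(C_p^2), consistent with the red asymptotic curve in Figure 1.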
Let ψ (N ) j = N |α|=0 c (j) α φ α , and ∆ φ be the polydisc ∆ φ = z ∈ C n | |z s | < |φ s | r p φ p , s = 1, . . . , n . We note that φ ∈ ∆ φ ⊆ {φ | φ p < r p }. |c (j) α | ≤ M φ p r p |α| 1 |φ| α ,(44) where M = max φ p<rp ψ p . To proceed, we employ the following combinatorial identity, [Example 1.5.7, 39], M |α|=0 φ p r p |α| = M ∞ m=1 φ p r p m n = M   1 1 − φ p rp   n . The function 1/(1 − t) n is bounded by M 1 1 − τ n for all |t| < τ < 1. Thus the one-dimensional Cauchy estimate implies that the kth coefficient of its Taylor series, b k , is bounded by |b k | ≤ M 1 1 − τ n 1 τ k , and so |α|>N φ p r p |α| ≤ M ∞ k>N 1 1 − τ n φ p τ r p k , = M 1 1 − τ n φ p τ r p N 1 1 − φ p τ rp .(46) Hence, independent of j, ψ j − ψ (N ) j = |α|>N c (j) α φ α , ≤ |α|>N |c (j) α ||φ| α , ≤ M |α|>N φ p r p |α| , ≤ M 1 1 − τ n φ p τ r p N 1 1 − φ p τ rp ,(47) from which the result follows immediately. Remark 11. Note that in the previous theorem we can minimize our bound over τ ∈ ( φ p /r p , 1). Letting γ = φ p /r p the minimum occurs at τ * = γ 2   1 + N − γ γ(n + N ) + 1 − N − γ γ(N + n) 2 + 4 1 − γ γ(N + n)   . Finally, we conclude our discussion of the convergence of the inverse Born series by proving an asymptotic estimate for the truncation error. Specifically, we show that for a fixed number of terms N the error in the N -term inverse Born series goes to zero as η goes to zero. We note that our estimate does not apply to the case of fixed φ and N → ∞ since C N,a x N → ∞ for any fixed positive x. Theorem 12. Let η p µ p < a < 1. Then there exists a constant C N,a , depending on N such that η − N j=1 K j (φ, · · · , φ) p ≤ C N,a η N +1 p . (48) Proof. 
We begin by considering the truncated inverse Born series, η N (φ) = N j=1 K j (φ, · · · , φ),(49) noting that η N − η = N j=1 K j (φ, · · · , φ) − η.(50) If µ p η p < 1, φ is equal to its forward Born series, and hence η N − η = N j=1 ∞ i 1 ,··· ,i j =1 K j [K i 1 (η), · · · , K i j (η)] − η.(51) Using (23) we find that η N − η = N j=1 ∞ i 1 +···+i j >N K j [K i 1 (η), · · · , K i j (η)],(52) which follows from the construction of the inverse Born series. Therefore η N − η p ≤ N j=1 K j p ν j p ∞ k>N µ k−j p η k p , ≤ N j=1 K j p ν p µ p j ∞ k>N µ k p η k p , ≤ N j=1 K j p ν p µ p j η N +1 p µ N +1 p ∞ k=0 (µ p η p ) k , = N j=1 K j p ν j p µ N +1−j p η N +1 p 1 1 − µ p η p .(53) If we define C = max{1, r}, then it follows from (53) and (57) that η N − η p ≤ η N +1 p µ N +1 p K 1 p 1 − µ p η p N j=1 2ν p r µ p j C j 2 2 , ≤ η N +1 p µ N +1 p K 1 p 1 − µ p η p 1 − 2ν p µ −1 p r N +1 1 − 2ν p µ −1 p r C N 2 2 , ≤ C(N ) η N +1 p 1 − µ p η p .(58) Thus, for η p < µ −1 p a < µ −1 p , η N − η p ≤ C (N ) η N +1 p(59) for some constant C (N ). Implementation Regularizing K 1 In the previous section we found that the norm of K 1 plays an essential role in controlling the convergence of the inverse Born series. In practice, for large graphs K 1 p is too large to guarantee convergence of the inverse series. Moreover, even if the series converges, a modest amount of noise can lead to large changes in the recovered potential. Regularization improves the stability and radius of convergence of the inverse Born series by replacing K 1 by a more stable operator K 1 . In our implementations we use Tikhonov regularization [37]. Numerical examples We consider a 12×12 lattice, with all boundary vertices acting as sources and receivers. We simulate the scattering data by solving the forward problem for a particular vertex potential. As is often the case in biomedical applications, we consider a homogeneous medium with a small number of large inclusions.
Figure 2 shows a typical result of the inverse Born series method. In this example, µ 2 ∼ 0.1738, ν 2 ∼ 10.47, and K 1 2 ∼ 2.9 × 10 8 , which gives a radius of convergence of K 1 φ 2 < 4.5 × 10 −9 using (39), or φ 2 < 1.51 × 10 −16 for (37). For η ∞ = 0.1, φ 2 ∼ 0.1853 and K 1 φ 2 ∼ 16.2522, while for η ∞ = 0.5, φ 2 ∼ 0.9012 and K 1 φ 2 ∼ 393. Therefore, though both of these φ's lie outside our bound, the first example shows signs of convergence, while the latter one appears to diverge.
Figure 2 caption (continued): η ∞ = 0.5. In each group, a) is the absorption, η, used to generate measurement data; b) is the first term of the inverse Born series; c) is the first two terms of the inverse Born series; and d) is the first 5 terms of the inverse Born series.
Incorporating potential structure
The inverse Born series algorithm can be extended to take into account additional constraints on the vertex potential η, such as restrictions on its support or requirements that it is constant on some subset of the domain, allowing the recovery of vertex potentials which would otherwise be unrecoverable using the inverse Born series described above. Theorem 13. Let F be a linear mapping from R k → R |V | , where k ≤ |V | and suppose that η is in the image of F. Let η be its pre-image, η = F −1 (η).(60) Then η = K 1 (φ) + K 2 (φ, φ) + · · · + K n (φ, . . . , φ) + . . . ,(61) and K 1 = (K 1 • F ) + , K 2 = −K 1 • K 2 • ((F • K 1 ) ⊗ (F • K 1 )) , . . . K n = − n−1 j=1 K j • ( i 1 +...i j =n K i 1 ⊗ K i 2 ⊗ · · · ⊗ K i j ) • ((F • K 1 ) ⊗ · · · ⊗ (F • K 1 )) ,(62) where (K 1 • F ) + denotes the (regularized) pseudoinverse of (K 1 • F ). Proof. We begin by rewriting the discrete time-independent diffusion equation as Lu + α 0 [I + D F (η ) ]u − A T V,δV v = 0, −A V,δV u + Dv = g,(63) where D F (η ) is the diagonal matrix whose diagonal elements are given by the vector F (η ).
If Λ η : 2 (R k ) → 2 (R × S) denotes the Robin-to-Dirichlet map for the modified system (63) and η is in the image of F, then Λ η = Λ η .(64) Thus the forward Born series of (63) is given by Λ η (s, r) = G 0 (r, s) − ∞ n=1 K n F (η ), . . . , F (η ) .(65) Following the construction of the inverse Born series, we let φ represent the measured data, and consider the ansatz η = K 1 (φ) + K 2 (φ, φ) + · · · + K n (φ, . . . , φ) + . . .(66) We see immediately that K 1 • K 1 • F = I, K 1 • K 2 • (F ⊗ F ) + K 2 • ((K 1 • F ) ⊗ (K 1 • F )) = 0, . . . n j=1 K j •   i 1 +...i j =n K i 1 ⊗ K i 2 ⊗ · · · ⊗ K i j   • (F ⊗ · · · ⊗ F ) = 0.(67) If (K 1 • F ) + denotes the (regularized) pseudoinverse of (K 1 • F ), then we obtain K 1 = (K 1 • F ) + , K 2 = −K 1 • K 2 • ((F • K 1 ) ⊗ (F • K 1 )) , . . . K n = − n−1 j=1 K j •   i 1 +...i j =n K i 1 ⊗ K i 2 ⊗ · · · ⊗ K i j   • ((F • K 1 ) ⊗ · · · ⊗ (F • K 1 )) .(68) We observe that bounds on the radius of convergence, truncation error, and stability of the modified inverse Born series can be easily obtained using arguments similar to those made in Section 2. Theorem 13 can easily be applied to incorporate measurements from multiple values of α 0 , provided the vertex potential η is independent of the value of α 0 . In particular, let Γ = (E, V ) be a graph and suppose we have measurements for α 0 = (α i ) m 1 . Let Γ = {Γ 1 , . . . , Γ m } be the graph with vertices V = {V 1 , . . . , V m } and edges E = {E 1 , . . . , E m }, consisting of m copies of Γ. Here the subscript denotes the copy of E, V, or Γ to which we are referring. Let π : V → V denote the projection map taking a vertex in V i or δV i to the corresponding vertex in V or δV, respectively. Finally, for a given vertex potential, η, on Γ let η denote the corresponding potential on Γ . Thus, for each vertex v ∈ V , η (v) = η(π(v)). 
Next we construct the following modified time-independent diffusion equation L i u i + α i [I + D η ]u i − (A i ) T V,δV v i = 0, −(A i ) V,δV u i + Dv i = g i ,(70) where u i and v i , i = 1, . . . , m, are supported on V i and δV i , respectively, and L i is the Laplacian corresponding to the ith subgraph. As before D η denotes the diagonal matrix with entries given by η . Note that Γ consists of m disconnected components, and hence the solution in one component is independent of the solution in another. If W, U ⊂ V i × δV i let G W ;U i denote the submatrix of G i consisting of the rows indexed by W and the columns indexed by U. It follows that the background Green's function for (70) is given by G 0 =                G V1;V1 1 G V1;δV1 1 G V2;V2 2 G V2;δV2 2 . . . . . . G Vm;Vm m G Vm;δVm m G δV1;V1 1 G δV1;δV1 1 G δV2;V2 2 G δV2;δV2 2 . . . . . . G δVm;Vm m G δVm;δVm m               (71) Thus, if u = (u 1 , . . . , u m , v 1 , . . . , v m ) T solves (70) whenη ≡ 0, and g = (0, . . . , 0, g 1 , . . . , g m ) T , then u = G 0 g.(72) Using this we can define the operators K 1 , . . . , K n for (70), where we replace G 0 by G 0 =                 α 1 α 0 G V 1 ;V 1 1 α 1 α 0 G V 1 ;δV 1 1 α 2 α 0 G V 2 ;V 2 2 α 2 α 0 G V 2 ;δV 2 2 . . . . . .               ,(73) to account for the different α value in each component. We now enforce the condition that η is identical on each copy of Γ, and hence is independent of α. The map F : p (V 1 ) → p (V 1 × · · · × V m ) in (60) is defined by F {η}(v) = η(π(v)).(74) Using this we form the modified inverse Born series operators in (68) and thus construct the modified inverse Born series. Provided that (K 1 •F ) is invertible and the measured data φ is sufficiently small, by Theorem 5 the inverse Born series converges to the true (unique) value of η. 
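As a concrete illustration of the map F in (74) (our sketch; the names are invented for the example): viewed as a matrix, F simply stacks m copies of the identity on the vertex set, so that F η assigns the same potential value to the corresponding vertex in every component Γ i.

```python
import numpy as np

def stacking_map(n_vertices, m):
    """The matrix of F in Eq. (74): (F eta)(v) = eta(pi(v)),
    i.e. m vertically stacked identity blocks, one per copy of Gamma."""
    return np.vstack([np.eye(n_vertices)] * m)

F = stacking_map(3, 4)
eta = np.array([0.1, 0.0, 0.2])
eta_prime = F @ eta   # the alpha-independent potential, copied to each component
```

The composed operator K_1 ∘ F then maps the |V| unknowns to data collected at all m values of α, which is how the multi-frequency measurements can restore invertibility when a single-α K_1 is singular.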
Since η is the α-independent absorption of the vertices in Γ, we have constructed a reconstruction algorithm using data from multiple α 0 . To illustrate this algorithm we consider a path of length 10, noting that it cannot be imaged using the standard inverse Born series, that is, with one value of α 0 . More generally, for any graph containing a path of length greater than six in its interior, connected to the remainder of the graph only at its endpoints, the corresponding K 1 is not invertible. In fact, it can be shown for such graphs that the absorption η cannot be uniquely determined from the data φ. For our example, we choose one boundary vertex to act both as source and receiver and take α i = 0.1 + 0.15 · i 1/2 , i = 1, . . . , 25. Here η is chosen to be a function supported on three randomly-chosen interior vertices, with a height of 0.02. The sums of the first few terms of the inverse Born series are shown in Figure 3.
Acknowledgements This work was supported in part by the National Science Foundation grants DMS-1115574, DMS-1108969 and DMS-1619907 to JCS, and National Science Foundation grants CCF-1161233 and CIF-0910765 to ACG.
Figure 2: Inverse Born numerical experiments for a 12 × 12 lattice with α 0 = 0.1, t = 0 and receivers and sources placed at each vertex on the boundary. i) η ∞ = 0.1, and ii)
Figure 3: Multi-frequency inverse Born numerical experiments for α i , and t = 0.1, where η is equal to 0.02 on vertices 1, 2 and 4, and is identically zero on the remaining vertices. Here k is the number of terms of the inverse Born series that were kept.
∞ |α|=0 t |α| = 1 (1 − t) n ,(45) for all t ∈ (−1, 1). In light of the above, we see that. In order to proceed, we require a bound on K j p . As in [35], we begin by observing that if p ∈ [1, ∞], j > 2, where we have shifted the index m in the last expression.
It follows immediately from the binomial theorem that. Further note that if j = 2, then. For ease of notation, let r = (µ p + ν p ) K 1 p and note that.

References

[1] L.A. Aizenberg and A.P. Yuzhakov. Integral Representations and Residues in Multidimensional Complex Analysis. Translations of Mathematical Monographs. American Mathematical Society, 1983.
[2] K. Ando. Inverse scattering theory for discrete Schrödinger operators on the hexagonal lattice. Annales Henri Poincaré, 14(2):347-383, 2013.
[3] C. Araúz, A. Carmona, and A.M. Encinas. Overdetermined partial boundary value problems on finite networks. J. Math. Anal. Appl., 423(1):191-207, 2014.
[4] S.R. Arridge. Optical tomography in medical imaging. Inverse Problems, 15(2):R41, 1999.
[5] S. Arridge, S. Moskow, and J.C. Schotland. Inverse Born series for the Calderon problem. Inverse Problems, 28, 2012.
[6] P. Bardsley and F. Guevara Vasquez. Restarted inverse Born series for the Schrödinger problem with discrete internal measurements. Inverse Problems, 30(4):045014, 2014.
[7] E. Bendito, A.M. Encinas, and A. Carmona. Eigenvalues, eigenfunctions and Green's functions on a path via Chebyshev polynomials. Appl. Anal. Discrete Math., 3:282-302, 2009.
[8] E. Bendito, A. Carmona, and A.M. Encinas. Solving boundary value problems on networks using equilibrium measures. J. Funct. Anal., 171(1):155-176, 2000.
[9] E. Bendito, A. Carmona, and A.M. Encinas. Potential theory for Schrödinger operators on finite networks. Rev. Mat. Iberoamericana, 21:771-818, 2005.
[10] L. Borcea, V. Druskin, F. Guevara Vasquez, and A.V. Mamonov. Resistor network approaches to electrical impedance tomography. In Inside Out, Mathematical Sciences Research Institute Publications, 2011.
[11] L. Borcea, F. Guevara Vasquez, and A.V. Mamonov. A discrete Liouville identity for numerical reconstruction of Schrödinger potentials, 2016.
[12] A. Carmona, A.M. Encinas, and M. Mitjana. Discrete elliptic operators and their Green operators. Linear Algebra Appl., 442(2):115-134, 2014.
[13] A. Carmona, A.M. Encinas, and M. Mitjana. Green matrices associated with generalized linear polyominoes. Linear Algebra Appl., 468:38-47, 2015.
[14] A. Carmona, A.M. Encinas, and M. Mitjana. Perturbations of discrete elliptic operators. Linear Algebra Appl., 468:270-285, 2015.
[15] P. Cartier. Fonctions harmoniques sur un arbre. Sympos. Math., 9:203-270, 1972.
[16] H.C. Chang, W. He, and N. Prabhu. The analytic domain in the implicit function theorem. Journal of Inequalities in Pure and Applied Mathematics, 4(1), 2003.
[17] H. Chen and P.M. Gauthier. On Bloch's constant. Journal d'Analyse Mathématique, 69(1):275-291, 1996.
[18] F. Chung, M. Garrett, R. Graham, and D. Shallcross. Distance realization problems with applications to internet tomography. Journal of Computer and System Sciences, 63(3):432-448, 2001.
[19] E. Curtis, E. Mooers, and J. Morrow. Finding the conductors in circular networks from boundary measurements. ESAIM: Mathematical Modelling and Numerical Analysis, 28(7):781-814, 1994.
[20] E.B. Curtis and J.A. Morrow. Determining the resistors in a network. SIAM Journal on Applied Mathematics, 50(3):918-930, 1990.
[21] E.B. Curtis and J.A. Morrow. The Dirichlet to Neumann map for a resistor network. SIAM Journal on Applied Mathematics, 51(4):1011-1029, 1991.
[22] E.B. Curtis and J.A. Morrow. Inverse Problems for Electrical Networks. World Scientific Publishing, 2000.
[23] A.C. Gilbert, J.G. Hoskins, and J.C. Schotland. Diffuse scattering on graphs. Linear Algebra and its Applications, 496:1-35, 2016.
[24] F.A. Grünbaum and L.F. Matusevich. A nonlinear inverse problem inspired by three-dimensional diffuse tomography: explicit formulas. International Journal of Imaging Systems and Technology, 12(5):198-203, 2002.
[25] L.A. Harris. On the size of balls covered by analytic transformations. Monatshefte für Mathematik, 83(1):9-23, 1977.
[26] L.A. Harris. Fixed point theorems for infinite dimensional holomorphic functions. Journal of the Korean Mathematical Society, 41(1):175-192, 2004.
[27] G.T. Herman and A. Kuba, editors. A Recursive Algorithm for Diffuse Planar Tomography. Birkhäuser Boston, Boston, MA, 1999.
[28] A. Imiya, A. Torii, and K. Sato. Tomography on finite graphs. In Proceedings of the Workshop on Discrete Tomography and its Applications, Electronic Notes in Discrete Mathematics, 20:217-232, 2005.
[29] D.V. Ingerman. Discrete and continuous Dirichlet-to-Neumann maps in the layered case. SIAM Journal on Mathematical Analysis, 31(6):1214-1234, 2000.
[30] H. Isozaki and H. Morioka. Inverse scattering at a fixed energy for discrete Schrödinger operators on the square lattice. ArXiv e-prints, 2012.
[31] K. Kilgore, S. Moskow, and J.C. Schotland. Inverse Born series for scalar waves. Journal of Computational Mathematics, 30(6):601-614, 2012.
[32] H.W. Levinson and V.A. Markel. Solution of the inverse scattering problem by T-matrix completion, 2014.
[33] V.A. Markel, J.A. O'Sullivan, and J.C. Schotland. Inverse problem in optical diffusion tomography. IV. Nonlinear inversion formulas. J. Opt. Soc. Am. A, 20(5):903-912, 2003.
[34] V.A. Markel and J.C. Schotland. On the convergence of the Born series in optical tomography with diffuse light. Inverse Problems, 23(4):1445, 2007.
[35] S. Moskow and J.C. Schotland. Convergence and stability of the inverse scattering series for diffuse waves. Inverse Problems, 24, 2008.
[36] S. Moskow and J.C. Schotland. Numerical studies of the inverse Born series for diffuse waves. Inverse Problems, 25(9):095007, 2009.
[37] F. Natterer. The Mathematics of Computerized Tomography. SIAM, 2001.
[38] R. Oberlin. Discrete inverse problems for Schrödinger and resistor networks. Technical report, University of Washington, 2000.
[39] V. Scheidemann. Introduction to Complex Analysis in Several Variables.
[40] P.M. Soardi. Potential theory on infinite networks. Lecture Notes in Mathematics. Springer-Verlag, 1994.
[41] Y. Vardi. Network tomography: estimating source-destination traffic intensities from link data. Journal of the American Statistical Association, 91(433):365-377, 1996.
[42] M. Yamasaki. The equation ∆u = qu on an infinite network. Mem. Fac. Sci. Shimane Univ., 21:31-46, 1987.
[]
[ "The gravitational equation in higher dimensions", "The gravitational equation in higher dimensions" ]
[ "Naresh Dadhich \nIndia and Inter\nCentre for Theoretical Physics\nUniversity Centre for Astronomy & Astrophysics\nJamia Millia Islamia, Post Bag 4110025, 411 007New Delhi, PuneIndia\n" ]
[ "India and Inter\nCentre for Theoretical Physics\nUniversity Centre for Astronomy & Astrophysics\nJamia Millia Islamia, Post Bag 4110025, 411 007New Delhi, PuneIndia" ]
[]
Like the Lovelock Lagrangian which is a specific homogeneous polynomial in Riemann curvature, for an alternative derivation of the gravitational equation of motion, it is possible to define a specific homogeneous polynomial analogue of the Riemann curvature, and then the trace of its Bianchi derivative yields the corresponding polynomial analogue of the divergence free Einstein tensor defining the differential operator for the equation of motion. We propose that the general equation of motion is G (n) ab = −Λg ab + κnT ab for d = 2n + 1, 2n + 2 dimensions with the single coupling constant κn, and n = 1 is the usual Einstein equation. It turns out that gravitational behavior is essentially similar in the critical dimensions for all n. All static vacuum solutions asymptotically go over to the Einstein limit, Schwarzschild-dS/AdS. The thermodynamical parameters bear the same relation to horizon radius, for example entropy always goes as r d−2n h and so for the critical dimensions it always goes as r h , r 2 h . In terms of the area, it would go as A 1/n . The generalized analogues of the Nariai and Bertotti-Robinson solutions arising from the product of two constant curvature spaces, also bear the same relations between the curvatures k1 = k2 and k1 = −k2 respectively.
10.1007/978-3-319-06761-2_6
[ "https://arxiv.org/pdf/1210.3022v1.pdf" ]
55,927,849
1210.3022
8aeb2a48fa08c8cea215051cb1c3c4e6041a707c
The gravitational equation in higher dimensions
10 Oct 2012
Naresh Dadhich, Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi 110025, India, and Inter-University Centre for Astronomy & Astrophysics, Post Bag 4, Pune 411 007, India
PACS numbers: 04.50.-h, 04.20.Jb, 04.70.-s
I. INTRODUCTION

What stands gravity apart from the rest of physics is its universal character: it links to everything, including massless particles, and hence it can only be described by spacetime curvature; its dynamics has therefore to follow from the geometric properties of the Riemann curvature tensor [1]. The Einstein gravitational equation can be deduced from a geometric property of the Riemann curvature, known as the Bianchi identity, which implies that its Bianchi derivative vanishes identically.
Taking its trace then yields the divergence-free, second-rank symmetric Einstein tensor, which defines the differential operator on the left-hand side of the equation, while on the right sits the gravitational source, an energy-momentum distribution described by a second-rank symmetric tensor with vanishing divergence. This is the case for Einstein gravity, which is linear in the Riemann curvature; its vacuum is trivially flat in 3 dimensions and becomes dynamically non-trivial in 4 dimensions.

The question is, could this be generalized to a polynomial analogue of the Riemann tensor? Consider a tensor with the same symmetry properties as the Riemann tensor which is a homogeneous polynomial of degree $n$ in the Riemann curvature, and then demand that the trace of its Bianchi derivative vanish. This fixes the coefficients in the polynomial and gives the divergence-free, second-rank symmetric tensor $G^{(n)}_{ab}$, the $n$th-order analogue of the Einstein tensor, which is the same as what one would obtain from the variation of the $n$th-order Lovelock Lagrangian [2]. Thus we have the generalized polynomial Riemann curvature, $R^{(n)}_{abcd}$, which describes gravitational dynamics in $d = 2n+1, 2n+2$ in the same manner as the Riemann tensor does for $d = 3, 4$. We can define the corresponding vacuum as $R^{(n)}_{ab} = 0$; would it also be trivial in $d = 2n+1$ dimensions? The answer is indeed yes [3]. It would be $R^{(n)}_{abcd}$-flat but not Riemann-flat, and for that it would describe a global monopole [4].

What should the gravitational equation be in dimensions $> 4$? Should it continue to be the Einstein equation, which is linear in the Riemann curvature, or should it include the one following from the higher-order analogue $R^{(n)}_{abcd}$, while still yielding a second-order quasi-linear equation? A general abiding principle is that the equation be second-order quasi-linear so that the initial value problem is well formulated, giving unique evolution.
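For $n = 1$, the trace step just described is the familiar contracted Bianchi identity; as a reminder of what is being generalized (standard differential-geometry material, not specific to this paper):

```latex
\nabla_{[e} R_{ab]cd} = 0
\;\;\xrightarrow{\ \text{contract with } g^{ac} g^{bd}\ }\;\;
\nabla^{a}\!\left(R_{ab} - \tfrac{1}{2}\, R\, g_{ab}\right) \equiv \nabla^{a} G_{ab} = 0 ,
```

so that $G_{ab}$ is automatically divergence-free; the construction above repeats this for the degree-$n$ polynomial $R^{(n)}_{abcd}$.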
This uniquely identifies the Lovelock polynomial Lagrangian, or equivalently the polynomial Riemann curvature discussed above [2]. Should all orders that are non-trivial in the equation be included, like the linear Einstein, quadratic Gauss-Bonnet, and so on, or only the highest one? That is, should the left-hand side be the sum of all orders of the Riemann curvature that make a non-zero contribution to the equation, $\sum_m G^{(m)}_{ab}$, or the single highest-order term $G^{(n)}_{ab}$? In the former, each order has its own coupling, so there would be $n$ of them, and there is no obvious way to fix them. Since there is only one force, which allows determination of only one coupling parameter by experimentally measuring its strength, gravity should have only one dimensional coupling parameter, whose dimension would however depend upon the spacetime dimension. Thus we propose that the gravitational equation should in general be written as

$$G^{(n)}_{ab} = -\Lambda g_{ab} + \kappa_n T_{ab} \qquad (1)$$

for $d = 2n+1, 2n+2$ dimensions. Note that $\Lambda$, which characterizes dynamics-free spacetime, is part of the structure of spacetime, on the same footing as the velocity of light [5]. In what follows we wish to demonstrate that this equation imbibes beautifully the general vacuum character [3], and that the static vacuum solutions asymptotically go over to the right Einstein limit even though the linear Einstein term is not included. This means the higher-order terms in the curvature are only pertinent to the high-energy end near the black hole horizon, while their effect weans out asymptotically at the low-energy end, approximating the linear-order Einstein solution, Schwarzschild-dS/AdS, in $d$ dimensions [6,7]. It is remarkable that the thermodynamical parameters, temperature and entropy, bear a universal relation to the horizon radius for static black holes in $d = 2n+1, 2n+2$; interestingly, this property also marks the characterization of this class of black holes [7,8].

(* Electronic address: [email protected])
II. THE LOVELOCK CURVATURE POLYNOMIAL AND THE EQUATION OF MOTION

Following Ref. [2], we define the Lovelock curvature polynomial

$$R^{(n)}_{abcd} = F^{(n)}_{abcd} - \frac{n-1}{n(d-1)(d-2)}\, F^{(n)} \left(g_{ac} g_{bd} - g_{ad} g_{bc}\right), \qquad F^{(n)}_{abcd} = Q_{ab}{}^{mn} R_{cdmn}, \qquad (2)$$

where

$$Q^{ab}{}_{cd} = \delta^{ab\, a_1 b_1 \ldots a_n b_n}_{cd\, c_1 d_1 \ldots c_n d_n}\, R_{a_1 b_1}{}^{c_1 d_1} \cdots R_{a_{n-1} b_{n-1}}{}^{c_{n-1} d_{n-1}}, \qquad Q^{abcd}{}_{;d} = 0. \qquad (3)$$

It follows that the trace of the Bianchi derivative yields the divergence-free $G^{(n)}_{ab}$, where the analogue of the $n$th-order Einstein tensor is given by

$$G^{(n)}_{ab} = n\left(R^{(n)}_{ab} - \tfrac{1}{2} R^{(n)} g_{ab}\right). \qquad (5)$$

Note that

$$R^{(n)} = \frac{d-2n}{n(d-2)}\, F^{(n)}, \qquad (6)$$

which vanishes for $d = 2n$, while $F^{(n)}$, the Lovelock action polynomial, is non-zero but its variation, $G^{(n)}_{ab}$, vanishes identically. Since $R^{(n)} = g^{ab} R^{(n)}_{ab} = 0$ for $d = 2n$ for arbitrary $g_{ab}$, it implies $R^{(n)}_{ab} = 0$ identically, as it involves, apart from the metric, its first and second derivatives, which are arbitrary. Since $G^{(n)}_{ab}$ is divergence-free, we can write

$$G^{(n)}_{ab} = \kappa_n T_{ab} - \Lambda g_{ab}, \qquad T^{ab}{}_{;b} = 0. \qquad (7)$$

This is the gravitational equation for $d = 2n+1, 2n+2$ dimensions, with $\kappa_n$ the gravitational constant; $n = 1$ is the Einstein equation for 3 and 4 dimensions. What degree of polynomial in the Riemann curvature the equation should have is thus determined by the spacetime dimension: it is linear for 3, 4, quadratic for 5, 6, and so on.

III. UNIVERSAL FEATURES

The first universal feature studied was the gravitational field inside a uniform-density sphere, which was shown to be given always by the Schwarzschild interior solution, in Einstein as well as in Einstein-Gauss-Bonnet/Lovelock theories [9]. Here we shall consider the cases of static black holes, and of product spaces describing the Nariai and Bertotti-Robinson spacetimes.
A. Static black holes

The static spherically symmetric solution of the vacuum equation (1) is given by

$$g_{tt} = -1/g_{rr} = V = 1 - r^2 \left(\Lambda + M/r^{d-1}\right)^{1/n}, \qquad (8)$$

which asymptotically takes the form of the Schwarzschild-dS/AdS solution in $d$ dimensions, showing the correct Einstein limit. The solution for the general case of the Einstein-Lovelock equation can also be written in terms of an $n$th-order algebraic polynomial equation, which cannot be solved in general for $n > 4$. It is therefore clear that we cannot carry on with an arbitrarily large number of coupling parameters. For the case of dimensionally continued black holes [10], it was proposed that all the couplings are determined in terms of the unique ground state $\Lambda$, and the solution is then given by $V = 1 - r^2 \Lambda - M/r^{(d-1)/2}$, which clearly does not go over to the Einstein solution for large $r$. This corresponded to the algebraic polynomial being degenerate. It turns out that the proper Einstein limit can be brought in simply by considering the polynomial to be derivative-degenerate [7]. The solution then agrees near the horizon with the dimensionally continued black hole and asymptotically with the proper Einstein limit, and it is the solution of equation (1). Further, the thermodynamical parameters, temperature and entropy, bear a universal relation to the horizon radius for the critical $d = 2n+1, 2n+2$ dimensions [8]. For instance, the entropy always goes as $r_h^{d-2n}$, which for the critical dimensions always goes as $r_h$, $r_h^2$. In terms of the area it goes as $A^{1/n}$, and hence the entropy is proportional to the area only for the $n = 1$ Einstein theory. Interestingly, this universality is also the characterizing property of this class of pure Lovelock black holes [7,8]. We would like to conjecture that the above universality property will also hold for the rotating black hole solution, as and when it is found.
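As a quick numerical illustration (not from the paper; M and Λ are arbitrary test values), the claimed Einstein limit of Eq. (8) can be checked by expanding $(\Lambda + M/r^{d-1})^{1/n}$ for large $r$, which gives a Schwarzschild-dS-like form $V \approx 1 - \Lambda^{1/n} r^2 - M r^{3-d}/(n\,\Lambda^{(n-1)/n})$:

```python
def V(r, M, Lam, d, n):
    """Metric function of the pure Lovelock static vacuum solution, Eq. (8):
    V = 1 - r^2 (Lambda + M / r^{d-1})^{1/n}."""
    return 1.0 - r**2 * (Lam + M / r**(d - 1))**(1.0 / n)

def V_einstein_limit(r, M, Lam, d, n):
    """Leading large-r expansion of Eq. (8): a Schwarzschild-dS/AdS-like form,
    V ~ 1 - Lambda^{1/n} r^2 - M r^{3-d} / (n Lambda^{(n-1)/n})."""
    return 1.0 - Lam**(1.0 / n) * r**2 - M * r**(3.0 - d) / (n * Lam**((n - 1.0) / n))

# Gauss-Bonnet order n = 2 in its critical even dimension d = 2n + 2 = 6
M, Lam, d, n = 1.0, 1.0e-4, 6, 2
for r in (5.0, 10.0, 20.0):
    print(r, V(r, M, Lam, d, n), V_einstein_limit(r, M, Lam, d, n))
```

For the parameters shown, the exact and expanded forms agree to a few parts in 10^6 at r = 20, while the expansion visibly fails at small r, where the mass term dominates over Λ.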
B. Product spaces: Nariai and Bertotti-Robinson solutions

The Nariai and Bertotti-Robinson solutions arise as products of two constant-curvature spaces. When the two curvatures are equal, $k_1 = k_2$, it is the Nariai solution of equation (1) with $T_{ab} = 0$ for $n = 1$, and when the curvatures are equal and opposite, $k_1 = -k_2$, it is the Bertotti-Robinson solution describing a uniform electric field. The former is the $\Lambda$-vacuum spacetime but is not conformally flat, while the latter is the Einstein-Maxwell solution for a uniform electric field, which is conformally flat. It turns out that the generalized pure Lovelock solutions of equation (1) for any $n$ bear out the same curvature relations for the Nariai vacuum ($k_1 = k_2$) and the Bertotti-Robinson uniform electric field ($k_1 = -k_2$), and the condition for conformal flatness is also $k_1 + k_2 = 0$ [11]. In $d = 2n+2$ dimensions, we have the following general relation connecting the two curvatures, $\Lambda$ and the electric field $E$:

$$(k_1 + k_2)\, E^2 = -4\, (k_1 - k_2)\, \Lambda. \qquad (9)$$

This clearly indicates $k_1 = k_2$ for $E = 0$, the Nariai vacuum spacetime, and $k_1 = -k_2$ for $\Lambda = 0$, the Bertotti-Robinson uniform-electric-field spacetime. The metric is given by

$$ds^2 = (1 - k_1 r^2)\, dt^2 - \frac{dr^2}{1 - k_1 r^2} - \frac{1}{k_2}\, d\Sigma^2_{(d-2)}. \qquad (10)$$

IV. DISCUSSION

We have proposed that equation (1) is the proper equation for gravity in higher dimensions. The correct equation should have the following properties: (a) it should be second-order quasi-linear; (b) for a given dimension, it should be of degree $n = [(d-1)/2]$ in the Riemann curvature; (c) it should have only one coupling constant, which can be determined by experimentally measuring the strength of the force; and (d) since the higher-order curvature contributions are high-energy corrections to the linear-order (in Riemann) Einstein gravity, which should wean out asymptotically, the solutions should tend to the corresponding Einstein solution for large $r$. The proposed equation satisfies all these properties.
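Equation (9) above compactly encodes both special cases; a trivial numerical check (curvature values arbitrary, not from the paper) makes the two limits explicit:

```python
def residual(k1, k2, E, Lam):
    """Left-hand side minus right-hand side of Eq. (9):
    (k1 + k2) E^2 + 4 (k1 - k2) Lambda; a solution of Eq. (9) makes this vanish."""
    return (k1 + k2) * E**2 + 4.0 * (k1 - k2) * Lam

# Nariai limit: E = 0 with equal curvatures k1 = k2 satisfies Eq. (9)
print(residual(k1=0.3, k2=0.3, E=0.0, Lam=0.05))    # 0.0
# Bertotti-Robinson limit: Lambda = 0 with k1 = -k2 satisfies Eq. (9)
print(residual(k1=0.3, k2=-0.3, E=2.0, Lam=0.0))    # 0.0
# A mismatched pair (E = 0 but k1 != k2) does not
print(residual(k1=0.3, k2=0.1, E=0.0, Lam=0.05))    # 0.04
```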
The latter feature, the asymptotic Einstein limit, is verified for the static black hole solutions; it is, however, also true for the Einstein-Gauss-Bonnet black hole. What is remarkable here is that the equation is free of the Einstein term, yet asymptotically the solutions go over to the proper Einstein limit. This means the high-energy effects which come through the higher-order curvature terms are fully and properly taken care of by the highest-order $n = [(d-1)/2]$ term, and they can be realized only in higher dimensions [12]. It is interesting that gravity asks for higher dimensions for the realization of its high-energy effects. This is because the inclusion of higher orders in the Riemann curvature, together with the demand that the equation continue to be second-order quasi-linear, naturally leads to higher dimensions. This does not happen for any other force, that one has to consider higher dimensions for the realization of its high-energy corrections. It happens for gravity because the spacetime curvature is the basic field variable; hence high-energy effects involve higher orders in it, and their contribution in the equation, if it is to retain its second-order quasi-linear character, can be realized only in higher dimensions [12]. We would like to emphasize that higher dimensions and high-energy effects seem to be intimately connected. Since high-energy effects ask for higher dimensions, quantum gravity should also involve higher dimensions. This is because quantum gravity should approach the classical limit via the high-energy intermediate limit.

One of the problems with the Einstein-Lovelock solutions is the number of coupling constants, and there is no way to fix them. For the dimensionally continued static black holes, all the couplings were prescribed in terms of the unique ground state $\Lambda$ [10]. These solutions were, however, not asymptotically Einstein, Schwarzschild-dS/AdS.
Instead, the corresponding solutions of equation (1) have the right limits at both ends, agreeing near the horizon with the dimensionally continued solution and asymptotically with Schwarzschild-dS/AdS. This is indicative of the inherent correctness of the equation. The universal character of gravity in the critical dimensions is another very attractive feature of the equation. In particular, the vacuum, $G^{(n)}_{ab} = 0$, in the odd critical dimension is always trivial, $R^{(n)}_{abcd} = 0$ [3]. All this taken together points to the fact that equation (1) is the right equation for gravitation in higher dimensions.

For a given order $n$ in the Riemann curvature, the critical dimensions are $d = 2n+1, 2n+2$; gravity is trivial/kinematic in the former and becomes dynamic in the latter. This is a universal general feature. In the critical dimensions, gravity has similar behavior, as indicated by the universality of the thermodynamic parameters in terms of the horizon radius and of the Nariai and Bertotti-Robinson solutions. It is interesting to note that in terms of the black hole area, the entropy is always proportional to $A^{1/n}$, and so it is proportional to the area only for the $n = 1$ Einstein gravity. This is an interesting general result: the entropy always goes as the $n$th root of the area of the black hole. In an intuitive sense we can say that it is the $n$th root of Einstein gravity for the critical $d = 2n+1, 2n+2$ dimensions. All this we have established for the simple case of a static black hole, but we believe that it is indeed a general feature and hence should be true for the stationary rotating black hole as well. So far there exists no rotating pure Lovelock black hole solution, and this conjecture would be verified as and when a solution is found.

Acknowledgement
It is a pleasure to thank the organizers of the conference, Relativity and Gravitation: 100 years after Einstein in Prague, June 25-28, 2012.

[1] N. Dadhich, "Subtle is the gravity", arXiv:gr-qc/0102009.
[2] N. Dadhich, Pramana 74, 875 (2010), arXiv:0802.3034.
[3] N. Dadhich, S. G. Ghosh and S. Jhingan, Phys. Lett. B 711, 196 (2012), arXiv:1202.4575 [gr-qc].
[4] M. Barriola and A. Vilenkin, Phys. Rev. Lett. 63, 341 (1989).
[5] N. Dadhich, Int. J. Mod. Phys. D 20, 2739 (2011), arXiv:1105.3396.
[6] N. Dadhich, Math. Today 26, 37 (2011), arXiv:1006.0337 [hep-th].
[7] N. Dadhich, J. M. Pons and K. Prabhu, "On the static Lovelock black holes", arXiv:1201.4994 [gr-qc].
[8] N. Dadhich, J. M. Pons and K. Prabhu, Gen. Rel. Grav. 44, 2595 (2012), arXiv:1110.0673 [gr-qc].
[9] N. Dadhich, A. Molina and A. Khugaev, Phys. Rev. D 81, 104026 (2010), arXiv:1001.3922.
[10] M. Banados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. 69, 1849 (1992).
[11] N. Dadhich and J. M. Pons, arXiv:1210.1109 [gr-qc].
[12] N. Dadhich, "On the Gauss-Bonnet gravity", hep-th/0509126.
[]
[ "Photometry of SN 2002ic and Implications for the Progenitor Mass-Loss History", "Photometry of SN 2002ic and Implications for the Progenitor Mass-Loss History" ]
[ "W M Wood-Vasey [email protected] \nPhysics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA\n", "L Wang \nPhysics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA\n", "G Aldering \nPhysics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA\n" ]
[ "Physics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA", "Physics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA", "Physics Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA" ]
[]
We present new pre-maximum and late-time optical photometry of the Type Ia/IIn supernova 2002ic. These observations are combined with the published V-band magnitudes of and the VLT spectrophotometry of Wang et al. (2004) to construct the most extensive light curve to date of this unusual supernova. The observed flux at late time is significantly higher relative to the flux at maximum than that of any other observed Type Ia supernova, and it continues to fade very slowly a year after explosion. Our analysis of the light curve suggests that a non-Type Ia supernova component becomes prominent ∼20 days after explosion. Modeling of the non-Type Ia supernova component as heating from the shock interaction of the supernova ejecta with pre-existing circumstellar material suggests the presence of a ∼1.7 × 10^15 cm gap or trough between the progenitor system and the surrounding circumstellar material. This gap could be due to significantly lower mass loss ∼15 (v_w / 10 km s^-1)^-1 years prior to explosion, or to evacuation of the circumstellar material by a low-density fast wind. The latter is consistent with observed properties of proto-planetary nebulae and with models of white-dwarf + asymptotic giant branch star progenitor systems with the asymptotic giant branch star in the proto-planetary nebula phase.
10.1086/424826
[ "https://arxiv.org/pdf/astro-ph/0406191v1.pdf" ]
14,001,603
astro-ph/0406191
054d41f1a96e8b8c249c6d285d0de8af4c9b1155
Photometry of SN 2002ic and Implications for the Progenitor Mass-Loss History
7 Jun 2004 (2004 May 6)
W. M. Wood-Vasey ([email protected]), L. Wang, G. Aldering
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
Subject headings: stars: winds - supernovae - supernovae: individual (2002ic)
Introduction

Historically, the fundamental division of supernova (SN) types was defined by the absence (Type I) or presence (Type II) of hydrogen in the observed spectrum. Later refinements distinguished Type Ia supernovae from other types of supernovae by the presence of strong silicon absorption features in their spectra (Wheeler & Harkness 1990; Filippenko 1997). Type Ia supernovae (SNe Ia) are generally accepted to result from the thermonuclear burning of a white dwarf in a binary system, whereas all the other types of supernovae are believed to be produced by the collapse of the stellar core, an event which leads to the formation of a neutron star or black hole.

While interaction with circumstellar material (CSM) has been observed for many core-collapse supernovae, the search for evidence of CSM around Type Ia SNe has so far been unsuccessful. Cumming et al. (1996) reported high-resolution spectra of SN 1994D and found an upper limit for the pre-explosion mass-loss rate of Ṁ ∼ 1.5 × 10^-5 M_⊙ yr^-1 for an assumed wind speed of v_w = 10 km s^-1. However, they also note that this limit allows most of the expected range of mass-loss rates from symbiotic systems (Ṁ ≲ 2 × 10^-5 M_⊙ yr^-1). On the other hand, the surprisingly strong high-velocity Ca II absorption and associated high degree of linear polarization observed in SN 2001el by Wang et al. (2003), and the high-velocity features in SN 2003du reported by Gerardy et al. (2004), have led these authors to suggest that the high-velocity Ca feature could be the result of the interaction between the supernova ejecta and a CSM disk. About 0.01 M_⊙ of material is required in the disk, and the spatial extent of the disk must be small to be consistent with the absence of narrow emission lines at around optical maximum (Cumming et al. 1996). Due to the strength of the Ca II feature in SN 2001el, Wang et al. (2003) speculated that the disk of SN 2001el may have been over-abundant in Ca II. In contrast, Gerardy et al.
(2004) found that a standard solar abundance of Ca II is sufficient to explain the observed feature in SN 2003du, for which the high-velocity Ca II feature is significantly weaker than in SN 2001el.

Supernova 2002ic (Wood-Vasey et al. 2002) is a very interesting event that shows both silicon absorption (Hamuy et al. 2002) and hydrogen emission. This SN is the first case for which there is unambiguous evidence of the existence of circumstellar matter around a SN Ia, and it is therefore of great importance to the understanding of the progenitor systems and explosion mechanisms of SNe Ia. By studying the spectral polarimetry and the light curve of the Hα line, Wang et al. (2004) found the spatial extent of the hydrogen-rich material to be as large as 10^17 cm and distributed in a quite asymmetric configuration, most likely in the form of a flattened disk. The implied total mass of the hydrogen-rich CSM is a few solar masses. Similar conclusions were reached by Deng et al. (2004).

In this paper, we present new photometry of SN 2002ic and discuss the implications for the interaction of the ejecta and the CSM. Sec. 2 presents our data-processing procedure and calibration for our photometry of SN 2002ic. In Sec. 3, we discuss the light curve of SN 2002ic and the immediate implications from our data. A more in-depth investigation and qualitative modeling of the light curve of SN 2002ic as an interaction of a SN Ia with surrounding CSM is presented in Sec. 5. Our discussion in Sec. 6 presents our interpretations of the structure of the CSM surrounding SN 2002ic. Finally, in Sec. 7 we present some intriguing possibilities for the progenitor system of SN 2002ic and speculate on other possible SN 2002ic-like events.

Data Processing for SN 2002ic

Data Processing and Discovery

We discovered SN 2002ic on images from the NEAT team (Pravdo et al. 1999) taken on the Samuel Oschin 1.2-m telescope on Mt. Palomar, California.
In preparation for searching, the images were transmitted from the telescope to the High-Performance Storage System (HPSS) at the National Energy Research Scientific Computing Center (NERSC) in Oakland, California, via the HPWREN (Braun 2003) and ESnet (U.S. Department of Energy 2004) networks. These data were then automatically processed and reduced on the NERSC Parallel Distributed System Facility (PDSF) using software written at Lawrence Berkeley National Laboratory by WMWV and the Supernova Cosmology Project.

The first-level processing of the NEAT images involved decompression and conversion from the NEAT internal format used for transfer to the standard astronomical FITS format, subtraction of the dark current for these thermoelectrically cooled CCDs, and flat-fielding with sky flats constructed from a sample of the images from the same night. These processed images were then loaded into an image database, and archival copies were stored on HPSS. The images were further processed to remove the sky background. An object-finding algorithm was used to locate and classify the stars and galaxies in the fields. The stars were then matched and calibrated against the USNO A1.0 POSS-E catalog (Monet et al. 1996) to derive a magnitude zeropoint for each image. There were typically a few hundred USNO A1.0 stars in each 0.25 sq. deg. image. The supernova was discovered by subtracting PSF-matched historical NEAT images from new images, then automatically detecting residual sources for subsequent human inspection (see Wood-Vasey et al. (2004)).

Photometry

For analysis, we assembled all NEAT images, including later images kindly taken at our request by the NEAT team. Light curves were generated using aperture photometry scaled to the effective seeing of each image. A set of the 4 best-seeing (< 3″) reference images was selected from among all NEAT Palomar pre-explosion images from 2001 of SN 2002ic.
Multiple reference images were chosen to better constrain any underlying galaxy flux. The differential flux in an aperture around SN 2002ic was measured between each reference image and every other image of SN 2002ic. Aperture correction was performed to account for the different seeing and pixel scales of the images. The overall flux ratio between each reference image and light-curve image was tracked and normalized with respect to a primary reference image. This primary reference image was chosen from the reference images used for the image subtraction on which SN 2002ic was originally discovered. The flux differences calculated relative to each reference were combined in a noise-weighted average for each image to yield an average flux for the image. As the observations were taken within a span of less than one hour on each night, the results from the images of a given night were averaged to produce a single light-curve point for that night. The reference zeropoint calculated for the primary reference image from the above USNO A1.0 POSS-E calibration was used to set the magnitudes for the rest of the measured fluxes.

Table 1 reports these magnitudes and associated measurement uncertainties. An overall systematic uncertainty in the zeropoint calibration is not included in the listed errors. The USNO A1.0 POSS-E catalog suffers from systematic field-to-field errors of ∼0.25 magnitudes in the northern hemisphere (Monet et al. 1996). The conversion of POSS-E magnitudes to V-band magnitudes for a SN Ia is relatively robust, as a SN Ia near maximum resembles a ∼10,000 K blackbody quite similar to Vega in the wavelength range from 4,500-10,000 Å. At late times, the observations of Wang et al. (2004) show that the smoothed spectrum of SN 2002ic tracks that of Vega redward of 5,000 Å.
We estimate that, taken together, the calibration of our unfiltered observations with these POSS-E magnitudes and the subsequent comparison with V-band magnitudes are susceptible to a 0.4 magnitude systematic uncertainty. Any such systematic effect is constant for all data points and stems directly from the magnitude calibration of the primary reference. However, the observed NEAT POSS-E magnitudes show agreement with the V-band magnitudes of and with the V-band magnitudes obtained from integrating the spectrophotometry of Wang et al. (2004) to significantly better than any 0.4 magnitude systematic uncertainty estimate. This synthesized VLT photometry is presented in Table 2. Comparing the photometry of SN 2002ic and a nearby reference star with a similar color (B − R = 0.3), we find agreement between the VLT V-band acquisition-camera images and the NEAT images to within ±0.05 magnitudes. Given this good agreement, it appears that our POSS-E-calibrated magnitudes for SN 2002ic can be used effectively as V-band photometry points.

Mario Hamuy was kind enough to share his BVI secondary standard stars from the field of SN 2002ic. We attempted to use these stars to calculate the color correction from our POSS-E magnitudes to V-band, but our analysis predicted an adjustment of up to +0.4 magnitudes. This was inconsistent with the good agreement with the VLT magnitudes (calculated correction = +0.1 mag) at late times and with the Hamuy V-band points after maximum (+0.4 mag). This disagreement is not fully understood. We note, however, that the colors of the secondary standard stars did not extend far enough to the blue to cover the majority of the color range of the supernova during our observations (a common problem when observing hot objects such as supernovae). In addition, as there is no color information from before maximum light, it is possible that SN 2002ic does not follow the color evolution of a typical SN Ia.
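The differential-photometry bookkeeping described above — per-reference flux differences combined in a noise-weighted average, then converted to magnitudes through the primary-reference zeropoint — can be sketched as follows; the function names and all numbers are illustrative, not values from the paper's pipeline:

```python
import numpy as np

def weighted_mean_flux(fluxes, errors):
    """Inverse-variance weighted average of per-reference differential fluxes,
    with the propagated uncertainty of the mean."""
    f = np.asarray(fluxes, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float)**2
    mean = np.sum(w * f) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

def flux_to_mag(flux, zeropoint):
    """Convert a (positive) flux in counts to a calibrated magnitude."""
    return zeropoint - 2.5 * np.log10(flux)

# Differential fluxes of the SN measured against 4 reference images (made up)
fluxes = [1050.0, 980.0, 1010.0, 995.0]
errors = [40.0, 55.0, 35.0, 50.0]
f, sigma = weighted_mean_flux(fluxes, errors)
# Illustrative zeropoint; in the pipeline this comes from the USNO A1.0 calibration
m = flux_to_mag(f, zeropoint=25.0)
print(f, sigma, m)
```

The weighted mean down-weights the noisier reference images, which is why a single poor-seeing reference does not dominate a night's light-curve point.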
Our newly reported pre-maximum photometry points (see Table 1 and Fig. 1) are invaluable for disentangling the SN and CSM components, which we now proceed to do.

Light Curve of SN 2002ic

The light curve of SN 2002ic is noticeably different from that of a normal SN Ia, as can be seen in Fig. 2, and as was first noted by . The detection of hydrogen emission lines in the spectra of SN 2002ic in combination with the slow decay of the light curve is seen as evidence for interaction of the SN ejecta and radiation with a hydrogen-rich CSM (Wang et al. 2004). The profile of the hydrogen emission line and the flat light curves can be understood in the context of Type IIn supernovae as discussed in Chugai et al. (2002), Chugai & Yungelson (2004), and references therein. The data presented here show that the slow decay has continued ∼320 days after maximum at a rate of ∼0.004 mag/day, a rate that is significantly slower than the 0.01 mag/day decay rate expected from Co-56 decay (also see Deng et al. (2004)). In addition, our early-time points show that the light curve of SN 2002ic was consistent with a pure SN Ia early in its evolution. This implies that there was a significant time delay between the explosion and the development of substantial radiation from the CSM interaction, possibly due to a physical gap between the progenitor explosion and the beginning of the CSM. After maximum, we note the existence of a second bump in the light curve, which is put in clear relief by our photometry data on JD 2452628.6. We interpret this second bump as evidence for further structure in the CSM. performed a spectroscopic decomposition of the underlying supernova and ejecta-CSM interaction components. We perform here an analogous photometric decomposition. To decompose the observed light curve into the contributions from the SN material and the shock-heated CSM, we first consider a range of light curve stretch values (Perlmutter et al.
1997), using the magnitude-stretch relation, ∆m = 1.18 (1 − s) (Knop et al. 2003), applied to the normal SN Ia template light curve of Goldhaber et al. (2001); we consider the remaining flux as being due to the SN ejecta-CSM interaction (see Fig. 2). At early times, the inferred contribution of the CSM is dependent on the stretch of the template chosen, but at later times the CSM component completely dominates for any value of the stretch parameter. It is not possible to disentangle the contribution of the CSM from that of the SN at maximum light, although a normal SN Ia at the redshift of SN 2002ic, z = 0.0666, corresponding to a distance modulus of 37.44 for H_0 = 72 km s^−1 Mpc^−1 (Freedman et al. 2001), would only generate about half of the flux observed for SN 2002ic at maximum. find that SN 2002ic resembles SN 1991T spectroscopically and note that SN 1991T/SN 1999aa-like events are brighter a month after maximum light than explainable by the standard stretch relation. Even compared with a SN 1991T-like event (stretch = 1.126, ∆m = −0.15, based on the template used in Knop et al. (2003)), SN 2002ic for the first 50 days is thus much too luminous to be due entirely to a 91T-like supernova. In addition, the spectroscopically-inferred CSM-interaction contribution of (open triangles in Fig. 2) limits the SN contribution at maximum to that expected from a normal SN Ia. After 50 days, SN 2002ic exhibits even more significant non-SN Ia-like behavior.

Decomposition of SN Ia and CSM components

We next use the formalism of Chevalier & Fransson (1994) to fit a simple interacting SN ejecta-CSM model to the observed data. While Chevalier & Fransson (1994) focus on SNe II, their formalism is generally applicable to SNe ejecta interacting with a surrounding CSM. We simultaneously fit the SN Ia flux and the luminosity from the SN ejecta-CSM interaction. Our analysis allows us to infer the integrated radial density distribution of the CSM surrounding SN 2002ic.
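The magnitude-stretch relation used in the template fits above is a one-line evaluation; a negative ∆m means the light curve is brighter than the stretch-1 template (sketch; the function name and example stretch values are ours):

```python
def stretch_mag_offset(s, slope=1.18):
    """Peak-magnitude offset for a stretch-s SN Ia light curve,
    Delta-m = 1.18 * (1 - s) (Knop et al. 2003)."""
    return slope * (1.0 - s)

dm_bright = stretch_mag_offset(1.126)  # SN 1991T-like stretch -> brighter (negative)
dm_faint = stretch_mag_offset(0.8)     # low stretch -> fainter (positive)
```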
Simple Scaling of the SN Ejecta-CSM Interaction

Following the hydrodynamic models of Chevalier & Fransson (1994), we assume a power-law supernova ejecta density of

    ρ_SN ∝ t^(n−3) r^(−n),    (1)

where t is the time since explosion, r is the radius of the ejecta, and n is the power-law index of a radial fall-off in the ejecta density. Chevalier & Fransson (2001) note that for SNe Ia an exponential ejecta profile is perhaps preferred. However, this profile does not yield an analytical solution and so, for the moment, we proceed assuming a power-law profile. In Sec. 6 we explore the ramifications of an exponential ejecta profile. Chevalier & Fransson (1994) give the time evolution of the shock-front radius, R_s, as

    R_s = [ 2/((n−3)(n−4)) · (4π v_w / Ṁ) · A ]^(1/(n−2)) t^((n−3)/(n−2)),    (2)

where v_w is the velocity of the pre-explosion stellar wind, Ṁ is the pre-explosion mass-loss rate, and A is a constant in the appropriate units for the given power-law index n. Taking the parameters in the square brackets as fixed constants, we can calculate the shock velocity, v_s, as

    v_s = [ 2/((n−3)(n−4)) · (4π v_w / Ṁ) · A ]^(1/(n−2)) · ((n−3)/(n−2)) · t^(−1/(n−2)).    (3)

Thus the shock velocity goes as

    v_s ∝ t^(−α),    (4)

where

    α = 1/(n−2).    (5)

We assume that the luminosity of the ejecta-CSM interaction is fed by the energy imparted at the shock front and view the unshocked wind as crossing the shock front with a velocity of v_s + v_w ≈ v_s. As the wind particles cross the shock front, they are thermalized and their crossing kinetic energy, K.E. = ∫ (1/2) ρ_w v_s² dV, is converted to thermal energy. Putting this in terms of the mass-loss rate, Ṁ, we can express the CSM density as

    ρ_w = Ṁ / (4π R_s² v_w),    (6)

and we can calculate the energy available to be converted to luminosity, L, as

    L = α(λ, t) d(K.E.)/dt = α(λ, t) · (1/2) · [Ṁ / (4π R_s² v_w)] · v_s² · (dV/dt) = α(λ, t) · (1/2) · [Ṁ / (4π R_s² v_w)] · v_s² · v_s · 4π R_s².    (7)

The luminosity dependence on R_s drops out and we have
    L = α(λ, t) · (1/2) · (Ṁ / v_w) · v_s³.    (8)

A key missing ingredient is a more detailed modeling of the kinetic energy to optical luminosity conversion term, α(λ, t). We note that the available kinetic energy is on the order of 1.6 × 10^44 erg s^−1 for Ṁ = 10^−5 M⊙ yr^−1, v_s = 10^4 km s^−1, and v_w = 10 km s^−1. This implies a conversion efficiency from shock-interaction K.E. to luminosity of 50%, given the luminosity, 1.6 × 10^44 erg s^−1, of SN 2002ic and the typical luminosity of a SN Ia near maximum of 0.8 × 10^44 erg s^−1. Assuming this constant conversion produces reasonable agreement with the data, so we proceed with this simple assumption. Using Eq. 4 to give the time dependence of v_s, we obtain the time dependence of the luminosity,

    L ∝ v_s³ ∝ t^(−3α),    (9)

which can be expressed in magnitude units as

    m_ejecta−CSM = C − (5/2) log10(t^(−3α)) = C + (15/2) α log10(t),    (10)

where C is a constant that incorporates Ṁ, ρ_SN, v_w, n, and the appropriate units for those parameters. The difference in magnitude between two times, t_1 and t_2, then becomes

    m_t2 − m_t1 = (15/2) α log10(t_2/t_1).    (11)

We obtain a date of B-maximum for the supernova component of JD 2452606 from our SN Ia light-curve analysis. Our fit yields α = 0.16 ⇒ n = 8.5 for any fixed Ṁ and v_w. This n is squarely in the range of values suggested by Chevalier & Fransson (1994) as being typical for SN ejecta. While Chevalier & Fransson (1994) is framed in the context of SNe II, their formalism applies to any SN explosion into a surrounding medium whose ejecta density profile is described by their analytic model. The interaction scaling relations presented above are useful for decomposing the interaction and supernova contributions to the total light curve of SN 2002ic. This simple, analytic description approximates our data reasonably well.
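As a rough consistency check (our own back-of-envelope, using the fitted α = 0.16): differentiating Eq. (10) gives an instantaneous decline rate dm/dt = (15/2) α / (t ln 10), which at t ≈ 150 days after explosion is ≈0.0035 mag/day, close to the observed ∼0.004 mag/day slow decline:

```python
import math

def csm_decline_rate(alpha, t_days):
    """Instantaneous decline rate (mag/day) of the ejecta-CSM component,
    d m/dt = (15/2) * alpha / (t * ln 10), from differentiating Eq. (10)."""
    return 7.5 * alpha / (t_days * math.log(10.0))

rate = csm_decline_rate(0.16, 150.0)  # ~150 days after explosion
```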
However, more sophisticated theoretical calculations, which are beyond the scope of this paper, are necessary to more quantitatively derive the detailed physical parameters of the SN ejecta and the CSM (see Chugai & Yungelson (2004)).

Discussion

Inferred CSM Structure and Progenitor Mass-Loss History

We can match the inferred SN ejecta-CSM component of with the interaction model described above and reproduce the light curve near maximum light by adding the flux from a normal SN Ia. Fig. 3 shows our model fit in comparison with the observed light curve of SN 2002ic. Note that our model does not match the observed bump at 40 days after maximum. note a similar disagreement, but the data we present here show that this region is clearly a second bump rather than just a very slow decline. This discrepancy could be explained by a change in the density of the circumstellar medium due to a change in the progenitor mass-loss evolution at that point. In fact, our simple fit is too bright before the bump and too dim during the bump, implying more structure in the underlying CSM than accounted for in our model. Any clumpiness in the progenitor wind would have to be on the largest angular scale to create such a bump and would not explain the new decline rate shown by our observations to extend out to late time. We find that our data are consistent with a model comprising three CSM density regions: (i) an evacuated region out to 20 v_s days; (ii) CSM material at a nominal density (ρ ∝ r^−2) out to ∼100 v_s days; and (iii) an increase in CSM density at ∼100 v_s days, with a subsequent r^−2 fall-off extending through the 800 v_s days covered by our observations. This model agrees well with the light curve of SN 2002ic, but, as it involves too many parameters to result in a unique fit using only the photometric data, we do not show it here. Our data, particularly the pre-maximum observations, provide key constraints on the nature of the progenitor system of SN 2002ic.
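The three-region CSM structure can be summarized as a toy piecewise wind-density profile. The numbers here (the unit density and the factor-of-five overdensity) are illustrative placeholders of our own, not fitted values; radii are in units of v_s × 1 day:

```python
def csm_density(r, rho0=1.0, jump=5.0):
    """Toy three-zone CSM profile, r in units of v_s * 1 day:
    (i)  evacuated cavity inside r = 20;
    (ii) nominal wind, rho0 * (20/r)^2, out to r = 100;
    (iii) overdense by 'jump' at r = 100, then falling as r^-2."""
    r = float(r)
    if r < 20.0:
        return 0.0
    if r < 100.0:
        return rho0 * (20.0 / r) ** 2
    return jump * rho0 * (20.0 / r) ** 2
```

The r^−2 pieces correspond to steady mass loss (ρ_w = Ṁ/(4π r² v_w)); the jump at r = 100 mimics a change in Ṁ/v_w a century before the explosion.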
In the context of our model, a mass-loss gradient of some form is required by our early data points. As a computational convenience, our model assumes that the transition to a nominal circumstellar density is described by sin(t / 20 days). If the mass-loss rate had been constant until just prior to the explosion, then the t^(−3α) model light curve would continue to curve upward and significantly violate our first data point at the 7σ level (as shown by the line extended from the ejecta-CSM component in Fig. 3). If the conversion of kinetic energy to luminosity is immediate and roughly constant in time, as assumed in our model, we would conclude that a low-density region must have existed between the star and 20 days · v_s out from the star. For example, as a stellar system transitions from an AGB star to a proto-planetary nebula (PPN), it changes from emitting a denser, cooler wind to a hotter, less dense wind (Kwok 1993). This hot wind pushes the older wind out farther and creates a sharp density gradient and possible clumping near the interface between the cool and hot winds (Young et al. 1992). This overall structure is similar to that which we infer from our modeling of SN 2002ic. Assuming a SN ejecta speed of v_s = 30,000 km s^−1 and a progenitor star hot-wind speed of v_w = 100 km s^−1 (Young et al. 1992; Herpin et al. 2002), we conclude that the hot wind must have begun just ∼15 years prior to the SN explosion. Alternatively, there is also the possibility that the conversion from kinetic energy to optical luminosity is for some reason significantly less efficient at very early times. It is interesting to note that the observed light curve decline rate of SN 2002ic after 40 days past maximum light is apparently constant during these observations. Spectroscopic study shows the highest observed velocity of the ejecta to be around 11,000 km s^−1 at day 200 after maximum light.
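The ∼15 year estimate follows from simple kinematics: a cavity of size v_s × 20 days must have been swept out by a wind moving at v_w. A sketch with the values quoted above (constants are approximate):

```python
SEC_PER_DAY = 86400.0
SEC_PER_YEAR = 3.156e7

def hot_wind_lead_time_yr(gap_days, v_s_kms, v_w_kms):
    """Years before the explosion at which the fast wind must have begun,
    for a cavity of size v_s * gap_days cleared at wind speed v_w."""
    gap_cm = v_s_kms * 1e5 * gap_days * SEC_PER_DAY
    return gap_cm / (v_w_kms * 1e5) / SEC_PER_YEAR

lead = hot_wind_lead_time_yr(20.0, 30000.0, 100.0)  # ~16 yr
```

Since the lead time scales as 1/v_w, a slower (faster) hot wind pushes the transition proportionally further into (closer to) the past.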
If we assume a constant expansion rate, these observations of continuing emission through ∼320 days after maximum provide a lower limit of ∼3 × 10^16 cm for the spatial extent of the CSM. Compared to a nominal pre-explosion stellar wind speed of 10 km s^−1, the ejecta is moving ∼1000 times more rapidly and thus has overtaken the progenitor wind from the past ∼800 years. The overall smoothness of the late-time light curve shows the radial density profile of the CSM to be similarly smooth and thus implies a fairly uniform mass-loss rate between 100-800 years prior to the SN explosion. We take the lack of enhanced flux at early times and the bump after maximum light as evidence for a gap between the SN progenitor and the dense CSM as well as a significant further change in the mass-loss of the progenitor system ∼100 years prior to the SN explosion.

Reinterpretation of Past SNe IIn

These new results prompt a reexamination of supernovae previously classified as Type IIn, specifically SN 1988Z (Pollas et al. 1988; Stathakis & Sadler 1991), SN 1997cy (Sabine et al. 1997; Turatto et al. 2000; Germany et al. 2000), and SN 1999E (Cappellaro et al. 1999; Siloti et al. 2000; Rigon et al. 2003). These supernovae bear striking similarities in their light curves and their late-time spectra to SN 2002ic. However, SN 2002ic is the only one of these supernovae to have been observed early in its evolution. If SN 2002ic had been observed at the later times typical of the observations of these Type IIn SNe, it would not have been identified as a Type Ia. It is interesting to note that Chugai & Danziger (1994) found from models of light curves of SN 1988Z that the mass of the SN 1988Z ejecta is on the order of 1 M⊙, which is consistent with a SN Ia. We next explore the possibility that SN 1997cy and SN 1999E (a close parallel to SN 1997cy) may have been systems like SN 2002ic. found that available spectra of SN 1997cy were very similar to post-maximum spectra of SN 2002ic.
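The quoted CSM extent and wind look-back time follow from free-expansion kinematics (our own arithmetic check; constants are approximate):

```python
SEC_PER_DAY = 86400.0
SEC_PER_YEAR = 3.156e7

def csm_extent_cm(v_ej_kms, t_days):
    """Lower limit on the CSM radius from continuing emission:
    R >= v_ej * t, assuming free expansion at constant speed."""
    return v_ej_kms * 1e5 * t_days * SEC_PER_DAY

R = csm_extent_cm(11000.0, 320.0)                    # ~3e16 cm
lookback_yr = R / (10.0 * 1e5) / SEC_PER_YEAR        # wind at 10 km/s
```

The look-back time comes out near 10^3 years, consistent with the ∼800 year figure quoted in the text given the order-of-magnitude inputs.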
We complement this spectroscopic similarity with a comparison of the photometric behavior of SN 1997cy and SN 2002ic. As shown in Fig. 4, the late-time behavior of both SNe appears remarkably similar, with both SNe fading by ∼2.5 magnitudes 8 months after their respective discoveries. The luminosity decay rate of the ejecta-CSM interaction is directly related to the assumed functional form for the ejecta density and the mass-loss rate (Eq. 1). The observed late-time light curves of SN 1997cy and SN 1999E clearly follow a linear magnitude decay with time, which implies an exponential flux vs. time dependence: m ∝ t ⇒ flux ∝ e^(Ct). If the ejecta density followed an exponential rather than a power-law decay, the magnitude would similarly follow a linear magnitude-time decay. Fig. 5 shows a fit to the light curve of SN 2002ic using the framework of Sec. 5 but using an exponential SN-ejecta density profile. Chevalier & Fransson (2001) suggest that SNe Ia follow exponential ejecta profiles (Dwarkadas & Chevalier 1998) while core-collapse SNe follow power-law decays (Chevalier & Soker 1989; Matzner & McKee 1999). Thus, if SN 1997cy and SN 1999E had been core-collapse events, they would have been expected to show power-law declines. Instead, their decline behavior lends further credence to the idea that they were SN Ia events rather than core-collapse SNe. Although we modeled the light curve of SN 2002ic using a power-law ejecta profile, SN 2002ic was not observed between 100 and 200 days after explosion, so its decay behavior during that time is not well constrained. Its late-time light curve is consistent with the linear magnitude behavior of SN 1997cy. We fit such a profile to our data (see Figs. 5 & 6) and arrive at an exponential fit to the flux of the form e^(−0.003 t), where t is measured in days. As the solution for the SN-ejecta interaction is not analytic, we cannot immediately relate the exponential decay parameter to any particular property of the SN ejecta.
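The linear-magnitude/exponential-flux equivalence is just m = −2.5 log10(flux): a flux ∝ e^(−c t) gives a constant decline rate of 2.5 c / ln 10 mag/day, e.g. ≈0.0033 mag/day for the fitted c = 0.003 day^−1 (a sketch of our own, not from the fitting code):

```python
import math

def mag_rate_from_exp(c_per_day):
    """Constant decline rate (mag/day) implied by flux ∝ exp(-c * t),
    via m = -2.5 log10(flux) = (2.5 c / ln 10) t + const."""
    return 2.5 * c_per_day / math.log(10.0)

rate = mag_rate_from_exp(0.003)
```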
Taken together in the context of the Chevalier & Fransson (1994) formalism, these light curves lend support to numerical simulations of the density profiles of SNe Ia explosions. If we take the time of maximum for SN 1997cy to be the earliest light curve point from Germany et al. (2000) and shift the magnitudes from the redshift of SN 1997cy, z = 0.063 (Germany et al. 2000), to the redshift of SN 2002ic (z = 0.0666), we find that the luminosities of both SNe agree remarkably well. This further supports the hypothesis that SN 1997cy and SN 2002ic are related events. However, we note that the explosion date of SN 1997cy is uncertain and may have been 2-3 months prior to the discovery date (Germany et al. 2000; Turatto et al. 2000). Figs. 3 & 5 show that neither a power-law nor an exponential model allows for a significant ejecta-CSM contribution before maximum light. In each figure, the "[exp] ejecta-CSM fit w/o gap" line shows how the SN ejecta-CSM interaction would continue if the density profile remained the same. Both lines significantly disagree with the earliest light curve point. This is consistent with our earlier conclusion that the light curve is dominated by the SN until near maximum light.

Relation to Proto-Planetary Nebulae?

The massive CSM and spatial extent inferred for SN 2002ic are surprisingly similar to certain PPNe and the atmospheres of very late red giant stars evolving to PPNe. Such structures are normally short-lived (less than or on the order of 1000 years). The polarization seen by Wang et al. (2004) suggests the presence of a disk-like structure surrounding SN 2002ic. Furthermore, the Hα luminosity and mass and size estimates suggest a clumpy medium. Combined with the evidence presented here for a possible transition region between a slow and fast wind, we are left with an object very similar to observed PPNe.
We encourage more detailed radiative hydrodynamic modeling of SNe Ia in a surrounding medium, as our data provide valuable constraints on this important early-time phase. Of particular interest are bi-polar PPNe where a WD companion emits a fast wind that shapes the AGB star wind while simultaneously accreting mass from the AGB star (Soker & Rappaport 2000). Typical thermonuclear supernovae are believed to have accretion time scales of 10^7 years, yet several SNe Ia (SN 2002ic and possibly SN 1988Z, SN 1997cy, and SN 1999E) out of several hundred have been observed to show evidence for significant CSM. If the presence of a detectable CSM is taken as evidence that these SNe exploded within a particular ∼1000 year period in their respective evolution, such as the PPN phase, this coincidence would imply a factor of ∼100 enhancement (10^7 years / 1000 years / 100) in the supernova explosion rate during this period. Thus we suggest that it is not a coincidence that the supernova explosion is triggered during this phase.

Conclusions

The supernova 2002ic exhibits the light curve behavior and hydrogen emission of a Type IIn supernova after maximum but was spectroscopically identified as a Type Ia supernova near maximum light. The additional emission is attributed to a contribution from surrounding CSM. This emission remains quite significant ∼11 months after the explosion. The discovery of dense CSM surrounding a Type Ia supernova strongly favors the binary nature of Type Ia progenitor systems, to explain the simultaneous presence of at least one degenerate object and substantial material presumably ejected by a significant stellar wind. However, it is as yet unclear whether the available data for SN 2002ic can prove or disprove either the single- or the double-degenerate scenario, although the inferred resemblance to PPN systems is suggestive.
The early-time light curve data presented in this paper strongly suggest the existence of a ∼15 (v_w / 10 km s^−1)^−1 year gap between the exploding object and the surrounding CSM. Our discovery and early- through late-time photometric followup of SN 2002ic suggests a reinterpretation of some Type IIn events as Type Ia thermonuclear explosions shrouded by a substantial layer of circumstellar material.

Acknowledgments

We would like to acknowledge our fruitful collaboration with the NEAT group at the Jet Propulsion Laboratory, operated under contract NAS7-030001 with the National Aeronautics and Space Administration, which provided the images for our supernova work. We thank Mario Hamuy for sharing his BVI photometry of stars in the field of SN 2002ic. We are grateful to Nikolai Chugai for helpful comments. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. We would like to thank them for making available the significant computational resources required for our supernova search. In addition, we thank NERSC for a generous allocation of computing time on the IBM SP RS/6000 used by Peter Nugent and Rollin Thomas to reconstruct the response curve for the NEAT detector on our behalf. HPWREN is operated by the University of California, San Diego under NSF Grant Number ANI-0087344. This work was supported in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, of the US Department of Energy under Contract DE-AC03-76SF000098. WMWV was supported in part by a National Science Foundation Graduate Research Fellowship. We thank the anonymous referee for helpful and detailed comments that improved the scientific clarity of this manuscript.

Department of Physics, University of California, Berkeley, CA 94720
1 http://hpwren.ucsd.edu
2 http://www.es.net

Fig. 1.—The unfiltered optical light curve of SN 2002ic as observed by NEAT with the Palomar 1.2-m and Haleakala 1.2-m telescopes (see Table 1). The magnitudes have been calibrated against the USNO-A1.0 POSS-E stars in the surrounding field. No color correction has been applied. Also shown are the observed V-band magnitudes from and V-band magnitudes from the spectrophotometry of Wang et al. (2004).

Fig. 2.—A template SN Ia V-band light curve (solid lines; stretch decreases from top to bottom line) shown for comparison with the photometric observations at several stretch values, s, where the magnitude-stretch relation ∆m = 1.18 (1 − s) has been applied. The difference between the observed photometry points and the template fit has been smoothed over a 50-day window (dashed lines). Note that an assumption of no CSM contribution in the first 15 days after maximum light (i.e. s = 1.5) is in conflict with the spectroscopic measurements of (open triangles; no error bars available). (A.J. Conley 2004, private communication), would lie near the stretch = 1 line of Fig. 2.

Fig. 3.—The observed photometry compared with the SN + power-law ejecta-CSM model described in Sec. 5.

Fig. 4.—The NEAT unfiltered and Hamuy V-band observations of SN 2002ic compared to the K-corrected V-band observations of SN 1997cy from Germany et al. (2000). No date of maximum or magnitude uncertainties are available for SN 1997cy. Here the maximum observed magnitude for SN 1997cy has been adjusted to the redshift of SN 2002ic, z = 0.0666, and the date of the first light curve point of SN 1997cy has been set to the date of maximum for SN 2002ic from our V-band fit.

Fig. 5.—The observed photometry compared with the SN + exponential ejecta-CSM model described in Sec. 6.

Fig. 6.—A comparison of fits with power-law (solid) and exponential (dotted) SN ejecta density profiles.

Table 1.—The unfiltered magnitudes for SN 2002ic as observed by the NEAT telescopes and shown in Fig. 1. The left brackets ([) denote limiting magnitudes at a signal-to-noise of 3. A systematic uncertainty of 0.4 magnitudes in the overall calibration is not included in the tabulated uncertainties. (See Sec. 2 for further discussion of our calibration.)

JD − 2452000 | E Mag  | Uncertainty   | Telescope
195.4999     | [20.52 |               | Palomar 1.2-m
224.2479     | [20.44 |               | Palomar 1.2-m
250.2492     | [21.01 |               | Palomar 1.2-m
577.4982     | [20.29 |               | Haleakala 1.2-m
591.2465     | 19.04  | −0.07 / +0.07 | Palomar 1.2-m
598.2519     | 18.20  | −0.06 / +0.06 | Palomar 1.2-m
599.3306     | 18.11  | −0.03 / +0.03 | Palomar 1.2-m
628.0956     | 18.12  | −0.03 / +0.03 | Palomar 1.2-m
656.2508     | 18.06  | −0.13 / +0.12 | Haleakala 1.2-m
674.2524     | 18.47  | −0.13 / +0.12 | Haleakala 1.2-m
680.2519     | 18.53  | −0.10 / +0.09 | Haleakala 1.2-m
849.5003     | [18.88 |               | Haleakala 1.2-m
853.4994     | [18.54 |               | Haleakala 1.2-m
855.4963     | [19.32 |               | Haleakala 1.2-m
858.4986     | [19.23 |               | Haleakala 1.2-m
860.4992     | [18.74 |               | Haleakala 1.2-m
864.5017     | [17.17 |               | Haleakala 1.2-m
874.4982     | 19.05  | −0.15 / +0.13 | Haleakala 1.2-m
876.4998     | 19.15  | −0.10 / +0.09 | Haleakala 1.2-m
902.4989     | 19.29  | −0.07 / +0.07 | Palomar 1.2-m
903.4138     | 19.47  | −0.08 / +0.08 | Palomar 1.2-m
932.2942     | 19.42  | −0.10 / +0.09 | Palomar 1.2-m

Table 2.—The V-band magnitudes for SN 2002ic as synthesized from the VLT spectrophotometry of Wang et al. (2004) and shown in Fig. 2.

References

Braun, H.-W. 2003, High Performance Wireless Research and Education Network, http://hpwren.ucsd.edu/
Cappellaro, E., Turatto, M., & Mazzali, P. 1999, IAU Circ., 7091, 1
Chevalier, R. A., & Fransson, C. 1994, ApJ, 420, 268
Chevalier, R. A., & Fransson, C. 2001, ArXiv Astrophysics e-prints, astro-ph/0110060
Chevalier, R. A., & Soker, N. 1989, ApJ, 341, 867
Chugai, N. N., Blinnikov, S. I., Fassia, A., Lundqvist, P., Meikle, W. P. S., & Sorokina, E. I. 2002, MNRAS, 330, 473
Chugai, N. N., & Danziger, I. J. 1994, MNRAS, 268, 173
Chugai, N. N., & Yungelson, L. R. 2004, Astron. Letters, 30, 65
Cumming, R. J., Lundqvist, P., Smith, L. J., Pettini, M., & King, D. L. 1996, MNRAS, 283, 1355
Deng, J., Kawabata, K. S., Ohyama, Y., Nomoto, K., Mazzali, P. A., Wang, L., Jeffery, D. J., Iye, M., Tomita, H., & Yoshii, Y. 2004, ApJ, 605, L37
Dwarkadas, V. V., & Chevalier, R. A. 1998, ApJ, 497, 807
Filippenko, A. V. 1997, ARA&A, 35, 309
Freedman, W. L., Madore, B. F., Gibson, B. K., Ferrarese, L., Kelson, D. D., Sakai, S., Mould, J. R., Kennicutt, R. C., Ford, H. C., Graham, J. A., Huchra, J. P., Hughes, S. M. G., Illingworth, G. D., Macri, L. M., & Stetson, P. B. 2001, ApJ, 553, 47
Gerardy, C. L., Höflich, P., Quimby, R., Wang, L., Wheeler, J. C., Fesen, R. A., Marion, G. H., Nomoto, K., & Schaefer, B. E. 2004, ApJ, in press
Germany, L. M., Reiss, D. J., Sadler, E. M., Schmidt, B. P., & Stubbs, C. W. 2000, ApJ, 533, 320
Goldhaber, G., Groom, D. E., Kim, A., Aldering, G., Astier, P., Conley, A., Deustua, S. E., Ellis, R., Fabbro, S., Fruchter, A. S., Goobar, A., Hook, I., Irwin, M., Kim, M., Knop, R. A., Lidman, C., McMahon, R., Nugent, P. E., Pain, R., Panagia, N., Pennypacker, C. R., Perlmutter, S., Ruiz-Lapuente, P., Schaefer, B., Walton, N. A., & York, T. 2001, ApJ, 558, 359
Hamuy, M., Maza, J., & Phillips, M. 2002, IAU Circ., 8028, 2
Hamuy, M., Phillips, M., Suntzeff, N., & Maza, J. 2003, IAU Circ., 8151, 2
Hamuy, M., Phillips, M. M., Suntzeff, N. B., Maza, J., González, L. E., Roth, M., Krisciunas, K., Morrell, N., Green, E. M., Persson, S. E., & McCarthy, P. J. 2003, Nature, 424, 651
Herpin, F., Goicoechea, J. R., Pardo, J. R., & Cernicharo, J. 2002, ApJ, 577, 961
Knop, R. A., Aldering, G., Amanullah, R., Astier, P., Blanc, G., Burns, M. S., Conley, A., Deustua, S. E., Doi, M., Ellis, R., Fabbro, S., Folatelli, G., Fruchter, A. S., Garavini, G., Garmond, S., Garton, K., Gibbons, R., Goldhaber, G., Goobar, A., Groom, D. E., Hardin, D., Hook, I., Howell, D. A., Kim, A. G., Lee, B. C., Lidman, C., Mendez, J., Nobili, S., Nugent, P. E., Pain, R., Panagia, N., Pennypacker, C. R., Perlmutter, S., Quimby, R., Raux, J., Regnault, N., Ruiz-Lapuente, P., Sainton, G., Schaefer, B., Schahmaneche, K., Smith, E., Spadafora, A. L., Stanishev, V., Sullivan, M., Walton, N. A., Wang, L., Wood-Vasey, W. M., & Yasuda, N. 2003, ApJ, 598, 102
Kwok, S. 1993, ARA&A, 31, 63
Matzner, C. D., & McKee, C. F. 1999, ApJ, 510, 379
Monet, D., Bird, A., Canzian, B., Harris, H., Reid, N., Rhodes, A., Sell, S., Ables, H., Dahn, C., Guetter, H., Henden, A., Leggett, S., Levison, H., Luginbuhl, C., Martini, J., Monet, A., Pier, J., Riepe, B., Stone, R., Vrba, F., & Walker, R. 1996, USNO-SA1.0 Catalog (Washington, DC: U.S. Naval Observatory)
Perlmutter, S., Gabi, S., Goldhaber, G., Goobar, A., Groom, D. E., Hook, I. M., Kim, A. G., Kim, M. Y., Lee, J. C., Pain, R., Pennypacker, C. R., Small, I. A., Ellis, R. S., McMahon, R. G., Boyle, B. J., Bunclark, P. S., Carter, D., Irwin, M. J., Glazebrook, K., Newberg, H. J. M., Filippenko, A. V., Matheson, T., Dopita, M., Couch, W. J., & The Supernova Cosmology Project 1997, ApJ, 483, 565
Pollas, C., Cappellaro, E., Turatto, M., & Candeo, G. 1988, IAU Circ., 4691, 1
Pravdo, S. H., Rabinowitz, D. L., Helin, E. F., Lawrence, K. J., Bambery, R. J., Clark, C. C., Groom, S. L., Levin, S., Lorre, J., Shaklan, S. B., Kervin, P., Africano, J. A., Sydney, P., & Soohoo, V. 1999, AJ, 117, 1616
Rigon, L., Turatto, M., Benetti, S., Pastorello, A., Cappellaro, E., Aretxaga, I., Vega, O., Chavushyan, V., Patat, F., Danziger, I. J., & Salvo, M. 2003, MNRAS, 340, 191
Sabine, S., Baines, D., & Howard, J. 1997, IAU Circ., 6706, 1
Siloti, S. Z., Schlegel, E. M., Challis, P., Jha, S., Kirshner, R. P., & Garnavich, P. 2000, BAAS, 32, 1538
Soker, N., & Rappaport, S. 2000, ApJ, 538, 241
Stathakis, R. A., & Sadler, E. M. 1991, MNRAS, 250, 786
Turatto, M., Suzuki, T., Mazzali, P. A., Benetti, S., Cappellaro, E., Danziger, I. J., Nomoto, K., Nakamura, T., Young, T. R., & Patat, F. 2000, ApJ, 534, L57
U.S. Department of Energy 2004, Energy Sciences Network, http://www.es.net/
Wang, L., Baade, D., Höflich, P., Khokhlov, A., Wheeler, J. C., Kasen, D., Nugent, P. E., Perlmutter, S., Fransson, C., & Lundqvist, P. 2003, ApJ, 591, 1110
Wang, L., Baade, D., Höflich, P., Wheeler, J. C., Kawabata, K., & Nomoto, K. 2004, ApJ, 604, L53
Wheeler, J. C., & Harkness, R. P. 1990, Reports of Progress in Physics, 53, 1467
Wood-Vasey, W. M., Aldering, G., Lee, B. C., Loken, S., Nugent, P., Perlmutter, S., Siegrist, J., Wang, L., Antilogus, P., Astier, P., Hardin, D., Pain, R., Copin, Y., Smadja, G., Gangler, E., Castera, A., Adam, G., Bacon, R., Lemonnier, J.-P., Pécontal, A., Pécontal, E., & Kessler, R. 2004, New Astronomy Review, 48, 637
Wood-Vasey, W. M., Aldering, G., & Nugent, P. 2002, IAU Circ., 8019, 2
Young, K., Serabyn, G., Phillips, T. G., Knapp, G. R., Guesten, R., & Schulz, A. 1992, ApJ, 385, 265
Learning Theories Reveal Loss of Pancreatic Electrical Connectivity in Diabetes as an Adaptive Response

P. Goel and A. Mehta

PLoS ONE 8(8): e70366 (2013). doi: 10.1371/journal.pone.0070366. arXiv: 1307.0131

Abstract

Cells of almost all solid tissues are connected with gap junctions which permit the direct transfer of ions and small molecules, integral to regulating coordinated function in the tissue. The pancreatic islets of Langerhans are responsible for secreting the hormone insulin in response to glucose stimulation. Gap junctions are the only electrical contacts between the beta-cells in the tissue of these excitable islets. It is generally believed that they are responsible for synchrony of the membrane voltage oscillations among beta-cells, and thereby pulsatility of insulin secretion. Most attempts to understand connectivity in islets are often interpreted, bottom-up, in terms of measurements of gap junctional conductance. This does not, however, explain systematic changes, such as a diminished junctional conductance in type 2 diabetes. We attempt to address this deficit via the model presented here, which is a learning theory of gap junctional adaptation derived with analogy to neural systems. Here, gap junctions are modelled as bonds in a beta-cell network, that are altered according to homeostatic rules of plasticity. Our analysis reveals that it is nearly impossible to view gap junctions as homogeneous across a tissue. A modified view that accommodates heterogeneity of junction strengths in the islet can explain why, for example, a loss of gap junction conductance in diabetes is necessary for an increase in plasma insulin levels following hyperglycemia.
Received April 3, 2013; Accepted June 17, 2013. Editor: Massimo Pietropaolo, University of Michigan Medical School, United States of America. Funding: No current external funding sources for this study. Competing Interests: The authors have declared that no competing interests exist. E-mail: [email protected]
Introduction

Gap junctions are clusters of intercellular channels between cells formed by the membrane proteins connexins (Cx), that mediate rapid intercellular communication via direct electric contact and diffusion of metabolites [1]. In excitable cells such as neurons, cardiac myocytes and smooth muscles, gap junctions provide efficient low-resistance pathways through which membrane voltage changes can be shared across the tissue. Besides excitable cells, gap junctions are found between cells in almost every solid tissue [1]. Gap junctions are thus central to multicellular life [2], with numerous diseases linked to connexin disorders [3], including type 2 diabetes mellitus [4-6].

The islets of Langerhans in the pancreas are clusters of largely alpha-, beta- and delta-cells that respectively control secretion of the hormones glucagon, insulin and somatostatin central to energy regulation. Gap junctions form direct connections between beta-cells [6,10,11] in islets, and are important for normal glucose-stimulated insulin secretion (GSIS) [7-9]. Gap junctions are generally believed to be important for coordinating the beta-cell electrical oscillations known as bursting, which in turn can then support pulsatile insulin secretion [6,10,12]; this view is supported by theoretical studies [13-15] as well.

The conductance strength of gap junctions evolves by the insertion or deletion of connexin proteins (Fig. 1) into junctional plaques, and by altering the single-channel conductance and probability of channel opening [1]. Whether these molecular changes constitute a systematic adaptive response of the endocrine tissue to its metabolic environment remains to be investigated, in particular from a theoretical point of view.
As with many other excitable cells, the information content of bioelectric signals [16] in islets is yet unclear. The mechanisms underlying bursting are well understood [17,18]; however, how those temporal properties regulate energy homeostasis is not. While slow (5-15 minute period) bursts are generally thought to drive secretion at stimulatory concentrations of glucose, faster (periods less than 5 minutes) oscillations are also found, typically at sub-stimulatory (basal) glucose levels; the average calcium signal, however, is comparable in either case (such as in simulations from [17], not shown). The hypothesis that a synchronous bursting of beta-cells [10,12,19] is essential to GSIS is guided by the observation of pulsatile insulin secretion from islets [20] and in vivo [21]. Gap junctions can certainly mediate synchrony in principle, as shown in both simulations [12,13] and experiments [5,22]. Whether this is their role in vivo is debatable. In general, in vitro studies do not address this question completely, because they are typically carried out with glucose perifusion. Since glucose is microscopically delivered to beta-cells via a rich blood vessel supply in the islet in vivo, oscillator entrainment by junction coupling may be far less important than expected from experiments on isolated islets, especially if the beta-cells are not too heterogeneous in frequency [23]. In fact, Rocheleau et al. [24] have performed experiments using a microfluidic chip taking care to see that glucose stimulates an islet only partially; they find partially propagated waves but not synchrony. Their result shows that gap junctions are limited in their ability to support uniform synchronization across the entire islet in the presence of a glucose gradient in the islet. It is possible that even with glucose microdelivery as in vivo, synchronization may be a more local phenomenon than has been previously appreciated. Stozer et al. 
[25] have recently demonstrated that in islet slices only local synchronization is seen across groups of beta-cells. Another theory, different from one that anticipates gap junctions serve to synchronize an islet uniformly, thus appears to be necessary to explain some of the phenomena associated with insulin secretion, and it is this that we attempt in the rest of this paper.

A paradigm that is gaining increasing recognition is that bioelectric and (epi-)genetic signaling are related as a cyclical dynamical system [16]: membrane voltage activity induces changes in mRNA expression and transcriptional regulation, which in turn leads to altered membrane channel proteins. Here we develop a theory to study an adaptive response of gap junctions to islet firing activity. Bioelectric cues are encoded as bursting, these determine junctional conductance states, and junctions respond in turn by translation modifications that alter firing rates. In this way, electric and genetic components "learn" from each other, iteratively.

While learning is integral to neural systems and functionally beneficial at the level of a single individual, many studies have focused on the collective effects of [simple forms of] individual learning and decision-making, e.g. in populations of interacting individuals, or agents. Such distributed systems, exemplifying social or ecological group behavior, also share similarities with interacting systems of statistical physics, in the nature of the local "rules" followed by the individual units as well as in the emergent behavior at the macro level. Game-theoretic approaches [26-28] are sometimes brought to bear on such issues, their underlying idea being that the behavior of an individual (its "strategy") is to a large extent determined by what the other individuals are doing.
The strategic choices of an individual are thus guided by those of the others, through considerations of the relative "payoffs" (returns) obtainable in interactive games. In this context, a stochastic model of strategic decision-making was introduced in [29], which captures the essence of the above-stated notion, i.e. selection from among a set of competing strategies based on a comparison of the expected payoffs from them. Depending upon which of the available strategic alternatives (that are being wielded by the other agents) is found to have the most favorable "outcome" in the local vicinity, every individual appropriately revises its strategic choice. Competition between prevalent strategies and adaptive changes at the individual level characterize the sociologically motivated model of [29].

Given that these two features of competition and adaptation also generally occur across the framework of activity-induced synaptic plasticity, a translation of the notions in [29] to the latter context was attempted in [30] and [31]. A model was delineated in ref. [30] along these lines, with the types or weights of a plastic synapse taking the place of strategies. In the next subsections, we will extend these concepts to formulate a theory of 'competing' gap junctions in a network.

Voltage Gating of Junctional Conductance and Homeostatic Adaptation

Gap junctions are known to adapt on at least two timescales: trans-junctional currents are gated on a fast timescale of the order of a few milliseconds to seconds in response to a trans-junctional voltage difference (ΔV) [1]. Voltage-gated currents of Cx36 channels (the connexin isoform relevant to islets [5]), expressed in Xenopus oocytes and transfected human HeLa cells, were recorded in [32] (Fig. 2). Haefliger et al. [33] have shown hyperglycemia decreases Cx expression in adult rats. Paulauskas et al.
[34] have recently described a 16-state stochastic model of gap junctional currents that are voltage gated by altering, amongst other things, unitary single channel conductance and the probability of opening [1,35]. On much slower timescales of hours to days, gap junctions are regulated by the events that alter the insertion and deletion of channels in the junctional plaque, connexin protein synthesis, trafficking to the membrane and degradation. We propose to study adaptation in gap junction strength on slow timescales; this is the natural setting for a mean field theory of gap junction modification, that is, over suitably long periods that averages over cellular firing rates can be treated as adiabatic.

Interestingly, the voltage-gated gap junction appears to conform to a homeostatic principle with respect to the trans-junctional current, I_gap = g_gap ΔV: when ΔV is small, such as during synchronous bursting for example, gap junctional conductance is large, while a large ΔV, as in anti-synchrony, is compensated with a small g_gap. That is, firing patterns ΔV result in changes in g_gap that stabilize I_gap. We extrapolate from this argument to construct a homeostatic learning rule for (slow) modification of gap junctions, as described below.

Model and Results

Model - A Learning Theory of Gap Junctional Adaptation

Our starting point is a model of competitive learning introduced in [29] and applied, in [30] and [31], to look at the optimisation of learning via a model of competing synapses. Proceeding by analogy, we consider a network consisting of b-cells connected by gap junctions, where the latter are treated as mutual neighbors if they are connected by a b-cell. In a one-dimensional formulation, each gap junction will thus be associated with two gap junctional neighbors.
For simplicity the b-cells can be represented by binary threshold units, and the two states of the binary gap junction, which are inter-convertible by definition, are assumed to have different weights, which we label as 'strong' and 'weak' types. A weak gap junction is characterized, for example, by fewer connexin proteins in the junctional plaque. When the middle gap junction is under consideration for a state update, the b-cells A and B (Fig. 3) share this middle gap junction in common; thus, in comparing how often the two b-cells are found activated, one can factor out the influence of the common gap junction, when considering averages, and effectively treat the time-averaged activation frequency of either b-cell as being determined only by the single, other gap junction that the b-cell is connected to. This essentially implies that the state of b-cell A, say, can be considered quite reasonably as an "outcome" to be associated with gap junction n−1, and similarly with b-cell B and gap junction n+1; thus, b-cells can be thought of as taking on the identities of the respective gap junctions.

There are few general principles that can organize an argument to discuss plastic behavior in excitable cells; Hebb's postulate is one such. In common colloquialism this learning rule is stated as "cells that fire together, wire together"; in other words, temporal association between pairs of firing neurons is successively encoded in synaptic coupling between those neurons. A Hebbian philosophy asserts that the direction of adaptation is such as to reinforce coordinated activity between cells. One can now set forth some rules governing the above weight changes, which may have a Hebbian or anti-Hebbian flavor as the situation demands, and depend on the outcomes of the surrounding b-cells. Hebbian rules in the case of synaptic plasticity favour synchrony, so that e.g.
a synapse is strengthened if its surrounding neurons fire or do not fire together; the opposite is the case with anti-Hebbian rules. In the present context, we use this concept analogously: for Hebbian rules, synchronous activity causes a strengthening of conductance while anti-synchronous activity causes a weakening of conductance. Thus, loosely speaking, two gap junctions adjacent to any given gap junction "compete" to decide its type, and this continues to happen repeatedly across the entire network.

Let us now consider the update dynamics of a single effective gap junction, that in some sense represents the average state of the whole network. To begin with, in such a picture, the outcomes are assumed to be uncorrelated at different locations, and treated as independent random variables, with the probability for activation being obtainable from the time-averaged activation frequency of the b-cell. Consistent with the situation described in the previous paragraph, that the effect of the common gap junction can be left out on average in comparing the outcomes of its connected b-cells, we associate, with each b-cell, a probability for activation at any instant that is only a function of the other neighboring gap junction, being equal to p_+ (p_−) for a strong (weak) type gap junction.

We now consider a mean-field version of the model. The idea behind the mean-field approximation is that we look at the average behavior in an infinite system. This, at one stroke, deals with two problems: first, there are no fluctuations associated with system size, and second, the approximation that we have made in ignoring the "self-coupling" of the gap junction is better realized.

Figure 2. Voltage gating of Cx36 gap junctions, adapted from [32]. Steady-state junctional currents from HeLa-Cx36 cell pairs indicate conductance, G_j, varies with transjunctional potential difference, ΔV_j. If two neighboring coupled cells fire nearly together, or do not simultaneously fire, trans-junctional conductance is high, but when one fires and the other does not, conductance is low. This compensatory behavior inspires our homeostatic learning rule, see text. doi:10.1371/journal.pone.0070366.g002

In the mean-field representation, every gap junction is assigned a probability (uniform over the lattice) to be either strong (f_+) or weak (f_−), so that spatial variation is ignored, as are fluctuations and correlations. This single effective degree of freedom allows for a description of the system in terms of its fixed point dynamics. The rate of change of the probability f_+, say (which in the limit of large system size is equivalent to the fraction of strong units), with time is computed by taking into account only the nearest-neighbor gap junctional interactions, via specific rules.

To design a transition rule for gap junctions that is consistent with a Hebbian theory, and at the same time tunes gap junctional plasticity to voltage activity in the network, we mimic the homeostatic adaptation implicit in (fast) voltage-gating of conductance (Fig. 2): to reinforce synchronous activity, conductance changes must be directed towards a maximal state of conductance, while anti-synchronous activity is best served by a weakening of conductance. The homeostatic learning rule is summarised as follows: if the b-cells (Fig. 3) fire simultaneously, ΔV_AB is zero and the gap junction, g, strengthens to one, while if one b-cell fires but not the other, ΔV_AB is one and the junction strength weakens to zero.
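The homeostatic rule can be made concrete as a small Monte Carlo simulation. The sketch below is illustrative only (function names, ring topology and parameter values are our assumptions, not from the paper): binary junctions sit on a ring, each cell's firing probability is set by its other junction, and a junction strengthens exactly when its two flanking cells are synchronous (ΔV = 0).

```python
import random

def simulate(p_minus, p_plus, n=2000, steps=60, seed=1):
    """Monte Carlo sketch of the homeostatic learning rule on a ring of
    n binary gap junctions (0 = weak, 1 = strong).  Cell outcomes are
    drawn independently each step, as in the mean-field picture: a cell
    fires with probability p_plus (p_minus) if its *other* junction is
    strong (weak).  Returns the final fraction of strong junctions."""
    rng = random.Random(seed)
    g = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        # cell A of junction i is driven by junction i-1, cell B by i+1
        fire_a = [rng.random() < (p_plus if g[i - 1] else p_minus) for i in range(n)]
        fire_b = [rng.random() < (p_plus if g[(i + 1) % n] else p_minus) for i in range(n)]
        # delta-V = 0 (both fire or both silent) -> strengthen to 1;
        # delta-V = 1 (one fires, the other does not) -> weaken to 0
        g = [1 if fire_a[i] == fire_b[i] else 0 for i in range(n)]
    return sum(g) / n

# Along p_- + p_+ = 1 the fraction of strong junctions settles near 0.5,
# while low, similar firing probabilities favour strong junctions.
print(simulate(0.1, 0.9))    # near 0.5
print(simulate(0.1, 0.15))   # well above 0.5
```

The two sample parameter pairs correspond to the regimes discussed later in the text: disparate firing probabilities pin the strong fraction near one half, while low firing probabilities drive junctions towards the strong state.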
We write equations for the probability f_+(t+1) that the intermediate gap junction (Figure 3) is in the strong state, say, at time t+1 in terms of the same probability at time t, f_+(t), the (complementary) probability that it was in the weak state at time t, f_−(t), and Prob(ΔF), the probability of a change in strength of a given magnitude:

f_+(t+1) = f_+(t) × Prob(ΔF = 1) + f_−(t) × Prob(ΔF = 1)    (1)

The first term on the right hand side represents the probability that the strong state at time t stays strong at time t+1; since the gap junctions are binary, a strong junction cannot get any stronger. Since f_+(t) + f_−(t) = 1, this reduces to the equation f_+(t+1) = Prob(ΔF = 1), independent of the initial state of the gap junction.

We now write down all possible scenarios for Prob(ΔF = 1): in words, these correspond to the sum of the following probabilities: (Prob. that both g_L and g_R are in the strong state) × (Prob. that A and B both fire, or both don't fire) + (Prob. that both g_L and g_R are in the weak state) × (Prob. that A and B both fire, or both don't fire) + (Prob. that g_L and g_R are in disparate states) × (Prob. that A and B both fire, or both don't fire). For example: if g_L and g_R (see Fig. 3) are both strong - with probability f_+² - the firing pattern that leads to a strong middle junction, g, according to the homeostatic learning rule is when ΔV_AB = 0, i.e. either when both A and B fire simultaneously (probability p_+²), or both do not fire (probability (1−p_+)²). All such combinations are enumerated in Table 1; this leads to an equation for the evolution of g:

f_+(t+1) = f_+²(t) [p_+² + (1−p_+)²] + f_−²(t) [p_−² + (1−p_−)²] + 2 f_+(t) f_−(t) [p_+ p_− + (1−p_+)(1−p_−)]    (2)

This evolution equation thus embodies that if the b-cells (Fig. 3) fire simultaneously, ΔV_AB is zero and gap junctions strengthen, while if ΔV_AB is one, junction strength weakens.
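Equation (2) is a one-dimensional map for the fraction of strong junctions, so its steady state can be found by straightforward iteration. A minimal sketch (function names and parameter values are illustrative assumptions):

```python
def meanfield_step(f_plus, p_minus, p_plus):
    """One update of Eq. (2): probability that the middle junction ends
    up strong, given the current fraction f_plus of strong junctions."""
    f_minus = 1.0 - f_plus
    sync_ss = p_plus**2 + (1 - p_plus)**2                    # both neighbours strong
    sync_ww = p_minus**2 + (1 - p_minus)**2                  # both neighbours weak
    sync_sw = p_plus * p_minus + (1 - p_plus) * (1 - p_minus)  # one of each
    return (f_plus**2 * sync_ss
            + f_minus**2 * sync_ww
            + 2 * f_plus * f_minus * sync_sw)

def fixed_point(p_minus, p_plus, f0=0.5, iters=200):
    """Iterate Eq. (2) from f0 until it settles on its fixed point."""
    f = f0
    for _ in range(iters):
        f = meanfield_step(f, p_minus, p_plus)
    return f

print(fixed_point(0.1, 0.15))  # low-firing regime: f* well above 0.5
print(fixed_point(0.1, 0.9))   # on p_- + p_+ = 1: f* = 0.5
```

The map is strongly contracting for these parameters, so a couple of hundred iterations is far more than enough for machine-precision convergence.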
Results - The Steady State Distribution of Gap Junctions

The steady-state distribution of weak and strong junctions is obtained as the fixed point solution of Eq. (2):

Figure 3. The bonds formalism of an islet. b-cells, A and B, are dominated by gap junctions g_L and g_R respectively. Each junction (g_L and g_R) can be in either the strong (with probability f_+) or weak state (with probability f_−). A weak (strong) junction is likely to fire with a probability p_− (p_+). The central gap junction g is altered in response to the average potential difference of cells A and B, ΔV_AB, across it, according to a specified learning rule, such as the homeostatic rule of Fig. 2 that is considered here. For example, if cell A (red) here is assumed to fire in response to a strong g_L (this occurs with probability p_+) while cell B is silent (the probability with which it could have been active is p_−) in response to a weak g_R, then the bond, g, will be weakened since ΔV_AB = 1. doi:10.1371/journal.pone.0070366.g003

Table 1. The probability of a gap junction adapting to a strong, high conductance state is determined by the current state of the bonds g_L and g_R (Fig. 3).

That is, the theory predicts (Fig. 4) that in vivo at least half of the gap junctions in an islet will be of the strong type. It is possible for strong junctions to dominate the islet completely, f*_+ ≈ 1, but this is seen to be an extreme scenario and requires either that p_+ is very low and p_− as well, or that p_+ is very high and p_− is greater than about half. For the large part of the (p_−, p_+) parameter space f*_+ is predominantly between 0.5 and 0.7.
f*_+ = [ −4p_−(p_+ − p_−) + 2(p_+ − p_−) + 1 − √( −4(p_+² − p_−²) + 4(p_+ − p_−) + 1 ) ] / [ 4(p_+ − p_−)² ]    (3)

g_L             g_R             P
Strong          Strong          f_+² {p_+² + (1−p_+)²}
Strong (Weak)   Weak (Strong)   2 f_+ f_− {p_+ p_− + (1−p_+)(1−p_−)}
Weak            Weak            f_−² {p_−² + (1−p_−)²}

For low firing probabilities, such as for example (p_−, p_+) = (0.1, 0.15), the beta-cells A and B (Fig. 3) seldom fire and ΔV_AB is invariably close to zero; g therefore adapts towards the strong state. Likewise, when p_− and p_+ are both high, such as for example at (0.85, 0.9), beta-cells A and B fire at a high rate, ΔV_AB is again close to zero and g adapts towards the strong state. When the probabilities p_− and p_+ are considerably different, however, for example when (p_−, p_+) = (0.1, 0.9), four possibilities arise: either A and B are both associated with weak (strong) junctions and g adapts towards 1; or one of A or B is associated with a weak (strong) junction, but since one beta-cell then fires with a probability much larger than the other, g adapts towards 0. Thus f*_+ is close to half in this case (g equally likely to be 0 or 1), as is the firing rate (Fig. 5). We see thus that similar behaviour for the two gap junctions induces strengthening, while dissimilar behaviour induces weakening, in line with the Hebbian viewpoint adopted above.

Discussion

One major interest in developing a theory of gap junction adaptation is to understand the changes in junctional conductance that take place in type 2 diabetes. It has been suspected from animal studies that loss of Cx36 is phenotypically similar to a prediabetic condition characterized by glucose intolerance, diminished insulin oscillations and first and second phases of insulin secretion, and a loss of beta-cell mass [6,36-38]. Head et al.
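As a quick numerical check (our sketch, not from the paper; parameter values are illustrative), the closed form (3) can be evaluated directly and verified to be a fixed point of the update (2):

```python
import math

def f_star(p_minus, p_plus):
    """Closed-form steady-state fraction of strong junctions, Eq. (3).
    Assumes the physically reasonable ordering p_minus < p_plus."""
    d = p_plus - p_minus
    root = math.sqrt(-4 * (p_plus**2 - p_minus**2) + 4 * d + 1)
    return (-4 * p_minus * d + 2 * d + 1 - root) / (4 * d**2)

def update(f, p_minus, p_plus):
    """One iteration of the mean-field evolution equation (2),
    repeated here so this check is self-contained."""
    a = p_plus**2 + (1 - p_plus)**2
    b = p_minus**2 + (1 - p_minus)**2
    c = p_plus * p_minus + (1 - p_plus) * (1 - p_minus)
    return f**2 * a + (1 - f)**2 * b + 2 * f * (1 - f) * c

for pm, pp in [(0.1, 0.15), (0.2, 0.6), (0.1, 0.9)]:
    f = f_star(pm, pp)
    # the residual update(f) - f should vanish to machine precision
    print(pm, pp, round(f, 4), abs(update(f, pm, pp) - f) < 1e-9)
```

Evaluating along the line p_− + p_+ = 1 (e.g. the last pair) reproduces the minimum f*_+ = 0.5 quoted in the text, and elsewhere the value stays between 0.5 and 1.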
[11] have recently confirmed this in vivo via the observation that Cx36 conductance loss induces postprandial glucose intolerance in mice. These observations suggest that a loss of electrical connectivity in islets may underlie type 2 diabetes by disrupting insulin oscillations and reducing first-phase insulin secretion [6,11]. Benninger et al. [39] have found yet another effect that could be relevant to diabetes, that a loss of gap junctions in islets leads to increased basal (i.e. when minimally stimulated by glucose) insulin release. If this were to hold in vivo it could explain hyperinsulinaemia as a result of gap junction loss as well, when steady state levels of circulating plasma insulin in diabetics continue to be high even in fasting conditions.

A word about dimensionalities: while we recognise that the geometries of real synaptic networks are complex and that they are embedded in three dimensions, our choice of working in one dimension is based as much on simplicity as on the absence of a reason to choose a more complex geometry. Working on a three-dimensional lattice would only increase the complexity of our algebra, while not really getting closer to the real geometry of synaptic networks, which are, as the name suggests, probably embedded on abstract graphs. However, the fact that we have worked in mean field (ignoring correlations and going to the limit of infinite systems) in a one-dimensional embedding makes our results less reliant on the embedding geometry than they otherwise might have been. We mean by this that while specific quantitative estimates might well be affected by the inclusion of more neighbours in higher dimensionalities, the qualitative outlines of our calculations will remain very similar. Our choice of mean field dynamics both in this case (as well as in the original learning model of [30]) was very purposeful: in both cases, the exact geometries/connectivities of islets/synapses are imprecisely known, and infinitely variable.
Under these conditions mean field theory is the tool most widely resorted to by modellers, since it is able to predict general features based on minimalistic assumptions.

The game-theoretic formalism presented here provides a high-level explanation of why a loss of junctional conductance would be necessary in diabetes. In the healthy individual insulin secretion occurs relatively sparingly, for a few hours at regularly spaced intervals following glucose ingestion (breakfast, lunch and dinner). The low firing rates in a healthy individual are accompanied by a high proportion of strong gap junctions (that is, near the region marked by A, Fig. 4, where f*_+ is close to 1). Diabetes is associated with overnutrition among various other factors [40], and invariably involves combating an increased glucose load [41,42].

Several authors have proposed that a substantial loss of Cx36 could occur in type 2 diabetes (reviewed for example in [37]). Much of the evidence that connexin expression or signaling is altered in models of type 2 diabetes comes from rodents; however, because Cx36 is present in human islets, this gives rise to the speculation (see e.g. [11]) that a loss of Cx36 gap junction conductance may occur in type 2 diabetes. Thus, based on glucose intolerance measured in the conscious mouse, Head et al. [11], as well as others [5,6,43,44], have estimated that a loss of nearly 50% in junctional conductance could occur in diabetes.

In Fig. 4 the locus of a 50% connectivity loss is the line p_+ + p_− = 1, where the fraction of strong gap junctions is halved (f*_+ = 0.5) but the firing rates are higher (Fig. 5). That is, the islet stressed by increased glycemic stimulation is forced to respond with an increase in its firing and insulin secretion rate, which it does by degrading strong gap junctions to weaker ones. In this way, the islet is able to accommodate a stimulus stronger than that for which its physiology had evolved.
A change in f*_+ is accomplished largely through altering the probabilities of junction-induced firing, p_+ and p_−. As mentioned in the introduction, the classical view of diabetes is that it results from gap junction dysfunction. Instead, the game-theoretic theory we have presented relates a conductance decrease to an adaptive response of an islet that sacrifices strong gap junctions in order to maintain insulin control over hyperglycemia.

At the heart of our game-theoretic theory is its use of stochasticity in gap junction synchronisation. Classically, strong gap junctions entrain beta-cells to fire, the entire assembly is assumed to be fairly homogeneous in gap junction strength, and the resultant synchronous bursting is seen to be essential to GSIS. Our theory, on the other hand, introduces the possibility that beta-cells coupled even to strong gap junctions may not fire, and likewise, weak gap junctions may induce simultaneous firing. Further, synchronous bursting, as well as the simultaneous absence of bursting, induces stronger junctions, while anti-synchrony weakens them. The result is that gap junctional strengths are constantly updated as a result of the synchronous or asynchronous bursting of beta-cells. In other words, the core idea of our paper is that disparate firing patterns lead to changes in gap junctional strength, which provides a hitherto unexplored scenario for synchrony.

This then naturally leads to a situation where heterogeneity prevails in the distribution of gap junctional strengths in the islet. The heterogeneity of gap junctions in turn determines more complex patterns of activity in the network, beyond the simple categories of (anti-)synchronous bursting. In principle it is possible to explain observations of junctional strengths such as in [39] individually, without recourse to a general theory of gap junction function. Typically, a lot of the focus is on studying the heterogeneity of beta-cells in an islet. Indeed, Benninger et al.
verify that different thresholds exist for calcium excitations among the beta-cells of a (Cx36 null) islet, and conclude therefore that beta-cells with high thresholds create oscillator death [45] through gap junctions to decrease basal secretion. The other question to ask, however, is: can heterogeneous gap junctions within an islet shape the emergent properties of bursting? Once the heterogeneity of the gap junctions themselves is recognized as crucial, that leads, ipso facto, to an alternate view, one in which changes in junctional conductance are seen as solutions to an optimization problem. The essential ingredients of a theory of gap junction adaptation include keeping track of the propensities with which strong and weak junctions influence firing rates in beta-cells, and transition rules that determine how gap junctions will respond to local firing patterns. We have concentrated on learning rules that embody homeostatic principles, which are a central feature of the energy maintenance pathways of the body. However our general formalism is certainly applicable to other forms of adaptation rules that may be uncovered in future experiments. We have constructed a theory that offers an alternative explanation to the classical view that gap junctions primarily function to synchronize beta-cells in an islet so the entire islet behaves like a syncytium and a uniform period emerges. When gap junction adaptation is considered, partial synchronization can occur even in networks fully coupled with (strong) gap junctions. This learning framework predicts in a natural fashion that a full synchrony across the islet is very unlikely, that synchronization is a local phenomenon and happens across a few groups of cells. Thus the view that emerges instead is that the islet is sensitive to a glucose demand in secreting insulin and uses gap junctions as a tuning parameter in this adaptation. 
Paradoxically, an increase in secretion efficiency can come not from strengthening junctions, but from down-regulating them instead. Thus, a lowered conductance need not necessarily be interpreted as "failing" gap junctions. On the contrary, the junctions are judiciously adapting to the increased glucose load to cope with an increased demand for insulin secretion. At the moment there does not seem to be direct experimental evidence that a reduction of gap junctions occurs in human type 2 diabetes. Additionally, although it is very attractive from a theoretical viewpoint, it is not proven that gap junctions are altered in response to altered islet firing activity in diabetes. Our model is a complementary line of evidence, albeit theoretical, in these directions. Further, the model makes another related prediction: that gap junction expression and coupling strength are very likely to be heterogeneous across the islet, in both health and diabetes. If the naturally heterogeneous nature of gap junctions is acknowledged, this could be critical in designing appropriate clinical interventions, since connexins are potential targets for diabetes therapy. Indeed, we hope that our work will be helpful to researchers seeking to clarify the adaptive dynamics of gap junctions in diabetes.

Author Contributions

Figure 2. Voltage gating of Cx36 gap junctions.

Figure 4. The f*₊ contour plot in the p₋-p₊ plane. The physically relevant (p₋ < p₊) region is the triangle ABC above the line p₊ = p₋; f*₊ = 0.5 along BD, where p₊ + p₋ = 1. The region near A, where f*₊ is close to 1, represents healthy individuals, while diabetics are assumed to lie along BD, where f*₊ = 0.5. doi:10.1371/journal.pone.0070366.g004

Figure 5. Evolution of gap junctions with network activity. Beta-cells were initialized as firing (1) or not (0), and gap junctions as weak (0) or strong (1), with equal probability. 5000 beta-cell-gap-junction pairs (Fig. 3) were iterated according to the learning rules described in the text. The legend indicates the (p₋, p₊) values for a computation. The top panel shows the evolution of the fraction of strong gap junctions, f*₊, in the network. The bottom panel shows the corresponding fraction of beta-cells that are active. Note that f*₊ and the firing rate in the simulation are both 0.5 along p₋ + p₊ = 1, as expected from the theory (Fig. 4). A transition from health, with low firing and a high proportion of strong gap junctions (black curves), to diabetes takes place by degrading the gap junctions to increase firing rates (red curves). doi:10.1371/journal.pone.0070366.g005

Table 1. The physically reasonable condition on the firing probabilities is p₋ < p₊. The minimum f*₊ is 0.5, which occurs for p₋ + p₊ = 1. f*₊ is stable in the entire 0 < (p₋, p₊) < 1 domain; perturbations from f*₊ relax at a rate λ = 1 − √(1 − 4p₊² + 4p₊ − 4p₋ + 4p₋²). doi:10.1371/journal.pone.0070366.t001

PLOS ONE | www.plosone.org August 2013 | Volume 8 | Issue 8 | e70366

References

1. Goodenough DA, Paul DL (2009) Gap junctions. Cold Spring Harb Perspect Biol 1: a002576.
2. Nicholson BJ (2003) Gap junctions - from cell to molecule. J Cell Sci 116: 4479-4481.
3. Willecke K, Eiberger J, Degen J, Eckardt D, Romualdi A, et al. (2002) Structural and functional diversity of connexin genes in the mouse and human genome. Biol Chem 383: 725-737.
4. Winterhager E (2005) Springer-Verlag, Berlin Heidelberg.
5. Ravier MA, Guldenagel M, Charollais A, Gjinovci A, Caille D, et al. (2005) Loss of connexin36 channels alters beta-cell coupling, islet synchronization of glucose-induced Ca2+ and insulin oscillations, and basal insulin release. Diabetes 54: 1798-1807.
6. Meda P (2012) The in vivo β-to-β-cell chat room: connexin connections matter. Diabetes 61: 1656-1658.
7. Orci L, Unger RH, Renold AE (1973) Structural coupling between pancreatic islet cells. Experientia 29: 1015-1018.
8. Bosco D, Haefliger JA, Meda P (2011) Connexins: key mediators of endocrine function. Physiol Rev 91: 1393-1445.
9. Cabrera O, Berman DM, Kenyon NS, Ricordi C, Berggren PO, et al. (2006) The unique cytoarchitecture of human pancreatic islets has implications for islet cell function. Proc Natl Acad Sci USA 103: 2334-2339.
10. MacDonald PE, Rorsman P (2006) Oscillations, intercellular coupling, and insulin secretion in pancreatic beta cells. PLoS Biol 4: e49.
11. Head WS, Orseth ML, Nunemaker CS, Satin LS, Piston DW, et al. (2012) Connexin-36 gap junctions regulate in vivo first- and second-phase insulin secretion dynamics and glucose tolerance in the conscious mouse. Diabetes 61: 1700-1707.
12. Sherman A, Rinzel J, Keizer J (1988) Emergence of organized bursting in clusters of pancreatic beta-cells by channel sharing. Biophys J 54: 411-425.
13. Smolen P, Rinzel J, Sherman A (1993) Why pancreatic islets burst but single beta cells do not. The heterogeneity hypothesis. Biophys J 64: 1668-1680.
14. Sherman A, Smolen P (1997) Computer modeling of heterogeneous beta-cell populations. Adv Exp Med Biol 426: 275-284.
15. Bertram R, Sherman A, Satin LS (2010) Electrical bursting, calcium oscillations, and synchronization of pancreatic islets. Adv Exp Med Biol 654: 261-279.
16. Levin M, Stevenson CG (2012) Regulation of cell behavior and tissue patterning by bioelectrical signals: challenges and opportunities for biomedical engineering. Annu Rev Biomed Eng 14: 295-323.
17. Bertram R, Satin L, Zhang M, Smolen P, Sherman A (2004) Calcium and glycolysis mediate multiple bursting modes in pancreatic islets. Biophys J 87: 3074-3087.
18. Goel P, Sherman A (2009) The geometry of bursting in the dual oscillator model of pancreatic β-cells. SIAM Journal on Applied Dynamical Systems 8: 1664-1693.
19. Meda P, Atwater I, Goncalves A, Bangham A, Orci L, et al. (1984) The topography of electrical synchrony among beta-cells in the mouse islet of Langerhans. Q J Exp Physiol 69: 719-735.
20. Bergsten P, Grapengiesser E, Gylfe E, Tengholm A, Hellman B (1994) Synchronous oscillations of cytoplasmic Ca2+ and insulin release in glucose-stimulated pancreatic islets. J Biol Chem 269: 8749-8753.
21. Lin JM, Fabregat ME, Gomis R, Bergsten P (2002) Pulsatile insulin release from islets isolated from three subjects with type 2 diabetes. Diabetes 51: 988-993.
22. Calabrese A, Zhang M, Serre-Beinier V, Caton D, Mas C, et al. (2003) Connexin 36 controls synchronization of Ca2+ oscillations and insulin secretion in MIN6 cells. Diabetes 52: 417-424.
23. Nunemaker CS, Zhang M, Wasserman DH, McGuinness OP, Powers AC, et al. (2005) Individual mice can be distinguished by the period of their islet calcium oscillations: is there an intrinsic islet period that is imprinted in vivo? Diabetes 54: 3517-3522.
24. Rocheleau JV, Walker GM, Head WS, McGuinness OP, Piston DW (2004) Microfluidic glucose stimulation reveals limited coordination of intracellular Ca2+ activity oscillations in pancreatic islets. Proc Natl Acad Sci USA 101: 12899-12903.
25. Stozer A, Gosak M, Dolensek J, Perc M, Marhl M, et al. (2013) Functional connectivity in islets of Langerhans from mouse pancreas tissue slices. PLoS Comput Biol 9: e1002923.
26. Camerer CF (2003) Behavioural studies of strategic thinking in games. Trends Cogn Sci 7: 225-231.
27. Szabo G, Fath G (2007) Evolutionary games on graphs. Physics Reports 446: 97-216.
28. Perc M, Szolnoki A (2010) Coevolutionary games: A mini review. BioSystems 99: 109-125.
29. Mehta A, Luck JM (1999) Models of competitive learning: Complex dynamics, intermittent conversions, and oscillatory coarsening. Phys Rev E 60: 5218-5230.
30. Mahajan G, Mehta A (2011) Competing synapses with two timescales as a basis for learning and forgetting. Europhys Lett 95: 109-125.
31. Bhat AA, Mahajan G, Mehta A (2011) Learning with a network of competing synapses. PLoS ONE 6: e25048.
32. Teubner B, Degen J, Sohl G, Guldenagel M, Bukauskas FF, et al. (2000) Functional expression of the murine connexin 36 gene coding for a neuron-specific gap junctional protein. J Membr Biol 176: 249-262.
33. Haefliger JA, Rohner-Jeanrenaud F, Caille D, Charollais A, Meda P, et al. (2013) Hyperglycemia downregulates Connexin36 in pancreatic islets via the upregulation of ICER-1/ICER-1. J Mol Endocrinol 51: 49-58.
34. Paulauskas N, Pranevicius H, Mockus J, Bukauskas FF (2012) Stochastic 16-state model of voltage gating of gap-junction channels enclosing fast and slow gates. Biophys J 102: 2471-2480.
35. Paulauskas N, Pranevicius M, Pranevicius H, Bukauskas FF (2009) A stochastic four-state model of contingent gating of gap junction channels containing two "fast" gates sensitive to transjunctional voltage. Biophys J 96: 3936-3948.
36. Bavamian S, Klee P, Britan A, Populaire C, Caille D, et al. (2007) Islet-cell-to-cell communication as basis for normal insulin secretion. Diabetes Obes Metab 9 Suppl 2: 118-132.
37. Hamelin R, Allagnat F, Haefliger JA, Meda P (2009) Connexins, diabetes and the metabolic syndrome. Curr Protein Pept Sci 10: 18-29.
38. Potolicchio I, Cigliola V, Velazquez-Garcia S, Klee P, Valjevac A, et al. (2012) Connexin-dependent signaling in neuro-hormonal systems. Biochim Biophys Acta 1818: 1919-1936.
39. Benninger RK, Head WS, Zhang M, Satin LS, Piston DW (2011) Gap junctions and other mechanisms of cell-cell communication regulate basal insulin secretion in the pancreatic islet. J Physiol (Lond) 589: 5453-5466.
40. Nathan DM, Buse JB, Davidson MB, Heine RJ, Holman RR, et al. (2006) Management of hyperglycemia in type 2 diabetes: A consensus algorithm for the initiation and adjustment of therapy: a consensus statement from the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care 29: 1963-1972.
41. Inzucchi SE, Bergenstal RM, Buse JB, Diamant M, Ferrannini E, et al. (2012) Management of hyperglycaemia in type 2 diabetes: a patient-centered approach. Position statement of the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetologia 55: 1577-1596.
42. Inzucchi SE, Bergenstal RM, Buse JB, Diamant M, Ferrannini E, et al. (2012) Management of hyperglycemia in type 2 diabetes: a patient-centered approach: position statement of the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care 35: 1364-1379.
43. Benninger RK, Zhang M, Head WS, Satin LS, Piston DW (2008) Gap junction coupling and calcium waves in the pancreatic islet. Biophys J 95: 5048-5061.
44. Speier S, Gjinovci A, Charollais A, Meda P, Rupnik M (2007) Cx36-mediated coupling reduces beta-cell heterogeneity, confines the stimulating glucose concentration range, and affects insulin release kinetics. Diabetes 56: 1078-1086.
45. Bar-Eli K, Ermentrout B (2008) Oscillation death. Scholarpedia 3: 5371.
A New Light-Speed Anisotropy Experiment: Absolute Motion and Gravitational Waves Detected

Reginald T. Cahill ([email protected])
School of Chemistry, Physics and Earth Sciences, Flinders University, Adelaide 5001, Australia

Progress in Physics, vol. 4, 2006. arXiv:physics/0610076v1 [physics.gen-ph] (https://export.arxiv.org/pdf/physics/0610076v1.pdf)

Abstract. Data from a new experiment measuring the anisotropy of the one-way speed of EM waves in a coaxial cable gives the speed of light as 300,000±400±20 km/s in a measured direction RA = 5.5±2 hrs, Dec = 70±10° S, and is shown to be in excellent agreement with the results from seven previous anisotropy experiments, particularly those of Miller (1925/26), and even those of Michelson and Morley (1887). The Miller gas-mode interferometer results, and those from the RF coaxial cable experiments of Torr and Kolen (1983), De Witte (1991) and the new experiment, all reveal the presence of gravitational waves, as indicated by the last ± variations above, but of a kind different from those supposedly predicted by General Relativity. Miller repeated the Michelson-Morley 1887 gas-mode interferometer experiment and again detected the anisotropy of the speed of light, primarily in the years 1925/1926 atop Mt. Wilson, California. The understanding of the operation of the Michelson interferometer in gas mode was only achieved in 2002 and involved a calibration for the interferometer that necessarily involved Special Relativity effects and the refractive index of the gas in the light paths. The results demonstrate the reality of the Fitzgerald-Lorentz contraction as an observer-independent relativistic effect. A common misunderstanding is that the anisotropy of the speed of light is necessarily in conflict with Special Relativity and Lorentz symmetry; this is explained.
All eight experiments and theory show that we have both anisotropy of the speed of light and relativistic effects, and that a dynamical 3-space exists: absolute motion through that space has been repeatedly observed since 1887. These developments completely change fundamental physics and our understanding of reality. "Modern" vacuum-mode Michelson interferometers, particularly the long-baseline terrestrial versions, are, by design flaw, incapable of detecting the anisotropy effect and the gravitational waves.

Introduction

Of fundamental importance to physics is whether the speed of light is the same in all directions, as measured, say, in a laboratory attached to the Earth. This is what is meant by light-speed anisotropy in the title of this paper. The prevailing belief system in physics has it that the speed of light is isotropic, that there is no preferred frame of reference, that absolute motion has never been observed, and that 3-space does not, and indeed cannot, exist. This is the essence of Einstein's 1905 postulate that the speed of light is independent of the choice of observer. This postulate has determined the course of physics over the last 100 years.

Despite the enormous significance of this postulate there has never been a reliable direct experimental test, that is, one in which the one-way travel time of light in vacuum over a set distance has been measured, and repeated for different directions. So how could a science as fundamental and important as physics permit such a key idea to go untested? And what are the consequences for fundamental physics if indeed, as reported herein and elsewhere, the speed of light is anisotropic and a dynamical 3-space does exist? This would imply that if reality is essentially space and matter, with time tracking process and change, then physics has completely missed the existence of that space.
If this is the case then this would have to be the biggest blunder ever in the history of science, more so because some physicists have independently detected that anisotropy. While herein we both summarise seven previous detections of the anisotropy and report a new experiment, the implications for fundamental physics have already been substantially worked out. They lead to a new modelling and comprehension of reality known as Process Physics [1].

The failure of mainstream physics to understand that the speed of light is anisotropic, that a dynamical 3-space exists, is caused by an ongoing failure to comprehend the operation of the Michelson interferometer, and also by theoretical physicists not understanding that the undisputed successes of special relativity effects, and even Lorentz symmetry, do not imply that the speed of light must be isotropic; this is a mere abuse of logic, as explained later.

The Michelson interferometer is actually a complex instrument. The problem is that the anisotropy of the speed of light affects its actual dimensions and hence its operation: there are actual length contractions of its physical arms. Because the anisotropy of the speed of light is so fundamental, it is actually very subtle to design an effective experiment, because the sought-for effect also affects the instrument in more than one way. This subtlety was overlooked for some 100 years, until in 2002 the original data was reanalysed using a relativistic theory for the calibration of the interferometer [2]. The new understanding of the operation of the Michelson interferometer is that it can only detect the light-speed anisotropy when there is gas in the light paths, as there was in the early experiments. Modern versions have removed the gas and made the instrument totally unable to detect the light-speed anisotropy.
Even in gas mode the interferometer is a very insensitive device, being 2nd order in v/c and further suppressed in sensitivity by the gas refractive-index dependency. More direct than the Michelson interferometer, but still not a direct measurement, is to measure the one-way speed of radio frequency (RF) electromagnetic waves in a coaxial cable, for this permits electronic timing methods. This approach is 1st order in v/c, and independent of the refractive-index suppression effect. Nevertheless, because it is a one-way measurement, clocks are required at both ends, as in the Torr and Kolen, and De Witte experiments, and the required length of the coaxial cable was determined, until now, by the stability of atomic clocks over long durations.

The new one-way RF coaxial experiment reported herein utilises a new timing technique that avoids the need for two atomic clocks, by using a very special property of optical fibres, namely that the light speed in optical fibres is isotropic, and is used for transmitting timing information, while in the coaxial cables the RF speed is anisotropic, and is used as the sensor. There is as yet no explanation for this optical fibre effect, but it radically changes the technology for anisotropy experiments, and at the same time that of gravitational wave detectors. In the near future all-optical gravitational wave detectors are possible in desktop instruments.

These gravitational waves have very different properties from those supposedly predicted by General Relativity, although that appears to be caused by errors in that derivation. As for gravitational waves, it has now been realised that they were seen in the Miller, Torr and Kolen, and De Witte experiments, as they are again observed in the new experiment. Most amazing is that these wave effects also appear to be present in the Michelson-Morley fringe-shift data from 1887, as the fringe shifts varied from day to day.
So Michelson and Morley should have reported that they had discovered absolute motion, a preferred frame, and also wave effects of that frame: that the speed of light has an anisotropy that fluctuated over and above that caused by the rotation of the Earth.

The first and very successful attempt to look for a preferred frame was by Michelson and Morley in 1887. They did in fact detect the expected anisotropy at the level of ±8 km/s [3], but only according to Michelson's calibration theory. However this result has essentially been ignored ever since, as they expected to detect an effect of at least ±30 km/s, which is the orbital speed of the Earth about the Sun. As Miller recognised, the basic problem with the Michelson interferometer is that the calibration of the instrument was then clearly not correctly understood, and most likely wrong [4]. Basically Michelson had used Newtonian physics to calibrate his instrument, and of course we now know that that is completely inappropriate, as relativistic effects play a critical role: the interferometer is a 2nd-order device (∼ v²/c², where v is the speed of the device relative to a physical dynamical 3-space), and so various effects at that order must be taken into account in determining the calibration of the instrument, that is, what light-speed anisotropy corresponds to the observed fringe shifts. It was only in 2002 that the calibration of the Michelson interferometer was finally determined by taking account of relativistic effects [2]. One aspect of that was the discovery that only a Michelson interferometer in gas mode could detect the light anisotropy, as discussed below. As well, the interferometer when used in air is nearly a factor of 2000 less sensitive than according to the inappropriate Newtonian theory. This meant that the Michelson and Morley anisotropy speed variation was now around 330 km/s on average, and as high as 400 km/s on some days.
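The magnitudes quoted above can be checked with a few lines of arithmetic. Assuming Cahill's gas-mode sensitivity factor takes the form k² = n² − 1 for gas refractive index n (this form is not stated in the excerpt and is recovered here from the quoted numbers), with n ≈ 1.00029 for air the Newtonian-calibrated ±8 km/s rescales by 1/k, and the 1st-order coaxial effect scale v/c can be compared with the 2nd-order interferometer scale (v/c)²:

```python
import math

# Hedged arithmetic check of the sensitivity scales quoted in the text.
# ASSUMPTIONS (not stated in this excerpt): the gas-mode suppression
# factor has the form k^2 = n^2 - 1, with n_air = 1.00029; the 8 km/s
# figure is Michelson-Morley's Newtonian-calibrated result.
C = 299_792_458.0        # speed of light, m/s
v = 400e3                # m/s, the ~400 km/s absolute speed quoted

first_order = v / C              # coaxial-cable effect scale
second_order = (v / C) ** 2      # interferometer effect scale

n_air = 1.00029
k2 = n_air**2 - 1.0              # ~5.8e-4
suppression = 1.0 / k2           # "nearly a factor of 2000"
v_recalibrated = 8.0 / math.sqrt(k2)   # km/s, rescaled 8 km/s figure

print(f"(v/c) = {first_order:.2e}, (v/c)^2 = {second_order:.2e}")
print(f"air suppression 1/(n^2 - 1) ~ {suppression:.0f}")
print(f"recalibrated Michelson-Morley speed ~ {v_recalibrated:.0f} km/s")
```

Under these assumptions the numbers come out at a suppression of roughly 1700 (the "nearly a factor of 2000" in the text) and a recalibrated speed of roughly 330 km/s, matching the figures quoted above.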
Miller was aware of this calibration problem, and resorted to a brilliant indirect method, namely to observe the fringe shifts over a period of a year, and to use the effect of the Earth's orbital speed upon the fringe shifts to arrive at a calibration. The Earth's orbital motion was clearly evident in Miller's data, and using this effect he obtained a light-speed anisotropy effect of some 200 km/s in a particular direction. However even this method made assumptions which are now known to be invalid, and correcting his Earth-effect calibration method we find that it agrees with the new relativistic-effects calibration, and both methods now give a speed of near 400 km/s. This also then agrees with the Michelson-Morley results.

Major discoveries like that of Miller must be reproduced by different experiments and by different techniques. Most significantly, there are in total seven other experiments that confirm this Miller result, with four being gas-mode Michelson interferometers using either air, helium or a He/Ne mixture in the light path, and three experiments that measure variations in the one-way speed of EM waves travelling through a coaxial cable as the orientation of the cable is changed, with the latest being a high-precision technique reported herein and in [5, 6]. This method is 1st order in v/c, so it does not require relativistic effects to be taken into account, as discussed later.

As the Michelson interferometer requires a gas to be present in the light path in order to detect the anisotropy, it follows that vacuum interferometers, such as those in [7], are simply inappropriate for the task, and it is surprising that some attempts to detect the anisotropy in the speed of light still use vacuum-mode Michelson interferometers, some years after the 2002 discovery of the need for a gas in the light path [2].
Despite the extensive data collected and analysed by Miller after his fastidious testing and refinements to control temperature effects and the like, and most importantly his demonstration that the effects tracked sidereal time and not solar time, the world of physics has, since the publication of the results by Miller in 1933, simply ignored this discovery. The most plausible explanation for this situation is the ongoing misunderstanding by many physicists, but certainly not all, that any anisotropy in the speed of light must necessarily be incompatible with Special Relativity (SR), with SR certainly well confirmed experimentally. This misunderstanding is clarified below. In fact Miller's data can now be used to confirm an important aspect of SR. Even so, ignoring the results of a major experiment simply because they challenge a prevailing belief system is not science; ignoring the Miller experiment has stalled physics for some 70 years.

It is clear that the Miller experiment was highly successful and highly significant, and we now know this because the same results have been obtained by later experiments which used different experimental techniques. The most significant part of Miller's rigorous experiment was that he showed that the effect tracked sidereal time and not solar time; this is the acid test which shows that the direction of the anisotropy velocity vector is relative to the stars and not to the position of the Sun. This difference is only some 4 minutes per day, but over a year amounts to a huge 24-hour effect, and Miller saw that effect and extensively discussed it in his paper. Similarly, De Witte in his extensive 1991 coaxial cable experiment [9] also took data for 178 days to again establish the sidereal-time effect: over 178 days this effect amounts to a shift in the phase of the signal through some 12 hours! The sidereal effect has also been established in the new coaxial cable experiment by the author from data spanning some 200 days.
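The sidereal acid test above is simple arithmetic: a sidereal day is about 23 h 56 min 4 s, roughly 3.93 minutes shorter than a solar day, so a star-fixed effect drifts by that amount daily relative to solar time, accumulating the 12-hour phase shift over De Witte's 178-day run and a full 24 hours over a year:

```python
# Accumulated drift of a sidereal-locked signal relative to solar time.
# A sidereal day (~23 h 56 min 4 s) is ~3.93 min shorter than a solar day,
# so a star-fixed effect drifts by that amount each day.
SOLAR_DAY_MIN = 24 * 60.0
SIDEREAL_DAY_MIN = 23 * 60 + 56 + 4 / 60.0
drift_per_day = SOLAR_DAY_MIN - SIDEREAL_DAY_MIN       # min/day, ~3.93

drift_178d_h = 178 * drift_per_day / 60.0     # De Witte's 178-day run
drift_year_h = 365.25 * drift_per_day / 60.0  # one full year
print(f"{drift_per_day:.2f} min/day; 178 days -> {drift_178d_h:.1f} h; "
      f"1 year -> {drift_year_h:.1f} h")
```

The 178-day figure comes out just under 12 hours and the one-year figure just under 24 hours, matching the "some 12 hours" and "24 hours" shifts quoted in the text.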
The interpretation that has emerged from the Miller and related discoveries is that space exists, that it is an observable and dynamical system, and that the Special Relativity effects are caused by the absolute motion of quantum systems through that space [1,25]. This is essentially the Lorentz interpretation of Special Relativity, and then the spacetime is merely a mathematical construct. The new understanding has led to an explanation of why Lorentz symmetry manifests despite there being a preferred frame, that is, a local frame in which only therein is the speed of light isotropic. A minimal theory for the dynamics of this space has been developed [1,25] which has resulted in an explanation of numerous phenomena, such as gravity as a quantum effect [25,8], the so-called "dark matter" effect, the black hole systematics, gravitational light bending, gravitational lensing, and so on [21-25]. The Miller data also revealed another major discovery that Miller himself may not have understood, namely that the anisotropy vector actually fluctuates from hour to hour and day to day even when we remove the manifest effect of the Earth's rotation, although Miller may have interpreted this as being caused by imperfections in his experiment. This means that the flow of space past the Earth displays turbulence or a wave effect: basically the Miller data has revealed what we now call gravitational waves, although these are different to the waves supposedly predicted by General Relativity. These wave effects were also present in the Torr and Kolen [10] first coaxial cable experiment at Utah University in 1981, and were again manifest in the De Witte data from 1991. Analysis of the De Witte data has shown that these waves have a fractal structure [9]. The Flinders University Gravitational Waves Detector (also a coaxial cable experiment) was constructed to investigate these wave effects. It sees the wave effects detected by Miller, by Torr and Kolen, and by De Witte.
The plan of this paper is to first outline the modern understanding of how a gas-mode Michelson interferometer actually operates, and the nature, accuracy and significance of the Miller experiment. We also report the other seven experiments that confirm the Miller discoveries, particularly data from the new high-precision gravity wave detector that detects not only a light speed anisotropy but also the wave effects. Special Relativity and the speed of light anisotropy It is often assumed that the anisotropy of the speed of light is inconsistent with Special Relativity, that only one or the other can be valid, that they are mutually incompatible. This misunderstanding is very prevalent in the literature of physics, although the conceptual error has been explained [1]. The error is based upon a misunderstanding of how the logic of theoretical physics works, namely the important difference between an if statement and an if and only if statement. To see how this confusion has arisen we need to recall the history of Special Relativity (SR). In 1905 Einstein deduced the SR formalism by assuming, in part, that the speed of light is invariant for all relatively moving observers, although most importantly one must ask just how that speed is defined or is to be measured. The SR formalism then predicted numerous effects, which have been extensively confirmed by experiments over the last 100 years. However this Einstein derivation was an if statement, and not an if and only if statement: an if statement, if A then B, does not imply the truth of A when B is found to be true; only an if and only if statement has that property, and Einstein did not construct such an argument. What this means is that the validity of the various SR effects does not imply that the speed of light must be isotropic. This is actually implicit in the SR formalism itself, for it permits one to use any particular foliation of the 4-dimensional spacetime into a 3-space and a 1-space (for time).
Most importantly it does not forbid that one particular foliation be actual. So to analyse the data from gas-mode interferometer experiments we must use the SR effects, and the fringe shifts reveal the preferred frame, an actual 3-space, by revealing the anisotropic speed of light, as Maxwell and Michelson had originally believed. For "modern" resonant-cavity Michelson interferometer experiments we predict no rotation-induced fringe shifts, unless operated in gas-mode. Unfortunately in analysing the data from the vacuum-mode experiments the consequent null effect is misinterpreted, as in [7], to imply the absence of a preferred direction, of absolute motion. But it is absolute motion which causes the dynamical effects of length contractions, time dilations and other relativistic effects, in accord with Lorentzian interpretation of relativistic effects. The detection of absolute motion is not incompatible with Lorentz symmetry; the contrary belief was postulated by Einstein, and has persisted for over 100 years, since 1905. So far the experimental evidence is that absolute motion and Lorentz symmetry are real and valid phenomena; absolute motion is motion presumably relative to some substructure to space, whereas Lorentz symmetry parameterises dynamical effects caused by the motion of systems through that substructure. To check Lorentz symmetry we can use vacuum-mode resonant-cavity interferometers, but using gas within the resonant-cavities would enable these devices to detect absolute motion with great precision. As well there are novel wave phenomena that could also be studied, as discussed herein and in [19,20]. Motion through the structured space, it is argued, induces actual dynamical time dilations and length contractions in agreement with the Lorentz interpretation of special relativistic effects. 
Then observers in uniform motion "through" the space will, on measurement of the speed of light using the special but misleading Einstein measurement protocol, obtain always the same numerical value c. To see this explicitly consider how various observers P, P′, . . . moving with different speeds through space, measure the speed of light. They each acquire a standard rod and an accompanying standardised clock. That means that these standard rods would agree if they were brought together, and at rest with respect to space they would all have length ∆l_0, and similarly for the clocks. Observer P and accompanying rod are both moving at speed v_R relative to space, with the rod longitudinal to that motion. P then measures the time ∆t_R, with the clock at end A of the rod, for a light pulse to travel from end A to the other end B and back again to A. The light travels at speed c relative to space. Let the time taken for the light pulse to travel from A → B be t_AB and from B → A be t_BA, as measured by a clock at rest with respect to space. The length of the rod moving at speed v_R is contracted to

∆l_R = ∆l_0 √(1 − v_R²/c²). (1)

In moving from A to B the light must travel an extra distance because the end B travels a distance v_R t_AB in this time; thus the total distance that must be traversed is

c t_AB = ∆l_R + v_R t_AB, (2)

and similarly on returning from B to A the light must travel the distance

c t_BA = ∆l_R − v_R t_BA. (3)

Hence the total travel time ∆t_0 is

∆t_0 = t_AB + t_BA = ∆l_R/(c − v_R) + ∆l_R/(c + v_R) (4)
     = 2∆l_0 / (c √(1 − v_R²/c²)). (5)

Because of the time dilation effect for the moving clock,

∆t_R = ∆t_0 √(1 − v_R²/c²). (6)

Then for the moving observer the speed of light is defined as the distance the observer believes the light travelled (2∆l_0) divided by the travel time according to the accompanying clock (∆t_R), namely 2∆l_0/∆t_R. From (5) and (6) we have ∆t_R = 2∆l_0/c, so this ratio equals c, the same speed as seen by an observer at rest in the space.
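The cancellation in this derivation is easy to check numerically. Below is a minimal Python sketch of Eqs. (1)-(6); the function name and the 1 m rest length are illustrative choices, not from the text:

```python
import math

C = 299_792_458.0  # speed of light relative to space (m/s)

def measured_light_speed(v_R, l0=1.0):
    """Round-trip speed of light inferred by an observer moving at v_R
    through space, using a contracted rod (Eq. 1) and a dilated clock
    (Eq. 6), following the derivation in Eqs. (1)-(6)."""
    g = math.sqrt(1.0 - v_R**2 / C**2)
    l_R = l0 * g                          # Eq. (1): physical rod contraction
    dt0 = 2 * l_R * C / (C**2 - v_R**2)   # Eqs. (4)-(5): round trip wrt space
    dt_R = dt0 * g                        # Eq. (6): the moving clock runs slow
    return 2 * l0 / dt_R                  # observer: rest length / own clock time

# The absolute speed v_R cancels: the protocol always returns c.
speed_slow = measured_light_speed(400e3)   # galactic-scale speed
speed_fast = measured_light_speed(0.9 * C)
```

Both calls return c to floating-point precision, which is exactly the point of the passage: the procedure is blind to v_R.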
So the speed v_R of the observer through space is not revealed by this procedure, and the observer is erroneously led to the conclusion that the speed of light is always c. This follows from two or more observers in manifest relative motion all obtaining the same speed c by this procedure. Despite this failure this special effect is actually the basis of the spacetime Einstein measurement protocol. That this protocol is blind to the absolute motion has led to enormous confusion within physics. To be explicit, the Einstein measurement protocol actually inadvertently uses this special effect by using the radar method for assigning historical spacetime coordinates to an event: the observer records the times of emission and reception of radar pulses (t_r > t_e) travelling through space, and then retrospectively assigns the time and distance of a distant event B according to (ignoring directional information for simplicity)

T_B = (t_r + t_e)/2,   D_B = c (t_r − t_e)/2, (7)

[Figure 1: Here T-D is the spacetime construct (from the Einstein measurement protocol) of a special observer P at rest wrt space, so that v_0 = 0. Observer P′ is moving with speed v′_0 as determined by observer P, and therefore with speed v′_R = v′_0 wrt space. Two light pulses are shown, each travelling at speed c wrt both P and space. Event A is when the observers pass, and is also used to define zero time for each for convenience.]

where each observer is now using the same numerical value of c. The event B is then plotted as a point in an individual geometrical construct by each observer, known as a spacetime record, with coordinates (D_B, T_B). This is no different to an historian recording events according to some agreed protocol. Unlike historians, who don't confuse history books with reality, physicists do so.
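As a small illustration of the radar assignment in Eq. (7), here is a sketch (the function name and sample times are invented for the example, with units chosen so that c = 1):

```python
def radar_coordinates(t_e, t_r, c=1.0):
    """Einstein radar protocol of Eq. (7): retrospectively assign a time
    T_B and distance D_B to a distant event from the emission time t_e
    and reception time t_r of the radar pulse."""
    T_B = 0.5 * (t_r + t_e)
    D_B = 0.5 * c * (t_r - t_e)
    return T_B, D_B

# A pulse emitted at t = 2 and received back at t = 8 places the event
# midway in time (T_B = 5) at half the round-trip light distance (D_B = 3).
T_B, D_B = radar_coordinates(2.0, 8.0)
```

The protocol only ever uses the observer's own clock and the agreed value of c, which is why, as the text argues, it cannot reveal absolute motion.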
We now show that because of this protocol and the absolute motion dynamical effects, observers will discover on comparing their historical records of the same events that the expression

τ²_AB = T²_AB − D²_AB/c², (8)

is an invariant, where T_AB = T_A − T_B and D_AB = D_A − D_B are the differences in times and distances assigned to events A and B using the Einstein measurement protocol (7), so long as both are sufficiently small compared with the scale of inhomogeneities in the velocity field. To confirm the invariant nature of the construct in (8) one must pay careful attention to observational times as distinct from protocol times and distances, and this must be done separately for each observer. This can be tedious. We now demonstrate this for the situation illustrated in Fig. 1. By definition the speed of P′ according to P is v′_0 = D_B/T_B and so v′_R = v′_0, where T_B and D_B are the protocol time and distance for event B for observer P according to (7). Then using (8) P would find that (τ^P_AB)² = T²_B − D²_B/c², since both T_A = 0 and D_A = 0, and whence (τ^P_AB)² = (1 − v′²_R/c²) T²_B = (t′_B)², where the last equality follows from the time dilation effect on the P′ clock, since t′_B is the time of event B according to that clock. Then T_B is also the time that P′ would compute for event B when correcting for the time-dilation effect, as the speed v′_R of P′ through the quantum foam is observable by P′. Then T_B is the 'common time' for event B assigned by both observers. For P′ we obtain directly, also from (7) and (8), that (τ^P′_AB)² = (T′_B)² − (D′_B)²/c² = (t′_B)², as D′_B = 0 and T′_B = t′_B. Whence for this situation

(τ^P_AB)² = (τ^P′_AB)², (9)

and so the construction (8) is an invariant.
While so far we have only established the invariance of the construct (8) when one of the observers is at rest in space, it follows for two observers P′ and P′′ both in absolute motion that they also agree on the invariance of (8). This is easily seen by using the intermediate step of a stationary observer P:

(τ^P′_AB)² = (τ^P_AB)² = (τ^P′′_AB)². (10)

Hence the protocol and Lorentzian absolute motion effects result in the construction in (8) being indeed an invariant in general. This is a remarkable and subtle result. For Einstein this invariance was a fundamental assumption, but here it is a derived result, one which is nevertheless deeply misleading. Explicitly indicating small quantities by ∆ prefixes, and on comparing records retrospectively, an ensemble of nearby observers agree on the invariant

∆τ² = ∆T² − ∆D²/c², (11)

for any two nearby events.

[Figure: interferometer schematic; panel (a) shows the arms A-B and C-D, each of length L; panel (b) shows the apparatus moving at speed v, with mirror positions A1 and A2 and angle α.]

This implies that their individual patches of spacetime records may be mapped one into the other merely by a change of coordinates, and that collectively the spacetime patches of all may be represented by one pseudo-Riemannian manifold, where the choice of coordinates for this manifold is arbitrary, and we finally arrive at the invariant

∆τ² = g_μν(x) ∆x^μ ∆x^ν, (12)

with x^μ = {D_1, D_2, D_3, T}. Eqn. (12) is invariant under the Lorentz transformations

x′^μ = L^μ_ν x^ν, (13)

where, for example for relative motion in the x direction, L^μ_ν is specified by

x′ = (x − vt)/√(1 − v²/c²),  y′ = y,  z′ = z,  t′ = (t − vx/c²)/√(1 − v²/c²). (14)

So absolute motion and special relativity effects, and even Lorentz symmetry, are all compatible: a possible preferred frame is hidden by the Einstein measurement protocol. So the experimental question is then whether or not a supposed preferred frame actually exists: can it be detected experimentally?
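The claimed invariance of (11) under the boost (14) is easy to verify numerically. A short sketch, with illustrative event coordinates and natural units c = 1 (function names are ours):

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz transformation of Eq. (14) for relative motion along x."""
    g = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return g * (t - v * x / c**2), g * (x - v * t)

def interval2(dT, dD, c=1.0):
    """The invariant of Eq. (11): dtau^2 = dT^2 - dD^2 / c^2."""
    return dT**2 - dD**2 / c**2

# Protocol records of two events by an observer P at rest in space ...
tA, xA = 0.0, 0.0
tB, xB = 1.0, 0.6
tau2_P = interval2(tB - tA, xB - xA)

# ... and by an observer P' moving at v = 0.5 c:
tA2, xA2 = boost(tA, xA, 0.5)
tB2, xB2 = boost(tB, xB, 0.5)
tau2_P2 = interval2(tB2 - tA2, xB2 - xA2)
# tau2_P and tau2_P2 agree, illustrating Eqs. (9)-(10).
```

Both observers assign different T and D to the same pair of events, yet compute the same ∆τ², which is the content of (9) and (10).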
The answer is that there are now eight such consistent experiments. In Sect. 4.7 we generalise the Dirac equation to take account of the coupling of the spinor to an actual dynamical space. This reveals again that relativistic effects are consistent with a preferred frame -an actual space. Furthermore this leads to the first derivation of gravity from a deeper theory -gravity turns out to be a quantum matter wave effect. Light speed anisotropy experiments We now consider the various experiments from over more than 100 years that have detected the anisotropy of the speed of light, and so the existence of an actual dynamical space, an observable preferred frame. As well the experiments, it is now understood, showed that this frame is dynamical, it exhibits time-dependent effects, and that these are "gravitational waves". Michelson gas-mode interferometer Let us first consider the new understanding of how the Michelson interferometer works. This brilliant but very subtle device was conceived by Michelson as a means to detect the anisotropy of the speed of light, as was expected towards the end of the 19th century. Michelson used Newtonian physics to develop the theory and hence the calibration for his device. However we now understand that this device detects 2nd order effects in v/c to determine v, and so we must use relativistic effects. However the application and analysis of data from various Michelson interferometer experiments using a relativistic theory only occurred in 2002, some 97 years after the development of Special Relativity by Einstein, and some 115 years after the famous 1887 experiment. 
As a consequence of the necessity of using relativistic effects it was discovered in 2002 that the gas in the light paths plays a critical role, and that we finally understand how to calibrate the device, and we also discovered, some 76 years after the 1925/26 Miller experiment, what determines the calibration constant that Miller had determined using the Earth's orbital speed about the Sun to set the calibration. This, as we discuss later, has enabled us to appreciate that gas-mode Michelson interferometer experiments have confirmed the reality of the Fitzgerald-Lorentz length contraction effect: in the usual interpretation of Special Relativity this effect, and others, is usually regarded as an observer-dependent effect, an illusion induced by the spacetime. But the experiments are to the contrary, showing that the length contraction effect is an actual observer-independent dynamical effect, as Fitzgerald [27] and Lorentz [28] had proposed. The Michelson interferometer compares the change in the difference between travel times, when the device is rotated, for two coherent beams of light that travel in orthogonal directions between mirrors; the changing time difference is indicated by the shift of the interference fringes during the rotation. This effect is caused by the absolute motion of the device through 3-space with speed v, with the speed of light being relative to that 3-space, and not relative to the apparatus/observer. However to detect the speed of the apparatus through that 3-space a gas must be present in the light paths, for purely technical reasons. The post relativistic-effects theory for this device is remarkably simple.
The relativistic Fitzgerald-Lorentz contraction effect causes the arm AB parallel to the absolute velocity to be physically contracted to length

L_∥ = L √(1 − v²/c²). (15)

The time t_AB to travel AB is set by V t_AB = L_∥ + v t_AB, while for BA by V t_BA = L_∥ − v t_BA, where V = c/n is the speed of light, with n the refractive index of the gas present (we ignore here the Fresnel drag effect for simplicity, an effect caused by the gas also being in absolute motion, see [1]). For the total ABA travel time we then obtain

t_ABA = t_AB + t_BA = (2LV/(V² − v²)) √(1 − v²/c²). (16)

For travel in the AC direction we have, from the Pythagoras theorem for the right-angled triangle, that (V t_AC)² = L² + (v t_AC)² and that t_CA = t_AC. Then for the total ACA travel time

t_ACA = t_AC + t_CA = 2L/√(V² − v²). (17)

Then the difference in travel time is

∆t = (n² − 1)(L/c)(v²/c²) + O(v⁴/c⁴), (18)

after expanding in powers of v/c. This clearly shows that the interferometer can only operate as a detector of absolute motion when not in vacuum (where n = 1), namely when the light passes through a gas, as in the early experiments (in transparent solids a more complex phenomenon occurs). A more general analysis [1], including Fresnel drag, gives

∆t = k² (L v_P²/c³) cos 2(θ − ψ), (19)

where k² ≈ n(n² − 1), while neglect of the relativistic Fitzgerald-Lorentz contraction effect gives k² ≈ n³ ≈ 1 for gases, which is essentially the Newtonian theory that Michelson used. However the above analysis does not correspond to how the interferometer is actually operated. That analysis does not actually predict fringe shifts, for the field of view would be uniformly illuminated, and the observed effect would be a changing level of luminosity rather than fringe shifts. As Miller knew, the mirrors must be made slightly non-orthogonal, with the degree of non-orthogonality determining how many fringe shifts were visible in the field of view.
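Plugging representative numbers into Eq. (18) shows just how small the gas-mode effect is, and why a vacuum interferometer sees nothing at this order. The 11 m effective arm (roughly that of the 1887 apparatus) and the 400 km/s speed are illustrative values; the helper name is ours:

```python
def travel_time_diff(n, L, v, c=299_792_458.0):
    """Leading-order arm travel-time difference of Eq. (18):
    dt = (n^2 - 1) * (L / c) * (v / c)^2."""
    return (n**2 - 1) * (L / c) * (v / c) ** 2

# Air in the light path (n = 1.00029), 11 m arm, 400 km/s absolute speed:
dt_air = travel_time_diff(1.00029, 11.0, 400e3)

# Vacuum mode (n = 1) gives exactly zero at this order, so a vacuum
# interferometer is blind to the anisotropy, as argued in the text.
dt_vacuum = travel_time_diff(1.0, 11.0, 400e3)
```

The resulting ∆t for air is of order 10⁻¹⁷ s, consistent with the text's point that the observed fringe shifts were tiny yet nonzero.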
Miller experimented with this effect to determine a comfortable number of fringes: not too few and not too many. Hicks [29] developed a theory for this effect; however it is not necessary to be aware of this analysis in using the interferometer: the non-orthogonality reduces the symmetry of the device, and instead of having a period of 180° the symmetry now has a period of 360°, so that to (19) we must add an extra term,

∆t = k² (L v_P²/c³) cos 2(θ − ψ) + a cos(θ − β). (20)

Miller took this effect into account when analysing his data. The effect is apparent in Fig. 5, and even more so in the Michelson-Morley data in Fig. 4. The interferometers are operated with the arms horizontal, as shown by Miller's interferometer in Fig. 3. Then in (20) θ is the azimuth of one arm relative to the local meridian, while ψ is the azimuth of the absolute motion velocity projected onto the plane of the interferometer, with projected component v_P. Here the Fitzgerald-Lorentz contraction is a real dynamical effect of absolute motion, unlike the Einstein spacetime view that it is merely a spacetime perspective artifact, whose magnitude depends on the choice of observer. The instrument is operated by rotating at a rate of one rotation over several minutes, and observing the shift in the fringe pattern through a telescope during the rotation. Then fringe shifts from six (Michelson and Morley) or twenty (Miller) successive rotations are averaged to improve the signal-to-noise ratio, and the average sidereal time noted, giving the Michelson-Morley data in Fig. 4 or the Miller data like that in Fig. 5. The form in (20) is then fitted to such data by varying the parameters v_P, ψ, a and β. The data from rotations is sufficiently clear, as in Fig. 5, that Miller could easily determine these parameters from a graphical plot.
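Miller determined these parameters graphically, but the fit of the form (20) can also be done by harmonic projection, since over a full rotation the cos 2(θ − ψ) term and the Hicks a cos(θ − β) term are orthogonal. A sketch with synthetic data (all amplitudes, angles and names here are illustrative, not Miller's values):

```python
import math

def fit_fringe_data(thetas, shifts):
    """Extract the 2-theta harmonic of Eq. (20) -- its amplitude and the
    azimuth psi -- by projecting uniformly sampled fringe-shift data onto
    cos(2 theta) and sin(2 theta). The Hicks cos(theta - beta) term is
    orthogonal over a full rotation and drops out of the projection."""
    N = len(thetas)
    c2 = 2.0 / N * sum(s * math.cos(2 * t) for t, s in zip(thetas, shifts))
    s2 = 2.0 / N * sum(s * math.sin(2 * t) for t, s in zip(thetas, shifts))
    return math.hypot(c2, s2), 0.5 * math.atan2(s2, c2)

# Synthetic rotation: 2-theta amplitude 0.04 fringe at psi = 30 degrees,
# plus a Hicks term with a = 0.01, beta = 0.5 rad.
psi_true = math.radians(30.0)
thetas = [2 * math.pi * k / 360 for k in range(360)]
shifts = [0.04 * math.cos(2 * (t - psi_true)) + 0.01 * math.cos(t - 0.5)
          for t in thetas]
amp, psi = fit_fringe_data(thetas, shifts)
```

The projection recovers the injected amplitude and azimuth exactly, illustrating why the 1θ Hicks term does not bias the 2θ signal that carries v_P and ψ.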
However Michelson and Morley implicitly assumed the Newtonian value k = 1, while Miller used an indirect method to estimate the value of k, as he understood that the Newtonian theory was invalid, but had no other theory for the interferometer. Of course the Einstein postulates, as distinct from Special Relativity, have that absolute motion has no meaning, and so effectively demand that k = 0. Using k = 1 gives only a nominal value for v_P, being some 8-9 km/s for the Michelson and Morley experiment. Michelson-Morley experiment The Michelson and Morley air-mode interferometer fringe shift data was based upon a total of only 36 rotations in July 1887, revealing the nominal speed of some 8-9 km/s when analysed using the prevailing but incorrect Newtonian theory which has k = 1 in (20), and this value was known to Michelson and Morley. Including the Fitzgerald-Lorentz dynamical contraction effect as well as the effect of the gas present, as in (20), we find that n_air = 1.00029 gives k² = 0.00058 for air, which explains why the observed fringe shifts were so small. The example in Fig. 4 reveals a speed of 400 km/s with an azimuth of 40° measured from south at 7:00 hrs local sidereal time. The data is clearly very consistent with the expected form in (20). They rejected their own data on the sole but spurious ground that the value of 8 km/s was smaller than the speed of the Earth about the Sun of 30 km/s. What their result really showed was that (i) absolute motion had been detected, because fringe shifts of the correct form, as in (20), had been detected, and (ii) the theory giving k² = 1 was wrong, so that Newtonian physics had failed. Michelson and Morley in 1887 should have announced that the speed of light did depend on the direction of travel, that the speed was relative to an actual physical 3-space. However, contrary to their own data, they concluded that absolute motion had not been detected.
This bungle has had enormous implications for fundamental theories of space and time over the last 100 years, and the resulting confusion is only now being finally corrected, albeit with fierce and spurious objections. Miller interferometer It was Miller [4] who saw the flaw in the 1887 paper and realised that the theory for the Michelson interferometer must be wrong. To avoid using that theory Miller introduced the scaling factor k, even though he had no theory for its value. He then used the effect of the changing vector addition of the Earth's orbital velocity and the absolute galactic velocity of the solar system to determine the numerical value of k, because the orbital motion modulated the data, as shown in Fig. 6. By making some 8,000 rotations of the interferometer at Mt. Wilson in 1925/26 Miller determined the first estimate for k and for the absolute linear velocity of the solar system. Fig. 5 shows typical data from averaging the fringe shifts from 20 rotations of the Miller interferometer, performed over a short period of time, and clearly shows the expected form in (20) (only a linear drift caused by temperature effects on the arm lengths has been removed - an effect also removed by Michelson and Morley and by Miller). In Fig. 5 the fringe shifts during rotation are given as fractions of a wavelength, ∆λ/λ = ∆t/T, where ∆t is given by (20) and T is the period of the light. Such rotation-induced fringe shifts clearly show that the speed of light is different in different directions. The claim that Michelson interferometers, operating in gas-mode, do not produce fringe shifts under rotation is clearly incorrect. But it is that claim that led to the continuing belief, within physics, that absolute motion had never been detected, and that the speed of light is invariant. The values of ψ from such rotations together lead to plots like those in Fig. 6, which show ψ from the 1925/1926 Miller [4] interferometer data for four different months of the year, from which the RA = 5.2 hr is readily apparent. While the orbital motion of the Earth about the Sun slightly affects the RA in each month, and Miller used this effect to determine the value of k, the new theory of gravity required a reanalysis of the data [1,19], revealing that the solar system has a large observed galactic velocity of some 420±30 km/s in the direction (RA = 5.2 hr, Dec = −67°). The azimuth data gives a clearer signal than the speed data in Fig. 18. The data shows that the time when the azimuth ψ is zero tracks sidereal time, with the zero times being approximately 5 hrs and 17 hrs. However these times correspond to very different local times; from April to August, for example, there is a shift of 8 hrs in the local time for these crossings. This is an enormous effect. Again this is the acid test for light speed anisotropy experiments when allowing the rotation of the Earth to change the orientation of the apparatus. The zero crossing times are when the velocity vector for absolute motion, when projected onto the plane of the interferometer, lines up with the local meridian. As well we see variations throughout these composite days, with the crossing times changing by as much as ±3 hrs. The same effect, and perhaps even larger, is seen in the Flinders data in Fig. 15. The above plots also show a distinctive signature, namely the change from month to month. This is caused by the vector addition of the Earth's orbital velocity of 30 km/s, the Sun's spatial in-flow velocity of 42 km/s at the Earth's distance, and the cosmic velocity changing over a year. This is the effect that Miller used to calibrate his interferometer. However he did not know of the Sun in-flow component. Only after taking account of that effect does this calibration method agree with the results from the calibration method using Special Relativity, as in (20).
Other gas-mode Michelson interferometer experiments Two old interferometer experiments, by Illingworth [11] and Joos [12], used helium, enabling the refractive index effect to be recently confirmed, because for helium, with n = 1.000036, we find that k² = 0.00007. Until the refractive index effect was taken into account the data from the helium-mode experiments appeared to be inconsistent with the data from the air-mode experiments; now they are seen to be consistent [1]. Ironically helium was introduced in place of air to reduce any possible unwanted effects of a gas, but we now understand the essential role of the gas. The data from an interferometer experiment by Jaseja et al. [13], using two orthogonal masers with a He-Ne gas mixture, also indicates that they detected absolute motion, but were not aware of that as they used the incorrect Newtonian theory and so considered the fringe shifts to be too small to be real, reminiscent of the same mistake by Michelson and Morley. The Michelson interferometer is a 2nd order device, as the effect of absolute motion is proportional to (v/c)², as in (20), but 1st order devices are also possible and the coaxial cable experiments described next are in this class. The experimental results and the implications for physics have been extensively reported in [1,14,15,16,17,18]. Coaxial cable speed of EM waves anisotropy experiments Rather than use light travel time experiments to demonstrate the anisotropy of the speed of light, another technique is to measure the one-way speed of radio waves through a coaxial electrical cable. While this is not a direct "ideal" technique, as then the complexity of the propagation physics comes into play, it provides not only an independent confirmation of the light anisotropy effect, but also one which takes advantage of modern electronic timing technology.
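The helium and air calibration constants quoted above follow directly from the relation k² ≈ n(n² − 1) of Eq. (19). A quick numerical check (helper name ours):

```python
def k_squared(n):
    """Gas-mode calibration constant of Eq. (19): k^2 ~ n(n^2 - 1)."""
    return n * (n**2 - 1)

k2_air = k_squared(1.00029)      # ~ 0.00058, as quoted for air
k2_helium = k_squared(1.000036)  # ~ 0.00007, as quoted for helium
k2_vacuum = k_squared(1.0)       # exactly zero: vacuum mode sees nothing
```

This is why the helium-mode data looked "too small" until the refractive index effect was taken into account: k² for helium is roughly eight times smaller than for air.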
Torr-Kolen coaxial cable anisotropy experiment The first one-way coaxial cable speed-of-propagation experiment was performed at the Utah University in 1981 by Torr and Kolen. This involved two rubidium clocks placed approximately 500 m apart with a 5 MHz radio frequency (RF) signal propagating between the clocks via a buried east-west (EW) nitrogen-filled coaxial cable maintained at a constant pressure of 2 psi. The Miller and De Witte experiments will eventually be recognised as two of the most significant experiments in physics, for independently and using different experimental techniques they detected essentially the same velocity of absolute motion. But also they detected turbulence in the flow of space past the Earth - none other than gravitational waves. The De Witte experiment was performed within Belgacom, the Belgium telecommunications company. This organisation had two sets of atomic clocks in two buildings in Brussels separated by 1.5 km, and the research project was an investigation of the task of synchronising these two clusters of atomic clocks. To that end 5 MHz RF signals were sent in both directions through two buried coaxial cables linking the two clusters. The atomic clocks were caesium beam atomic clocks, and there were three in each cluster: A1, A2 and A3 in one cluster, and B1, B2, and B3 at the other cluster. In that way the stability of the clocks could be established and monitored. One cluster was in a building on Rue du Marais and the second cluster was due south in a building on Rue de la Paille. Digital phase comparators were used to measure changes in times between clocks within the same cluster and also in the one-way propagation times of the RF signals. At both locations the comparison between local clocks, A1-A2 and A1-A3, and between B1-B2, B1-B3, yielded linear phase variations, in agreement with the fact that the clocks do not have exactly the same frequencies, together with short-term and long-term phase noise.
But between distant clocks A1 toward B1 and B1 toward A1, in addition to the same linear phase variations, there is also an additional clear sinusoidal-like phase undulation with an approximate 24 hr period of the order of 28 ns peak to peak, as shown in Fig. 8. The possible instability of the coaxial lines cannot be responsible for the observed phase effects because these signals are in phase opposition and also because the lines are identical (same place, length, temperature, etc. . . ) causing the cancellation of any such instabilities. As well the experiment was performed over 178 days, making it possible to measure with an accuracy of 25 s the period of the phase signal to be the sidereal day (23 hr 56 min). Changes in propagation times were observed over 178 days from June 3 to November 27, 1991. A sample of the data, plotted against sidereal time for just three days, is shown in Fig. 8. De Witte recognised that the data was evidence of absolute motion but he was unaware of the Miller experiment and did not realise that the Right Ascensions for minimum/maximum propagation time agreed almost exactly with that predicted using the Miller's direction (RA = 5.2 hr, Dec =−67 • ). In fact De Witte expected that the direction of absolute motion should have been in the CMB direction, but that would have given the data a totally different sidereal time signature, namely the times for maximum/minimum would have been shifted by 6 hrs. The declination of the velocity observed in this De Witte experiment cannot be determined from the data as only three days of data are available. The De Witte data is analysed in Sect. 4.7 and assuming a declination of 60 • S a speed of 430 km/s is obtained, in good agreement with the Miller speed and Michelson-Morley speed. So a different and non-relativistic technique is confirming the results of these older experiments. This is dramatic. De Witte did however report the sidereal time of the cross-over time, that is in Fig. 
8, for all 178 days of data. That showed, as in Fig. 9, that the time variations are correlated with sidereal time and not local solar time. A least-squares best fit of a linear relation to that data gives that the cross-over time is retarded, on average, by 3.92 minutes per solar day. This is to be compared with the fact that a sidereal day is 3.93 minutes shorter than a solar day. So the effect is certainly galactic and not associated with any daily thermal effects, which in any case would be very small as the cable is buried. Miller had also compared his data against sidereal time and established the same property, namely that the diurnal effects actually tracked sidereal time and not solar time, and that orbital effects were also apparent, with both effects apparent in his data. The dominant effect in Fig. 8 is caused by the rotation of the Earth, namely that the orientation of the coaxial cable with respect to the average direction of the flow past the Earth changes as the Earth rotates. This effect may be approximately unfolded from the data, leaving the gravitational waves shown in Fig. 10. This is the first evidence that the velocity field describing the flow of space has a complex structure, and is indeed fractal. The fractal structure, i.e. that there is an intrinsic lack of scale to these speed fluctuations, is demonstrated by binning the absolute speeds and counting the number of speeds within each bin, as discussed in [8,9]. The Miller data also shows evidence of turbulence of the same magnitude. So far the data from three experiments, namely Miller, Torr and Kolen, and De Witte, show turbulence in the flow of space past the Earth. This is what can be called gravitational waves. This can be understood by noting that fluctuations in the velocity field induce ripples in the mathematical construct known as spacetime, as in (32). Such ripples in spacetime are known as gravitational waves.
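The sidereal-tracking argument above is simple arithmetic: the Earth makes one extra rotation per year relative to the distant stars, so a sidereal day is shorter than a solar day by about 3.93 minutes, matching the observed 3.92 min/day retardation of the cross-over time. A quick check (a sketch; the day lengths used are standard astronomical values, not taken from the paper):

```python
# Compare the solar and sidereal day lengths to the observed
# 3.92 min/day retardation of De Witte's cross-over times.
SOLAR_DAY_S = 86400.0        # mean solar day, seconds
SIDEREAL_DAY_S = 86164.0905  # mean sidereal day, seconds

def daily_retardation_minutes() -> float:
    """Minutes per solar day by which a sidereal-locked signal drifts."""
    return (SOLAR_DAY_S - SIDEREAL_DAY_S) / 60.0

if __name__ == "__main__":
    drift = daily_retardation_minutes()
    print(f"expected drift: {drift:.2f} min/day")  # ~3.93, vs observed 3.92
```

A signal locked to the stars therefore arrives about 3.93 minutes earlier each solar day, which is exactly the drift De Witte's least-squares fit recovered.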
Flinders University gravitational wave detector

In February 2006 first measurements from a gravitational wave detector at Flinders University, Adelaide, were taken. This detector uses a novel timing scheme that overcomes the limitations associated with the two previous coaxial cable experiments. The intention in such experiments is simply to measure the one-way travel time of RF waves propagating through the coaxial cable. To that end one would apparently require a very accurate clock at each end, and associated RF generation and detection electronics. However the major limitation is that even the best atomic clocks are not sufficiently accurate over even a day to make such measurements to the required accuracy, unless the cables are of order of a kilometre or so in length, and then temperature control becomes a major problem. The issue is that the time variations are of the order of 25 ps per 10 metres of cable. To measure that requires time measurements accurate to, say, 1 ps. But atomic clocks have accuracies over one day of around 100 ps, implying that lengths of around 1 kilometre would be required in order for the effect to well exceed timing errors. Even then the atomic clocks must be brought together every day to resynchronise them, or De Witte's method of multiple atomic clocks must be used. However at Flinders University a major breakthrough for this problem was made when it was discovered that, unlike coaxial cables, the movement of optical fibres through space does not affect the propagation speed of light through them. This is a very strange effect and at present there is no explanation for it.

Optical fibre effect

This effect was discovered by Lawrance, Drury and the author, using optical fibres in a Michelson interferometer arrangement, where the effective path length in each arm was 4 metres of fibre.
So rather than having light pass through a gas, and being reflected by mirrors, here the light propagates through fibres and, where the mirrors would normally be located, a 180 degree bend in the fibres is formed. The light emerging from the two fibres is directed to a common region on a screen, and the expected fringe shifts were seen. However, and most dramatically, when the whole apparatus was rotated no shift in the fringes was seen, unlike the situation with light passing through a gas as above. This result implied that the travel time in each arm of the fibre was unaffected by the orientation of that arm to the direction of the spatial flow. No explanation has been developed for this effect, other than the general observation that the propagation speed in optical fibres depends on refractive index profiles and on transverse and longitudinal Lorentz contraction effects, which in solids are coupled by the elastic properties of the solid. Nevertheless this property offered a technological leap forward in the construction of a compact coaxial cable gravitational wave detector. This is because timing information can be sent through the fibres in a way that is not affected by the orientation of the fibres, while the coaxial cables do respond to the anisotropy of the speed of EM radiation in vacuum. Again why they respond in this way is not understood. All we have is that fibres and coaxial cables respond differently. So this offers the opportunity to have a coaxial cable one-way speed measurement set up, but using only one clock, as shown in Fig. 11. Here we have one clock at one end of the coaxial cable, and the arrival time of the RF signal at the other end is used to modulate a light signal that returns to the starting end via an optical fibre. The return travel time is constant, being independent of the orientation of the detector arm, because of this peculiar property of the fibres. In practice one uses two such arrangements, with the RF directions opposing one another. This has two significant advantages: (i) the effective coaxial cable length of 10 metres is achieved over a distance of just 5 metres, so the device is more easily accommodated in a temperature controlled room, and (ii) temperature variations in that room have a smaller effect than expected, because it is only temperature differences between the cables that have any net effect.

Figure 11: Schematic layout of the Flinders University Gravitational Wave Detector. Double lines denote coaxial cables, and single lines denote optical fibres. The detector is shown in Fig. 12 and is orientated NS along the local meridian, as indicated by direction D in Fig. 16. The key effects are that the propagation speeds through the coaxial cables and optical fibres respond differently to their absolute motion through space. The special optical fibre propagation effect is discussed in the text. Sections AB and CD each have length 5.0 m. The fibres and coaxial cable are specially manufactured to have negligible variation in travel speed with temperature. The zero-speed calibration point can be measured by looping the arm back onto itself, as shown in Fig. 13, because then the 1st order in v/c effect cancels and only 2nd order effects remain, and these are much smaller than the noise levels in the system. The detector is equivalent to a one-way speed measurement through a single coaxial cable of length 10 m, with an atomic clock at each end to measure changes in travel times; for a 10 m coaxial cable that would be impractical because of clock drifts. With this set-up the travel times vary by some 25 ps over one day, as shown in Figs. 14 and 17. The detector was originally located in the author's office, as shown in Fig. 12, but was later relocated to an underground laboratory where temperature variations were very slow; the travel time variations over 7 days are shown in Fig. 15.
Indeed with specially constructed phase compensated fibre and coaxial cable, having very low speed-sensitivity to temperature variations, the most temperature sensitive components are the optical fibre transceivers (E/O and O/E in Fig. 11). The data from such a looping is shown in Fig. 14, taken after the detector had been relocated to an isolated underground laboratory.

Experimental components:

RF fibre optic transceivers: linear extended band (5-2000 MHz) low noise RF fibre optic transceivers for single mode 1.3 µm fibre optic wireless systems, with independent receiver and transmitter. The RF interface is a 50 Ω connector and the optical connector is a low reflection FC/APC connector. The temperature dependence of the phase delay has not yet been measured. The experiment is operated in a uniform temperature room, so that phase delays between the two transceivers cancel to some extent.

Coaxial cable: Andrews FSJ1-50A Phase Stabilised 50 Ω coaxial cable. The travel time temperature dependence is 0.026 ps/m/°C. The speed of RF waves in this cable is c/n = 0.84c, arising from the dielectric having refractive index n = 1.19. As well, temperature effects cancel because the two coaxial cables are tied together, and so only temperature differences between adjacent regions of the cables can have any effect. If such temperature differences are <1 °C, then temperature generated timing errors from this source should be <0.3 ps for the 10 m.

Optical fibre: Sumitomo Electric Industries Ltd, Japan, Phase Stabilised Optical Fibre (PSOF) - single mode. This uses Liquid Crystal Polymer (LCP) coated single mode optical fibre, with the coating designed to make the travel time temperature dependence (<0.002 ps/m/°C) very small compared to normal fibres (0.07 ps/m/°C). As well, temperature effects cancel because the two optical fibres are tied together, and so only temperature differences between adjacent regions of the fibres can have any effect.
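The quoted error bounds follow directly from the per-metre coefficients: 0.026 ps/m/°C over the 10 m effective length with a 1 °C difference gives about 0.26 ps for the coaxial cable, and 0.002 ps/m/°C gives about 0.02 ps for the PSOF. A minimal check of this budget (a sketch using only the coefficients quoted above):

```python
# Worst-case timing error from a 1 degC temperature difference along
# the full 10 m effective length, using the quoted coefficients.
COAX_PS_PER_M_PER_C = 0.026  # Andrews FSJ1-50A coaxial cable
PSOF_PS_PER_M_PER_C = 0.002  # phase stabilised optical fibre

def timing_error_ps(coeff_ps_per_m_per_c: float,
                    length_m: float = 10.0,
                    delta_t_c: float = 1.0) -> float:
    """Timing error (ps) for a temperature difference delta_t_c over length_m."""
    return coeff_ps_per_m_per_c * length_m * delta_t_c

if __name__ == "__main__":
    print(f"coax:  {timing_error_ps(COAX_PS_PER_M_PER_C):.2f} ps (quoted bound < 0.3 ps)")
    print(f"fibre: {timing_error_ps(PSOF_PS_PER_M_PER_C):.2f} ps (quoted bound ~ 0.02 ps)")
```

Both numbers sit well below the ~1 ps resolution the measurement requires, which is why only the transceivers remain as a significant thermal error source.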
If such temperature differences are <1 °C, then temperature generated timing errors from this source should be <0.02 ps for the 10 m. Now only Furukawa Electric Ind. Ltd, Japan, manufactures PSOF. Photographs of the Flinders detector are shown in Fig. 12. Because of the new timing technology the detector is now small enough to permit the looping of the detector arm, as shown in Fig. 13. This enables a key test to be performed, as in the loop configuration the signal should disappear: the device then acts as though it were located at rest in space, because the actual effects of the absolute motion cancel. The striking results from this test are shown in Fig. 14. As well, this key test provides a means of calibrating the detector.

Figure 14: The detector arm was formed into a loop at approximately 10:00 hrs local time. With the system still operating, time averaging causes the trace to interpolate during this procedure, as shown. This looping effect is equivalent to having v = 0, which defines the value of Δτ. In plotting the times here the zero time is set so that then Δτ = 0. Now the detector is calibrated, and the times in this figure are absolute times. The times are the N to S travel time subtracted from the shorter S to N travel time, and hence are negative numbers. This demonstrates that the flow of space past the Earth is essentially from south to north, as shown in Fig. 16. When the arms are straight, as before 10:00 hrs, we see that on average the two travel times differ by some 55 ps. This looping effect is a critical test for the detector; it clearly shows the effect of absolute motion upon the RF travel times. As well we see Earth rotation, wave and converter noise effects before 10:00 hrs, and converter noise and some small signal after 10:00 hrs, caused by an imperfect circle. From this data (24) and (25) give δ = 72°S and v = 418 km/s.

All-optical detector

The unique optical fibre effect permits an even more compact gravitational wave detector.
This would be an all-optical, 1st order in v/c device, with light passing through vacuum, or just air, as well as optical fibres. The travel time through the fibres is, as above, unaffected by the orientation of the device, while the propagation time through the vacuum is affected by orientation, as the device is moving through the local space. In this system the relative time differences can be measured using optical interference of the light from the vacuum and fibre components. Then it is easy to see that the vacuum path length needs only be some 5 cm. This makes the construction of a three orthogonal arm detector even simpler. It would be a cheap bench-top box. In which case many of these devices could be put into operation around the Earth, and in space, to observe the new spatial-flow physics, with special emphasis on correlation studies; these can be used to observe the spatial extent of the fluctuations. As well, space-probe based systems could observe special effects in the flow pattern associated with the Earth-Moon system; these effects are caused by the α-dependent dynamics in (26).

Results from the Flinders detector

Results from the detector are shown in Fig. 15. There the time variations in picoseconds are plotted against local Adelaide time. The times have an arbitrary zero offset. However most significantly we see ~24 hr variations in the travel time, as also seen by De Witte. We also see variations in the times and magnitudes from day to day and within each day. These are the wave effects, although a component of these is probably also coming from temperature change effects in the optical fibre transceivers. In time the instrument will be improved and optimised. But we are certainly seeing the evidence of absolute motion, namely the detection of the velocity field, as well as fluctuations in that velocity. To understand the daily variations we show in Fig.
16 the orientation of the detector arm relative to the Earth rotation axis and the Miller flow direction, at two key local sidereal times. So we now have a very inexpensive gravitational wave detector sufficiently small that even a coaxial-cable three-arm detector could easily be located within a building. Three orthogonal arms permit a complete measurement of the spatial flow velocity. Operating such a device over a year or so will permit the extraction of the Sun in-flow component and the Earth in-flow component, as well as a detailed study of the wave effects.

Right ascension

The sidereal effect has been well established, as shown in Fig. 9 for both the De Witte and Flinders data.

Figure 16 caption (fragment): the inclination angles are φ = π/2 − δ + λ and θ = δ + λ − π/2 at these two RA, respectively. As the Earth rotates the inclination angle changes from a minimum of θ to a maximum of φ, which causes the dominant "dip" effect in, say, Fig. 17. The gravitational wave effect is the change of direction and magnitude of the flow velocity v, which causes the fluctuations in, say, Fig. 17. The latitude of Mt. Wilson is 34°N, and so almost mirrors that of Adelaide; this is relevant to the comparison in Fig. 18.

The Flinders data give a Right Ascension of 5.5 ± 2 hrs. This agrees remarkably well with the Miller and De Witte Right Ascension determinations, as discussed above. A one hour change in RA corresponds to a 15° change in direction at the equator. However because the declination, to be determined next, is as large as some 70°, the actual RA variation of ±2 hrs corresponds to an angle variation of some ±10° at that declination. On occasions there was no discernible unique maximum travel time difference; this happens when the declination is fluctuating near 90°, for then the RA becomes ill-defined.
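The RA-to-angle conversion quoted above is just spherical geometry: an hour of RA spans 15° of arc at the equator, reduced by the cosine of the declination away from it. A quick check (a sketch; the ~70° declination is the value quoted in the text):

```python
import math

def ra_hours_to_degrees(ra_hours: float, declination_deg: float) -> float:
    """Angular size on the sky of an RA interval at a given declination."""
    return ra_hours * 15.0 * math.cos(math.radians(declination_deg))

if __name__ == "__main__":
    # +/- 2 hrs of RA at declination ~70 deg is only ~ +/- 10 deg on the sky.
    print(f"{ra_hours_to_degrees(2.0, 70.0):.1f} deg")  # ~10.3
```

So the apparently large ±2 hr scatter in RA corresponds to only about a ±10° cone about the flow direction.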
Declination and speed

Because the prototype detector has only one arm, rather than the ideal case of three orthogonal arms, to determine the declination and speed we assume here that the flow is uniform and time-independent, and use the changing difference in travel times between the two main coaxial cables. Consider Fig. 11, showing the detector schematic layout, and Fig. 16, showing the various angles. The travel times in the two circuits are given by

t_1 = τ_1 + L_1/(v_c − v cos(Φ)),  (21)

t_2 = τ_2 + L_2/(v_c + v cos(Φ)),  (22)

where Φ is the angle between the detector direction and the flow velocity v, v_c is the speed of radio frequency (RF) electromagnetic waves in the coaxial cable when v = 0, namely v_c = c/n where n is the refractive index of the dielectric in the coaxial cable, and v is the change in that speed caused by the absolute motion of the coaxial cables through space, when the cable is parallel to v. The factor of cos(Φ) is just the projection of v onto the cable direction. The difference in signs in (21) and (22) arises from the RF waves travelling in opposite directions in the two main coaxial cables. The distance L_1 is the arm length of the coaxial cable from A to B, and L_2 is that from C to D. The constant times τ_1 and τ_2 are travel times arising from the optical fibres, the converters, and the coaxial cable lengths not included in L_1 and L_2, particularly the optical fibre travel times, which is the key to the new detector. The effect of the two shorter coaxial cable sections in each arm is included in τ_1 and τ_2, because the absolute motion effects from these arms are additive, as the RF travels in opposite directions through them, and so only contribute at 2nd order. Now the experiment involves first the measurement of the difference Δt = t_1 − t_2, giving

Δt = τ_1 − τ_2 + L_1/(v_c − v cos(Φ)) − L_2/(v_c + v cos(Φ)) ≈ Δτ + (L_1 + L_2) cos(Φ) v/v_c² + ...,  (23)

on expanding to lowest order in v/v_c, and where Δτ ≡ τ_1 − τ_2 + (L_1 − L_2)/v_c. Eqn. (23) is the key to the operation of the detector. We see that the effective arm length is L = L_1 + L_2 = 10 m. Over time the velocity vector v changes, caused by the wave effects and also by the Earth's orbital velocity about the Sun changing direction, and as well the Earth rotates on its axis. Both of these effects cause v and the angle Φ to change. However over a period of a day, and ignoring wave effects, we can assume that v is unchanging. Then we can determine a declination δ and the speed v by (i) measuring the maximum and minimum values of Δt over a day, which occur approximately 12 hours apart, and (ii) determining Δτ, which is the time difference when v = 0; this is easily measured by putting the detector arm into a circular loop, as shown in Fig. 13, so that absolute motion effects cancel, at least to 1st order in v/v_c. Now from Fig. 16 we see that the maximum travel time difference Δt_max occurs when Φ = θ = λ + δ − π/2 in (23), and the minimum Δt_min when Φ = φ = λ − δ + π/2, 12 hours later. Then the declination δ may be determined by numerically solving the transcendental equation which follows from these two times using (23):

cos(λ + δ − π/2) / cos(λ − δ + π/2) = (Δt_max − Δτ) / (Δt_min − Δτ).  (24)

Subsequently the speed v is obtained from

v = (Δt_max − Δt_min) v_c² / [L (cos(λ + δ − π/2) − cos(λ − δ + π/2))].  (25)

Figure 18 caption (fragment): maximum projected speed is 417 km/s, as given in [3,20,9]. The data shows considerable fluctuations. The dashed curve shows the non-fluctuating variation expected over one day as the Earth rotates, causing the projection onto the plane of the interferometer of the velocity of the average direction of the space flow to change. If the data were plotted against solar time the form would be shifted by many hours. Note that the min/max occur at approximately 5 hrs and 17 hrs, as also seen by De Witte and the new experiment herein. The corresponding variation of the azimuthal phase ψ from Fig. 5 is shown in Fig. 6. Bottom: data from the new experiment for one sidereal day on approximately August 23. We see similar variation with sidereal time, and also similar wave structure. This data has been averaged over a running 1 hr time interval to more closely match the time resolution of the Miller experiment. These fluctuations are believed to be real wave phenomena, predicted by the new theory of space [1]. The new experiment gives a speed of 418 km/s. We see remarkable agreement between all three experiments.

In Fig. 14 we show the travel time variations for September 19, 2006. The detector arm was formed into a loop at approximately 10:00 hrs local time, with the system still operating; time averaging causes the trace to interpolate during this procedure, as shown. This looping effect is equivalent to having v = 0, which defines the value of Δτ. In plotting the times in Fig. 14 the zero time is set so that then Δτ = 0. When the arms are straight, as before 10:00 hrs, we see that on average the travel times are some 55 ps different: this is because the RF wave travelling S to N is now faster than the RF wave travelling from N to S. The times are negative because the longer N to S time is subtracted from the shorter S to N travel time in the DSO. As well we see the daily variation as the Earth rotates, showing in particular the maximum effect at approximately 8:00 hrs local time (approximately 15 hrs sidereal time), as shown for the three experiments in Fig. 18, as well as wave and converter noise. The trace after 10:00 hrs should be flat, but the variations seen are coming from noise effects in the converters as well as some small signal arising from the loop not being formed into a perfect circle. Taking Δt_max = −63 ps and Δt_min = −40 ps from Fig. 14, (24) and (25) give δ = 72°S and v = 418 km/s. We can also analyse the De Witte data.
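Equations (24) and (25) are straightforward to implement; the following sketch solves (24) by bisection and then evaluates (25). The Adelaide latitude λ ≈ 35°S is an assumption (the text only notes that Mt. Wilson at 34°N "almost mirrors" Adelaide), as is treating both λ and δ as positive southern angles; the De Witte values are those quoted in the text.

```python
import math

C = 299792458.0  # speed of light, m/s

def ratio(delta_deg: float, lat_deg: float) -> float:
    """Left-hand side of (24): cos(lam+del-90)/cos(lam-del+90), degrees in."""
    a = math.radians(lat_deg + delta_deg - 90.0)
    b = math.radians(lat_deg - delta_deg + 90.0)
    return math.cos(a) / math.cos(b)

def solve_declination(dt_max: float, dt_min: float, dtau: float,
                      lat_deg: float) -> float:
    """Solve (24) for the declination (degrees) by bisection.

    The LHS decreases monotonically with delta on (lat, 90), so a simple
    bisection on the bracket suffices.
    """
    target = (dt_max - dtau) / (dt_min - dtau)
    lo, hi = lat_deg + 0.01, 90.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ratio(mid, lat_deg) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def speed(dt_max: float, dt_min: float, delta_deg: float,
          lat_deg: float, arm_m: float, v_c: float) -> float:
    """Evaluate (25): flow speed in m/s (magnitude of the timing swing used)."""
    num = abs(dt_max - dt_min) * v_c ** 2
    den = arm_m * (math.cos(math.radians(lat_deg + delta_deg - 90.0))
                   - math.cos(math.radians(lat_deg - delta_deg + 90.0)))
    return num / den

if __name__ == "__main__":
    # Flinders: dt_max = -63 ps, dt_min = -40 ps, dtau = 0, L = 10 m, v_c = 0.84c
    d = solve_declination(-63e-12, -40e-12, 0.0, 35.0)
    v = speed(-63e-12, -40e-12, d, 35.0, 10.0, 0.84 * C)
    print(f"Flinders: delta ~ {d:.0f} deg S, v ~ {v / 1e3:.0f} km/s")  # ~72, ~418
    # De Witte: dt_max - dt_min ~ 25 ns, L = 3.0 km, v_c = 200,000 km/s,
    # lat = 51 deg N, assumed delta = 60 deg (as in the text)
    v_dw = speed(25e-9, 0.0, 60.0, 51.0, 3000.0, 2.0e8)
    print(f"De Witte: v ~ {v_dw / 1e3:.0f} km/s")  # ~429
```

With these inputs the sketch reproduces the quoted δ ≈ 72°S and v ≈ 418 km/s for Flinders, and v ≈ 430 km/s for the De Witte data.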
We have L = 3.0 km, v_c = 200,000 km/s, from Fig. 8 Δt_max − Δt_min ≈ 25 ns, and the latitude of Brussels is λ = 51°N. There is not sufficient De Witte data to determine the declination of v on the days when the data was taken. Miller found that the declination varied from approximately 60°S to 80°S, depending on the month. The dates for the De Witte data in Fig. 8 are not known but, for example, a declination of δ = 60° gives v = 430 km/s.

Gravity and gravitational waves

We have seen that, as well as the effect of the Earth's rotation relative to the stars, previously shown by the data from Michelson-Morley, Illingworth, Joos, Jaseja et al., Torr and Kolen, Miller, and De Witte, together with the data from the new experiment herein, there is also, from the experimental data of Michelson-Morley, Miller, Torr and Kolen, De Witte and the new experiment, evidence of turbulence in this flow of space past the Earth. This all points to the flow velocity field v(r, t) having a time dependence over and above that caused simply because observations are taken from the rotating Earth. As we shall now show, this turbulence is what is conventionally called "gravitational waves", as already noted [1,19,20]. To do this we briefly review the new dynamical theory of 3-space, following [25], although it has been extensively discussed in the related literature. In the limit of zero vorticity for v(r, t) its dynamics is determined by

∇ · ( ∂v/∂t + (v · ∇)v ) + (α/8) [ (trD)² − tr(D²) ] = −4πGρ,  (26)

where ρ is the effective matter/energy density, and where

D_ij = (1/2) ( ∂v_i/∂x_j + ∂v_j/∂x_i ).  (27)

Most significantly, data from the bore hole g anomaly and from the systematics of galactic supermassive black holes show that α ≈ 1/137 is the fine structure constant known from quantum theory [21,22,23,24]. Now the Dirac equation uniquely couples to this dynamical 3-space, according to [25]

iħ ∂ψ/∂t = −iħ ( c α·∇ + v·∇ + (1/2) ∇·v ) ψ + βmc² ψ,  (28)

where α and β are the usual Dirac matrices.
We can compute the acceleration of a localised spinor wave packet according to

g ≡ (d²/dt²) ⟨ψ(t)| r |ψ(t)⟩.  (29)

With v_R = v_0 − v the velocity of the wave packet relative to the local space, where v_0 is the velocity relative to the embedding space, we obtain

g = ∂v/∂t + (v·∇)v + (∇×v)×v_R − ( v_R / (1 − v_R²/c²) ) (1/2) d/dt ( v_R²/c² ),  (30)

which gives the acceleration of quantum matter caused by the inhomogeneities and time-dependencies of v(r, t). It has a term which limits the speed of the wave packet relative to space to be < c. Hence we see that the phenomenon of gravity, including the Equivalence Principle, has been derived from a deeper theory. Apart from the vorticity and relativistic terms in (30), the quantum matter acceleration is the same as that of the structured 3-space [25,8]. We can now show how this leads both to the spacetime mathematical construct and to the result that the geodesics for matter worldlines in that spacetime are equivalent to trajectories from (30). First we note that (30) may be obtained by extremising the time-dilated elapsed time

τ[r_0] = ∫ dt ( 1 − v_R²/c² )^{1/2}  (31)

with respect to the particle trajectory r_0(t) [1]. This happens because of the Fermat least-time effect for waves: only along the minimal time trajectory do the quantum waves remain in phase under small variations of the path. This again emphasises that gravity is a quantum wave effect. We now introduce a spacetime mathematical construct according to the metric

ds² = dt² − ( dr − v(r, t) dt )² / c² = g_μν dx^μ dx^ν.  (32)

Then according to this metric the elapsed time in (31) is

τ = ∫ dt √( g_μν (dx^μ/dt)(dx^ν/dt) ),  (33)

and the minimisation of (33) leads to the geodesics of the spacetime, which are thus equivalent to the trajectories from (31), namely (30). Hence by coupling the Dirac spinor dynamics to the space dynamics we derive the geodesic formalism of General Relativity as a quantum effect, but without reference to the Hilbert-Einstein equations for the induced metric.
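Expanding the metric (32) makes the induced components explicit; the following is a routine algebraic sketch (not displayed in the source), with indices i, j running over the spatial coordinates:

```latex
ds^2 = dt^2 - \frac{\big(d\mathbf{r} - \mathbf{v}\,dt\big)^2}{c^2}
     = \Big(1 - \frac{v^2}{c^2}\Big)dt^2
       + \frac{2\,v_i}{c^2}\,dx^i\,dt
       - \frac{\delta_{ij}}{c^2}\,dx^i dx^j ,
\qquad\text{so}\qquad
g_{00} = 1 - \frac{v^2}{c^2},\quad
g_{0i} = \frac{v_i}{c^2},\quad
g_{ij} = -\frac{\delta_{ij}}{c^2}.
```

When v = 0 this reduces to the Minkowski form, and the off-diagonal g_{0i} components carry the flow velocity, which is why fluctuations in v(r, t) appear as ripples in the induced spacetime.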
Indeed in general the metric of this induced spacetime will not satisfy these equations, as the dynamical space involves the α-dependent dynamics, and α is missing from GR. Hence so far we have reviewed the new theory of gravity as it emerges within the new physics. In explaining gravity we discover that the Newtonian theory is actually flawed: this happened because the motion of planets in the solar system is too special to have permitted Newton to model all aspects of the phenomenon of gravity, including the fact that the fundamental dynamical variable is a velocity field and not an acceleration field. We now discuss the phenomenon of the so-called "gravitational waves". It may be shown that the metric in (32) satisfies the Hilbert-Einstein GR equations, in "empty" space, but only when α → 0:

G_μν ≡ R_μν − (1/2) R g_μν = 0,  (34)

where G_μν is the Einstein tensor, R_μν = R^α_μαν and R = g^μν R_μν, where g^μν is the matrix inverse of g_μν, and the curvature tensor is

R^ρ_μσν = Γ^ρ_μν,σ − Γ^ρ_μσ,ν + Γ^ρ_ασ Γ^α_μν − Γ^ρ_αν Γ^α_μσ,  (35)

where Γ^α_μσ is the affine connection

Γ^α_μσ = (1/2) g^αν ( ∂g_νμ/∂x^σ + ∂g_νσ/∂x^μ − ∂g_μσ/∂x^ν ).  (36)

Hence the GR formalism fails on two grounds: (i) it does not include the spatial self-interaction dynamics, which has coupling constant α, and (ii) it very effectively obscures the dynamics, for the GR formalism has spuriously introduced the speed of light when it is completely absent from (26), except on the RHS when the matter has speed near c relative to the space. Now when wave effects are supposedly extracted from (34), by perturbatively expanding about a background metric, the standard derivation supposedly leads to waves with speed c. This derivation must be manifestly incorrect, as the underlying equation (26), even in the limit α → 0, does not even contain c.
In fact an analysis of (26) shows that the perturbative wave effects are fluctuations of v(r, t), and travel at approximately the speed of the flow, which in the case of the data reported here is some 400 km/s for earth based detections, i.e. 0.1% of c. These waves also generate gravitational effects, but only because of the α-dependent dynamical effects: when α → 0 we still have wave effects in the velocity field, but they produce no gravitational acceleration effects upon quantum matter. Of course even in the case of α → 0 the velocity field wave effects are detectable by their effects upon EM radiation, as shown by various gas-mode Michelson interferometer and coaxial cable experiments. Amazingly there is evidence that Michelson-Morley actually detected such gravitational waves as well as the absolute motion effect in 1887, because day to day fluctuations of their data show effects similar to those reported by Miller, Torr and Kolen, De Witte, and the new experiment herein. Of course if the Michelson interferometer is operated in vacuum mode it is totally insensitive to absolute motion effects and to the accompanying wave effects, as is the case. This implies that experiments such as the long baseline terrestrial Michelson interferometers are seriously technically flawed as gravitational wave detectors. However, as well as the various successful experimental techniques discussed herein for detecting absolute motion and gravitational wave effects, a novel possibility is that these effects will also manifest in the gyroscope precessions observed by the Gravity Probe B satellite experiment [30,31]. Eqn. (26) determines the dynamical time evolution of the velocity field.
However that aspect is more apparent if we write that equation in the integro-differential form

∂v/∂t = −∇(v²/2) + G ∫ d³r′ [ ρ_DM(r′, t) + ρ(r′, t) ] (r − r′)/|r − r′|³,  (37)

in which ρ_DM is velocity dependent,

ρ_DM(r, t) ≡ (α/(32πG)) [ (trD)² − tr(D²) ],  (38)

and is the effective "dark matter" density. This shows several key aspects: (i) there is a local cause for the time dependence, from the ∇ term, and (ii) a non-local action-at-a-distance effect, from the ρ_DM and ρ terms. This is caused by space being essentially a quantum system, so this is better understood as a quantum non-local effect. However (37) raises the question of where the observed wave effects come from. Are they local effects or are they manifestations of distant phenomena? In the latter case we have a new astronomical window on the universe.

Conclusions

We now have eight experiments that independently and consistently demonstrated (i) the anisotropy of the speed of light, where the anisotropy is quite large, namely 300,000 ± 400 km/s, depending on the direction of measurement relative to the Milky Way, and (ii) that the direction, given by the Right Ascension and Declination, is now known, being established by the Miller, De Witte and Flinders experiments.
The reality of the cosmological meaning of the speed was confirmed by detecting the sidereal time shift over 6 months and more, (iii) that the relativistic Fitzgerald-Lorentz length contraction is a real effect, for otherwise the results from the gas-mode interferometers would not have agreed with those from the coaxial cable experiments, (iv) that Newtonian physics gives the wrong calibration for the Michelson interferometer, which of course is not surprising, (v) that the observed anisotropy means that these eight experiments have detected the existence of a 3-space, and (vi) that the motion of that 3-space past the Earth displays wave effects at the level of ±20 km/s, as confirmed by three experiments, and possibly present even in the Michelson-Morley data. The Miller experiment was one of the most significant experiments of the 20th century. It meant that a substructure to reality deeper than spacetime had been revealed, that spacetime was merely a mathematical construct and not an aspect of reality. It meant that the Einstein postulate regarding the invariance of the speed of light was incorrect - in disagreement with experiment, and had been so from the beginning. This meant that the Special Relativity effects required a different explanation, and indeed Lorentz had supplied that some 100 years ago: in this view it is the absolute motion of systems through the dynamical 3-space that causes SR effects, which is diametrically opposite to the Einstein formalism. This has required the generalisation of the Maxwell equations, as first proposed by Hertz in 1888 [26], and of the Schrödinger and Dirac equations [25,8]. This in turn has led to a derivation of the phenomenon of gravity, namely that it is caused by the refraction of quantum waves by the inhomogeneities and time dependence of the flowing patterns within space.
That same data has also revealed the in-flow component of space past the Earth towards the Sun [1], which is also revealed by the light bending effect observed for light passing close to the Sun's surface [25]. This theory of gravity has in turn led to an explanation of the so-called "dark matter" effect in spiral galaxies [22], to the systematics of black hole masses in spherical star systems [25], and to the explanation of the bore hole g anomaly [21,22,23]. These effects have permitted the development of the minimal dynamics of the 3-space, leading to the discovery that the parameter that determines the strength of the spatial self-interaction is none other than the fine structure constant, so hinting at a grand unification of space and the quantum theory, along the lines proposed in [1], as an information theoretic theory of reality. These developments demonstrate the enormous significance of the Miller experiment, and the extraordinary degree to which Miller went in testing and refining his interferometer. The author is proud to be extending the Miller discoveries by studying in detail the wave effects that are so apparent in his extensive data set. His work demonstrates the enormous importance of doing novel experiments and doing them well, despite the prevailing prejudices. It was a tragedy and an injustice that Miller was not recognised for his contributions to physics in his own lifetime; but not everyone is as careful and fastidious with detail as he was. He was ignored by the physics community simply because in his era it was believed, as it is now, that absolute motion was incompatible with special relativistic effects, and so it was accepted, without any evidence, that his experiments were wrong. His experiences showed yet again that few in physics actually accept that it is an evidence-based science, as Galileo long ago discovered, also to his great cost.
For more than 70 years this experiment has been ignored, until recently, but even now discussion of this and related experiments attracts hostile reaction from the physics community. The developments reported herein have enormous significance for fundamental physics: essentially the whole paradigm of 20th century physics collapses. In particular spacetime is now seen to be no more than a mathematical construct, and no such union of space and time was ever mandated by experiment. The putative successes of Special Relativity can be accommodated by the reality of a dynamical 3-space, with time a distinctly different phenomenon. The motion of quantum and even classical electromagnetic fields through that dynamical space explains the SR effects. Lorentz symmetry remains valid, but must be understood as applying only when the space and time coordinates are those arrived at by the Einstein measurement protocol, which amounts to not correcting those measurements for the effects of absolute motion upon rods and clocks. Nevertheless such coordinates may be used so long as we understand that they lead to a confusion of various related effects. To correct the Einstein measurement protocol readings one needs only to have each observer use an absolute motion meter, such as the new compact all-optical devices, as well as a rod and clock. The fundamental discovery is that for some 100 years physics has failed to realise that a dynamical 3-space exists: it is observable. This contradicts two previous assumptions about space: Newton asserted that it existed, was unchanging, but not observable, whereas Einstein asserted that 3-space did not exist, could not exist, and so clearly must be unobservable.
The minimal dynamics for this 3-space is now known, and it immediately explains such effects as the "dark matter" spiral galaxy rotation anomaly, novel black holes with non-inverse-square-law gravitational accelerations, which would appear to offer an explanation for the precocious formation of spiral galaxies, the bore hole anomaly and the systematics of supermassive black holes, and so on. Dramatically, various pieces of data show that the self-interaction constant for space is the fine structure constant. However unlike SR, GR turns out to be flawed, but only because it assumed the correctness of Newtonian gravity. The self-interaction effects for space make that theory invalid even in the non-relativistic regime: the famous universal inverse square law of Newtonian gravity is of limited validity. Uniquely linking the quantum theory of matter with the dynamical space shows that gravity is a quantum matter wave effect, so we can't understand gravity without the quantum theory. As well the dynamics of space is intrinsically non-local, which implies a connectivity of reality that far exceeds any previous notions.

Figure 2: Schematic diagrams of the Michelson Interferometer, with beamsplitter/mirror at A and mirrors at B and C on arms from A, with the arms of equal length L when at rest. D is a screen or detector. In (a) the interferometer is at rest in space. In (b) the interferometer is moving with speed v relative to space in the direction indicated. Interference fringes are observed at the detector D. If the interferometer is rotated in the plane through 90°, the roles of arms AC and AB are interchanged, and during the rotation shifts of the fringes are seen in the case of absolute motion, but only if the apparatus operates in a gas. By counting fringe changes the speed v may be determined.

Figure 3: Miller's interferometer with an effective arm length of L = 32 m achieved by multiple reflections.
Used by Miller on Mt. Wilson to perform the 1925-1926 observations of absolute motion. The steel arms weighed 1200 kilograms and floated in a tank of 275 kilograms of mercury. From Case Western Reserve University Archives.

…and Morley experiment, and some 10 km/s from Miller; the difference arising from the different latitudes of Cleveland and Mt. Wilson, and from Michelson and Morley taking data at limited times. So already Miller knew that his observations were consistent with those of Michelson and Morley, and so the important need for reproducibility was being confirmed.

Figure 4: Example of Michelson-Morley fringe shifts from an average of 6 rotations measured every 22.5°, in fractions of a wavelength Δλ/λ, vs arm azimuth θ (deg), from Cleveland, Ohio, July 11, 1887, 12:00 hrs local time or 7:00 hrs local sidereal time. This shows the quality of the fringe shift data that Michelson and Morley obtained. The curve is the best fit using the form in (20), which includes the Hicks cos(θ − β) component that is required when the mirrors are not orthogonal, and gives ψ = 140°, or 40° measured from South, compared to the Miller ψ for August at 7:00 hrs local sidereal time in Fig. 6, and a projected speed of vP = 400 km/s. The Hicks effect is much larger in this data than in the Miller data in Fig. 5.

Figure 5: Typical Miller rotation-induced fringe shifts from an average of 20 rotations, measured every 22.5°, in fractions of a wavelength Δλ/λ, vs arm azimuth θ (deg), measured clockwise from North, from Cleveland, Sept. 29, 1929, 16:24 UT; 11:29 hrs average local sidereal time. The curve is the best fit using the form in (20), which includes the Hicks cos(θ − β) component that is required when the mirrors are not orthogonal, and gives ψ = 158°, or 22° measured from South, and a projected speed of vP = 351 km/s. This process was repeated some 8,000 times over days throughout 1925/1926 giving, in part, the data in Fig. 6 and Fig. 18.
Figure 6: Miller azimuths ψ, measured from south and plotted against sidereal time in hours, showing both data and the best fit of theory giving vcosmic = 433 km/s in the direction (RA = 5.2 hr, Dec = −67°), and using n = 1.000226 appropriate for the altitude of Mt. Wilson.

This is different from the speed of 369 km/s in the direction (RA = 11.20 hr, Dec = −7.22°) extracted from the Cosmic Microwave Background (CMB) anisotropy, which describes a motion relative to the distant universe, but not relative to the local 3-space. The Miller velocity is explained by galactic gravitational in-flows [1].

Torr and Kolen found that, while the round-trip speed remained constant to within 0.0001% of c, as expected from Sect. 2, variations in the one-way travel time were observed. The maximum effect occurred, typically, at the times predicted using the Miller galactic velocity, although Torr and Kolen appear to have been unaware of the Miller experiment. As well, Torr and Kolen reported fluctuations in both the magnitude, from 1-3 ns, and the time of maximum variation in travel time. These effects are interpreted as arising from the turbulence in the flow of space past the Earth. One day of their data is shown in Fig. 7.

Figure 7: Data from one day of the Torr-Kolen EW coaxial cable anisotropy experiment. Smooth curves show variations in travel times when the declination is varied by ±10° about the direction (RA = 5.2 hr, Dec = −67°), for a cosmic speed of 433 km/s. Most importantly, the dominant feature is consistent with the predicted local sidereal time.

3.7 De Witte coaxial cable anisotropy experiment

During 1991 Roland De Witte performed a most extensive RF coaxial cable travel-time anisotropy experiment, accumulating data over 178 days. His data is in complete agreement with the Michelson-Morley 1887 and Miller 1925/26 interferometer experiments.
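The harmonic fits quoted in the captions of Figs. 4 and 5 can be sketched numerically. Since the explicit form (20) is not reproduced in this section, the sketch below assumes the standard decomposition: a second harmonic A cos(2(θ − ψ)) from absolute motion plus the Hicks first harmonic B cos(θ − β) from non-orthogonal mirrors; the synthetic data, noise level and variable names are illustrative assumptions, not Miller's actual numbers.

```python
import numpy as np

# Assumed fringe-shift model: second harmonic from absolute motion plus the
# Hicks first harmonic from non-orthogonal mirrors, plus a constant offset.
theta = np.deg2rad(np.arange(0.0, 360.0, 22.5))   # arm azimuths, 22.5 deg steps

def model(theta, A, psi, B, beta, c0):
    return A * np.cos(2 * (theta - psi)) + B * np.cos(theta - beta) + c0

# Synthetic "measured" shifts, in fractions of a wavelength, with noise.
rng = np.random.default_rng(1)
psi_true = np.deg2rad(158.0)                      # azimuth as quoted in Fig. 5
data = model(theta, 0.06, psi_true, 0.02, np.deg2rad(40.0), 0.0)
data += rng.normal(0.0, 0.002, theta.size)

# Expanding both harmonics in cos/sin components makes the fit linear least
# squares; psi then follows from a single arctangent.
M = np.column_stack([np.cos(2 * theta), np.sin(2 * theta),
                     np.cos(theta), np.sin(theta),
                     np.ones_like(theta)])
coef, *_ = np.linalg.lstsq(M, data, rcond=None)
A_fit = np.hypot(coef[0], coef[1])                # amplitude of 2nd harmonic
psi_fit = 0.5 * np.arctan2(coef[1], coef[0])      # phase 2*psi -> psi (mod 180 deg)
print("fitted amplitude:", A_fit, " psi (deg):", np.rad2deg(psi_fit) % 180.0)
```

This mirrors how an azimuth such as ψ = 158° in Fig. 5 is extracted from fringe shifts sampled every 22.5° of arm rotation.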
The Miller and De Witte experiments

Figure 8: Variations in twice the one-way travel time, in ns, for an RF signal to travel 1.5 km through a coaxial cable between Rue du Marais and Rue de la Paille, Brussels. An offset has been used such that the average is zero. The cable has a North-South orientation, and the data is the ± difference of the travel times for NS and SN propagation. The sidereal times for maximum effect of ∼5 hr and ∼17 hr (indicated by vertical lines) agree with the direction found by Miller. The plot shows data over 3 sidereal days and is plotted against sidereal time. The fluctuations are evidence of turbulence of gravitational waves.

Figure 9: Upper: Plot from the De Witte data of the negative of the drift of the cross-over time between minimum and maximum travel-time variation each day (at ∼10 hr ± 1 hr ST) versus local solar time for some 180 days. The straight line is the least-squares fit to the experimental data, giving an average slope of 3.92 minutes/day. The time difference between a sidereal day and a solar day is 3.93 minutes/day. This demonstrates that the effect is related to sidereal time and not local solar time. Lower: Analogous sidereal effect seen in the Flinders experiment. Due to on-going developments the data is not available for all days, but sufficient data is present to indicate a time shift of 3.97 minutes/day. This data also shows greater fluctuations than indicated by the De Witte data, presumably because De Witte used more extensive data averaging.
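The 3.93 minutes/day figure quoted in the Fig. 9 caption is fixed by calendar arithmetic alone: the Earth makes one extra rotation per year relative to the Sun, so a sidereal day is shorter than a solar day, and a fixed-direction (sidereal) effect drifts earlier in solar time by that difference. A quick check, using only standard constants:

```python
# Sidereal vs solar day: in one tropical year of N solar days the Earth
# completes N + 1 rotations relative to the stars, so
#   sidereal_day = solar_day * N / (N + 1).
solar_day_s = 86400.0
tropical_year_days = 365.2422
sidereal_day_s = solar_day_s * tropical_year_days / (tropical_year_days + 1.0)
drift_min_per_day = (solar_day_s - sidereal_day_s) / 60.0
print(f"sidereal day = {sidereal_day_s:.1f} s, drift = {drift_min_per_day:.2f} min/day")
```

The result, about 3.93 minutes/day, is what distinguishes a genuinely sidereal effect from one tied to local solar time, and is consistent with De Witte's fitted slope of 3.92 minutes/day.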
The fractal structure, i. e. that there is an intrinsic lack of scale to these speed fluctuations, is demonstrated by binning the absolute speeds and counting the number of speeds within each bin, as discussed in [8, 9]. The Miller data also shows evidence of turbulence of the same magnitude. So far the data from three experiments, namely Miller, Torr and Kolen, and De Witte, show turbulence in the flow of space past the Earth. This is what can be called gravitational waves. This can be understood by noting that fluctuations in the velocity field induce ripples in the mathematical construct known as spacetime, as in (32). Such ripples in spacetime are known as gravitational waves. Figure 10 : 10Shows the speed fluctuations, essentially "gravitational waves" observed by De Witte in 1991 from the measurement of variations in the RF coaxial-cable travel times. This data is obtained from that in Fig. 8 after removal of the dominant effect caused by the rotation of the Earth. Ideally the velocity fluctuations are three-dimensional, but the De Witte experiment had only one arm. Figure 13 : 13The Flinders University Gravitational Wave Detector showing the cables formed into a loop. This configuration enables the calibration of the detector. Figure 15 : 15RF travel time variations in picoseconds (ps) for RF waves to travel through, effectively, 10 meters of coaxial cable orientated in a NS direction. The data is plotted against local Adelaide time for the days August[18][19][20][21][22][23][24][25] 2006. The zero of the travel time variations is arbitrary. Long term temperature related drifts over these 7 days have been removed by fitting a low order polynomial to the original data and subtracting the best fit. The data shows fluctuations identified as earth rotation effect and gravitational waves. These fluctuations exceed those from timing errors in the detector. Figure 16 : 16Fig. 6clearly shows that effect also for theMiller data. 
None of the other anisotropy experiments took data for a sufficiently long time to demonstrate this effect, although their results are consistent with the Right Ascension and Declination found by the Miller, De Witte and Flinders experiments. From some 25 days of data in August 2006, the local Adelaide time for the largest travel-time difference is approximately 10 ± 2 hrs. This corresponds to a local sidereal time of 17.5 ± 2 hrs. According to the Miller convention we give the direction of the velocity vector of the Earth's motion through the space, which then has Right Ascension…

Figure 16: Profile of Earth, showing NS axis, at Adelaide local sidereal time of RA ≈ 5 hrs (on RHS) and at RA ≈ 17 hrs (on LHS). Adelaide has latitude λ = 38°S. Z is the local zenith, and the detector arm has horizontal local NS direction D. The flow of space past the Earth has average velocity v. The average direction, −v, of motion of the Earth through local 3-space has RA ≈ 5 hrs and Declination δ ≈ 70°S. The angle of inclination of the detector arm D to the direction −v is φ = π…

Figure 17: The superimposed plots show the sidereal time effect. The plot (blue) with the minimum at approximately 17 hrs local Adelaide time is from June 9, 2006, while the plot (red) with the minimum at approximately 8 hrs local time is from August 23, 2006. We see that the minimum has moved forward in time by approximately 9 hrs. The expected shift for this 65 day difference, assuming no wave effects, is 4.3 hrs, but the wave effects shift the RA by some ±2 hrs on each day, as also shown in Fig. 9. This sidereal time shift is a critical test for the confirmation of the detector. Miller also detected variations of that magnitude, as shown in Fig. 6. The August 23 data is also shown in Fig. 18, but there plotted against local sidereal time for comparison with the De Witte and Miller data.

Figure 18: Top: De Witte data, with sign reversed, from the first sidereal day in Fig. 8.
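The Fig. 16 geometry can be sketched with standard spherical astronomy (assumed here, since the paper's own projection formula is truncated above). The sketch computes the projection of the flow direction −v onto the horizontal NS detector arm as the Earth rotates, and recovers extrema near local sidereal times of roughly 5 hr and 17 hr, as seen in the data; the numerical values of latitude, declination and RA are taken from the Fig. 16 caption.

```python
import numpy as np

# Projection of a celestial direction (RA, Dec) onto the local north
# horizontal unit vector at latitude lat, using the standard
# equatorial-to-horizon transformation:
#   N . u = cos(lat) sin(dec) - sin(lat) cos(dec) cos(H),  H = LST - RA.
lat = np.deg2rad(-38.0)          # Adelaide latitude, 38 deg South
dec = np.deg2rad(-70.0)          # declination of -v, approx. 70 deg South
ra_hr = 5.0                      # right ascension of -v, approx. 5 hrs

lst = np.linspace(0.0, 24.0, 241)            # local sidereal time, hours
H = np.deg2rad((lst - ra_hr) * 15.0)         # hour angle, 15 deg per hour

proj = np.cos(lat) * np.sin(dec) - np.sin(lat) * np.cos(dec) * np.cos(H)

# The arm is a line, not a ray, so the size of the effect goes as |proj|.
lst_max = lst[np.argmax(np.abs(proj))]
lst_min = lst[np.argmin(np.abs(proj))]
print(f"largest arm projection near LST = {lst_max:.1f} h, smallest near {lst_min:.1f} h")
```

With these numbers the projection onto the NS arm is largest near LST ≈ 17 hr and smallest near LST ≈ 5 hr, matching the times at which the travel-time extrema are reported.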
This data gives a speed of approximately 430 km/s. The data appears to have been averaged over more than 1 hr, but still shows wave effects. Middle: Absolute projected speeds vP in the Miller experiment plotted against sidereal time in hours for a composite day collected over a number of days in September 1925. Speed data like this comes from fits as in Fig. 5, using the Special Relativity calibration in …

Overall this apparatus measures the difference in EM travel time from A to B compared to C to D. All other travel times cancel in principle, though in practice small differences in cable or fibre lengths need to be electronically detected by the looping procedure. Two 10 MHz RF signals come from the Rubidium atomic clock (Rb). The Electrical to Optical converters (EO) use the RF signals to modulate 1.3 µm infrared signals that propagate through the single-mode optical fibres. The Optical to Electrical converters (OE) demodulate that signal and give the two RF signals that finally reach the Digital Storage Oscilloscope (DSO), which measures their phase difference. Pairs of E/O and O/E are grouped into one box.

Rubidium Atomic Clock: Stanford Research System FS725 Rubidium Frequency Standard, with multiple 10 MHz RF outputs. Different outputs were used for the two circuits of the detector. The data was further running-averaged over a 60 minute interval. Connecting the Rb clock directly to the DSO via its two channels showed a long-term accuracy of ±1 ps rms with this setup.

Digital Storage Oscilloscope: LeCroy WaveRunner WR6051A 500 MHz 2-channel Digital Storage Oscilloscope (DSO). Jitter noise floor 2 ps rms; clock accuracy <5 ppm. DSO averaging set at 5000, generating time readings at 440/minute, further averaged in the DSO over 60 seconds, giving a stored data stream at one data point/minute.

Fibre Optic Transceivers: Fiber-Span AC231-EB-1-3 RF/Fiber Optic Transceiver (O/E and E/O).
Figure 12: The Flinders University Gravitational Wave Detector located in the author's office, showing the Rb atomic clock and Digital Storage Oscilloscope (DSO) at the Northern end of the NS 5 m cable run. In the foreground is one Fibre Optic Transceiver. The coaxial cables are black, while the optical fibres are tied together in a white plastic sleeve, except just prior to connecting with the transceiver. The second photograph shows the other transceiver at the Southern end.

Most of the data reported herein was taken when the detector was relocated to an isolated underground laboratory with the transceivers resting on a concrete floor for temperature stabilisation.

Footnotes:
1. In Michelson's era the idea was that v was the speed of light relative to an ether, which itself filled space. This dualism has proven to be wrong.
2. Not all clocks will behave in this same "ideal" manner.
3. See [25] for a detailed explanation of the embedding space concept.
4. The vorticity term explains the Lense-Thirring effect [30].
5. Why the Schwarzschild metric, nevertheless, works is explained in [25].
6. Elsewhere it has been shown that this theory of gravity explains the bore hole anomaly, supermassive black hole systematics, the "dark matter" spiral galaxy rotation anomaly effect, as well as the putative successes of GR, including light bending and gravitational lensing. See [1] for a possible generalisation to include vorticity effects and matter-related relativistic effects.
7. Intriguingly this direction is, on average, perpendicular to the plane of the ecliptic. This may be a dynamical consequence of the new theory of space.

References
[1] Cahill R. T. Process Physics: from information theory to quantum space and matter. Nova Science, N.Y., 2005.
[2] Cahill R. T. and Kitto K. Michelson-Morley experiments revisited. Apeiron, 2003, v. 10(2), 104-117.
[3] Michelson A. A. and Morley E. W. Philos. Mag., S. 5, 1887, v. 24, No. 151, 449-463.
[4] Miller D. C. Rev. Mod. Phys., 1933, v. 5, 203-242.
[5] Cahill R. T. Process Physics and Whitehead: the new science of space and time. Whitehead 2006 Conference, Salzburg, to be published in proceedings, 2006.
[6] Cahill R. T. Process Physics: self-referential information and experiential reality. To be published.
[7] Müller H. et al. Modern Michelson-Morley experiment using cryogenic optical resonators. Phys. Rev. Lett., 2003, v. 91(2), 020401-1.
[8] Cahill R. T. Dynamical fractal 3-space and the generalised Schrödinger equation: Equivalence Principle and vorticity effects. Progress in Physics, 2006, v. 1, 27-34.
[9] Cahill R. T. The Roland De Witte 1991 experiment. Progress in Physics, 2006, v. 3, 60-65.
[10] Torr D. G. and Kolen P. In: Precision Measurements and Fundamental Constants, ed. by Taylor B. N. and Phillips W. D., Nat. Bur. Stand. (U.S.), Spec. Pub., 1984, v. 617, 675-679.
[11] Illingworth K. K. Phys. Rev., 1927, v. 3, 692-696.
[12] Joos G. Ann. der Physik, 1930, Bd. 7, 385.
[13] Jaseja T. S. et al. Phys. Rev., 1964, v. A133, 1221.
[14] Cahill R. T. The Michelson and Morley 1887 experiment and the discovery of absolute motion. Progress in Physics, 2005, v. 3, 25-29.
[15] Cahill R. T. The Michelson and Morley 1887 experiment and the discovery of 3-space and absolute motion. Australian Physics, Jan/Feb 2006, v. 46, 196-202.
[16] Cahill R. T. The detection of absolute motion: from 1887 to 2005. NPA Proceedings, 2005, 12-16.
[17] Cahill R. T. The speed of light and the Einstein legacy: 1905-2005. Infinite Energy, 2005, v. 10(60), 28-27.
[18] Cahill R. T. The Einstein postulates 1905-2005: a critical review of the evidence. In: Einstein and Poincaré: the Physical Vacuum, Dvoeglazov V. V. (ed.), Apeiron Publ., 2006.
[19] Cahill R. T. Quantum foam, gravity and gravitational waves. arXiv: physics/0312082.
[20] Cahill R. T. Absolute motion and gravitational effects. Apeiron, 2004, v. 11(1), 53-111.
[21] Cahill R. T. Gravity, "dark matter" and the fine structure constant. Apeiron, 2005, v. 12(2), 144-177.
[22] Cahill R. T. "Dark matter" as a quantum foam in-flow effect. In: Trends in Dark Matter Research, ed. J. Val Blain, Nova Science, N.Y., 2005, 96-140.
[23] Cahill R. T. 3-space in-flow theory of gravity: boreholes, blackholes and the fine structure constant. Progress in Physics, 2006, v. 2, 9-16.
[24] Cahill R. T. Black holes in elliptical and spiral galaxies and in globular clusters. Progress in Physics, 2005, v. 3, 51-56.
[25] Cahill R. T. Black holes and quantum theory: the fine structure constant connection. Progress in Physics, 2006, v. 4, 44-50.
[26] Hertz H. On the fundamental equations of electro-magnetics for bodies in motion. Wiedemann's Ann., 1890, v. 41, 369; Electric Waves, collection of scientific papers. Dover, N.Y., 1962.
[27] Fitzgerald G. F. Science, 1889, v. 13, 420.
[28] Lorentz H. A. Electric phenomena in a system moving with any velocity less than that of light. In: The Principle of Relativity, Dover, N.Y., 1952.
[29] Hicks W. M. On the Michelson-Morley experiment relating to the drift of the ether. Phil. Mag., 1902, v. 3, 9-42.
[30] Cahill R. T. Novel Gravity Probe B frame-dragging effect. Progress in Physics, 2005, v. 3, 30-33.
[31] Cahill R. T. Novel Gravity Probe B gravitational wave detection. arXiv: physics/0408097.

http://www.scieng.flinders.edu.au/cpes/people/cahill r/processphysics.html
http://www.mountainman.com.au/process physics.
A generalized Kramers-Kronig transform for Casimir effect computations

INFN Sezione di Napoli, Napoli, Italy
(Dated: May 12, 2010)
arXiv:1002.2603; doi:10.1103/PhysRevA.81.062501
PACS numbers: 05.30.-d, 77.22.Ch, 12.20.Ds
Keywords: Casimir; dispersion relations

Recent advances in experimental techniques now permit measurement of the Casimir force with unprecedented precision. In order to achieve a comparable precision in the theoretical prediction of the force, it is necessary to accurately determine the electric permittivity of the materials constituting the plates along the imaginary frequency axis. The latter quantity is not directly accessible to experiments, but it can be determined via dispersion relations from experimental optical data. In the experimentally important case of conductors, however, a serious drawback of the standard dispersion relations commonly used for this purpose is their strong dependence on the chosen low-frequency extrapolation of the experimental optical data, which introduces a significant and not easily controllable uncertainty in the result. In this paper we show that a simple modification of the standard dispersion relations, involving suitable analytic window functions, resolves this difficulty, making it possible to reliably determine the electric permittivity at imaginary frequencies solely using experimental optical data in the frequency interval where they are available, without any need of uncontrolled data extrapolations.

I. INTRODUCTION

One of the most intriguing predictions of Quantum Electrodynamics is the existence of irreducible vacuum fluctuations of the electromagnetic (e.m.) field. It was Casimir's fundamental discovery [1] to realize that this purely quantum phenomenon was not confined to the atomic scale, as in the Lamb shift, but would rather manifest itself also at the macroscopic scale, in the form of a force of attraction between two discharged plates.
For the idealized case of two perfectly reflecting plane-parallel plates at zero temperature, placed at a distance a in vacuum, Casimir obtained the following remarkably simple estimate of the force per unit area:

F_C = π²ħc/(240 a⁴).   (1)

An important step forward was made a few years later by Lifshitz and co-workers [2], who obtained a formula for the force between two homogeneous dielectric plane-parallel slabs, at finite temperature. In this theory, of macroscopic character, the material properties of the slabs were fully characterized in terms of the respective frequency-dependent electric permittivities ε(ω), accounting for the dispersive and dissipative properties of real materials. In this way, it was possible for the first time to investigate the influence of material properties on the magnitude of the Casimir force. Over ten years ago, a series of brilliant experiments [3,4] exploiting modern experimental techniques provided the definitive demonstration of the Casimir effect. These now historical experiments spurred enormous interest in the Casimir effect, and were soon followed by many other experiments. The subsequent experiments were aimed at diverse objectives. Some of them explored new geometries: while the works [3,4] used a sphere-plate setup, the original planar geometry investigated by Casimir was adopted in the experiment [5], and a setup with crossed cylinders was considered in [6]. The important issue of the non-trivial geometry dependence of the Casimir effect is also being pursued experimentally, using elaborate micro-patterned surfaces [7]. Other experiments aimed at demonstrating new possible uses of the Casimir force, like for example the actuation of micromachines [8], or at demonstrating the possibility of a large modulation of the Casimir force [9,10], which could also result in interesting technological applications.
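Plugging numbers into Eq. (1) shows the scale of the effect. The sketch below simply evaluates the formula with standard CODATA values of ħ and c:

```python
import math

# Numerical evaluation of Casimir's ideal-mirror result, Eq. (1):
#   F_C / A = pi^2 * hbar * c / (240 * a^4)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(a):
    """Attractive pressure (Pa) between ideal mirrors at separation a (m)."""
    return math.pi**2 * hbar * c / (240.0 * a**4)

for a_nm in (100, 500, 1000):
    print(f"a = {a_nm:5d} nm : P = {casimir_pressure(a_nm * 1e-9):.3g} Pa")
```

At a = 100 nm the ideal-mirror pressure is about 13 Pa, and it falls off steeply, as 1/a⁴, with separation.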
There are also experiments using superconducting Casimir cavities, which aim at measuring the change of the Casimir energy across the superconducting phase transition [11]. The experiments performed in the last ten years are just too numerous to mention them all here. For an updated account we refer the reader to the very recent review paper [12]. Apart from exploring new manifestations of the Casimir effect, a large experimental effort is presently being made also to increase the precision of Casimir force measurements in simple geometries. Already in the early experiment [4] a precision of up to one percent was obtained. More recently, a series of experiments with micro-torsional oscillators [13] reached an amazing precision of 0.2 percent. The reader may wonder what the interest is in achieving such a high precision in this kind of experiment. There are several reasons why this is important. On one hand, in the theory of dispersion forces puzzling conceptual problems have recently emerged that are connected with the contribution of free charges to the thermal Casimir force, whose resolution crucially depends on the precision of the theory-experiment comparison [12]. On the other hand, the ability to accurately determine the Casimir force is also important for the purpose of obtaining stronger constraints on hypothetical long-range forces predicted by certain theoretical scenarios going beyond the Standard Model of particle physics [12]. The remarkable precision achieved in the most recent experiments poses a challenging demand on the theorist: is it possible to predict the magnitude of the Casimir force with a comparable level of precision, say of one percent? Assessing the theoretical error affecting present estimates of the Casimir force is a difficult problem indeed, because many different factors must be taken into account [12].
Consider the typical experimental setting of most of the current experiments, where the Casimir force is measured between two bodies covered with gold, placed in vacuum at a distance of a (few) hundred nanometers. In this separation range, the main factor to consider is the finite penetration depth of electromagnetic fields into the gold layer [25], resulting from the finite conductivity of gold. The tool to analyze the influence of such material properties as the conductivity on the Casimir effect is provided by Lifshitz theory [2]. This theory shows that for a separation of 100 nm, the finite conductivity of gold determines a reduction in the magnitude of the Casimir force of about fifty percent in comparison with the perfect-metal result [14]. Much smaller corrections, that must nevertheless be considered if the force is to be estimated with percent precision, arise from the finite temperature of the plates and from their surface roughness. Moreover, geometric effects resulting from the actual shape of the plates should be considered. We should also mention that the magnitude of residual electrostatic forces between the plates, resulting from contact potentials and patch effects, must be carefully accounted for. For a discussion of all these issues, which received much attention in the recent literature on the Casimir effect, we again refer the reader to Ref. [12]. See also the recent work [15]. In this paper, we focus our attention on the influence of the optical properties of the plates which, as explained above, is by far the most relevant factor to consider. As we pointed out earlier, in Lifshitz theory the optical properties of the plates enter via the frequency-dependent electric permittivity ε(ω) of the material constituting the plates. In order to obtain an accurate prediction of the force, it is therefore of the utmost importance to use accurate data for the electric permittivity.
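The roughly fifty percent reduction quoted above can be illustrated with the zero-temperature Lifshitz formula. The sketch below is not the analysis of any particular experiment: it assumes two thick identical plates, the simple plasma model ε(iξ) = 1 + ωp²/ξ² with an assumed gold plasma frequency of 9 eV, and uses the dimensionless variables u = 2aq, v = 2aξ/c (q the wavevector magnitude after rotation to imaginary frequencies); the reflection amplitudes are the standard Fresnel expressions at imaginary frequencies.

```python
import numpy as np
from scipy.integrate import dblquad

# Zero-temperature Lifshitz pressure, |P| = (hbar*c / (32 pi^2 a^4)) * I,
#   I = Int_0^inf dv Int_v^inf du u^2 sum_p r_p^2 e^-u / (1 - r_p^2 e^-u),
# in dimensionless variables u = 2aq, v = 2a*xi/c.  Plasma model assumed:
# eps(i xi) = 1 + (wp/xi)^2, so the scaled wavevector inside the metal is
# u1 = sqrt(u^2 + w^2) with w = 2*a*wp/c.
hbar = 1.054571817e-34
c = 2.99792458e8
eV = 1.602176634e-19
wp = 9.0 * eV / hbar          # assumed gold plasma frequency, rad/s
a = 100e-9                    # plate separation, m
w = 2 * a * wp / c            # dimensionless plasma parameter

def integrand(v, u):
    v = max(v, 1e-9)                         # guard the v -> 0 limit
    u1 = np.sqrt(u * u + w * w)
    eps = 1.0 + (w / v) ** 2
    r_tm = (eps * u - u1) / (eps * u + u1)   # TM (p) reflection amplitude
    r_te = (u - u1) / (u + u1)               # TE (s) reflection amplitude
    e = np.exp(-u)
    f = sum(r * r * e / (1.0 - r * r * e) for r in (r_tm, r_te))
    return u * u * f

# Integrate over the region 0 < v < u (order swapped; e^-u kills u > 60).
num, _ = dblquad(integrand, 0.0, 60.0, lambda u: 0.0, lambda u: u)
ideal = 2 * np.pi ** 4 / 15    # same integral with r_tm = r_te = 1 (Eq. (1))
eta = num / ideal              # force relative to the perfect-mirror result
print(f"plasma-model reduction factor at a = 100 nm: eta = {eta:.2f}")
```

Dividing by the same integral evaluated with unit reflection amplitudes, which reproduces Eq. (1), gives a reduction factor of roughly one half at a = 100 nm, in line with the figure quoted above; the exact number depends on the optical data used, which is precisely the point of this paper.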
The common practice adopted in all recent Casimir experiments with gold surfaces is to use tabulated data for gold (most of the times those quoted in Refs. [16]), suitably extrapolated at low frequencies, where optical data are not available, by simple analytic models (like the Drude model or the so-called generalized plasma model). However, already ten years ago Lamoreaux observed [17] that using tabulated data to obtain an accurate prediction of the Casimir force may not be a reliable practice, since optical properties of gold films may vary significantly from sample to sample, depending on the conditions of deposition. The same author stressed the importance of measuring the optical data of the films actually used in the force measurements, in the frequency range that is relevant for the Casimir force. The importance of this point was further stressed in [18] and received clear experimental support in a recent paper [19], where the optical properties of several gold films of different thicknesses, and prepared by different procedures, were measured ellipsometrically in a wide range of wavelengths, from 0.14 to 33 microns, and it was found that the frequency dependent electric permittivity changes significantly from sample to sample. By using the zero-temperature Lifshitz formula, the authors estimated that the observed sample dependence of the electric permittivity implies a variation in the theoretical value of the Casimir force, from one sample to another, easily as large as ten percent, for separations around 100 nm. It was concluded that in order to achieve a theoretical accuracy better than ten percent in the prediction of the Casimir force, it is necessary to determine the optical properties of the films actually used in the experiment of interest. 
The aim of this paper is to improve the mathematical procedure that is actually needed to obtain reliable estimates of the Casimir force, starting from experimental optical data on the material of the plates, like those presented in Ref. [19]. The necessity of such an improvement stems from the very simple and unavoidable fact that experimental optical data are never available in the entire frequency domain, but are always restricted to a finite frequency interval ω_min < ω < ω_max. To see why this constitutes a problem, we recall that the Lifshitz formula, routinely used to interpret current experiments, expresses the Casimir force between two parallel plates as an integral over imaginary frequencies iξ of a quantity involving the dielectric permittivities of the plates ǫ(iξ). For finite temperature, the continuous frequency integration is replaced by a sum over discrete so-called Matsubara frequencies ω_n = iξ_n, where ξ_n = 2πn k_B T/h, with n a non-negative integer and T the temperature of the plates. In any case, whatever the temperature, one needs to evaluate the permittivity of the plates at certain imaginary frequencies. We note that, in principle, recourse to imaginary frequencies is not mandatory, because it is possible to rewrite the Lifshitz formula in a mathematically equivalent form involving an integral over the real frequency axis. In this case, however, the integrand becomes a rapidly oscillating function of the frequency, which hampers any possibility of numerical evaluation. In practice, the real-frequency form of the Lifshitz formula is never used, and only its imaginary-frequency version is considered. We remark that the occurrence of imaginary frequencies in the expression of the Casimir force is a general feature of all recent formalisms extending Lifshitz theory to non-planar geometries [20][21][22]. The problem is that the electric permittivity ǫ(iξ) at imaginary frequencies cannot be measured directly by any experiment.
The only way to determine it is by means of dispersion relations, which allow one to express ǫ(iξ) in terms of the observable real-frequency electric permittivity ǫ(ω). In the standard version of dispersion relations [2], adopted so far in all works on the Casimir effect, ǫ(iξ) − 1 is expressed in terms of an integral of a quantity involving the imaginary part ǫ″(ω) of the electric permittivity:

ǫ(iξ) − 1 = (2/π) ∫₀^∞ dω ω ǫ″(ω)/(ω² + ξ²) .   (2)

The above formula shows that, in principle, a determination of ǫ(iξ) requires knowledge of ǫ″(ω) at all frequencies while, as we said earlier, optical data are available only in some interval ω_min < ω < ω_max. In practice, the problem is not so serious on the high-frequency side, because the fall-off properties of ǫ″(ω) at high frequencies, together with the ω² factor in the denominator of the integrand, ensure that the error made by truncating the integral at a suitably large frequency ω_max is small, provided that ω_max is large enough. Typically, an ω_max larger than, say, 15c/(2a) is good enough for practical purposes. Things are not so easy, though, on the low-frequency side. In the case of insulators, optical data are typically available down to frequencies ω_min much smaller than the frequencies of all resonances of the medium. Because of this, ǫ″(ω) is almost zero for ω < ω_min, and therefore the error made by truncating the integral at ω_min is again negligible. Problems arise, however, in the case of ohmic conductors, because then ǫ″(ω) has a 1/ω singularity at ω = 0. As a result, ǫ″(ω) becomes extremely large at low frequencies, in such a way that the integral in Eq. (2) receives a very large contribution from low frequencies. For typical values of ω_min that can be reached in practice (for example for gold, the tabulated data in [16] begin at ω_min = 125 meV/h, while the data of [19] start at 38 meV/h), truncation of the integral at ω_min results in a large error.
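To make the size of this truncation error concrete, the following Python sketch evaluates the right-hand side of Eq. (2) restricted to a data window ω_min < ω < ω_max for a simple Drude permittivity, for which ǫ(iξ) is known in closed form. The Drude parameters (the bulk gold values quoted later in the text) and the window limits are illustrative assumptions made only for this sketch.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative Drude parameters for gold (bulk values quoted later in the text, in eV)
wp, gam = 9.0, 0.035
# Assumed data window (the values used in the paper's simulation, in eV)
w_min, w_max = 0.038, 30.0

def eps_im(w):
    """Drude eps''(w) = wp^2 gam / (w (w^2 + gam^2)), divergent like 1/w for w -> 0."""
    return wp**2 * gam / (w * (w**2 + gam**2))

def eps_exact(xi):
    """Closed-form Drude permittivity at the imaginary frequency i*xi."""
    return 1.0 + wp**2 / (xi * (xi + gam))

def eps_truncated(xi):
    """Right-hand side of Eq. (2), truncated to the data window [w_min, w_max]."""
    val, _ = quad(lambda w: w * eps_im(w) / (w**2 + xi**2), w_min, w_max)
    return 1.0 + (2.0 / np.pi) * val

xi1 = 2.0 * np.pi * 0.025852        # first Matsubara frequency at T = 300 K, in eV
rel_err = 1.0 - (eps_truncated(xi1) - 1.0) / (eps_exact(xi1) - 1.0)
print(f"relative truncation error at xi_1: {100.0 * rel_err:.0f} %")
```

Even with a window as wide as this one, the neglected interval 0 < ω < ω_min accounts for a large fraction of the integral, in line with the discussion above.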
The traditional remedy to this problem is to make some analytical extrapolation of the data, typically based on Drude-model fits of the low-frequency region of the data, from ω_min to zero, and then use the extrapolation to estimate the contribution of the integral in the interval 0 < ω < ω_min where data are not directly available. It is important to observe that this contribution is usually very large. For example, even in the case of Ref. [19], the relative contribution of the extrapolation is about fifty percent of the total value of the integral, in the entire range of imaginary frequencies that is needed for estimating the Casimir force. Clearly, this procedure is not very satisfying. The use of analytical extrapolations of the data introduces an uncertainty in the obtained values of ǫ(iξ) that is not easy to quantify. The result may in fact depend a lot on the form of the extrapolation, and there is no guarantee that the chosen expression is good enough. Consider for example Ref. [19], which constitutes the most accurate existing work on this problem. It was found there that the simple Drude model does not fit the data of all samples so well, making it necessary to improve it by the inclusion of an additional Lorentz oscillator. Moreover, it was found that for each sample the Drude parameters extracted from the data depended on the fitting procedure used, and were inconsistent with each other within the estimated errors, which is again an indication of the probable inadequacy of the analytical expression chosen for the interpolation. This state of things led us to investigate whether it is possible to determine ǫ(iξ) accurately solely on the basis of available optical data, without making recourse to data extrapolations. We shall see below that this is indeed possible, provided that Eq.
(2) is suitably modified, in a way that involves multiplying the integrand by an appropriate analytical window function f(ω), which suppresses the contribution of frequencies not belonging to the interval ω_min < ω < ω_max. As a result of this modification, the error made by truncating the integral to the frequency range ω_min < ω < ω_max can be made negligible at both ends of the integration domain, rendering unnecessary any extrapolation of the optical data outside the interval where they are available. The procedure outlined in this paper should allow one to better evaluate the theoretical uncertainty of Casimir force estimates resulting from experimental errors in the optical data. The plan of the paper is as follows: in Sec. II we derive a generalized dispersion relation for ǫ(iξ), involving analytic window functions f(z), and we provide a simple choice for the window functions. In Sec. III we present the results of a numerical simulation of our window functions for the experimentally relevant case of gold, and in Sec. IV we estimate numerically the error on the Casimir pressure resulting from the use of our window functions. Sec. V contains our conclusions and a discussion of the results.

II. GENERALIZED DISPERSION RELATIONS WITH WINDOW-FUNCTIONS

As is well known [2], the analyticity properties satisfied by the electric permittivity ǫ(ω) of any causal medium (and more generally by any causal response function, the magnetic permeability µ(ω) being another example) imply certain integral relations between the real part ǫ′(ω) and the imaginary part ǫ″(ω) of ǫ(ω), known as Kramers-Kronig or dispersion relations. The dispersion relation of interest to us is the one that permits one to express the value ǫ(iξ) of the response function at some imaginary frequency iξ in terms of an integral along the positive frequency axis involving ǫ″(ω).
It is convenient to briefly review here the simple derivation of this important result, which is an easy exercise in contour integration. For our purposes, it is more convenient to start from an arbitrary complex function u(z) with the following properties: u(z) is analytic in the upper complex plane C⁺ = {z : Im(z) > 0}, falls off to zero for large |z| like some power of |z|, and admits at most a simple pole at ω = 0. Consider now the closed integration contour Γ obtained by closing the positively oriented real axis in the upper complex plane, and let z₀ be any complex number in C⁺. It is then a simple matter to verify the identity:

∮_Γ dz z u(z)/(z² − z₀²) = iπ u(z₀) .   (3)

The assumed fall-off property of u(z) ensures that the half-circle of infinite radius forming Γ contributes nothing to the integral, and then from Eq. (3) we find:

u(z₀) = (1/iπ) ∫_{−∞}^{∞} dω ω u(ω)/(ω² − z₀²) .   (4)

Consider now a purely imaginary complex number z₀ = iξ, and assume in addition that along the real axis u(ω) satisfies the symmetry property u(−ω) = u*(ω). From Eq. (4) we then find:

u(iξ) = (2/π) ∫₀^∞ dω ω u″(ω)/(ω² + ξ²) ,   (5)

which is the desired result. The standard dispersion relation Eq. (2) used to compute the electric permittivity for imaginary frequencies is a special case of the above relation, corresponding to choosing u(z) = ǫ(z) − 1. We note that Eq. (2) is valid both for insulators, which have a finite permittivity at zero frequency, and for ohmic conductors, whose permittivity has a 1/ω singularity at the origin. As we explained in the introduction, Eq. (2), even though perfectly correct from a mathematical standpoint, has serious drawbacks when it is used to numerically estimate ǫ(iξ) for ohmic conductors, starting from optical data available only in some interval ω_min < ω < ω_max, because the integral on the r.h.s. of Eq. (2) receives a large contribution from frequencies near zero, where data are not available.
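As a sanity check of Eq. (5), one can verify it numerically for a single Lorentz oscillator, u(z) = g/(ω₀² − z² − iγz), whose poles both lie in the lower half-plane and which satisfies u(−ω) = u*(ω); the parameter values in the sketch below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary test parameters for a single Lorentz oscillator
g, w0, gam = 2.0, 1.5, 0.3

def u_im(w):
    """Imaginary part u''(w) of u(w) = g / (w0^2 - w^2 - i*gam*w) on the real axis."""
    return g * gam * w / ((w0**2 - w**2)**2 + (gam * w)**2)

def u_exact(xi):
    """u evaluated directly at z = i*xi."""
    return g / (w0**2 + xi**2 + gam * xi)

def u_dispersion(xi):
    """Right-hand side of Eq. (5)."""
    val, _ = quad(lambda w: w * u_im(w) / (w**2 + xi**2), 0.0, np.inf)
    return (2.0 / np.pi) * val

for xi in (0.1, 1.0, 5.0):
    assert abs(u_dispersion(xi) - u_exact(xi)) < 1e-6
print("Eq. (5) verified for a Lorentz oscillator")
```

For such a well-behaved u″(ω) the two sides of Eq. (5) agree to the accuracy of the quadrature, which is precisely the situation that fails for ohmic conductors with truncated data.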
This difficulty can, however, be overcome in a very simple way, as we now explain. Consider a window function f(z) enjoying the following properties: f(z) is analytic in C⁺, it has no poles in C⁺ except possibly a simple pole at infinity, and it satisfies the symmetry property

f(−z*) = f*(z) .   (6)

Consider now Eq. (5), for u(z) = f(z)(ǫ(z) − 1). Since for any medium ǫ(z) − 1 falls off like z⁻² at infinity [23], the quantity u(z) falls off at least like z⁻¹ at infinity, and it satisfies all the properties required for Eq. (5) to hold. For any ξ such that f(iξ) ≠ 0, we then obtain the following generalized dispersion relation:

ǫ(iξ) − 1 = (2/(π f(iξ))) ∫₀^∞ dω [ω/(ω² + ξ²)] Im[f(ω)(ǫ(ω) − 1)] .   (7)

We note that the above relation constitutes an exact result, generalizing the standard dispersion relation Eq. (2), to which it reduces with the choice f(z) = 1. Another form of the dispersion relation, frequently used in the case of conductors or superconductors [11,24], is obtained by taking f(z) = iz in Eq. (7). Recalling the relation [23]

ǫ(ω) = 1 + (4πi/ω) σ(ω) ,   (8)

it reads:

ǫ(iξ) − 1 = (8/ξ) ∫₀^∞ dω [ω/(ω² + ξ²)] Im[σ(ω)] .   (9)

The above form is especially convenient in the case of superconductors, because it avoids the δ(ω) singularity characterizing the real part of the conductivity of these materials [24]. We observe now, and this is the key point, that there is no reason to restrict the choice of the function f(z) to these two possibilities. Indeed, we can take advantage of the freedom in the choice of f(z) to suppress the unwanted contribution of low frequencies (as well as of high frequencies), where experimental data on ǫ(ω) are not available. In order to do that, it is sufficient to choose a window function that goes to zero fast enough for ω → 0, as well as for ω → ∞.
A convenient family of window functions which do the job is the following:

f(z) = A z^{2p+1} [1/(z − w)^{2q+1} + 1/(z + w*)^{2q+1}] ,   (10)

where w is an arbitrary complex number such that Im(w) < 0, and p and q are integers such that p < q. The constant A is an irrelevant arbitrary normalization constant, which drops out of the generalized dispersion formula Eq. (7). As we see, in the limit z → 0 these functions vanish like z^{2p+1}, and therefore by taking sufficiently large values of p we can obtain suppression of low frequencies to any desired level. On the other hand, for z → ∞, f(z) vanishes like z^{2(p−q)}, and therefore by taking sufficiently large values of q we can obtain suppression of high frequencies. Moreover, by suitably choosing the free parameter w, we can also adjust the range of frequencies that effectively contribute to the integral on the r.h.s. of Eq. (7). In Figs. 1 and 2 we plot the real and imaginary parts (in arbitrary units) of our window functions f(ω), versus the frequency ω (expressed in eV). The two curves displayed correspond to the choices p = 1, q = 2 (dashed line) and p = 1, q = 3 (solid line). In both cases, the parameter w has the value w = (1 − 2i) eV/h. We observe that along the real frequency axis our window functions have non-vanishing real and imaginary parts. This is not a feature of our particular choice of the window functions, but an unavoidable consequence of our demand of analyticity on f(z). Indeed, for real frequencies ω the real and imaginary parts of f(ω) are related to each other by the usual Kramers-Kronig relations [2] that hold for the boundary values of analytic functions. In the case when f(z) vanishes at infinity, they read:

f′(ω) = (1/π) P ∫_{−∞}^{∞} dξ f″(ξ)/(ξ − ω) ,   (11)

f″(ω) = −(1/π) P ∫_{−∞}^{∞} dξ f′(ξ)/(ξ − ω) ,   (12)

where the symbol P in front of the integrals denotes the principal value.
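The stated properties of the family in Eq. (10), the symmetry of Eq. (6), reality on the imaginary axis, and the suppression of low and high frequencies, can be checked numerically; the short sketch below does so for A = 1 and the p = 1, q = 3, w = (1 − 2i) choice used in the figures.

```python
import numpy as np

def f(z, p=1, q=3, w0=1.0 - 2.0j):
    """Window function of Eq. (10), with A = 1 and Im(w0) < 0."""
    return z**(2*p + 1) * (1.0 / (z - w0)**(2*q + 1)
                           + 1.0 / (z + np.conj(w0))**(2*q + 1))

z = 0.7 + 0.4j
assert np.isclose(f(-np.conj(z)), np.conj(f(z)))     # symmetry, Eq. (6)
assert abs(f(0.5j).imag) < 1e-12                     # real on the imaginary axis
assert abs(f(1e-3)) < 1e-6 * abs(f(1.0))             # vanishes like z^3 at the origin
assert abs(f(1e3)) < 1e-6 * abs(f(1.0))              # vanishes like z^-4 at infinity
print("window-function properties verified")
```

The same function can be reused, with different p, q and w, to tune the effective frequency range entering Eq. (7).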
These relations show that the vanishing of f′(ω) implies that of f″(ω) and vice versa, and therefore neither f′(ω) nor f″(ω) can be identically zero. By virtue of this property of the window functions, it follows from Eq. (7) that both the real and imaginary parts of ǫ(ω) are needed to evaluate ǫ(iξ) (unless the standard choices f(z) ≡ 1 or f(z) = iz are made). We also note (see Figs. 1 and 2) that the real and imaginary parts of f(ω) do not have a definite sign. This feature too is a general consequence of our key demand that f(z) vanish at the origin, as can be seen by taking ω = 0 in Eqs. (11) and (12). Since the l.h.s. of both equations is required to vanish, the integrand on the r.h.s. cannot have a definite sign. Finally, in Fig. 3 we show plots of two of our window functions f(iξ), versus the imaginary frequency ξ, expressed in eV, for the same two choices of parameters as in Figs. 1 and 2. It is important to observe that the window functions f(z) are real along the imaginary axis (as they must be, as a consequence of the symmetry property Eq. (6)). However, the sign of f(iξ) is not definite, and as a result f(iξ) admits zeros along the imaginary axis. When using Eq. (7) for estimating ǫ(iξ), it is then important to choose the window function such that none of its zeroes coincides with the value of ξ for which ǫ(iξ) is being estimated.

III. A NUMERICAL SIMULATION

In this Section, we perform a simple simulation to test the degree of accuracy with which the quantity ǫ(iξ) can be reconstructed using our window functions, starting from data on ǫ(ω) referring to a finite frequency interval. To do that we can proceed as follows. According to the standard dispersion relation Eq. (2), the quantity ǫ(iξ) − 1 is equal to the integral on the r.h.s. of Eq. (2). Following Refs.
[18,19], we can split this integral into three pieces, as follows:

(2/π) ∫₀^∞ dω ω ǫ″(ω)/(ω² + ξ²) = I_low(ξ) + I_exp(ξ) + I_high(ξ) ,   (13)

where we set:

I_low(ξ) = (2/π) ∫₀^{ω_min} dω ω ǫ″(ω)/(ω² + ξ²) ,   (14)

I_exp(ξ) = (2/π) ∫_{ω_min}^{ω_max} dω ω ǫ″(ω)/(ω² + ξ²) ,   (15)

and

I_high(ξ) = (2/π) ∫_{ω_max}^∞ dω ω ǫ″(ω)/(ω² + ξ²) .   (16)

By construction, we obviously have:

ǫ(iξ) − 1 = I_low(ξ) + I_exp(ξ) + I_high(ξ) .   (17)

An analogous split can be performed in the integral on the r.h.s. of the other standard dispersion relation involving the conductivity, Eq. (9):

(8/ξ) ∫₀^∞ dω [ω/(ω² + ξ²)] Im[σ(ω)] = K_low(ξ) + K_exp(ξ) + K_high(ξ) ,   (18)

with an obvious meaning of the symbols. Again, we have the identity:

ǫ(iξ) − 1 = K_low(ξ) + K_exp(ξ) + K_high(ξ) .   (19)

On the other hand, according to our generalized dispersion relation Eq. (7), the quantity ǫ(iξ) − 1 is also equal to the integral on the r.h.s. of Eq. (7). We can split this integral too, in a way analogous to Eq. (13):

(2/(π f(iξ))) ∫₀^∞ dω [ω/(ω² + ξ²)] Im[f(ω)(ǫ(ω) − 1)] = J^{(p,q)}_low(ξ) + J^{(p,q)}_exp(ξ) + J^{(p,q)}_high(ξ) ,   (20)

where we set:

J^{(p,q)}_low(ξ) = (2/(π f(iξ))) ∫₀^{ω_min} dω [ω/(ω² + ξ²)] Im[f(ω)(ǫ(ω) − 1)] ,   (21)

J^{(p,q)}_exp(ξ) = (2/(π f(iξ))) ∫_{ω_min}^{ω_max} dω [ω/(ω² + ξ²)] Im[f(ω)(ǫ(ω) − 1)] ,   (22)

and

J^{(p,q)}_high(ξ) = (2/(π f(iξ))) ∫_{ω_max}^∞ dω [ω/(ω² + ξ²)] Im[f(ω)(ǫ(ω) − 1)] .   (23)

Then, by construction, we also have:

ǫ(iξ) − 1 = J^{(p,q)}_low(ξ) + J^{(p,q)}_exp(ξ) + J^{(p,q)}_high(ξ) .   (24)

The quantities I_exp(ξ), K_exp(ξ) and J^{(p,q)}_exp(ξ) evidently represent the contribution of the experimental data. On the contrary, the quantities I_low(ξ), K_low(ξ) and J^{(p,q)}_low(ξ) can be determined only by extrapolating the data into the low-frequency region 0 ≤ ω ≤ ω_min, while determination of the quantities I_high(ξ), K_high(ξ) and J^{(p,q)}_high(ξ) is only possible after we extrapolate the data into the high-frequency interval ω_max ≤ ω < ∞.
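As an illustration of how differently I_exp and J^{(p,q)}_exp perform as estimators of ǫ(iξ) − 1, the following sketch evaluates both data-window pieces, Eqs. (15) and (22), with a pure Drude permittivity standing in for measured data (an assumption made here only to have a closed-form benchmark) and with the p = 1, q = 3 window of Eq. (10).

```python
import numpy as np
from scipy.integrate import quad

# Pure Drude permittivity standing in for measured data (illustrative assumption, eV units)
wp, gam = 9.0, 0.035
w_min, w_max = 0.038, 30.0

def eps(w):
    return 1.0 - wp**2 / (w * (w + 1j * gam))

def f(z, p=1, q=3, w0=1.0 - 2.0j):
    """Window function of Eq. (10) with A = 1."""
    return z**(2*p + 1) * (1.0 / (z - w0)**(2*q + 1)
                           + 1.0 / (z + np.conj(w0))**(2*q + 1))

def I_exp(xi):
    """Data-window piece of the standard relation, Eq. (15)."""
    val, _ = quad(lambda w: w * eps(w).imag / (w**2 + xi**2), w_min, w_max, limit=200)
    return (2.0 / np.pi) * val

def J_exp(xi):
    """Data-window piece of the windowed relation, Eq. (22)."""
    val, _ = quad(lambda w: w / (w**2 + xi**2) * (f(w) * (eps(w) - 1.0)).imag,
                  w_min, w_max, limit=200)
    return 2.0 / (np.pi * f(1j * xi).real) * val

xi = 2.0 * np.pi * 5 * 0.025852          # fifth Matsubara frequency at T = 300 K, eV
exact = wp**2 / (xi * (xi + gam))        # exact Drude value of eps(i xi) - 1
err_std = 100.0 * (1.0 - I_exp(xi) / exact)
err_win = 100.0 * (1.0 - J_exp(xi) / exact)
print(f"standard Eq. (15): {err_std:.1f} % error; windowed Eq. (22): {err_win:.3f} % error")
```

The neglected low-frequency piece is large for the standard relation but strongly suppressed by the z^{2p+1} behavior of the window, which is the mechanism exploited in the rest of this Section.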
Ideally, we would like to have I_low(ξ), I_high(ξ), K_low(ξ), K_high(ξ), J^{(p,q)}_low(ξ) and J^{(p,q)}_high(ξ) as small as possible. To see how things work, we can perform a simple simulation of real experimental data. We imagine that the electric permittivity of gold is described by the following six-oscillator approximation [12], which is known to provide a rather good description of the permittivity of gold for the frequencies that are relevant to the Casimir effect:

ǫ(ω) = 1 − ω_p²/(ω(ω + iγ)) + Σ_{j=1}^{6} g_j/(ω_j² − ω² − iγ_j ω) .   (25)

Here, ω_p is the plasma frequency and γ is the relaxation frequency for conduction electrons, while the oscillator terms describe core electrons. The values of the parameters g_j, ω_j and γ_j can be found in the second of Refs. [13]. For ω_p and γ we use the reference values for crystalline bulk samples, ω_p = 9 eV/h and γ = 0.035 eV/h. Of course, with such a simple model for the permittivity of gold there is no need to use dispersion relations to obtain the expression of ǫ(iξ), for this can be done simply by the substitution ω → iξ on the r.h.s. of Eq. (25):

ǫ(iξ) = 1 + ω_p²/(ξ(ξ + γ)) + Σ_{j=1}^{6} g_j/(ω_j² + ξ² + γ_j ξ) .   (26)

Simulating the real experimental situation, let us pretend however that we know that the optical data of gold are described by Eq. (25) only in some interval ω_min < ω < ω_max and, assuming that we do not want to make extrapolations of the data outside the experimental interval, let us see how well the quantities I_exp(ξ), K_exp(ξ) and J^{(p,q)}_exp(ξ) defined earlier reconstruct the exact value of ǫ(iξ) − 1 given by Eq. (26). In our simulation we took ω_min = 0.038 eV/h (representing the minimum frequency value for which data for gold films were measured in [19]), while for ω_max we chose the value ω_max = 30 eV/h. The chosen value of ω_max is about thirty times the characteristic frequency c/(2a) for a separation a = 100 nm. The results of our simulation are summarized in Figs. 4 and 5. In Fig.
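Eqs. (25) and (26) can be transcribed directly. In the sketch below the oscillator parameters g_j, ω_j, γ_j are placeholders (the actual values are tabulated in the second of Refs. [13] and are not reproduced here); the check confirms that evaluating Eq. (25) at ω = iξ reproduces the manifestly real expression of Eq. (26).

```python
import numpy as np

# Conduction-electron (Drude) parameters as in the text; the oscillator
# parameters g, wj, gj below are PLACEHOLDERS, not the values of Ref. [13]
wp, gam = 9.0, 0.035
g  = np.array([7.0, 4.0, 1.0])   # hypothetical oscillator strengths (eV^2)
wj = np.array([3.0, 4.5, 8.0])   # hypothetical resonance frequencies (eV)
gj = np.array([0.7, 1.0, 2.0])   # hypothetical damping constants (eV)

def eps_real_freq(w):
    """Eq. (25), written so that it accepts a complex argument w."""
    return (1.0 - wp**2 / (w * (w + 1j * gam))
            + np.sum(g / (wj**2 - w**2 - 1j * gj * w)))

def eps_imag_freq(xi):
    """Eq. (26): the substitution w -> i*xi, manifestly real for xi > 0."""
    return (1.0 + wp**2 / (xi * (xi + gam))
            + np.sum(g / (wj**2 + xi**2 + gj * xi)))

xi = 0.5
val = eps_real_freq(1j * xi)
assert abs(val.imag) < 1e-9 and np.isclose(val.real, eps_imag_freq(xi))
print(eps_imag_freq(xi))
```

With the actual parameters of Refs. [13] substituted for the placeholders, `eps_imag_freq` gives the benchmark ǫ(iξ_n) against which the estimators are compared below.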
4, we report the relative percent errors δ_I = 100 [1 − I_exp(ξ_n)/(ǫ(iξ_n) − 1)] (black squares) and δ_K = 100 [1 − K_exp(ξ_n)/(ǫ(iξ_n) − 1)] which are made if the quantities I_exp(ξ_n) or K_exp(ξ_n) are used, respectively, as estimators of ǫ(iξ_n) − 1 (grey triangles denote δ_K). The integer number on the abscissa labels the Matsubara mode ξ_n = 2πn k_B T/h (T = 300 K). Only the first sixty modes are displayed, which are sufficient to estimate the Casimir force at room temperature, for separations larger than 100 nm, with a precision better than one part in ten thousand. As we see, both I_exp(ξ_n) and K_exp(ξ_n) provide a poor approximation to ǫ(iξ_n) − 1, with I_exp(ξ_n) performing somewhat better at higher imaginary frequencies, and K_exp(ξ_n) doing better at lower imaginary frequencies. Indeed, I_exp(ξ_n) and K_exp(ξ_n) suffer from opposite problems. On one hand, the large error affecting I_exp(ξ_n) arises mostly from neglect of the large low-frequency contribution I_low(ξ_n), and to a much lesser extent from neglect of the high-frequency contribution I_high(ξ_n) (the magnitude of the high-frequency contribution I_high(ξ_n) is less than two percent of ǫ(iξ_n) − 1 for all n ≤ 60). The situation is quite the opposite in the case of K_exp(ξ_n). This difference is of course due to the opposite limiting behaviors of the imaginary part of the permittivity ǫ″(ω) in the limits ω → 0 and ω → ∞, as compared to those of the imaginary part of the conductivity σ″(ω). Indeed, for ω → 0, ǫ″(ω) diverges like ω⁻¹, while σ″(ω) approaches zero like ω. This explains why the low-frequency contribution I_low(ξ_n) is much larger than K_low(ξ_n). On the other hand, in the limit ω → ∞, ǫ″(ω) vanishes like ω⁻³, while σ″(ω) vanishes only like ω⁻¹. This implies that large frequencies are much less of a problem for I_exp(ξ_n) than for K_exp(ξ_n).
The conclusion to be drawn from these considerations is that, if either of the two standard forms Eq. (2) or Eq. (9) of the dispersion relations is used, in order to obtain a good estimate of ǫ(iξ_n) − 1 one is forced to somehow extrapolate the experimental data both to frequencies below ω_min and above ω_max. We can now consider our windowed dispersion relation, Eq. (7), with our choice of the window functions f(z) in Eq. (10). In Fig. 5, we display the relative percent error δ^{(p,q)} = 100 [1 − J^{(p,q)}_exp(ξ_n)/(ǫ(iξ_n) − 1)] which is made if the quantity J^{(p,q)}_exp(ξ_n) is used as an estimator of ǫ(iξ_n) − 1. We considered two choices of parameters for our window functions in Eq. (10), i.e. p = 1, q = 2 (grey triangles) and p = 1, q = 3 (black squares). In both cases, we took for the parameter w the constant value w = (1 − 2i) eV/h (see Figs. 1, 2 and 3). It is apparent from Fig. 5 that both window functions perform very well for all considered Matsubara modes. The error made by using J^{(1,2)}_exp(ξ_n) is less than one percent, in absolute value, while the error made by using J^{(1,3)}_exp(ξ_n) is less than 0.25 percent. The jumps displayed by the relative errors in Fig. 5 (around n = 6 for the grey triangles, and n = 14 for the black squares) correspond to the approximate positions of the zeroes of the respective window functions f(iξ) (see Fig. 3). Such jumps can be easily avoided, further reducing at the same time the error, by making a different choice of the free parameter w for each value of n. We did not do this here for the sake of simplicity. It is clear that in concrete cases one is free to choose, for each value of n, different values of all the parameters p, q and w, in such a way that the error is as small as possible.

IV. SIMULATION OF THE CASIMIR FORCE

In this Section, we investigate the performance of our window functions with respect to the determination of the Casimir force. We consider for simplicity the prototypical case of two identical plane-parallel, homogeneous and isotropic gold plates, placed in vacuum at a distance a.
As is well known, the Casimir force per unit area is given by the following Lifshitz formula:

P(a, T) = (k_B T/π) Σ′_{n≥0} ∫ dk_⊥ k_⊥ q_n Σ_{α=TE,TM} [e^{2aq_n} r_α⁻²(iξ_n, k_⊥) − 1]⁻¹ ,   (27)

where the plus sign corresponds to an attraction between the plates. In this equation, the prime over the n-sum means that the n = 0 term has to be taken with a weight of one half, T is the temperature, k_⊥ denotes the magnitude of the projection of the wave-vector onto the plane of the plates, and q_n = √(k_⊥² + ξ_n²/c²), where ξ_n = 2πn k_B T/h are the Matsubara frequencies. The quantities r_α(iξ_n, k_⊥) denote the familiar Fresnel reflection coefficients of the slabs for α-polarization, evaluated at the imaginary frequencies iξ_n. They have the following expressions:

r_TE(iξ_n, k_⊥) = (q_n − k_n)/(q_n + k_n) ,   (28)

r_TM(iξ_n, k_⊥) = (ǫ(iξ_n) q_n − k_n)/(ǫ(iξ_n) q_n + k_n) ,   (29)

where k_n = √(k_⊥² + ǫ(iξ_n) ξ_n²/c²). We have simulated the error made in the estimate of P(a, T) if the estimate of ǫ(iξ_n) provided by the window approximations J^{(p,q)}_exp(ξ_n) is used:

ǫ(iξ_n) ≃ 1 + J^{(p,q)}_exp(ξ_n) ,   (30)

again assuming the simple six-oscillator model of Eq. (25) for ǫ(ω). The results are summarized in Fig. 6, where we plot the relative error δ_P^{(p,q)}, in percent, as a function of the separation a (in microns). The window functions that have been used are the same as in Fig. 5. We see from the figure that already with this simple and not-optimized choice of window functions, the error is much less than one part in a thousand in the entire range of separations considered, from 100 nm to one micron.

V. CONCLUSIONS AND DISCUSSION

In recent years, a lot of effort has been made to measure the Casimir force accurately. At the moment of this writing, the most precise experiments, using gold-coated micromechanical oscillators, claim a precision better than one percent [13].
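For orientation, Eq. (27) can be evaluated numerically. The sketch below computes the pressure for two Drude half-spaces (the core-electron oscillator terms of Eq. (25) are omitted, so this is a simplification of the model used in the text); the k_⊥-integral is rewritten as an integral over q_n, and the Drude n = 0 limit r_TE → 0, r_TM → 1 is handled explicitly.

```python
import numpy as np
from scipy.integrate import quad

# Drude permittivity at imaginary frequencies (bulk gold parameters, in eV);
# the core-electron terms of Eq. (25) are omitted in this simplified sketch
wp, gam = 9.0, 0.035
def eps(xi):
    return 1.0 + wp**2 / (xi * (xi + gam))

hbar_c = 197.327    # eV nm
kBT = 0.025852      # eV, corresponding to T = 300 K

def pressure(a, n_max=200):
    """Casimir pressure of Eq. (27), in Pa, for plate separation a in nm.
    The k_perp integral is rewritten over q_n using k dk = q dq."""
    total = 0.0
    for n in range(n_max + 1):
        xi = 2.0 * np.pi * n * kBT                  # Matsubara frequency, eV
        lo = xi / hbar_c if n else 1e-9             # lower limit q = xi/c, 1/nm
        def integrand(q):
            if n == 0:                              # Drude limit: r_TE -> 0, r_TM -> 1
                rs = (1.0,)
            else:
                e = eps(xi)
                kn = np.sqrt(q**2 + (e - 1.0) * (xi / hbar_c)**2)
                rs = ((q - kn) / (q + kn), (e * q - kn) / (e * q + kn))
            return q**2 * sum(1.0 / (np.exp(2.0 * a * q) / r**2 - 1.0) for r in rs)
        val, _ = quad(integrand, lo, lo + 30.0 / a, limit=200)
        total += (0.5 if n == 0 else 1.0) * val
    return kBT / np.pi * total * 1.602e8            # eV/nm^3 -> Pa

P100 = pressure(100.0)
print(f"P(a = 100 nm, T = 300 K) = {P100:.1f} Pa")
```

The result should come out well below the ideal-metal value π²hc/(240 a⁴) ≈ 13 Pa at a = 100 nm, in line with the finite-conductivity reduction mentioned in the introduction.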
It is therefore important to see if an analogous level of precision in the prediction of the Casimir force can be obtained at the theoretical level. A precise determination of the theoretical error is indeed as important as reducing the experimental error, in order to address controversial questions that have emerged in the recent literature on dispersion forces regarding the influence of free charges on the thermal correction to the Casimir force [12]. Addressing the theoretical error in the magnitude of the Casimir force is indeed difficult, because many physical effects must be accounted for. However, it has recently been pointed out [19] that perhaps the largest theoretical uncertainty results from incomplete knowledge of the optical data for the surfaces involved in the experiments. On one hand, the large variability, depending on the preparation procedure, of the optical properties of the gold coatings routinely used in Casimir experiments makes it necessary to accurately characterize the coatings actually used in any experiment. On the other hand, even when this characterization is done, another problem arises, because for evaluating the Casimir force one needs to determine the electric permittivity ǫ(iξ) of the coatings at certain imaginary frequencies iξ. This quantity is not directly accessible to any optical measurement, and the only way to determine it is by exploiting dispersion relations, which permit one to express ǫ(iξ) in terms of the measurable values of the permittivity ǫ(ω) for real frequencies ω. When doing this, one is faced with the difficulty that optical data are necessarily known only in a finite interval of frequencies ω_min < ω < ω_max. This practical limitation constitutes a severe problem in the experimentally relevant case of good conductors, because of their large conductivity at low frequencies. With the standard forms of dispersion relations Eq. (2) and Eq.
(9), one finds that for practical values of ω_min and ω_max, frequencies lower than ω_min and/or higher than ω_max give a very large contribution to ǫ(iξ). In order to estimate ǫ(iξ) accurately, one is then forced to extrapolate the available optical data outside the experimental region, on the basis of some theoretical model for ǫ(ω). Of course, this introduces a further element of uncertainty in the obtained values of ǫ(iξ), and the resulting theoretical error is difficult to estimate quantitatively. In this paper we have shown that this problem can be resolved by suitably modifying the standard dispersion relation used to compute ǫ(iξ), in terms of appropriate analytic window functions f(z) that suppress the contributions of both low and high frequencies. In this way, it becomes possible to accurately estimate ǫ(iξ) solely on the basis of the available optical data, rendering unnecessary any uncontrollable extrapolation of the data. We have checked numerically the performance of simple choices of window functions, by making a numerical simulation based on an analytic fit of the optical properties of gold that has been used in recent experiments on the Casimir effect [12]. We found that already very simple forms of the window functions permit one to estimate the Casimir pressure with an accuracy better than one part in a thousand, on the basis of reasonable intervals of frequencies for the optical data. It would be interesting to apply these methods to the accurate optical data for thin gold films quoted in Ref. [19]. Before closing the paper, we should note that the relevance for the theory of the Casimir effect of the sample-to-sample dependence of the optical data observed in [19] has been questioned by the authors of Ref. [12], who observed that this dependence mostly originates from relaxation processes of free conduction electrons at infrared and optical frequencies, due for example to different grain sizes in thin films.
The main consequence of these sample-dependent features is the large variability of the Drude parameters extracted from fits of the low-frequency optical data of the films, which constitutes the basic source of the variation of the computed Casimir force reported in Ref. [19]. According to the authors of Ref. [12], the relaxation properties of conduction electrons in thin films, described by the fitted values of the Drude parameters, are not relevant for the Casimir effect. Indeed, according to these authors the quantity ǫ(ω) to be used in the Lifshitz formula should not be understood as the actual electric permittivity of the plate, as derived from optical measurements on the sample, but should rather be regarded as a phenomenological quantity connected to, but not identical to, the optical electric permittivity of the film. The ansatz offered by them for ǫ(ω) is dubbed the generalized plasma model, and following Ref. [12] we denote it by ǫ_gp(ω). This quantity is a semianalytical mathematical construct, defined by the formula:

ǫ_gp(ω) = ǫ_c(ω) − ω_p²/ω² ,   (31)

where ǫ_c(ω) represents the contribution of core electrons, while the term proportional to the square of the plasma frequency ω_p describes conduction electrons. The most striking qualitative feature of this expression is the neglect of ohmic dissipation in the contribution from conduction electrons, but this is not all. Indeed, the ansatz prescribes that only the core-electron contribution ǫ_c(ω) should be extracted from the optical data of the film. On the contrary, and more importantly, according to Ref. [12] the value of the plasma frequency ω_p to be used in Eq. (31) should be the one pertaining to a perfect crystal of the bulk material, and not the one obtained by a Drude-model fit of the low-frequency optical data of the film actually used in the experiment. The justification provided for this choice of the plasma frequency by the authors of Ref.
[12] is that the contribution of conduction electrons to the Casimir force should depend only on properties determined by the structure of the crystal cell, which are independent of the sample-to-sample variability determined by the peculiar grain structure of the film, reported in Ref. [19]. It should be noted that for gold, the value of the plasma frequency advocated in [12], ω_p = 9 eV/ℏ, is much higher than the fit values quoted in Ref. [19], which range from 6.8 to 8.4 eV/ℏ. As a result, the approach advocated in Ref. [12] leads to larger magnitudes of the Casimir force, as compared to the values derived in Ref. [19], with differences ranging, depending on the sample, from 5% to 14% at 100 nm. There is no room here to further discuss the merits and faults of these approaches, and we refer the reader to [12] for a thorough analysis. It is fair to note, though, that a series of recent experiments by one experimental group [13] appears to favor the generalized plasma approach, and to rule out the more conventional approach based on actual optical data followed in Refs. [17, 19]. The future will tell which is the correct description. In the meanwhile, we remark that whatever approach is followed, the methods proposed in this paper may prove useful to obtain more reliable estimates of the Casimir force for future experiments.

(ξ_n) is less than 0.25 percent. The jumps displayed by the relative errors in Fig. 5 (around n = 6 for the grey dots, and n = 14 for the black ones) correspond to the approximate positions of the zeroes of the respective window functions f(iξ) (see Fig. 3).

FIG. 1: Real part f′(ω) (in arbitrary units) of the window functions in Eq. (10) versus frequency ω (in eV/ℏ). The window parameters are p = 1, q = 2 (dashed line) and p = 1, q = 3 (solid line). In both cases, the parameter w has the value w = (1 − 2i) eV/ℏ.

FIG. 2: Imaginary part f″(ω) (in arbitrary units) of the window functions in Eq. (10) versus frequency ω (in eV/ℏ). The window parameters are p = 1, q = 2 (dashed line) and p = 1, q = 3 (solid line). In both cases, the parameter w has the value w = (1 − 2i) eV/ℏ.

FIG. 3: Plots (in arbitrary units) of the window functions f(iξ) in Eq. (10) versus the imaginary frequency ξ (in eV/ℏ). The window parameters are p = 1, q = 2 (dashed line) and p = 1, q = 3 (solid line). In both cases, the parameter w has the value w = (1 − 2i) eV/ℏ.

FIG. 4: Numerical simulation of the errors (in percent) in the estimate of ǫ(iξ_n) − 1 for gold, resulting from using the quantities I_exp(ξ_n) (black squares) and K_exp(ξ_n) (grey triangles) as estimators, in the hypothesis that data are available from ω_min = 0.038 eV/ℏ to ω_max = 30 eV/ℏ. The integer on the abscissa labels the Matsubara mode ξ_n = 2πn k_B T/ℏ (T = 300 K).

FIG. 5: Numerical simulation of the error (in percent) in the estimate of ǫ(iξ_n) − 1 for gold, resulting from using the quantity J^(p,q)_exp(ξ_n) as an estimator, in the hypothesis that data are available from ω_min = 0.038 eV/ℏ to ω_max = 30 eV/ℏ. The integer on the abscissa labels the Matsubara mode ξ_n = 2πn k_B T/ℏ (T = 300 K). Grey triangles are for the window function having p = 1, q = 2, black squares for p = 1, q = 3. In both cases w = (1 − 2i) eV/ℏ.

FIG. 6: Simulation of the error (in percent), versus plate separation (in µm), in the estimate of the Casimir force per unit area between two plane-parallel gold plates in vacuum at a temperature T = 300 K, resulting from using J^(p,q)_exp(ξ_n) as an estimator of ǫ(iξ_n) − 1. The window functions are the same as in Fig. 5: the dashed line is for p = 1, q = 2 and the solid line for p = 1, q = 3. All values of the other parameters are the same as in Fig. 5.

Acknowledgements: The author thanks the ESF Research Network CASIMIR for financial support.

References

[1] H. B. G. Casimir, Proc. K. Ned. Akad. Wet. Rev. 51, 793 (1948).
[2] E. M. Lifshitz, Sov. Phys.
JETP 2, 73 (1956); E. M. Lifshitz and L. P. Pitaevskii, Landau and Lifshitz Course of Theoretical Physics: Statistical Physics Part II (Butterworth-Heinemann, 1980).
[3] S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997); 81, 5475 (1998).
[4] U. Mohideen and A. Roy, Phys. Rev. Lett. 81, 4549 (1998); A. Roy, C.-Y. Lin, and U. Mohideen, Phys. Rev. D 60, 111101(R) (1999).
[5] G. Bressi, G. Carugno, R. Onofrio, and G. Ruoso, Phys. Rev. Lett. 88, 041804 (2002).
[6] T. Ederth, Phys. Rev. A 62, 062104 (2000).
[7] H. B. Chan, Y. Bao, J. Zou, R. A. Cirelli, F. Klemens, W. M. Mansfield, and C. S. Pai, Phys. Rev. Lett. 101, 030401 (2008).
[8] H. B. Chan, V. A. Aksyuk, R. N. Kleiman, D. J. Bishop, and F. Capasso, Science 291, 1941 (2001).
[9] F. Chen, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Opt. Expr. 15, 4823 (2007); Phys. Rev. B 76, 035338 (2007).
[10] S. de Man, K. Heeck, R. J. Wijngaarden, and D. Iannuzzi, Phys. Rev. Lett. 103, 040402 (2009).
[11] G. Bimonte, E. Calloni, G. Esposito, L. Milano, and L. Rosa, Phys. Rev. Lett. 94, 180402 (2005); G. Bimonte, E. Calloni, G. Esposito, and L. Rosa, Nucl. Phys. B 726, 441 (2005); G.
Bimonte et al., J. Phys. A 41, 164023 (2008); G. Bimonte, Phys. Rev. A 78, 062101 (2008).
[12] G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, Rev. Mod. Phys. 81, 1827 (2009).
[13] R. S. Decca, D. Lopez, E. Fischbach, G. L. Klimchitskaya, D. E. Krause, and V. M. Mostepanenko, Ann. Phys. 318, 37 (2005); Eur. Phys. J. C 51, 963 (2007); Phys. Rev. D 75, 077101 (2007).
[14] A. Lambrecht and S. Reynaud, Eur. Phys. J. D 8, 309 (2000).
[15] A. Naji, D. S. Dean, J. Sarabadani, R. R. Horgan, and R. Podgornik, Phys. Rev. Lett. 104, 060601 (2010).
[16] Handbook of Optical Constants of Solids, edited by E. D. Palik (Academic, New York, 1995).
[17] S. K. Lamoreaux, Phys. Rev. A 59, R3149 (1999).
[18] I. Pirozhenko, A. Lambrecht, and V. B. Svetovoy, New J. Phys. 8, 238 (2006).
[19] V. B. Svetovoy, P. J. van Zwol, G. Palasantzas, and J. Th. M. De Hosson, Phys. Rev. B 77, 035439 (2008).
[20] T. Emig, N. Graham, R. L. Jaffe, and M. Kardar, Phys. Rev. Lett. 99, 170403 (2007); Phys. Rev. D 77, 025005 (2008).
[21] O. Kenneth and I. Klich, Phys. Rev. B 78, 014103 (2008).
[22] A. Lambrecht, P. A. Maia Neto, and S. Reynaud, New J. Phys. 8, 243 (2006).
[23] L. D. Landau and E. M. Lifshitz, Landau and Lifshitz Course of Theoretical Physics: Electrodynamics of Continuous Media (Pergamon Press, New York, 1960).
[24] G. Bimonte, H. Haakh, C. Henkel, and F. Intravaia, arXiv:0907.3775v2 (in press on J. Phys. A).
CORE-COLLAPSE SUPERNOVAE AND HOST GALAXY STELLAR POPULATIONS

Patrick L. Kelly and Robert P. Kirshner

Abstract. We have used images and spectra of the Sloan Digital Sky Survey to examine the host galaxies of 519 nearby supernovae. The colors at the sites of the explosions, as well as chemical abundances and specific star formation rates of the host galaxies, provide circumstantial evidence on the origin of each supernova type. We examine separately SN II, SN IIn, SN IIb, SN Ib, SN Ic, and SN Ic with broad lines (SN Ic-BL). For host galaxies that have multiple spectroscopic fibers, we select the fiber with host radial offset most similar to that of the SN. Type Ic SN explode at small host offsets, and their hosts have exceptionally strongly star-forming, metal-rich, and dusty stellar populations near their centers. The SN Ic-BL and SN IIb explode in exceptionally blue locations, and, in our sample, we find that the host spectra for SN Ic-BL show lower average oxygen abundances than those for SN Ic. SN IIb host fiber spectra are also more metal-poor than those for SN Ib, although a significant difference exists for only one of two strong-line diagnostics. SN Ic-BL host galaxy emission lines show strong central specific star formation rates. In contrast, we find no strong evidence for different environments for SN IIn compared to the sites of SN II. Because our supernova sample is constructed from a variety of sources, there is always a risk that sampling methods can produce misleading results. We have separated the supernovae discovered by targeted surveys from those discovered by galaxy-impartial searches to examine these questions and show that our results do not depend sensitively on the discovery technique.

DOI: 10.1088/0004-637x/759/2/107
arXiv: 1110.1377
19 Oct 2012. Draft version February 3, 2013.
Preprint typeset using LaTeX style emulateapj v. 5/2/11.
Subject headings: supernovae: general - stars: abundances - galaxies: star formation - gamma-ray burst: general
INTRODUCTION

The only supernovae (SN) found in passive, elliptical galaxies are Type Ia (van den Bergh & Tammann 1991; van den Bergh et al. 2005). Finding these events in galaxies without ongoing star formation is strong evidence that long-lived (or relatively long-lived) progenitors contribute to the observed SN Ia population. SN of other spectroscopic types have been discovered only in star-forming galaxies: that is why we think these SN types are explosions of massive, short-lived stars. Our aim here is to use more detailed information on the hosts to help sort out the origin of the varieties of core-collapse events. Host galaxy measurements have started to identify patterns among the environments of the many spectroscopic types of core-collapse supernovae (e.g., van Dyk et al. 1996; Modjaz et al. 2008; Kelly et al. 2008). Here, we construct a nearby sample of supernova hosts where ground-based images provide useful spatial resolution: for the median redshift in our sample (z ≈ 0.02), one arcsecond corresponds to 400 parsecs. Although direct progenitor observations are the most powerful tools for determining how star formation, stellar evolution, mass loss, and progenitor chemistry produce the diversity of core-collapse phenomena, circumstantial evidence can provide useful clues to these complex processes. The primary SN spectroscopic classes are organized around evidence of hydrogen and helium features (see Filippenko 1997 for a review). Young, massive stars with an intact hydrogen envelope at the time of their explosion yield hydrogen-rich spectra, the Type II class. When massive SN progenitors lose their hydrogen-rich shells, the explosion can produce a variety of spectroscopic outcomes. SN Ib have a hydrogen-deficient spectrum that shows helium features, while SN Ic do not show either hydrogen or helium lines.
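The basic hydrogen/helium taxonomy just described amounts to a small decision rule. The sketch below is a toy paraphrase only; real classification, as discussed below, relies on full spectra and their time evolution (SN IIb, for instance, first resemble SN II and only later develop helium lines):

```python
def toy_sn_type(has_hydrogen: bool, has_helium: bool) -> str:
    """Toy paraphrase of the basic spectroscopic taxonomy in the text.
    Not the paper's method: actual typing uses template cross-correlation
    (SNID) on full, often multi-epoch, spectra."""
    if has_hydrogen:
        return "II"   # hydrogen-rich envelope still intact at explosion
    if has_helium:
        return "Ib"   # stripped of hydrogen, helium layer retained
    return "Ic"       # stripped of both hydrogen and helium
```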
The chameleon SN IIb class shows the hydrogen lines of a SN II at first, but then shows helium lines, suggesting there is only a low-mass layer of hydrogen on the surface. Line widths are also important. The spectra of SN Ic sometimes show very broad lines, suggesting expansion of the surface at 0.1c: these are the broad-lined SN Ic (SN Ic-BL). Conversely, SN II are sometimes seen with exceptionally narrow lines. These are the SN IIn, which result from interaction between the ejecta and circumstellar matter. SN Ia are a distinct class whose spectra are characterized by the absence of hydrogen and the presence of a broad absorption feature at 6150 Å that is attributed to Si. Unlike all the others, they are attributed to thermonuclear explosions in white dwarfs. Spectra of the SN are reported by their discoverers or, in many cases, by independent teams. The CfA Supernova program aims to obtain spectra of all the SN north of -20° and brighter than 18th mag (e.g., Matheson et al.).

Note. — SN remaining of each spectroscopic type after applying inclusion criteria. Indented rows are subsets of the last unindented row. (1) SN collected in the Asiago Catalog updated through 2010 November 7 with z < 0.023 for targeted discoveries (and, for Table 1, z < 0.08 Asiago galaxy-impartial discoveries combined with the 72 Palomar Transient Factory (PTF) core-collapse SN discoveries from March 2009 through March 2010 (Arcavi et al. 2010)) and not classified as Type Ia; (2) SN discovered during the period 1990-present, to eliminate most discoveries made using photographic plates; (3) Asiago catalog or PTF SN classification not accompanied by '?' (ambiguous identification) or ':' (type inferred from light curve, not spectrum); (4) calcium-rich SN 2000ds, SN 2003dg, SN 2003dr, and SN 2005E are grouped apart from other SN (Ib+Ic) because of their potentially distinct progenitor population; (5) SN position coordinates in the host galaxy; (6) inside the SDSS DR8 imaging footprint; (7) retrieved SDSS images collectively cover the host galaxy without header issues; (8) host galaxy not detected (SN 2006jl (IIn); SN 2006lh (II); SN 2007fl (II); SN 2008bb (II); SN 2008it (IIn); SN 2009dv (IIP); SN 2009lz (IIP); SN 2009ny (Ib); PTF09gyp (IIb)); (9) no contamination from nearby bright stars; (10) no contamination from residual SN light, the sample used for photometry measurements; (10a) amateur discoveries in the targeted photometry sample; (11) an SDSS host fiber available with SPECTROTYPE='GALAXY' and sufficient S/N to classify using the BPT diagram; (12) no contamination from an active galactic nucleus (AGN) in the SDSS spectrum; (12a) fibers that are positioned on the host galaxy nucleus; (12b) fibers where the difference between the fiber's host offset and the SN host offset is less than 3 kpc; (12c) amateur discoveries in the spectroscopy sample. Two SN-LGRB had z < 0.08 (SN 1998bw and SN 2006aj), and only the host of SN 2006aj was inside the SDSS DR8. The middle and bottom sections of the Table correspond to the 'Photometry' and the 'Spectroscopic' samples, respectively, subsets of the SN remaining after the 'No Host Detected' criterion is applied.

The imaging component of the SDSS DR8 spans 14,555 square degrees and consists of 53.9 s u′g′r′i′z′ exposures taken with the 2.5 m telescope at Apache Point, New Mexico. Each frame consists of a 2048 × 1498 pixel array that samples a 13.5′ × 9.9′ field.
The complementary fiber SDSS DR8 spectroscopic survey covers a 9274 square degree subset of the DR8 imaging footprint. Objects detected at greater than 5σ, selected as extended, and with r′-band magnitudes brighter than 17.77 comprise the main galaxy sample for spectroscopic targeting. When the r′-band 3″ fiber magnitude is fainter than 19 magnitudes, fiber targets must meet additional criteria, and physical constraints limit adjacent fibers to be no closer than 55″ (Strauss et al. 2002) in a single fiber mask. Because of their large angular sizes, nearby galaxies were often 'shredded' into multiple objects by the SDSS object detection algorithm [see Fig. 9 of Blanton et al. (2005)], and many of these galaxies were targeted in multiple locations with fibers. Wavelength coverage of the SDSS spectrograph extends from 3800 to 9200 Å. Exposures typically total 45 minutes, taken in three separate 15 minute exposures. We admit only the spectra that the SDSS pipeline classifies as a galaxy (SPECTROTYPE='GALAXY'), a step that includes normal galaxies and Type 2 AGN but removes QSOs and Type 1 AGN.

SAMPLE

We assemble our SN samples from discoveries collected in the Asiago catalog (Barbon et al. 1999) through 2010 November 6 and 72 Palomar Transient Factory (PTF) core-collapse SN discoveries from March 2009 through March 2010 (Arcavi et al. 2010). Eight of the SN in the Asiago catalog (all Type II SN) are also among the Arcavi et al. (2010) PTF SN (IAU/PTF: 09ct/09cu; 09bk/09t; 09bj/09r; 09bl/09g; 09ir/09due; 09nu/09gtt; 10K/09icx; 10Z/10bau). Table 2 shows the effect of the criteria we describe in this Section on the galaxy-impartial and targeted SN samples.

Excluding SN Contamination

We consider images taken during the period from 3 months before to 12 months after discovery to be potentially contaminated by SN light. Arcavi et al.
(2010) do not report the exact discovery dates of PTF SN, so, for these SN, the contamination window begins three months before the start and ends 12 months after the completion of the search period. To assemble SDSS frames of each SN host, we first queried the SDSS SkyServer for any frames within 9.75′ of the host galaxy center. If none was available without possible contamination from SN light (even with partial coverage not including the field center), we instead assembled and constructed mosaics from potentially contaminated images. Such mosaics were used only to measure the deprojected offsets of SDSS spectroscopic fibers and of the SN site in each host galaxy. The header of each SDSS image provides keywords that define a tangent plane projection (TAN), which maps coordinates on the sky to pixel coordinates. We discarded the small number of images retrieved from the SDSS server that lacked these keywords.

Spectroscopic Classes

Our previous work has shown that SN Ic are more strongly associated with bright regions in their host galaxies' g′-band light than are SN Ib (Kelly et al. 2008), indicating that they have a distinct progenitor population, so we group them separately in this analysis. SN IIb and SN IIn subtypes are excluded from the "SN II" sample. A single SN with an associated long duration gamma-ray burst (LGRB), SN 2006aj, meets the sample criteria, but we consider it separately from SN Ic-BL discovered through their optical emission (which have no associated LGRB), as did Modjaz et al. (2008). Today's gamma-ray searches are not sensitive to normal SN explosions. From a comprehensive set of spectra, we update the classification of SN 2005az. This SN was discovered approximately seventeen days before maximum and spectroscopically classified three days after discovery as a SN Ic by Quimby et al. (2005a). The Nearby Supernova Factory (SNF), from a spectrum taken five days after discovery, suggested it was a Type Ib (Aldering et al. 2005).
Spectral cross-correlation using the Supernova Identification code (SNID; Blondin & Tonry 2007), applied to 24 spectra taken by the CfA Supernova Group from approximately ten days before to twenty-five days after maximum, shows that it was a Type Ic explosion. We update the spectroscopic types of ten SN found in the Asiago catalog with reclassifications from CfA spectra (Modjaz et al. 2012, in preparation) using SNID. We also use new spectroscopic classifications for two SN from Sanders et al. (2012a), based on revisions by the authors of the original IAU circulars. The SN in our sample that have new classifications from these papers have footnotes in Tables 7 and 8. We exclude SN 2006jc, a peculiar SN Ib with narrow helium emission lines and an underlying broad-lined SN Ic spectrum (e.g., Foley et al. 2007; Pastorello et al. 2007), from our SN Ib statistical sample. The helium emission may reflect the collision of ejecta with a helium-rich circumstellar medium. We group calcium-rich SN 2000ds, SN 2003dg, SN 2003dr, and SN 2005E separately from other SN (Ib+Ic) because of their potentially distinct progenitor population (Perets et al. 2010). We exclude SN IIn imposters (e.g., Van Dyk et al. 2000; Maund et al. 2006), a group which includes SN 1997bs, SN 1999bw, SN 2000ch, and SN 2001ac.

3.3. Classification of SN as Type IIb

While spectra taken over several epochs are necessary to observe the spectroscopic transition that defines SN IIb, such follow-up is not always available. Fortunately, the spectra of Type IIb SN similar to SN 1993J are sufficiently distinctive that cross-correlation with spectroscopic templates (e.g., SNID) has been able to identify substantial numbers of explosions as Type IIb from a single spectrum. Although classifications based on a single spectrum may overlook examples of SN IIb, the Type IIb explosions they do identify should be reliable.

3.4.
Classification of SN as Type Ib/c

The Asiago catalog entries sometimes have less information than the IAU Circulars and published papers. For example, SN 1997dq and SN 1997ef were listed in the Asiago catalog (as of November 2010) as "Type Ib/c" while Matheson et al. (2001) and Mazzali et al. (2004) identified them as SN Ic-BL. Motivated by these examples, we searched the circulars to see whether additional information was available. Despite making note of the presence or absence of helium more than ten days after the explosion, some authors report a Type Ib/c classification. Authors may feel that a SN Ib/c classification was sufficiently precise while, in other cases, they may have wanted to emphasize conflicting spectroscopic characteristics. An example of the latter is SN 2003A, which was classified as a Type Ib/c by Filippenko & Chornock (2003), who noted that "[w]eak He I absorption lines are visible, but the overall spectrum resembles that of type-Ic supernovae."

Note. — Estimates of the mean luminosities of the SN types. The standard deviation of the luminosity function is shown in parentheses. The LOSS (Li et al. 2011b) and the P60 (Drout et al. 2011) samples are constructed differently, but differences between the mean luminosities of the SN species should be approximately consistent for these surveys. Li et al. (2011b) favor a much larger difference between SN Ib and SN Ic luminosities than that found by Drout et al. (2011). SN Ic-BL may be more intrinsically luminous than SN Ic. Luminosities above are before correction for extinction, for studying SN detection efficiency.

Classifications by the Nearby Supernova Factory (Aldering et al. 2002; Wood-Vasey et al. 2004) reported in circulars include an unusually high percentage of Type Ib/c. The high fraction of SN Ib/c reported by the Nearby Supernova Factory survey is hard for us to assess without being able to see the spectra or use impartial classification techniques.
We have therefore excluded SN discovered by the Nearby Supernova Factory from our statistical sample.

3.5. Galaxy-Impartial and Targeted SN Surveys

We measure the host galaxy properties of SN discovered by both targeted surveys, which aristocratically discover almost all their SN in luminous galaxies, and galaxy-impartial surveys, which democratically scan swaths of sky without special attention to specific galaxies. Galaxy-impartial surveys generally employ larger telescopes (e.g., the SDSS 2.5m; the PTF 1.2m) than targeted surveys (e.g., the KAIT 0.76m), have fainter limiting magnitudes, and image much greater numbers of low-mass galaxies. The SN harvested by galaxy-impartial surveys are found in host galaxies that are not apparently bright or nearby (and are not in bright galaxy catalogs). For example, in our sample, 34% (45/133) of galaxy-impartial SN but only 3.4% (13/387) of targeted SN have host galaxy masses smaller than 10^9 M_⊙.

3.6. Identifying Galaxy-Impartial Discoveries

We used the Discoverer column from the IAU classification to determine the provenance of each SN. There are relatively few galaxy-impartial discovery teams, because discovering substantial numbers of SN by impartially scanning the sky requires significant dedicated observing time and investment in data processing. Any SN whose discovery team we did not identify as part of a galaxy-impartial search effort, including amateur discoveries, was considered a targeted discovery. Surveys that we considered galaxy-impartial are as follows: Catalina Real-Time Sky Survey and Siding Spring Survey (Djorgovski et al. 2011), La Sagra Sky Survey, PAN-STARRS (Kaiser et al. 2010), Palomar Transient Factory (Law et al. 2009), ROTSE (Yost et al. 2006), ESSENCE (Miknaitis et al. 2007), Palomar-Quest (Djorgovski et al. 2008), SDSS-II (Sako et al. 2005), Supernova Legacy Survey (Astier et al. 2006), Supernova Cosmology Project (Perlmutter et al. 1999), Near Earth Asteroid Tracking Program (Pravdo et al.
1999), High-z Supernova Search (Riess et al. 1998), (Hardin et al. 2000), Great Observatories Origins Deep Survey (GOODS) (Dickinson et al. 2003), Deep Lens Survey (Wittman et al. 2002), and, except for discoveries in targets IC342, M33, M74, M81, NGC 6984, and NGC 7331, the Texas Supernova Search (Quimby et al. 2005b).

SN Detection

The control time t is the total time period during which a SN with a specific light curve, luminosity, and extinction by dust along the line-of-sight would have been detected by a survey's observations of a given galaxy (e.g., Cappellaro et al. 1999; Leaman et al. 2011). Extinction along the line-of-sight to potential explosion sites in each monitored galaxy, as well as to the actual sites of SN explosions, is challenging to estimate, however. Therefore, control times are generally estimated for apparent luminosity functions and magnitudes uncorrected for extinction, instead of for dust-free luminosity functions and magnitudes with a galaxy-by-galaxy correction for obscuring dust. The expectation value of the number of discoveries of type T SN during a survey in the ith monitored galaxy is

N^T_i = r^T_i × t^T_i ,   (1)

where r^T_i is the rate of, and t^T_i is the control time for, type T SN in the galaxy. The probability of detecting N^T_i type T SN in the ith galaxy imaged by a survey follows a Poisson distribution,

P(N^T_i) = Pois(r^T_i × t^T_i).   (2)

The ith galaxy observed by the survey has properties that include, for example, the stellar mass M_i. We are interested in making inferences about how the rates of the SN types may depend upon galaxy properties. The statistical approach we take in this paper is to look for differences among the distributions of host properties of the reported SN of each type, S^T(M_i) (see Section 6),

S^T(M_i) ≈ Pois(r^T_i × t^T_i).   (3)

Here we cannot estimate control times because we lack information about the contributing surveys.
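Equations (1)-(3) can be illustrated with a toy Monte Carlo. Every number here is hypothetical (the mass grid, the rate scaling with mass, and the mass-independent control time); the point is only that the detected-host sample S^T(M_i) is a Poisson-weighted draw from the monitored-galaxy population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monitored-galaxy catalog: log10 stellar masses.
log_mass = rng.uniform(8.0, 11.5, size=5000)

# Hypothetical rate r_i^T rising with stellar mass, and a control time
# t_i^T taken equal for every galaxy (both invented for illustration).
rate = 1e-2 * 10.0 ** (log_mass - 10.0)   # SN per galaxy per unit time
t_ctrl = 1.0

n_det = rng.poisson(rate * t_ctrl)        # Eq. (2): detections per galaxy
hosts = np.repeat(log_mass, n_det)        # host-property sample, cf. Eq. (3)
```

Because the assumed rate climbs with stellar mass, the detected hosts skew more massive than the monitored catalog; this kind of rate-property dependence is what the paper probes by comparing host-property distributions between SN types.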
We do not explicitly model the effect of different possible control times, as well as other potential effects, in our statistical analysis. In the following sections, however, we find evidence that control times may be sufficiently similar that any differences do not dominate the results we find. We consider the published luminosity functions, place appropriate redshift upper limits on the samples, and examine the redshift distributions of galaxy-impartial SN discoveries. Only differences among control times for SN species that correlate with galaxy properties will introduce bias into our statistical analysis. The number of galaxies monitored by galaxy-impartial surveys grows with the volume within the limiting redshift (i.e., ∝ z³). The number of galaxies monitored by targeted surveys, by contrast, likely does not increase as quickly with redshift, because the generally high-mass targets are selected from galaxy catalogs that have, for example, Malmquist bias.

Luminosity Functions, Light Curves, and Detection

Recent measurements have compared the mean luminosities of the core-collapse spectroscopic species, but any differences are not yet well constrained. Li et al. (2011b) (LOSS) and Drout et al. (2011) (Palomar 60") measured the mean peak absolute magnitudes (before correction for host galaxy extinction) of core-collapse species, and these values are shown in Table 3. Drout et al. (2011) found that SN Ic-BL are intrinsically brighter explosions, on average, than normal Type Ic explosions, but the LOSS sample included too few examples to corroborate a difference. Although Li et al. (2011b) found some evidence that SN Ib and SN Ic have different mean intrinsic luminosities, Drout et al. (2011) did not find a similar indication. Luminosity functions are broad, so the control time will not necessarily vary strongly with the mean SN luminosity. Li et al.
(2011b) found, for example, that the SN (Ib+Ic) luminosity function has a standard deviation of 1.24 mag and that the SN II luminosity function has a standard deviation of 1.37 mag.

Redshift Upper Limits for Targeted and Galaxy-Impartial Samples

We exclude SN discovered at redshifts where only SN Ib, SN Ic, and SN II with brighter-than-average luminosities are detected. Targeted surveys generally employ smaller telescopes and have shallower limiting magnitudes, because their galaxy targets are nearby. We select LOSS and SDSS as the representative targeted and galaxy-impartial surveys, respectively, because they are responsible for the greatest numbers of discoveries in each category. From luminosity functions and search limiting magnitudes, we estimate the upper redshift limits where detection efficiency falls below 50% as z = 0.023 and z = 0.08 for LOSS and SDSS-II, respectively. We use -16 as the mean absolute SN magnitude, because Li et al. (2011b) measured mean absolute magnitudes for SN (Ib+Ic), -16.09 ± 0.23 mag, and SN II, -16.05 ± 0.15 mag. For LOSS, which contributes 42% of our targeted sample, Leaman et al. (2011) report a median limiting magnitude of 18.8 ± 0.5, corresponding to a detection limit of z = 0.023 for SN (Ib+Ic) and SN II. LOSS survey observations are taken without a filter, and the total response function peaks in the R band (Li et al. 2011b). Dilday et al. (2010) report a ∼21.5 mag r′-band detection limit for the SDSS-II survey, which accounts for 33% of the galaxy-impartial sample. For our sample of galaxy-impartial discoveries, the redshift upper limit is z = 0.08, corresponding to the SDSS-II detection limit. The PTF, accounting for 32% of galaxy-impartial SN, has a limiting R-band magnitude of ∼20.8 mag, which corresponds to an upper detection limit of z = 0.056.
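The quoted limits follow from the distance modulus μ = m_lim − M and a low-redshift Hubble-law distance. A minimal sketch, assuming H0 = 70 km/s/Mpc (the text does not quote a value), reproduces the quoted limits to within their rounding:

```python
H0 = 70.0        # km/s/Mpc, an assumed value for this sketch
C_KMS = 2.998e5  # speed of light, km/s

def z_limit(m_lim, M=-16.0):
    """Redshift at which a SN of absolute magnitude M reaches the survey
    limiting magnitude m_lim, via mu = m_lim - M and d = c*z/H0."""
    mu = m_lim - M                               # distance modulus
    d_mpc = 10.0 ** (mu / 5.0 + 1.0) / 1.0e6     # pc -> Mpc
    return H0 * d_mpc / C_KMS

# LOSS (~18.8 mag), SDSS-II (~21.5 mag), PTF (~20.8 mag)
limits = {m: z_limit(m) for m in (18.8, 21.5, 20.8)}
```

With these inputs the three limits come out near z ≈ 0.02, 0.07, and 0.05, consistent with the z = 0.023, 0.08, and 0.056 values in the text given the rounding and the uncertainty in the limiting magnitudes.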
Varying the upper redshift limit for galaxy-impartial and targeted SN discoveries (e.g., from z = 0.023 to z = 0.02, or from z = 0.08 to z = 0.06) does not alter the type-dependent trends we find. Table 4 presents the mean redshifts for each SN type.

Amateur Discoveries

From the information available in IAU circulars, we separated discoveries into those made by amateur astronomers, who generally use comparatively small telescopes, and those made by professional astronomers. Table 2 lists the numbers of SN in each sample that were amateur discoveries. We present significance values for several sample comparisons with and without amateur discoveries in Section 7.

Spectroscopic Fiber Locations On Host Galaxy

We identified SDSS spectroscopic fibers that targeted the galaxy nucleus by visually inspecting images and fiber positions. Oxygen abundance varies primarily with offset from the galaxy center, and we also determined which fibers have host offsets within 3 kpc of the SN host offset. The numbers of fibers in each category are listed in Table 2.

TESTING FOR DETECTION-RELATED SYSTEMATICS

Comparing Detection Control Times Indirectly with Galaxy-Impartial SN Discoveries

Galaxy-impartial searches do not target specific galaxies, so their discovery rate may be expressed in terms of the rate of SN per unit volume,

N^T_zi = r^T_zi × V_zi × t^T_zi,    (4)

where r^T_zi is the type T SN rate per unit volume, V_zi is the volume within the survey field-of-view, and t^T_zi is the control time for type T SN that explode in the z_i redshift bin (e.g., 0.01 < z < 0.015). The power-law form of the Schechter luminosity function (Schechter 1976) cuts off exponentially at the characteristic absolute magnitude, M*. The AGN and Galaxy Evolution Survey (AGES) found that M* becomes ∼0.2 mag brighter between z = 0 and z = 0.2 (Cool et al. 2012). Moustakas et al. (2011) reported, also from AGES spectroscopy, that the mean gas-phase metallicity of galaxies decreases by only ∼0.02-0.03 dex from z = 0.1 to z = 0.2. Given this evidence for relatively modest changes in the galaxy population, we may reasonably expect the fractional representation of each SN type among SN that explode to change only modestly to z ≈ 0.2,

r^A_zi / r^B_zi ≈ r^A_zj / r^B_zj,    (5)

where z_i and z_j are redshifts less than 0.2 and A and B are distinct spectroscopic classes of SN. The relationships in Equations 4 and 5 suggest that any change in the ratio between the numbers of detected SN of two species with increasing redshift depends primarily upon changes in the control times for the species,

(d/dz)(N_A / N_B) ≈ (r_A / r_B) (d/dz)(t_A / t_B).    (6)

The left panel of Figure 1 plots the cumulative redshift distributions of core-collapse SN as well as Type Ia SN with z < 0.2 reported to the IAU by galaxy-impartial surveys. We extend the redshift upper limit beyond the z = 0.08 galaxy-impartial limit to compile a larger sample of SN discoveries.

Fig. 1. - Redshifts of SN discovered by galaxy-impartial surveys to z = 0.2 (left) and of SN discovered by targeted surveys in our sample (right). To increase the size of the sample, the left plot includes discoveries to z = 0.2, a higher upper limit than the z = 0.08 galaxy-impartial sample limit. Except for cosmic evolution, low-redshift galaxy-impartial surveys image the same galaxy populations at increasing redshift. This plot provides no suggestion that the detection efficiency functions, η(z, T, m_lim, C), for SN II, SN IIb, SN IIn, SN Ib, SN Ic, and SN Ic-BL vary differently with redshift, although the numbers of some species are small. The plot on the right shows the redshift distributions of the SN in our sample discovered by targeted surveys. Among the 15 possible two-sample comparisons, the most extreme difference is between the SN Ic and SN IIn distributions and has p = 1.1%.
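The logic of Equations 4-6 can be illustrated with a toy calculation. All numbers below are made up for illustration, not survey data: if the volumetric rate ratio r_A/r_B is constant with redshift (Eq. 5) and two species share the same control times, the detected-number ratio N_A/N_B is flat in redshift; any redshift trend in N_A/N_B therefore points to diverging control times (Eq. 6).

```python
# Toy illustration of Eq. (6).  The survey volume V_zi in Eq. (4)
# cancels when the ratio of detected numbers is taken.
rate_ratio = 0.4                    # assumed z-independent r_A / r_B
z_bins = [0.02, 0.05, 0.10, 0.15]
t_A = [1.0, 0.9, 0.6, 0.3]          # control time of species A (arb. units)
t_B = [1.0, 0.9, 0.6, 0.3]          # identical control times for species B

for z, tA, tB in zip(z_bins, t_A, t_B):
    # N_A / N_B = (r_A / r_B) * (t_A / t_B); here it stays 0.4 in every bin,
    # i.e., the two species show similar cumulative redshift distributions
    print(z, rate_ratio * tA / tB)
```

If t_A fell off faster with redshift than t_B (e.g., a fainter species), the printed ratio would decline with z, which is the signature the cumulative distributions in Figure 1 are checked for.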
This plot suggests that surveys may have similar control times for SN II, SN IIb, SN IIn, SN Ib, SN Ic, and SN Ic-BL with increasing redshift, although the numbers of some species are small. The Type Ia SN are, however, discovered at greater redshifts than the core-collapse species because of their greater intrinsic brightness. This test may only be sensitive to strong differences among control times, however, and we expect that a more complete analysis based on detailed knowledge of each survey and improved constraints on SN luminosity functions will find differences among the control times for the core-collapse spectroscopic types. The control times for targeted surveys are likely to have similar behavior from z = 0 to z = 0.023 (where η = 0.5) as those for galaxy-impartial surveys from z = 0 to z = 0.08 (where η = 0.5). Targeted surveys generally have shallower limiting magnitudes, but their galaxy targets are also at smaller distances.

Redshift Distributions of Targeted SN Discoveries

The redshift distributions of targeted SN, shown in the right panel of Figure 1, additionally depend on the set of galaxy search targets as well as type-dependent host galaxy preferences. A Malmquist effect in galaxy catalogs (e.g., the NGC) used to select targets (Leaman et al. 2011) means that more distant monitored galaxies are, on average, more luminous, and likely more metal-rich. The redshift distributions of the targeted SN samples in Figure 1 show greater differences than those for galaxy-impartial SN samples.

Detection-Related Systematics

Here we have only discussed possible systematic errors associated with differences between the luminosity functions of the core-collapse species and the redshift dependence of the galaxies monitored by SN searches. In Section 9, we list additional potential sources of systematic error, including fiber targeting and spectroscopic classification.
METHODS

SDSS Imaging Processing

The WCS provided in SDSS DR8 frame*.fits headers is a TAN approximation to the asTrans*.fits full astrometric solution and has subpixel accuracy (private communication; M. Blanton), an improvement over the astrometric solution provided in DR7 fpC*.fit headers. We used SWarp (Bertin et al. 2002) to register and resample SDSS images of each host galaxy to a common pixel grid and coadded SDSS images in the u′g′r′i′z′ bands. The SDSS DR8 frame*.fits images also feature an improved background subtraction (in comparison to DR7 fpC*.fit images). The DR8 background level is estimated from a spline fit across consecutive, adjacent frames from each drift scan, after masking objects in each image.

Galaxy Photometry and Stellar Mass

Host galaxy images were used to measure the color of the stellar population near the site of each SN, estimate each host's stellar mass, a proxy for chemical enrichment (e.g., Tremonti et al. 2004), and compute the deprojected host offsets of SN and SDSS spectroscopic fibers. The SExtractor measurement MAG_AUTO, corresponding to the flux within 2.5 Kron (1980) radii, was used to estimate the host galaxy stellar mass-to-light ratio (M/L) through fits with spectral energy distributions (SEDs) from PEGASE2 (Fioc & Rocca-Volmerange 1997) stellar population synthesis models using the appropriate SDSS instrumental response function. An estimate of the stellar mass was then computed as M = M/L × L, where L is the galaxy's absolute luminosity. See Kelly et al. (2010) for a detailed description of the star formation histories used.

Color Near Explosion Site

We then estimated the host galaxy's color near the SN location using two techniques. The first and simpler method was to extract the u′g′r′i′z′ flux inside of a circular aperture with 300 pc radius centered on the SN location, after subtracting the median of the peripheral background regions.
While this technique is straightforward, a small number of apertures centered at the sites of SN at large host offsets or found in low-luminosity hosts had low S/N, especially in the u′ and z′ bands. The primary intent of the second method was to obtain higher S/N u′-band flux measurements near the sites of SN, in particular near the sites of the SN with faint hosts. To identify g′-band pixels with S/N > 1 associated with each host galaxy, we used SExtractor to generate a segmentation map of each image, which identifies the pixels associated with each object. We adjusted the SExtractor settings so that the segmentation map included only pixels with S/N > 1 (i.e., DETECT_THRESH=1). The 20 pixels closest to the SN location contained in the g′-band segmentation maps define the aperture for measurements of u′−z′ color and u′-band surface brightness. The aperture generally consists of the 20 pixels on the CCD array closest to the SN position (i.e., within a circle of radius ∼1"). Only for twenty of the 519 SN is the average distance between aperture pixels and the explosion site greater than 0.8". Excluding measurements where the average pixel is more than 0.8" from the explosion coordinates does not affect the distributions we plot in this paper. The median uncertainties of the measured u′g′r′i′z′ magnitudes are 0.11, 0.06, 0.06, 0.06, and 0.13 mag, respectively. We correct for Galactic reddening using the Schlegel et al. (1998) dust maps. The reported SN positions come from a variety of sources that may rely on catalogs (e.g., USNO B1, 2MASS, or GSC) with more modest astrometric accuracy than is available for SDSS images. Repeated photometric follow-up, available for some SN, can also be useful for improving the accuracy of explosion coordinates (e.g., Hicken et al. 2012). We use KCORRECT (Blanton & Roweis 2007) to estimate rest-frame u′g′r′i′z′ magnitudes from the fluxes we measure.
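The 20-pixel aperture definition can be sketched in a few lines. This is an illustrative reimplementation, not the paper's code: `segmap` stands for a SExtractor-style segmentation map, and the function and variable names are assumptions.

```python
import numpy as np

def nearest_segmap_pixels(segmap, host_id, sn_xy, n_pix=20):
    """Select the n_pix pixels belonging to the host galaxy (S/N > 1 in
    the g'-band segmentation map) that lie closest to the SN position.
    Returns (row, col) index arrays usable as a photometry aperture."""
    ys, xs = np.nonzero(segmap == host_id)          # host pixels only
    d2 = (xs - sn_xy[0]) ** 2 + (ys - sn_xy[1]) ** 2
    order = np.argsort(d2)[:n_pix]                  # n_pix nearest pixels
    return ys[order], xs[order]

# Example usage (arrays are hypothetical):
#   yy, xx = nearest_segmap_pixels(seg_g, host_id=7, sn_xy=(120.3, 88.1))
#   flux_u = u_image[yy, xx].sum()                  # u'-band aperture flux
```

The same index arrays can then be applied to each of the registered u′g′r′i′z′ coadds, so the color measurement uses an identical footprint in every band.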
KCORRECT fits the input measured fluxes with a model for the rest-frame SED, and it uses this SED to estimate the rest-frame u′g′r′i′z′ magnitudes for each galaxy. The estimated rest-frame u′−z′ color, for example, therefore depends upon the full set of measured observer-frame u′g′r′i′z′ magnitudes that inform the SED model. Consequently, even if the u′ and z′ measurements are noisy, the estimated rest-frame u′−z′ color may be informative, given constraints from the less noisy measured g′r′i′ fluxes.

Selecting SDSS Spectroscopic Fibers

To identify SDSS fibers that coincide with a host galaxy, we searched a catalog available online from the MPA-JHU collaboration for fibers that fell within an aperture with radius (1.65/z)" placed on the host center and with redshifts that agree with that of the SN. For an object in the Hubble flow, this angle corresponds to a physical distance of approximately 34 kpc. At z = 0.03, for example, this radius subtends a 55" angle. If the g′-band SExtractor segmentation map ID at the fiber location was the same as the ID of the SN host galaxy, the fiber was considered a match to the galaxy after a visual check. The deprojected normalized offset of the fiber was then calculated by computing the offset at each pixel in the 3" fiber aperture and averaging these offsets weighted by each pixel's g′-band counts.

AGN Activity

Only the spectra classified by the SDSS pipeline as a galaxy spectrum (SPECTROTYPE='GALAXY') enter our analysis, a restriction that excludes quasi-stellar objects (QSO) and Type 1 active galactic nuclei (AGN) whose continua have significant non-stellar contributions. The SDSS 'galaxy' class, however, includes spectra with emission line strength ratios characteristic of Type 2 AGN and low ionization nuclear emission regions (LINERs).
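The deprojected, normalized offsets used throughout can be computed with a standard inclined-disk recipe. The paper does not spell out its exact deprojection formula, so the position-angle rotation and axis-ratio stretch below are assumptions, shown only as a sketch.

```python
import numpy as np

def deprojected_offset(dx, dy, pa_deg, axis_ratio, r_half):
    """Deproject an offset (dx, dy) from the host centre (in arcsec or
    kpc), assuming an inclined thin disk with position angle pa_deg and
    observed axis ratio b/a, then normalise by the g'-band half-light
    radius r_half."""
    pa = np.radians(pa_deg)
    # rotate into the galaxy frame: x' along major axis, y' along minor
    x_maj = dx * np.cos(pa) + dy * np.sin(pa)
    y_min = -dx * np.sin(pa) + dy * np.cos(pa)
    # stretch the foreshortened minor-axis component back by 1/(b/a)
    r_deproj = np.hypot(x_maj, y_min / axis_ratio)
    return r_deproj / r_half
```

For a face-on galaxy (b/a = 1) this reduces to the plain projected offset over the half-light radius; for an inclined disk, offsets along the apparent minor axis are stretched back to their in-plane values.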
The emission line patterns associated with AGN activity are significantly degenerate with variation in oxygen abundance, so AGN line ratios preclude metallicity measurements. We use the classifications of fiber spectra as star forming, low S/N star forming, composite, AGN, or low S/N AGN made available by the MPA-JHU group following Brinchmann et al. (2004). That analysis uses each spectrum's position on the Baldwin et al. (1981) diagram.

Extinction Estimated from Balmer Ratios

From the fiber spectra (closest in deprojected offset to the SN sites), we estimate host galaxy reddening A_V using the Balmer decrement (Hα/Hβ), assuming the R_V = 3.1 Cardelli et al. (1989) extinction law. Following Osterbrock (1989), we assume a Case B recombination ratio of 2.85 when spectra are classified as star forming or low S/N star forming, and a ratio of 3.1 when spectra are classified as composite, AGN, or low S/N AGN.

Metallicity and Specific SFR Measurements

Our analysis uses both (a) abundances and specific star formation rate estimates available from the MPA-JHU collaboration for SDSS fiber spectra and (b) abundances we compute using the Pettini & Pagel (2004) metallicity calibration. We only use galaxies with S/N > 3 in Hβ, Hα, [N ii] λ6584, and [O iii] λ5007, as designated by the MPA-JHU analysis. For abundance measurements, we only analyze spectra classified as star forming. For specific SFR estimates, we use star forming, composite, and AGN spectra.

MPA-JHU Metallicity and Specific SFR

To extract an oxygen abundance and specific SFR from a spectrum, the MPA-JHU collaboration first uses Charlot & Longhetti (2001) stellar population synthesis and photoionization models to calculate an extensive library of line strengths spanning potential effective gas parameters including gas density, temperature, and ionization, as well as the dust-to-metal ratio.
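The Balmer-decrement estimate of A_V can be written compactly. The extinction-curve values k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 are approximate Cardelli et al. (1989) R_V = 3.1 numbers assumed here for illustration, and the clipping of negative reddening is an implementation choice, not from the paper.

```python
import math

# Assumed approximate Cardelli R_V = 3.1 extinction-curve values:
K_HALPHA, K_HBETA = 2.53, 3.61

def balmer_av(f_halpha, f_hbeta, intrinsic_ratio=2.85):
    """Host A_V from the Balmer decrement.  intrinsic_ratio is the
    Case B value 2.85 for star-forming spectra; the text uses 3.1
    for composite/AGN-classified spectra."""
    observed = f_halpha / f_hbeta
    # observed/intrinsic = 10^{0.4 E(B-V) (k_Hbeta - k_Halpha)}
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(observed / intrinsic_ratio)
    return max(3.1 * ebv, 0.0)   # clip unphysical negative reddening
```

An observed decrement equal to the intrinsic ratio gives A_V = 0; a decrement of 4.0 with the 2.85 intrinsic ratio gives roughly A_V ≈ 1 mag.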
Then the galaxy [O ii], Hβ, [O iii], Hα, [N ii], and [S ii] optical nebular emission lines are fit simultaneously with the library and used to compute metallicity and specific SFR likelihood distributions. Here we use the median of these distributions as the oxygen abundance and specific SFR estimates. We refer to the metallicity estimates as T04 oxygen abundances, in reference to Tremonti et al. (2004), who employed the MPA-JHU values. When emission lines show AGN patterns, metallicity estimates are not possible. For these spectra, the MPA-JHU group uses the strength of the 4000 Å break [see Figure 11 of Brinchmann et al. (2004)] and the ratio Hα/Hβ to estimate the specific SFR (http://www.mpa-garching.mpg.de/SDSS/DR7/sfrs.html). To calibrate the 4000 Å break as a specific SFR proxy, star-forming spectra are placed into bins according to the strength of the 4000 Å break as well as the Hα/Hβ ratio, a proxy for interstellar extinction. The galaxies in each bin are then used to compute the expected specific SFR for each set of parameters. Kauffmann et al. (2003) found that a sample of SDSS spectra of Type 2 AGN with median z ≈ 0.1 and selected according to the criteria we apply here show no evidence of a significant superposed AGN continuum. Schmitt et al. (1999) showed that AGN emission rarely accounts for more than 5% of the continuum of nuclear spectra of nearby galaxies with Type 2-patterned emission lines.

Pettini and Pagel Metallicity

Since we have no prejudice about which emission-line method is most correct, we have also computed abundances using the Pettini & Pagel (2004) (hereafter PP04) prescription. This is based on the relative line strengths of Hβ, Hα, [N ii] λ6584, and [O iii] λ5007, after correcting for dust extinction.
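For reference, the PP04 O3N2 calibration built from these four lines takes the form 12 + log(O/H) = 8.73 − 0.32 × O3N2; a minimal sketch:

```python
import math

def pp04_o3n2(hbeta, oiii_5007, halpha, nii_6584):
    """Pettini & Pagel (2004) O3N2 oxygen abundance:
        12 + log(O/H) = 8.73 - 0.32 * O3N2,
        O3N2 = log10( ([O III] l5007 / Hbeta) / ([N II] l6584 / Halpha) ).
    Line fluxes should be extinction-corrected, although the two
    nearby-in-wavelength line pairs make the result only weakly
    sensitive to that correction."""
    o3n2 = math.log10((oiii_5007 / hbeta) / (nii_6584 / halpha))
    return 8.73 - 0.32 * o3n2
```

When both ratios are equal (O3N2 = 0), the calibration returns 12 + log(O/H) = 8.73, near the solar value; metal-poor, high-excitation H II regions have large O3N2 and correspondingly lower abundances.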
The PP04 indicator relies on lines relatively close in wavelength, reducing its sensitivity to uncertainty in the extinction correction, and does not require the [O ii] λ3727 line, which falls beyond the blue sensitivity of the SDSS spectrograph for objects at z < 0.02. Our measurements trace the Kewley & Ellison (2008) PP04 mass-metallicity relation of SDSS galaxies when stellar mass is plotted against nuclear metallicity for galaxies in the Hubble flow (z > 0.005).

Comparison of Host Abundance Proxies

Oxygen abundances measured from fibers centered on the host galaxy nucleus are, on average, only 0.01 dex (T04) greater than the abundance inferred from the stellar mass with the Tremonti et al. (2004) M-Z relation, with a scatter of 0.14 dex. If we instead select fibers closest in host offset to SN explosion sites, spectroscopic abundances are 0.053 dex (T04) less than abundances estimated from stellar masses, with a scatter of 0.16 dex.

STATISTICAL METHOD

Kolmogorov-Smirnov Statistic

In the following sections, we test the null hypothesis that two samples are drawn from a single underlying distribution using the Kolmogorov-Smirnov (KS) test. The KS test statistic is defined as D = sup_x |F_1(x) − F_2(x)|, the maximum difference between the samples' empirical cumulative distribution functions, where F_n(x) = (1/n) Σ_{i=1}^{n} I(X_i ≤ x). The KS distribution is the distribution of the test statistic D given the null hypothesis that the two distributions are identical. The p-value is the probability of observing a value of the test statistic more extreme than the observed value of D, given the null hypothesis that the two samples are drawn from a single underlying distribution. Low p-values (< 5%) are significant evidence that the underlying distributions are distinct. When two independent samples are drawn from the same distribution, there is, by definition, a 5% random chance of obtaining a p-value less than 5%.
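The two-sample statistic defined above can be computed directly from the empirical CDFs; in practice a library routine such as scipy.stats.ks_2samp supplies both D and the p-value. A minimal sketch of D itself:

```python
import numpy as np

def ks_two_sample_D(x1, x2):
    """Two-sample KS statistic D = sup_x |F1(x) - F2(x)|, where F_n is
    the empirical CDF.  D is attained at one of the sample points, so
    evaluating both CDFs on the pooled sample suffices."""
    x1, x2 = np.sort(np.asarray(x1, float)), np.sort(np.asarray(x2, float))
    grid = np.concatenate([x1, x2])               # pooled evaluation points
    f1 = np.searchsorted(x1, grid, side="right") / len(x1)
    f2 = np.searchsorted(x2, grid, side="right") / len(x2)
    return np.abs(f1 - f2).max()
```

Identical samples give D = 0, fully separated samples give D = 1, and the p-value then follows from the KS distribution for the two sample sizes.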
If we were to make, for example, twenty comparisons among samples drawn from identical distributions, one misleading p < 5% difference would occur by chance on average. The number of independent comparisons we make in this paper should therefore be taken into account when comparisons yield p-values of modest significance (p ≈ 5%). We note that the host properties we measure are correlated (e.g., host color and metallicity), so independent comparisons are fewer than the total number of comparisons.

RESULTS

Instead of placing the numerical values of all statistical Kolmogorov-Smirnov (KS) tests in the following descriptions of results, we list many of them in Table 5, which includes comparisons for all types, and Table 6, which includes comparisons for SN IIb and SN Ic-BL, restricted to only targeted and only galaxy-impartial samples. Tables 7 and 8 list the measurements of the SN host galaxies.

Fig. 2. - Host galaxy u′−z′ color versus u′ surface brightness near the SN location. The top panel plots the fraction of SN of each type with environments bluer than the horizontal axis coordinate, and the right panel plots the fraction of SN of each type whose environments have higher u′-band surface brightness than the vertical axis coordinate. SN IIb environments are bluer than SN Ib, SN Ic, and SN II environments (p = 0.03%, 0.2%, and 0.1%, respectively), and SN Ic-BL environments are bluer than those of SN Ib, SN Ic, and SN II (p = 0.04%, 0.09%, and 0.04%, respectively). SN Ib and SN Ic explode in regions with higher u′-band surface brightness than do SN II (p = 0.6% and 0.6%, respectively), and SN Ic sites have higher u′-band surface brightnesses than SN Ic-BL locations (3.8%). The aperture is the 20 host pixels with S/N > 1 in g′ band nearest the SN location.
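The expected number of spurious detections quoted above is simple arithmetic:

```python
# Twenty independent comparisons of identical distributions at a 5%
# threshold: the expected number of spurious "significant" results is
# n * alpha, and the chance of at least one is 1 - (1 - alpha)^n.
n, alpha = 20, 0.05
expected_false = n * alpha                 # = 1.0 spurious result on average
p_at_least_one = 1 - (1 - alpha) ** n      # ~0.64 chance of at least one
```

So even with no real differences, roughly one p < 5% comparison out of twenty is expected, which is why modest p ≈ 5% results are treated cautiously in the text.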
Host Color and u′ Surface Brightness Near Explosion Site

As can be seen in Figure 2, SN IIb and SN Ic-BL erupt in exceptionally blue environments, while high u′-band surface brightness is typical of SN Ib and SN Ic sites. SN II sites show substantial overlap in color or surface brightness with the other classes. This plot shows u′−z′ color versus u′-band surface brightness, measured inside an aperture consisting of the 20 pixels closest to the SN site with g′-band S/N > 1. The u′−z′ color near the site of SN 2006aj, the SN-LGRB in our sample, was 0.88 mag. Figure 3 helps to explain the exceptionally blue u′−z′ color of SN IIb and SN Ic-BL sites and the high u′-band surface brightnesses near SN Ib and SN Ic sites. At one set of extremes, SN Ic-BL have generally low-mass hosts, while SN IIb explode at larger offsets when they occur in galaxies of large masses. At another extreme, SN Ib and especially SN Ic more often occur inside the g′-band half-light radius of massive galaxies, sites expected to have redder color and high surface brightness.

Host Stellar Mass and SN Host Offsets

Host galaxy mass is a moderately precise (∼0.1 dex) proxy for chemical abundance (e.g., Tremonti et al. 2004), which does not suffer from the AGN selection effects. The hosts of SN (Ib+Ic), excluding SN Ic-BL, are more massive than SN II hosts. The host stellar mass of SN 2006aj, the only SN-LGRB in our sample, was 8.0 × 10^10 M⊙. We find with p = 12% that the SN IIn offset distribution is consistent with the SN II host offset distribution.

Fig. 3. - Stellar masses of host galaxies versus SN host offset, deprojected and normalized by the host g′-band half-light radius. The top panel plots the fraction of each SN type with hosts less massive than the horizontal axis coordinate, and the right panel plots the fraction of SN of each type whose host offsets are greater than the vertical axis coordinate. SN Ic-BL are found in significantly less massive galaxies than are the SN Ib, SN Ic, or SN II (p = 0.1%, 0.8%, and 0.4%, respectively). Host galaxy stellar masses are estimated from PEGASE2 fits to u′g′r′i′z′ host magnitudes. A two-sample KS test finds evidence (p = 0.7%) that SN IIb explode at larger host offsets than SN Ib, among the SN discovered in galaxies with log M > 9.5. SN Ic explode closer to their host centers than SN II (p = 0.5%).

Oxygen Abundance Measurements Closest to SN Positions

To probe the metallicities of the core-collapse hosts, we measure oxygen abundance from the fiber spectrum with deprojected offset most similar to the SN offset. Among the 311 host galaxies with spectroscopic fiber measurements, 139 have multiple SDSS fiber spectra. SDSS fiber spectra generally target the central regions of host galaxies (see Tables 1 and 2), with an average host offset in our sample of 0.45 × r_half-light. The low metallicities shown in Figure 4 for SN Ic-BL and SN IIb hosts and high metallicities for SN Ic hosts are consistent with the patterns we see among the species' colors near explosion sites, host offsets, and host masses.

Every Abundance Measurement

For galaxy-impartial discoveries, SN Ic-BL hosts (n = 3) follow a significantly more metal-poor distribution than the hosts of normal SN Ic (n = 4; p = 2.1%/2.1% for T04/PP04 calibrations). Among the hosts of targeted discoveries, host galaxies of SN IIb (n = 13) follow a significantly more metal-poor distribution than hosts of SN Ib (n = 11; p = 13%/1.8%). Among the hosts of targeted and galaxy-impartial discoveries, host galaxies of SN IIb (n = 13) are more metal-poor than hosts of SN Ib (n = 11; p = 13%/1.8%). The SN II host abundance distribution is more metal-poor than that of the SN Ic hosts, but a selection effect may inflate any difference. A higher fraction of SN II (20 ± 3% (36/156)) than SN (Ib+Ic) host galaxy fiber spectra (9 ± 4% (5/55)) have the emission line ratios of AGN (see Tables 1 and 2), which makes spectra unusable for abundance analysis. AGN occur primarily in massive, metal-rich galaxies (M > 10^10 M⊙; Kauffmann et al. 2003), so rejecting AGN spectra removes a higher fraction of metal-rich SN II hosts than of SN Ib/c hosts. We note that the presence of nuclear activity in a host galaxy does not necessarily mean that every SDSS host spectrum will show contamination, because SDSS fibers are sometimes offset from the nucleus (see Tables 1 and 2). A host galaxy with mass 10^10.5 M⊙, typical of an AGN host, will have an oxygen abundance of ∼9 dex (T04) and ∼8.75 dex (PP04) (e.g., Tremonti et al. 2004). SN IIn hosts follow a similar distribution to that of the entire SN II sample (p = 35%/53%).

Note to Table 5. - P-values from Kolmogorov-Smirnov two-sample comparisons that include both targeted SN and galaxy-impartial SN discoveries. The two rows below "T04/PP04" show oxygen abundance statistics computed from spectra whose host offsets are within 3 kpc of the SN host offset. The statistics comparing offsets include only SN found in massive galaxies (log M > 9.5).

Fig. 4. - Host oxygen abundance measured from the SDSS 3" fiber spectrum with host radial offset most similar to that of the SN explosion site. While Tremonti et al. (2004) spectroscopic abundances are plotted, we also measure abundances using the Pettini & Pagel (2004) calibration. Even when we consider only SN discovered by galaxy-impartial surveys, we find a statistically significant difference between the SN Ic-BL and the SN Ic host abundance distributions (p = 2.1%/2.1%, respectively, for the T04/PP04 calibrations). When we consider only SN discovered by targeted surveys, we find a statistically significant difference between the SN IIb and the SN Ib host abundance distributions for one of two abundance diagnostics (p = 13%/1.8%, respectively, for the T04/PP04 calibrations). Evidence for a difference between the SN IIb and SN Ib host distributions strengthens when all SN discoveries are considered (8.5%/0.9%).

Fig. 5. - Host specific SFR estimated from the SDSS 3" fiber spectrum with host radial offset most similar to that of the SN explosion site. The sequence of the spectroscopic classes, arranged in order of the loss of the progenitor's outer hydrogen and helium envelopes (i.e., SN II, SN IIb, SN Ib, SN Ic), exhibits increasing average host galaxy specific SFR (SFR M⊙^−1 yr^−1), measured from SDSS fiber spectra. SN (Ib+Ic) hosts have greater specific SFR than SN II hosts (p = 0.3%). SN Ic-BL hosts have greater specific SFR than SN II hosts (3.5%). SDSS fibers largely sample light within the host galaxy half-light radius and are often centered on the host galaxy nucleus.

Note to Table 6. - P-values and sample sizes from Kolmogorov-Smirnov two-sample comparisons that include all SN discoveries, targeted SN discoveries, galaxy-impartial SN discoveries, or only professional SN discoveries. The difference between the metallicity distributions of the hosts of Type Ic-BL and Type Ic SN is statistically significant even when including only SN discovered by galaxy-impartial surveys. The differences between the SN Ib and SN IIb host galaxy u′−z′ color distributions as well as host galaxy metallicities are statistically significant when including only SN discovered by targeted surveys. The statistics from comparing host offsets include only SN found in massive galaxies (log M > 9.5).

When Fiber and SN Host Offsets Are Similar

Most galaxies have metallicity gradients, with abundance declining away from the galaxy center. van Zee et al. (1998) found, for example, a mean radial abundance gradient of −0.052 dex kpc^−1 for a sample of 11 NGC host galaxies. To assemble improved proxies for metallicity at the SN location, we selected fibers whose deprojected host offset (away from the galaxy center) was within 3 kpc of that of a SN. Among these fibers, the SN Ic-BL host spectra are significantly more metal-poor than both the SN Ib and SN Ic spectra.
Without making a correction for the difference between the fractions of SN II and SN Ib/c host SDSS spectra with no abundance estimate due to AGN contamination, the SN II host fibers (with similar host offset) are significantly less metal-rich than the SN Ic host fibers. Median offset differences between the SDSS fiber and the SN location (in kpc): SN Ib (1.02), SN Ic (1.32), SN Ic-BL (1.56), SN II (1.11), SN IIb (1.66).

Host Specific Star Formation Rate from Fiber Spectra

SDSS spectra provide an estimate of the specific SFR (SFR M⊙^−1 yr^−1) within the aperture of the fiber, which generally targets the host galaxy within the g′-band half-light radius. As can be seen in Figure 5, there is a progression of increasing specific SFR from SN II to SN Ib to SN Ic host spectra. SN Ic-BL host spectra also have significantly greater specific SFR than SN II host spectra. Using visual inspection, we identified fibers that target the host galaxy nucleus to z = 0.04, where the 3" fiber aperture primarily samples nuclear light. These spectra yield evidence that the nuclei of SN (Ib+Ic) host galaxies have greater specific star formation rates than those of SN II host galaxies (p = 10%). Strong central star formation among SN (Ib+Ic) hosts may overwhelm AGN-patterned emission and explain the relatively low AGN fraction among SN (Ib+Ic) host galaxies.

Extinction Inferred from Spectra

Although SN Ic hosts have stronger specific SFR within the half-light radius, the region where most SN Ic explode, the sites of SN Ic are not bluer than those of SN II (see Figure 2). Figure 6 shows that the high extinction of SN (Ib+Ic) host galaxies, measured from host spectra, may redden ongoing star formation in SN Ic host galaxies.

Fig. 7. - Ratio of stripped-envelope SN to SN II versus oxygen abundance (T04 calibration). The comparatively high fraction of SN (Ib+Ib/c+Ic) to SN II at subsolar metallicity in the right lower panel favors contributions from a binary progenitor population or explosions even after collapse to a black hole. Color points correspond to spectroscopic metallicity measurements, and gray points correspond to metallicities estimated from stellar masses using the Tremonti et al. (2004) mass-metallicity relation. The comparatively high fraction of SN II host fiber spectra with contamination from AGN activity (present only in massive, metal-rich galaxies) excludes a considerable fraction of metal-rich SN II host galaxies, inflating the apparent fraction of stripped-envelope SN in metal-rich galaxies (color points). Indeed, the stripped-envelope fraction is smaller using metallicities estimated from host galaxy stellar masses (gray), which do not suffer from an AGN selection effect. The dashed line is the Eldridge et al. (2008) prediction for binary progenitors; the dotted line is the Eldridge et al. (2008) prediction for non-rotating single progenitors; and the solid and dash-dot lines are the Georgy et al. (2009) predictions for single, rotating progenitors (where a minimum helium envelope of 0.6 M⊙ separates SN Ib from SN Ic progenitors). Whether core collapse to a black hole can yield a SN explosion is not clear (e.g., Fryer et al. 1999), especially if high angular momentum does not support an accretion disk (Woosley et al. 1993). The Georgy et al. (2009) solid line prediction is where core collapse to a black hole produces SN, while the dash-dot prediction is where core collapse to a black hole yields no SN. Vertical error bars reflect Poisson statistics, while horizontal bars reflect the range of metallicities in each bin, with the position of the vertical bar corresponding to the mean Z in the bin. Here Z⊙ = 8.86 from Delahaye et al. (2010).

The host galaxies of SN IIb have less extinction than SN (Ib+Ic) host galaxies.
The average extinction difference between SN (Ib+Ic) and SN IIb hosts is A V ≈ 0.5 mag, a u ′ -z ′ reddening of ∼0.6 mag. The approximately similar internal extinctions of SN II and SN IIb hosts, however, suggest that the stellar populations near SN IIb likely are intrinsically bluer than those near SN II sites. SN (Ib+Ic) host reddening is consistent (p = 45%) with that estimated along the line-of-sight to 19 SN (Ib+Ic) from their light curve colors by Drout et al. RELATIVE FREQUENCIES OF CORE-COLLAPSE SN AS A FUNCTION OF METALLICITY We plot the ratio of stripped-envelope SN (including SN IIb) to SN II in our sample with increasing host galaxy oxygen abundance in Figure 7. Vertical error bars show the Poisson uncertainties, while horizontal bars indicate the range of metallicities in each bin. The color points are calculated from successful spectroscopic metallicity measurements, while the gray points are estimated using stellar mass as a metallicity proxy, applying the Tremonti et al. (2004) mass-metallicity relation. AGN emission, present disproportionately in SN II host spectra, is found primarily in high-mass, highmetallicity galaxies. This selection effect misleadingly inflates the apparent ratio SN (Ib+Ic) / SN II (color points) in the highest metallicity bin. Indeed, the ratio at high metallicity calculated instead using stellar masses as a proxy (which has no similar selection effect) is sig- nificantly lower (gray points). Earlier efforts using SDSS fiber spectra (i.e., Prieto et al. 2008), which also exclude AGN-contaminated spectra, have not noted this strong selection effect at high metallicity. While we have attempted to identify and limit systematic effects, we cannot fully know the selection biases (e.g., from classification, detection) that may affect the measured ratios. We compare the relative rates of SN types in our sample to the model predictions for single, rotating progenitors (Georgy et al. 
2009), single, nonrotating progenitors (Eldridge et al. 2008), and binary progenitors (Eldridge et al. 2008). Plotted Georgy et al. (2009) predictions were made with the assumption that a minimum helium envelope of 0.6 M⊙ separates the progenitors of SN Ib and SN Ic. Because core collapse to a black hole may not yield a SN explosion (e.g., Fryer et al. 1999), especially if high angular momentum does not support an accretion disk (Woosley et al. 1993), Georgy et al. (2009) calculated predictions where viable SN occur after core collapse to (a) only neutron stars and (b) neutron stars and black holes. These predictions adopted 2.7 M⊙ as the maximum mass of a neutron star (Shapiro & Teukolsky 1983) and use the Hirschi et al. (2005) relation between neutron star mass and the mass of the carbon-oxygen core. Model predictions are parameterized by Z/Z⊙, requiring us to subtract the solar value from 12+log(O/H) estimates for each host galaxy to compute log(Z/Z⊙). The value of the solar metallicity is, however, not well constrained. Atmospheric modeling favors lower solar values (e.g., 12+log(O/H) = 8.69; Asplund et al. 2009) than helioseismic analyses (e.g., 12+log(O/H) = 8.86; Delahaye et al. 2010). Here we use the helioseismic value of Delahaye et al. (2010). Returning to our data, we note that the spectroscopic oxygen abundance measurements should, on average, be overestimates of the oxygen abundance at the SN site because the SDSS fibers are concentrated toward the inner regions of the galaxies. Likewise, abundances calculated from host masses and the Tremonti et al. (2004) M−Z relation should also be overestimates, because the Tremonti et al. (2004) relation is a fit to SDSS stellar masses and fiber metallicities. Wind-driven mass losses by single stars at low metallicity are thought to be comparatively modest (e.g., Eldridge et al. 2008; Smartt 2009). To explain the presence of stripped-envelope SN in low-metallicity environments in Figure 7, model comparison requires that collapse to a black hole or binary stripping be a viable route to stripped-envelope explosions. Single star models predict that the only stars that lose their outer envelopes at low metallicity are sufficiently massive that they collapse to black holes (e.g., Eldridge et al. 2008). Smith et al. (2011) note that single star models use constant rates of wind-driven mass loss substantially greater than those observed, although episodic mass loss may speed loss of the outer envelopes.
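The binned ratios in Figure 7 rest on three simple pieces of arithmetic: a Poisson uncertainty on a ratio of counts, the Tremonti et al. (2004) mass-metallicity fit used for the gray points, and subtraction of the solar value (12+log(O/H)⊙ = 8.86) to compare with model curves in Z/Z⊙. A minimal sketch follows; the polynomial coefficients are the commonly quoted Tremonti et al. (2004) fit, and the counts in the usage example are illustrative, not the paper's:

```python
import math

def t04_mass_metallicity(log_mstar):
    """Median 12+log(O/H) at a given stellar mass from the commonly
    quoted Tremonti et al. (2004) fit,
    12+log(O/H) = -1.492 + 1.847 x - 0.08026 x^2, x = log10(M*/Msun)
    -- the proxy behind the gray points in Figure 7."""
    return -1.492 + 1.847 * log_mstar - 0.08026 * log_mstar ** 2

def log_z_zsun(oh12, oh12_sun=8.86):
    """Convert 12+log(O/H) to log(Z/Zsun), subtracting the
    helioseismic solar value of Delahaye et al. (2010)."""
    return oh12 - oh12_sun

def ratio_with_poisson_error(n_stripped, n_ii):
    """Stripped-envelope to SN II ratio in one abundance bin, with the
    uncertainty from propagating sqrt(N) Poisson errors on each count."""
    r = n_stripped / n_ii
    return r, r * math.sqrt(1.0 / n_stripped + 1.0 / n_ii)
```

For example, a hypothetical bin containing 9 stripped-envelope SN and 36 SN II gives a ratio of 0.25 ± 0.09.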
Lower wind-loss rates would imply a diminished fraction of single progenitors. Splitting our sample in two at z = 0.015, the same trends persist in both the low and high redshift subsamples, providing some evidence that they do not result from luminosity-dependent selection effects.

TESTS AND POTENTIAL SYSTEMATIC EFFECTS

9.1. Fiber Aperture Coverage

SDSS spectroscopic fiber apertures have a fixed radius of 1.5". At increasing redshift, this aperture radius corresponds to larger physical scales: 0.3 kpc at z = 0.01; 0.6 kpc at z = 0.02; and 1.17 kpc at z = 0.04. For a sample of 11 NGC spiral galaxies, van Zee et al. (1998) found a mean radial abundance gradient of -0.052 dex kpc⁻¹. While metallicity gradients vary among galaxies and have some dependence on, for example, host galaxy morphology (e.g., Kewley et al. 2006) and the metallicity calibration (e.g., Moustakas et al. 2010), we use this as a representative value. Within the targeted sample (z < 0.023), nuclear spectroscopic fibers extend at most 0.68 kpc away from the host center, corresponding to systematic shifts of order only ∼0.025 dex. Galaxy-impartial SN discoveries (to z < 0.08) account for a significant fraction of only the SN Ic-BL sample. The difference between the median abundances for SN Ic-BL and SN Ic hosts is ∼0.5 dex, substantially greater than an aperture effect may yield.

9.2. Classification

There may be variation among the classification practices of the different surveys that contribute to our samples. A concern is that surveys that monitor different host galaxy populations (e.g., galaxy-impartial and targeted) could have different classification practices, such as use of automated classification tools (e.g., SNID) or multi-epoch spectroscopic follow-up. For instance, the helium lines that identify SN Ib often emerge only after a couple of weeks (e.g., Li et al. 2011b).
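The aperture arithmetic in the Fiber Aperture Coverage discussion above amounts to converting the 1.5" fiber radius to a physical scale and multiplying by the adopted abundance gradient. A sketch, assuming a simple low-redshift Hubble-law distance D ≈ cz/H0 with H0 = 70 km s⁻¹ Mpc⁻¹ (the text does not state its adopted cosmology, so the z = 0.04 value comes out slightly larger than the quoted 1.17 kpc):

```python
import math

C_KM_S = 2.998e5             # speed of light [km/s]
H0 = 70.0                    # Hubble constant [km/s/Mpc]; an assumption
FIBER_RADIUS_ARCSEC = 1.5    # SDSS spectroscopic fiber radius

def fiber_radius_kpc(z):
    """Physical radius subtended by a 1.5-arcsec fiber at redshift z,
    using the small-angle, low-redshift approximation D = cz/H0."""
    d_mpc = C_KM_S * z / H0                          # distance [Mpc]
    theta = FIBER_RADIUS_ARCSEC * math.pi / (180.0 * 3600.0)  # [rad]
    return d_mpc * 1000.0 * theta                    # [kpc]

def abundance_shift_dex(z, gradient=-0.052):
    """Systematic 12+log(O/H) shift if the fiber samples gas one fiber
    radius from the galaxy center, using the van Zee et al. (1998)
    mean gradient of -0.052 dex/kpc."""
    return gradient * fiber_radius_kpc(z)
```

At the targeted-sample limit z = 0.023 this gives a fiber radius of ∼0.7 kpc and an abundance shift of a few hundredths of a dex, consistent in magnitude with the estimate in the text.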
9.3. Fiber Targeting

The SDSS object detection algorithm mistakenly split many galaxies of large angular size into two or more components [see Fig. 9 of Blanton et al. (2005)]. The SDSS targeting algorithm then placed fibers on these false components, sometimes at significant offset from the true galaxy center. The error rate of these algorithms could depend on galaxy morphology (e.g., irregularity or an interacting neighbor), so we checked whether the offsets of fiber measurements depend on SN type. We found no evidence of strong variation with SN type. SDSS fibers often target the local maxima of galaxy light distributions, including host nuclei and bright HII regions. In our sample, fibers have a mean offset of 0.45 × r_half-light, while matched fibers (where |r_env − r_fiber| < 3 kpc) have a mean offset of 0.55 × r_half-light. Therefore, fiber sites are highly likely to be more metal-rich on average than they would be if SDSS fibers sampled galaxy light distributions more democratically. However, the fibers' offset distribution does not vary strongly with SN type. The lifetimes of HII regions may be shorter than those of the progenitors (private communication; N. Smith), and the signal at the SN site may be too weak. Programs that take host spectra at the location of the SN (e.g., Anderson et al. 2010; Modjaz et al. 2011; Leloudas et al. 2011) may only extract a metallicity estimate when there is sufficient nebular emission through the slit. Any such S/N requirement could possibly act as a type-dependent selection effect. The SDSS targets bright nuclei or HII regions, moderating any such effect in our analysis.

DISCUSSION

Our study of the host environments of core-collapse SN reported to the IAU or discovered by the PTF has revealed several statistically significant patterns.
While we have constructed the sample to limit the influence of potential selection effects, we have limited knowledge of the contributing SN search programs (e.g., cadence, limiting magnitude, classification methods). We have found that the u′-z′ colors of the SN IIb and SN Ic-BL (without an associated LGRB) environments are blue in comparison to those of other stripped-envelope SN environments (see representative images in Figures 8, 9, and 10). The host specific SFR (SFR M⊙⁻¹ yr⁻¹) is higher, on average, for types whose SN spectra indicate more complete loss of the progenitor's outer envelopes (i.e., SN Ib, SN Ic, SN Ic-BL). Spectroscopy also shows that, in our sample, SN Ic-BL host galaxies are more metal-poor than the hosts of normal SN Ic explosions, while SN IIb hosts also are more metal-poor (for one of two abundance diagnostics) and have less extinction, on average, than SN Ib or SN Ic host galaxies. A surprising effect is that spectroscopic contamination by AGN is higher among SN II hosts than SN (Ib+Ic) hosts. This is important to the correct interpretation of host galaxy properties from SDSS spectroscopy. This study is statistical, and we have shown only that samples are drawn from differing underlying distributions in our comparisons. The distinctions we present are consistent with even considerable variation among the environments of individual examples of each SN type.

Synthesizing Patterns

There are strong connections among the type-dependent patterns in host galaxy photometry and spectroscopy:

• Host galaxies of SN Ic-BL in the sample generally have low mass and high specific SFR, helping to explain the blue colors at broad-lined SN Ic explosion sites. The SN IIb typically are found beyond the g′-band half-light radius in massive hosts, offering explanation for the blue colors of their sites.
The SN Ic-BL and SN IIb host galaxy fiber spectra have lower abundances than the SN Ic and SN Ib host galaxy fiber spectra closest in radius to the explosion site, respectively, although the SN IIb-SN Ib difference is significant for only one of two strong-line diagnostics.

• SN Ic often erupt at small offsets in massive galaxies with strong specific SFR, high oxygen abundance, and high extinction measured from fiber spectra. These fibers generally collect light from within the host's g′-band half-light radius. These small offsets help to explain the high surface brightnesses near SN Ic explosion sites. High interstellar reddening helps to explain why SN Ic sites have colors similar to those of SN II, despite their hosts' high specific SFR.

The u′-z′ color and u′ surface brightness near SN explosion sites considerably separate SN IIb, SN Ib, and SN Ic (see Figure 2). SN Ic sites predominantly have high surface brightness, while SN IIb populate lower-surface-brightness but extremely blue environments. SN Ib largely occupy the parameter space between the SN IIb and the SN Ic. By contrast, however, the explosion sites of SN II have no specific locus in the color-brightness plane. These patterns suggest that the fraction of the overall stripped-envelope SN (Ib+Ic+IIb) to Type II SN may not vary strongly with environment. Perhaps the stars that would explode as one stripped-envelope species at a given mass in one environment instead explode as another stripped-envelope species in other environments where, for example, the chemistry is different.

SN Ib, SN IIb, and SN II Environments

The best-studied example of a Type IIb, SN 1993J, exploded at a distance of only 3.6 Mpc in M81 (Filippenko et al. 1993; Matheson et al. 2000), and archival imaging revealed that its progenitor was a K-type supergiant (Aldering et al. 1994). HST imaging after the SN disappeared found evidence for a B-type supergiant binary companion (Van Dyk et al. 2002; Maund et al.
2004). More recent studies of the sites of other SN IIb suggest, however, that a fraction of the SN IIb population may erupt from massive single stars (e.g., Crockett et al. 2008). Chevalier & Soderberg (2010) analyzed the radio emission, optical shock breakout, and nebular emission of a sample of SN IIb to constrain the extent of their progenitors' envelopes and the properties of their circumstellar material. They favor two progenitor populations: (a) extended progenitors (SN 1993J, SN 2001gd) with hydrogen envelope mass greater than ∼0.1 M⊙ and slow, dense winds and (b) more compact and massive Wolf-Rayet progenitors (SN 1996cb, SN 2001ig, SN 2003bg, SN 2008ax, and SN 2008bo) with a less massive hydrogen envelope and lower density winds. PTF11eon/SN 2011dh, a SN IIb (Arcavi et al. 2011; Marion et al. 2011), was recently discovered by amateur astronomer Amadee Riou in M51, for which pre-explosion HST imaging of the SN site exists. Analysis of the archival images finds evidence for a supergiant with T_eff ≈ 6000 K at or near the SN site (Van Dyk et al. 2011; Maund et al. 2011). Radio and X-ray observations (Soderberg et al. 2012) and the optical spectroscopic and photometric evolution (Arcavi et al. 2011; Marion et al. 2012, in preparation) both favor a compact progenitor, however, suggesting that this star may be a binary companion or not associated with the SN. Our analysis finds three statistically significant, plausibly related patterns in the host environments of SN IIb: SN IIb environments are bluer than the environments of SN Ib, SN Ic, and SN II; their explosion sites may be more metal-poor than those of SN Ib or SN Ic (significant for one of two abundance diagnostics); and their host galaxy interstellar extinction is less than that of SN (Ib+Ic). These trends are statistically significant even when we analyze only the locations of targeted SN.
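The significance levels quoted throughout are p-values from two-sample Kolmogorov-Smirnov comparisons (see, e.g., the Figure 3 caption). A minimal, self-contained sketch of such a test follows; the asymptotic p-value approximation is the standard Numerical Recipes form, and in practice a library routine such as scipy.stats.ks_2samp would be used instead:

```python
import math

def ks_2samp(x, y):
    """Two-sample KS statistic D and an asymptotic two-sided p-value.
    A sketch adequate for moderate sample sizes; ties are handled only
    approximately."""
    x, y = sorted(x), sorted(y)
    n1, n2 = len(x), len(y)
    i = j = 0
    d = 0.0
    # Merge the two sorted samples, tracking the maximum gap between
    # the empirical cumulative distribution functions.
    while i < n1 and j < n2:
        if x[i] <= y[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    if lam < 1e-12:
        return d, 1.0
    # Asymptotic series for the Kolmogorov distribution.
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))
```

Two fully disjoint samples give D = 1 and a tiny p; heavily overlapping samples give p near 1, i.e., no evidence that they are drawn from different underlying distributions.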
An unambiguous implication of the exceptionally blue colors of SN IIb environments is that the Type IIb progenitor population is distinct from that of Type Ib explosions. Lack of hydrogen features near maximum light in SN Ib spectra may reflect a more extensive loss of the progenitor's hydrogen envelope. Comparatively metal-poor SN IIb host galaxies suggest that metals may play an important role in achieving this loss of the outer envelope. The SN IIb population may erupt from a combination of massive single stars and progenitors in close binary systems, so a possibility is that the blue colors of SN IIb environments indicate higher binary fractions. Although the current examples of each class are few, future efforts may be able to draw distinctions between the environments of the compact and extended SN IIb progenitors proposed by Chevalier & Soderberg (2010). Li et al. (2011a) recently reported that the hosts of SN IIb detected by LOSS had greater K-band luminosities than SN II-P hosts (with p = 6.9%). Lower SN IIb progenitor metallicities are consistent with the PTF's smaller fraction of SN IIb and SN Ic-BL in 'giant,' presumably metal-rich, galaxies than in 'dwarf' galaxies: 1 SN Ib, 3 SN IIb, 2 SN Ic-BL, and 9 SN II in 'dwarf' galaxies, versus 2 SN Ib, 2 SN IIb, 7 SN Ic, 1 SN Ic-BL, and 42 SN II in 'giant' galaxies (Arcavi et al. 2010).

SN Ic-BL Environments

Type Ic-BL are the SN that have been associated with coincident LGRB explosions (Galama et al. 1998; Matheson et al. 2003; Stanek et al. 2003; Hjorth et al. 2003; Malesani et al. 2004; Modjaz et al. 2006; Sanders et al. 2012b; see Woosley & Bloom 2006 and Modjaz 2011 for reviews). Modjaz et al. (2008) showed that SN Ic-BL with associated LGRB prefer more metal-poor environments than do SN Ic-BL without an LGRB (but see Levesque et al. 2010).
We find that host galaxies of SN Ic-BL (without an associated LGRB) follow a significantly more metal-poor distribution than the hosts of normal SN Ic (or SN Ib) explosions, even when only galaxy-impartial discoveries are considered. The colors of SN Ic-BL local environments also follow a bluer distribution than those of SN Ic, further evidence for different progenitor populations. SN Ic-BL host galaxies have strong specific SFRs, similar to those of normal SN Ic. Lower Type Ic-BL progenitor oxygen abundances may imply reduced rates of wind-driven mass loss, potentially enabling SN Ic-BL progenitors to retain greater angular momentum (e.g., Kudritzki 2002; Heger et al. 2003; Eldridge & Tout 2004; Vink & de Koter 2005). High angular momentum before the explosion may be important to the production of high velocity ejecta (Woosley et al. 1993; Thompson et al. 2004). Nonetheless, the means by which SN Ic-BL progenitors shed their outer envelopes, if not through metal-driven winds, needs explanation and may involve Roche lobe overflow (Podsiadlowski et al. 1992; Nomoto et al. 1995), stellar mergers (Podsiadlowski et al. 2010), or perhaps deep mixing. Here our measurements support a picture where both SN Ib and SN Ic have more metal-rich hosts on average than SN Ic-BL, consistent with the host galaxy magnitudes measured by Arcavi et al. (2010). This presents a contrast with the results of Modjaz et al. (2011), who recently measured the oxygen abundances at the sites of SN Ic-BL, SN (Ib+IIb), and SN Ic. There the SN Ic-BL distribution falls intermediate between those of SN (Ib+IIb) and SN Ic, although it is more similar to the comparatively metal-poor SN (Ib+IIb) distribution, and neither comparison is statistically significant. These contrasting trends may relate to the fact that Modjaz et al.
(2011) constructed their samples for each SN type from approximately equal numbers of galaxy-impartial and targeted SN discoveries, or to the inclusion of SN IIb (which we find inhabit metal-poor environments) with SN Ib. The Modjaz et al. (2011) measurements were also taken at the explosion site, which may often differ significantly from the host abundance measured from SDSS fiber spectra (0.13 dex average disagreement with nuclear fiber measurements). Svensson et al. (2010) found that host galaxies of LGRBs had smaller stellar masses than core-collapse SN hosts and had high surface brightness and more massive stellar populations. The only SN-LGRB that met our sample criteria, SN 2006aj, has low host stellar mass and comparatively blue u′-z′ color near the explosion site.

SN Ib, SN Ic, and SN II Environments

In an earlier paper (Kelly et al. 2008), we showed that, while the positions of the other core-collapse SN follow the distribution of their hosts' light, Type Ic SN trace the brightest regions of their host galaxies in a pattern similar to that followed by LGRB (Fruchter et al. 2006). Possible explanations for this pattern include shorter lifetimes and higher masses of SN Ic progenitors (Raskin et al. 2008; Leloudas et al. 2010; Eldridge et al. 2011) as well as a preference for metal-rich regions near the centers of hosts. Anderson & James (2008) showed, subsequently, that SN Ic also track their hosts' Hα emission more closely than SN II (their comparison with SN Ib lacked statistical significance). The SDSS fiber spectra of core-collapse host galaxies, which generally sample inside of the g′-band half-light radius, reveal an increasing progression of specific SFR from SN II to SN Ib to SN Ic (and SN Ic-BL) hosts. This pattern persists when we study only the spectra from fibers targeting the host nucleus. SN Ic explode at comparatively small host offset, linking them to the strong star formation near their hosts' centers.
We find that the central star formation that yields SN Ic generally has high chemical abundance and extinction from interstellar dust. A SN Ic progenitor population tracking high metallicity would be expected to explode in massive galaxies with strong star formation in metal-rich gas near their centers, the pattern we observe. The colors of SN Ib and SN Ic explosion sites may offer evidence that their progenitors are also younger and more massive than the progenitors of SN II. The distribution of the apparent u′-z′ color at SN Ib and SN Ic sites is similar to that at SN II sites. However, we find that SN (Ib+Ic) host galaxies have higher interstellar extinction (ΔA_V ≈ 0.5 mag). This suggests that SN (Ib+Ic) sites have intrinsically bluer color than SN II sites, perhaps indicative of younger progenitor stellar populations. SN Ib explosion sites have higher u′-band surface brightnesses than SN II sites, while SN Ib host galaxies generally have lower abundance than SN Ic in our sample. There is no statistically significant difference between the SN Ib and SN Ic host offset distributions in our sample (p = 67%), which may imply that host offset cannot, on its own, explain the uniquely strong association of SN Ic with bright host galaxy pixels (Kelly et al. 2008). While analyses of pre- and post-explosion imaging have not yet identified a progenitor of a SN Ib or SN Ic, red supergiants have been found at the sites of SN II-P explosions (e.g., Barth et al. 1996; Van Dyk et al. 1999, 2003a, 2003b; Smartt et al. 2001, 2003, 2004; Li et al. 2005, 2007; Maund & Smartt 2005). Smartt et al. (2009) favor an 8.5-16.5 M⊙ mass range for SN II progenitors, although extinction along the line of sight to the progenitors is not well constrained (e.g., Walmswell & Eldridge 2011). Smith et al. (2011) note that Wolf-Rayet stars in binary systems, possible progenitors of SN Ib and SN Ic, are expected to be less luminous than single Wolf-Rayet stars.
Brighter companions may outshine Wolf-Rayet progenitors, although mass-gaining companions may, in some cases, explode first (Podsiadlowski et al. 1992; Eldridge et al. 2011). Even for progenitors with close binary companions, metallicity and mass are expected to be important in determining the composition of the outer envelope, even though substantial mass loss may occur through Roche lobe overflow (Smith et al. 2011; Eldridge et al. 2011). Prantzos & Boissier (2003) and Boissier & Prantzos (2009) found that SN (Ib+Ib/c+Ic) hosts have greater absolute M_B luminosities than SN II hosts. Prieto et al. (2008) presented the first comparison between the oxygen abundances of SN (Ib+Ic) and SN II host galaxies with large sample sizes. Using the T04 metallicities available for the SDSS DR4 spectra, they found p = 5% evidence for a difference (the sample may not have been large enough to determine the effect of AGN contamination). Van den Bergh (1997), Tsvetkov et al. (2004), Hakobyan et al. (2009), James (2009), and Leaman et al. (2011) have found that SN (Ib+Ib/c+Ic) occur preferentially toward galaxy centers, where oxygen abundances are generally higher. Habergham et al. (2010), examining 178 host galaxies for evidence of interaction, and Anderson et al. (2011), in a study of SN sites in Arp 299, have explored explanations for these patterns. Modjaz et al. (2011) find a significant difference (∼0.2 dex on average) between the oxygen abundances at the sites of 12 SN Ic and a mixed sample of 16 SN (Ib+IIb) for one of three oxygen abundance calibrations (although see Anderson et al. (2010) and Leloudas et al. (2011)). When only abundances measured at the SN site from these three studies are compared, a significant difference between SN Ib and SN Ic metallicities computed with the Pettini & Pagel (2004) diagnostic is evident (M. Modjaz, private comm. and in preparation).

SN IIn Environments

Among our set of host measurements, we find no statistically significant differences between the characteristics of SN IIn host environments and those of normal SN II. Anderson & James (2009), who have also studied SN IIn explosion sites, found no significant difference between the mean radial offsets of 12 SN IIn and 35 SN IIP from the host galaxy center. Narrow line emission characterizes SN IIn spectra (Schlegel 1990) and is thought to be the result of the interaction of the ejecta with high density surrounding material. The existence of dense circumstellar material likely indicates strong pre-explosion mass loss (e.g., Chugai & Danziger 1994) and can increase the optical luminosity of the SN by thermalizing the emerging blast wave (e.g., Woosley et al.
2007; Smith et al. 2011; van Marle et al. 2010). Luminous Blue Variable (LBV) stars (e.g., η Car), with their high mass loss rates (> 10⁻⁴ M⊙ yr⁻¹), have been suggested as candidate progenitors, although standard stellar modeling positions the LBV period before an ultimate Wolf-Rayet phase (e.g., Langer 1993; Maeder et al. 2005). Dwarkadas (2011) has recently suggested that observations may only present a convincing case for an LBV progenitor in the case of SN 2005gl (Gal-Yam et al. 2007; Gal-Yam & Leonard 2009). Other means of potentially producing regions of high density circumstellar material include, for example, pulsation-driven superwinds from red supergiants (RSGs) (Yoon & Cantiello 2010).

CONCLUSIONS

We have analyzed the properties of the environments of nearby SN reported by targeted and galaxy-impartial searches. Most of our data come from a small number of well-defined surveys, but we have included supernovae discovered by many individuals and groups. It is not possible for us to characterize the systematic effects introduced by each search. However, we have characterized the searches by their fundamental techniques and applied reasonable redshift limits to limit the strength of possible bias. We show that these details do not dominate the patterns we find. The SN IIb and SN Ic-BL in our sample erupt in environments with exceptionally blue color. SN IIb sites often have large host offsets, while SN Ic-BL generally have comparatively low mass host galaxies. By contrast, SN Ib and especially SN Ic environments have less extreme colors, similar to those of SN II sites, but with exceptionally high u′-band surface brightness. SN Ib and SN Ic generally erupt from regions within the g′-band half-light radii of high stellar mass galaxies. The colors and surface brightnesses of SN II as well as SN IIn environments show no strong distinguishing pattern. The centers of SN Ic host galaxies are generally dusty, metal-rich, and have high specific SFR.
Stronger interstellar extinction associated with SN Ic sites may explain why they are not bluer than SN II sites, despite higher specific SFR. The central regions of SN Ib host galaxies are less metal-rich and have smaller specific SFR than those of SN Ic hosts. We find that the SN IIb host galaxy spectra closest in radius to the explosion site in our sample are more metal-poor than the SN Ib host galaxy spectra, although this difference is statistically significant for only one of two strong-line diagnostics. SN Ic-BL host galaxies are also less metal-rich than SN Ic host galaxies, even among only galaxy-impartial discoveries. The specific SFR measured from fiber spectra is higher, on average, for types whose SN spectra indicate more complete loss of the progenitor's outer envelopes (e.g., SN Ic, SN Ic-BL). Even among only spectra of galaxy nuclei, SN (Ib+Ic) host spectra have stronger specific SFR than SN II host spectra. The non-negligible fraction of stripped-envelope SN in low-metallicity host galaxies may indicate that some stripped-envelope SN have binary progenitors or, alternatively, single progenitors that collapse to a black hole. Drout et al. (2011) have instead estimated the line-of-sight extinction from the colors of SN light curves. The interstellar reddening we find from SDSS fiber spectra of SN Ib and SN Ic hosts yields consistent values of A_V, although the SDSS fibers are generally positioned away from the explosion site. AGN emission, which makes spectra unusable for abundance measurements and is found primarily in high-metallicity galaxies, leads us to exclude a larger fraction of SN II (20±3% (36/156)) than SN (Ib+Ic) host spectra (9±4% (5/55)). This produces an overestimate of SN (Ib+Ic) / SN II in high-metallicity environments from SDSS spectra alone. The ratio is lower when we use host stellar mass as an oxygen abundance proxy, impervious to AGN.
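The extinction figures quoted here and earlier (A_V ≈ 1.2 mag through SN (Ib+Ic) fiber apertures corresponding to E(B−V) ≈ 0.4 mag, and A_V ≈ 0.5 mag corresponding to a u′−z′ reddening of ∼0.6 mag) follow from simple conversions. A sketch, assuming a standard R_V = 3.1 law and the Schlegel et al. (1998) A_λ/E(B−V) coefficients for the SDSS u′ and z′ bands (the text does not state which extinction law it adopts):

```python
R_V = 3.1                  # assumed total-to-selective extinction ratio
R_U, R_Z = 5.155, 1.479    # A_lambda / E(B-V) for SDSS u', z' (SFD98)

def ebv_from_av(a_v):
    """Color excess E(B-V) implied by a V-band extinction A_V."""
    return a_v / R_V

def uz_reddening(a_v):
    """Reddening of the u'-z' color produced by extinction A_V."""
    return (R_U - R_Z) * ebv_from_av(a_v)
```

With these assumptions, A_V = 1.2 mag gives E(B−V) ≈ 0.39 mag, and A_V = 0.5 mag reddens u′−z′ by ≈ 0.59 mag, consistent with the values quoted in the text.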
Stellar mass estimates, robust to AGN contamination, provide evidence that SN (Ib+Ic) / SN II increases in more massive, metal-rich galaxies, a trend that retains significance when we consider only targeted SN discoveries. None of the host measurements reveals a strong difference between SN IIn and normal SN II explosion environments. The accelerating rate of SN discovery promises to yield, over the next decade, much larger samples of each of the core-collapse species that we study in this paper. We urge the public archiving of spectra so that analyses can assign SN to consistent spectroscopic classes. Future study of explosion sites, aided by improved position information and uniform classification, will be a powerful tool to study progenitor properties and the evolution of massive stars. Thanks especially to Maryam Modjaz for her perceptive comments as well as revised spectroscopic classifications and to David Burke for help with both supporting observations and editorial feedback. We also thank Peter Challis, Howie Marion, Nadia Zakamska, Georgios Leloudas, Shizuka Akiyama, Steve Allen, Roger Romani, Sung-Chul Yoon, Michael Blanton, Nathan Smith, Anja von der Linden, Mark Allen, and Douglas Applegate for their advice and help. We acknowledge the MPA-JHU collaboration for making their catalog publicly available and Google Sky for help in producing color galaxy images. RPK's supernova research at the Center for Astrophysics is supported by NSF grant AST0907903. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho,

Note. — Discov. Method notes whether the SN was discovered by a targeted (T) or a galaxy-impartial (I) search program. Discov. Pro/Am classifies the discoverers as professional (P) or amateur (A).
A superscript number next to a type provides the origin of any spectroscopic reclassification (1: Modjaz et al. 2012, in prep.; 2: Sanders et al. 2012a; 3: this work). Discoverer is taken from the IAU classification list (http://www.cfa.harvard.edu/iau/lists/Supernovae.html).

Fig. 1.— Redshifts of SN discovered by galaxy-impartial surveys to z = 0.2 (left) and of SN discovered by targeted surveys in our sample (right). To increase the size of the sample, the left plot includes discoveries to z = 0.2, a higher upper limit than the z = 0.08 galaxy-impartial sample limit. Except for cosmic evolution, low-redshift galaxy-impartial surveys image the same galaxy populations at increasing redshift. This plot provides no suggestion that the detection efficiency functions, η(z, T, m_lim, C), for SN II, SN IIb, SN IIn, SN Ib, SN Ic, and SN Ic-BL vary differently with redshift, although the numbers of some species are small. The plot on the right shows the redshift distributions of the SN in our sample discovered by targeted surveys. Among the 15 possible two-sample comparisons, the most extreme difference is between the SN Ic and SN IIn distributions and has p = 1.1%.

Fig. 2.— Host galaxy u′−z′ color versus u′ surface brightness near the SN location. The top panel plots the fraction of SN of each type with environments bluer than the horizontal axis coordinate, and the right panel plots the fraction of SN of each type whose environments have higher u′-band surface brightness than the vertical axis coordinate. SN IIb environments are bluer than SN Ib, SN Ic, and SN II environments (p = 0.03%, 0.2%, and 0.1%, respectively), and SN Ic-BL environments are bluer than those of SN Ib, SN Ic, and SN II (p = 0.04%, 0.09%, and 0.04%, respectively). SN Ib and SN Ic explode in regions with higher u′-band surface brightness than do SN II (p = 0.6% and 0.6%, respectively), and SN Ic sites have higher u′-band surface brightnesses than SN Ic-BL locations (3.8%).
The aperture is the 20 host pixels with S/N > 1 in g′ band nearest the SN location.

Fig. 3.— Stellar masses of host galaxies versus SN host offset, deprojected and normalized by host g′-band half-light radius. The top panel plots the fraction of each SN type with hosts less massive than the horizontal axis coordinate, and the right panel plots the fraction of SN of each type whose host offsets are greater than the vertical axis coordinate. SN Ic-BL are found in significantly less massive galaxies than are the SN Ib, SN Ic, or SN II (p = 0.1%, 0.8%, and 0.4%, respectively). Host galaxy stellar masses are estimated from PEGASE2 fits to u′g′r′i′z′ host magnitudes. A two-sample KS test finds evidence (p = 0.7%) that SN IIb explode at larger host offsets than SN Ib, among the SN discovered in galaxies with log M > 9.5. SN Ic explode closer to their host centers than SN II (p = 0.5%).

Fig. 4.— Host oxygen abundance measured from the SDSS 3" fiber spectrum with host radial offset most similar to that of the SN explosion site. While Tremonti et al. (2004) spectroscopic abundances are plotted, we also measure abundances using the Pettini & Pagel (2004) calibration. Even when we consider only SN discovered by galaxy-impartial surveys, we find a statistically significant difference between the SN Ic-BL and the SN Ic host abundance distributions (p = 2.1%/2.1%, respectively for the T04/PP04 calibrations). When we consider only SN discovered by targeted surveys, we find a statistically significant difference between the SN IIb and the SN Ib host abundance distributions for one of two abundance diagnostics (p = 13%/1.8%, respectively for the T04/PP04 calibrations). Evidence for a difference between the SN IIb and SN Ib host distributions strengthens when all SN discoveries are considered (8.5%/0.9%).

Fig. 5.— Host specific SFR estimated from the SDSS 3" fiber spectrum with host radial offset most similar to that of the SN explosion site.
The sequence of the spectroscopic classes, arranged in order of the loss of the progenitor's outer hydrogen and helium envelopes (i.e., SN II, SN IIb, SN Ib, SN Ic), exhibits increasing average host galaxy specific SFR (SFR M⊙⁻¹ yr⁻¹), measured from SDSS fiber spectra. SN (Ib+Ic) hosts have greater specific SFR than SN II hosts (p = 0.3%). SN Ic-BL hosts have greater specific SFR than SN II hosts (3.5%). SDSS fibers largely sample light within the host galaxy half-light radius and are often centered on the host galaxy nucleus.

Fig. 6.— Host extinction estimated from the 3" SDSS fiber spectrum with host radial offset most similar to that of the SN explosion site. There is significant evidence that SN IIb and SN II hosts have less internal extinction than SN (Ib+Ic) host galaxies (p = 5.1% and 2.1%, respectively). The Drout et al. (2011) SN (Ib+Ic) A_V extinction values, estimated along the line of sight to the SN from light curve color and shape using an empirical model of SN Ib/c photometric color evolution, are consistent with the values we measure, although the spectroscopic fibers are largely not positioned at the SN site. Comparison between the Drout et al. (2011) sample and the SN II host A_V distribution yields p = 2.4%. Here we plot only the Drout et al. (2011) Gold and Silver SN. There is a median A_V ≈ 1.2 mag extinction through SN (Ib+Ic) host fiber apertures (E(B−V) ≈ 0.4 mag).

Fig. 7.— Ratio of stripped-envelope SN to SN II versus oxygen abundance (T04 calibration). The comparatively high fraction of SN (Ib+Ib/c+Ic) to SN II at subsolar metallicity in the right lower panel favors contributions from a binary progenitor population or explosions even after collapse to a black hole. Color points correspond to spectroscopic metallicity measurements, and gray points correspond to metallicities estimated from stellar masses using the Tremonti et al. (2004) mass-metallicity relation. The comparatively high fraction of SN II host fiber spectra with contamination from AGN activity (present only in massive, metal-rich galaxies) excludes a considerable fraction of metal-rich SN II host galaxies, inflating the apparent fraction of stripped-envelope SN in metal-rich galaxies (color points). Indeed, the stripped-envelope fraction is smaller using metallicities estimated from host galaxy stellar masses (gray), which do not suffer from an AGN selection effect. The dashed line is the Eldridge et al. (2008) prediction for binary progenitors; the dotted line is the Eldridge et al. (2008) prediction for non-rotating single progenitors; and the solid and dash-dot lines are Georgy et al. (2009) predictions for single, rotating progenitors (where a minimum helium envelope of 0.6 M⊙ separates SN Ib from SN Ic progenitors). Whether core collapse to a black hole can yield a SN explosion is not clear (e.g., Fryer et al. 1999), especially if high angular momentum does not support an accretion disk (Woosley et al. 1993). The Georgy et al. (2009) solid line prediction is where core collapse to a black hole produces SN, while the dash-dot prediction is where core collapse to a black hole yields no SN. Vertical error bars reflect Poisson statistics, while horizontal bars reflect the range of metallicities in each bin, with the position of the vertical bar corresponding to the mean Z in the bin. Here Z⊙ = 8.86 from Delahaye et al. (2010).

Fig. 8.— SDSS color composite images of 6 SN Ib, SN Ic, and SN II host galaxies in our sample. These include: SN 1990B (Ic), SN 2004cc (Ic), SN 2000de (Ib), SN 2001dc (IIP), SN 2004bs (Ib), SN 2004ec (IIn), and SN 2008ew (Ic). Red cross hatches show SDSS fiber positions yielding oxygen abundance measurements. An additional red circle marks fibers whose host offsets are within 3 kpc of the SN offset.

Fig. 9.— SDSS color composite images of 6 SN IIb in our sample.
Their local environments are substantially bluer than those of SN Ib, SN Ic, and SN II. Red cross hatches show SDSS fiber positions yielding oxygen abundance measurements. An additional red circle marks fibers whose host offsets are within 3 kpc of the SN offset.

Fig. 10.- SDSS color composite images of 6 SN Ic-BL in our sample. SN Ic-BL in our sample occurred preferentially in lower-mass, low-metallicity host galaxies. Red cross hatches show SDSS fiber positions yielding oxygen abundance measurements. An additional red circle marks fibers whose host offsets are within 3 kpc of the SN offset.

relation between neutron star mass and the mass of the carbon-oxygen core. Model predictions are parameterized by Z/Z⊙, requiring us to subtract the solar value from 12+log(O/H) estimates for each host galaxy to compute log(Z/Z⊙). The value of the solar metallicity is, however, not well constrained. Atmospheric modeling favors lower solar values (e.g., 12+log(O/H) = 8.69; Asplund et al. 2009) than helioseismic analyses (e.g., 12+log(O/H) = 8.86; Delahaye et al. 2010). Here we use the helioseismic value of Delahaye et al. (2010).

den Bergh 1997, Tsvetkov et al. (2004), Hakobyan et al. (2009), James (2009), and Leaman et al. (2011) have found that SN (Ib+Ib/c+Ic) occur preferentially toward galaxy centers, where oxygen abundances are generally higher. Habergham et al. (2010), examining 178 host galaxies for evidence of interaction, and Anderson et al. (2011), in a study of SN sites in Arp 299, have explored explanations for these patterns. Modjaz et al. (2011) find a significant difference (∼0.2 dex on average) between the oxygen abundances at the sites of 12 SN Ic and a mixed sample of 16 SN (Ib+IIb) for one of three oxygen abundance calibrations (although see Anderson et al. (2010) and Leloudas et al.
TABLE 1
Galaxy-Impartial Sample Construction

TABLE 2
Targeted Sample Construction

Criterion                 II   IIn  IIb  Ib   Ic   Ic-BL
CC/Asiago/z < 0.023       577  71   46   70   118  11
Discovered 1990-Present   486  65   45   61   113  11
Confident Spec. Type      467  59   40   58   105  11
Not Ca-rich               467  59   40   57   105  11
SN Position               455  58   40   57   102  11
DR8 Imaging Footprint     247  36   22   34   57   9
Sufficient Coverage       234  34   19   31   55   8
No Host Detected          234  34   19   31   55   8
Host Photometry Sample
No Bright Star            230  34   19   30   55   8
No SN Contamination       204  29   17   27   43   7
Amateur                   90   14   9    10   16   5
Host Spectroscopy Sample
Host Fiber                115  18   15   15   27   5
No AGN Contam.            91   13   12   12   26   5
Nuclear Fiber             42   8    7    6    13   3
Offset within 3 kpc       59   6    4    7    17   3
Amateur                   46   3    8    8    11   4

TABLE 3
SN Luminosity Functions

Survey  Ia             II             IIn            IIb            Ib             Ic             Ic-BL         Ib+Ic
LOSS    -18.49 (0.76)  -16.05 (0.17)  -16.86 (0.59)  -16.65 (0.40)  -17.01 (0.17)  -16.04 (0.31)  ...           -16.09 (0.23)
P60     ...            ...            ...            ...            -17.0 (0.7)    -17.4 (0.4)    -18.3 (0.6)   ...

), Experience de Recherche d'Objets Sombres (EROS)
7 http://www.cfa.harvard.edu/iau/lists/Supernovae.html

TABLE 4
Mean Redshifts for Each SN Type

Survey Type       II     IIn    IIb    Ib     Ic     Ic-BL
Galaxy-Impartial  0.042  0.044  0.033  0.041  0.035  0.045
Targeted          0.013  0.014  0.012  0.014  0.011  0.013

Note. - Mean redshifts of each SN type in the galaxy-impartial and targeted samples.

(hereafter BPT) diagram of [O iii] λ5007/Hβ and [N ii] λ6584/Hα line ratios.

TABLE 5
KS p-values for Combined Samples

Measurement  Figure  Samples                     P-value
u′-z′        2       Ic-BL vs. Ib, Ic, II        0.04%, 0.09%, 0.04%
...          ...     IIb vs. Ib, Ic, II          0.03%, 0.2%, 0.1%
u′ SB        2       Ib vs. Ic, II               0.6%, 55%
...          ...     Ic vs. II                   0.6%
log M        3       Ic-BL vs. II, IIb, Ib, Ic   0.4%, 13%, 0.1%, 0.8%
...          ...     IIb vs. II, Ib, Ic          66%, 10%, 47%
...          ...     II vs. Ib, Ic, (Ib+Ic)      0.7%, 19%, 0.5%
Offset       3       Ic-BL vs. II, Ib, Ic        95%, 93%, 26%, 16%
...          ...     IIb vs. II, Ib, Ic          12%, 0.7%, 0.4%
...          ...     II vs. Ib, Ic               2.5%, 2.0%
T04/PP04     4       II vs. Ib, Ic               15%/5.2%, 1.4%/5.2%
< 3 kpc      ...     Ic-BL vs. Ib, Ic            1.6%/0.1%, 2.3%/0.02%
...          ...     II vs.
Ib, Ic 60%/26%, 9.4%/3.0%
SSFR         5       II vs. Ib, Ic, Ic-BL        17%, 3.4%, 3.5%
...          ...     Ib vs. Ic                   26%
A_V          6       (Ib+Ic) vs. IIb, II         5.1%, 2.1%

TABLE 6
KS p-values for Different Samples

Sample                IIb vs. Ib       Ic-BL vs. Ic
u′-z′   All SN        0.03% (22, 36)   0.09% (16, 42)
        Targeted      0.8% (19, 26)    10% (7, 37)
        Impartial     0.9% (4, 9)      4.9% (9, 5)
        No Amateur    1.9% (14, 25)    0.02% (11, 30)
log M   All SN        10% (22, 36)     0.8% (16, 42)
        Targeted      14% (19, 26)     64% (7, 37)
        Impartial     9.5% (4, 9)      63% (9, 5)
        No Amateur    7.9% (14, 25)    0.005% (11, 30)
PP04    All SN        0.9% (13, 16)    0.4% (8, 26)
        Targeted      1.8% (13, 11)    15% (5, 23)
        Impartial     15% (1, 4)       2.1% (3, 4)
        No Amateur    19% (6, 7)       0.07% (4, 19)
T04     All SN        8.5% (13, 16)    5.4% (8, 26)
        Targeted      13% (13, 11)     47% (5, 23)
        Impartial     15% (1, 4)       2.1% (3, 4)
        No Amateur    46% (6, 7)       0.2% (4, 19)
Host    All SN        0.7% (15, 33)    16% (7, 36)
Offset  Targeted      1.7% (16, 26)    1.6% (5, 35)
        Impartial     ... (0, 6)       9.7% (2, 2)
        No Amateur    16% (9, 22)      20% (2, 25)

the Max Planck Society, and the Higher Education Funding Council for England. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, Cambridge University, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.

TABLE 7
Host Galaxy Measurements

SN Type z Discov. Discov.
Discoverer Method Pro/Am PTF09axi II 0.064 I P PTF PTF09bce II 0.023 I P PTF PTF09cjq II 0.019 I P PTF PTF09cvi II 0.030 I P PTF PTF09dra II 0.077 I P PTF PTF09ebq II 0.024 I P PTF PTF09ecm II 0.029 I P PTF PTF09fbf II 0.021 I P PTF PTF09fmk II 0.063 I P PTF PTF09fqa II 0.030 I P PTF PTF09hdo II 0.047 I P PTF PTF09hzg II 0.028 I P PTF PTF09iex II 0.020 I P PTF PTF09ige II 0.064 I P PTF PTF09ism II 0.029 I P PTF PTF09sh II 0.038 I P PTF PTF09tm II 0.035 I P PTF PTF09uj II 0.065 I P PTF PTF10bau II 0.026 I P PTF PTF10bgl II 0.030 I P PTF PTF10cd II 0.045 I P PTF PTF10con II 0.033 I P PTF PTF10cqh II 0.041 I P PTF PTF10cwx II 0.073 I P PTF PTF10cxq II 0.047 I P PTF PTF10cxx II 0.034 I P PTF PTF10czn II 0.045 I P PTF PTF10dk II 0.074 I P PTF PTF10hv II 0.052 I P PTF 2007rw II 1 0.009 T P Madison, Li (LOSS) 1990ah II 0.017 T P Pollas 1991ao II 0.016 T P Pollas 1992I II 0.012 T P Buil 1992ad II 0.004 T P Evans 1993W II 0.018 T P Pollas 1993ad II 0.017 T P Pollas 1994P II 0.004 T P Sackett 1994ac II 0.018 T P McNaught 1995H II 0.005 T P Mueller 1995J II 0.010 T A Johnson 1995V II 0.005 T A Evans 1995Z II 0.016 T P Mueller 1995ab II 0.019 T P Pollas 1995ag II 0.005 T P Mueller 1995ah II 0.015 T P Popescu et al. 
1995ai II 0.018 T P Pollas 1996B II 0.014 T A Gabrijelcic 1996an II 0.005 T A Aoki 1996bw II 0.018 T P BAO Supernova Survey 1996cc II 0.007 T A Sasaki 1997W II 0.018 T P Berlind, Garnavich 1997aa II 0.012 T P BAO Supernova Survey 1997bn II 0.014 T P BAO Supernova Survey 1997bo II 0.012 T P BAO Supernova Survey 1997co II 0.023 T P BAO Supernova Search 1997cx II 0.005 T A Schwartz 1997db II 0.005 T A Schwartz 1997dn II 0.004 T A Boles 1997ds II 0.009 T P BAO Supernova Search 1998R II 0.007 T P Berlind, Carter 1998W II 0.012 T P LOSS 1998Y II 0.013 T P LOSS 1998ar II 0.012 T P BAO Supernova Survey 1998bm II 0.005 T P LOSS 1998dn II 0.001 T P BAO 1999D II 0.010 T P BAO Supernova Search 1999an II 0.005 T P BAO Supernova Survey 1999ap II 0.040 I P SCP 1999cd II 0.014 T P LOSS 1999dh II 0.011 T P LOSS 1999et II 0.016 T P Cappellaro 1999ge II 0.019 T P LOSS 1999gg II 0.014 T A Boles 1999gk II 0.008 T P Berlind TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am 1999gl II 0.017 T A Boles 2000I II 0.022 T A Puckett 2000au II 0.020 T A Puckett, Langoussis 2000cb II 0.006 T P LOSS 2000el II 0.010 T A Puckett, George 2000ez II 0.011 T A Armstrong 2000fe II 0.014 T P LOTOSS 2001H II 0.018 T A Holmes 2001J II 0.013 T P LOTOSS 2001K II 0.011 T P BAO 2001Q II 0.012 T P LOTOSS 2001aa II 0.021 T A Armstrong 2001ab II 0.017 T P LOTOSS 2001ae II 0.023 T P LOTOSS 2001ax II 0.020 I P Schaefer, QUEST 2001bk II 0.043 I P QUEST 2001cl II 0.016 T P LOTOSS 2001cm II 0.011 T P Beijing Observatory 2001cx II 0.016 T P LOTOSS 2001cy II 0.016 T P LOTOSS 2001ee II 0.015 T A Armstrong 2001fb II 0.032 I P SDSS 2001fc II 0.017 T A Puckett, Cox 2001ff II 0.013 T P LOTOSS 2001hg II 0.009 T A Puckett, Sehgal 2002an II 0.013 T A Sano 2002aq II 0.017 T P LOTOSS 2002bh II 0.017 T P LOTOSS 2002bx II 0.008 T P LOTOSS; Boles 2002ca II 0.011 T A Puckett, Kerns; LOTOSS 2002ce II 0.007 T A Arbour 2002ej II 0.016 T A Puckett, Kerns 2002em II 0.014 T A Armstrong 2002ew II 0.030 I P 
NEAT/Wood-Vasey et al. 2002gd II 0.009 T A Klotz; Puckett, Langoussis 2002hg II 0.010 T A Boles 2002hj II 0.024 I P NEAT/Wood-Vasey et al. 2002hm II 0.012 T A Boles 2002ig II 0.077 I P SDSS 2002in II 0.076 I P SDSS 2002ip II 0.079 I P SDSS 2002iq II 0.056 I P SDSS 2002jl II 0.064 I P NEAT/Wood-Vasey et al. 2003C II 0.017 T A Puckett, Cox 2003O II 0.016 T A Rich 2003bk II 0.004 T P LOTOSS 2003bl II 0.014 T P LOTOSS 2003cn II 0.018 T P LOTOSS 2003da II 0.014 T A Boles 2003dq II 0.046 I P NEAT/Wood-Vasey et al. 2003ej II 0.017 T P LOTOSS 2003hg II 0.014 T P LOTOSS 2003hk II 0.023 T A Boles; LOTOSS 2003hl II 0.008 T P LOTOSS 2003iq II 0.008 T A Llapasset 2003jc II 0.019 T P LOSS 2003kx II 0.006 T A Armstrong 2003ld II 0.014 T A Puckett, Cox 2003lp II 0.008 T A Puckett, Toth 2004D II 0.021 T P LOSS 2004G II 0.005 T A Kushida 2004T II 0.021 T P LOSS 2004Z II 0.023 T A Boles 2004bn II 0.022 T P LOSS 2004ci II 0.014 T A LOSS 2004dh II 0.019 T P LOSS 2004ei II 0.019 T A Boles 2004ek II 0.017 T A Boles; Puckett, Cox 2004em II 0.015 T A Armstrong 2004er II 0.015 T P LOSS 2004gy II 0.027 I P Quimby et al. 2004ht II 0.067 I P Frieman, SDSS 2004hv II 0.061 I P Frieman, SDSS Collaboration 2004hx II 0.014 I P Frieman, SDSS Collaboration 2004hy II 0.058 I P Frieman, SDSS Collaboration 2005H II 0.013 T P LOSS TABLE 7 - 7ContinuedSN Type z Discov. Discov. 
Discoverer Method Pro/Am 2005I II 0.018 T P LOSS 2005Y II 0.016 T P LOSS 2005Z II 0.019 T P LOSS 2005aa II 0.021 T P LOSS 2005ab II 0.015 T A Itagaki 2005au II 0.018 T A Arbour 2005bb II 0.009 T P LOSS 2005bn II 0.028 I P SubbaRao, SDSS Collaboration 2005ci II 0.008 T P LOSS 2005dp II 0.009 T A Itagaki 2005dq II 0.022 T A Armstrong 2005dz II 0.019 T A LOSS 2005eb II 0.015 T P LOSS 2005en II 0.017 T A LOSS 2005gi II 0.050 I P SDSS 2005gm II 0.022 T P Luckas, Trondal, Schwartz 2005ip II 0.007 T A Boles 2005kb II 0.015 I P SDSS 2005kh II 0.007 T P LOSS 2005kk II 0.017 T P LOSS 2005lb II 0.030 I P SDSS 2005lc II 0.010 I P SDSS 2005mg II 0.013 T A Newton, Puckett 2006J II 0.019 T P LOSS 2006O II 0.019 T A Rich 2006V II 0.016 T P Chen, Taiwan Supernova Survey 2006at II 0.015 T A Dintinjana, Mikuz 2006be II 0.007 T P LOSS 2006bj II 0.038 I P Quimby 2006bx II 0.019 T P LOSS 2006by II 0.019 T P LOSS 2006cx II 0.019 T P LOSS 2006dk II 0.016 T A Migliardi 2006dp II 0.019 T A Monard 2006ed II 0.017 T P LOSS 2006ee II 0.015 T P LOSS 2006ek II 0.020 T P LOSS 2006fg II 0.030 I P SDSS II 2006gs II 0.019 T A Itagaki 2006iu II 0.022 T P LOSS 2006iw II 0.030 I P SDSS II 2006kh II 0.060 I P SDSS 2006pc II 0.060 I P SDSS 2006qn II 0.022 T P Joubert, Li (LOSS) 2006st II 0.011 T P Winslow, Li (LOSS) 2007L II 0.018 T P Mostardi, Li (LOSS) 2007T II 0.013 T P Madison, Li (LOSS) 2007am II 0.010 T P Joubert, Li (LOSS) 2007an II 0.011 T A Migliardi 2007be II 0.013 T A Moretti, Tomaselli 2007fp II 0.019 T P Liou, Chen, et al. 
(Taiwan Supernova Survey) 2007gw II 0.016 T A Itagaki 2007ib II 0.030 I P SDSS 2007il II 0.021 T P Chu, Li (LOSS) 2007jn II 0.060 I P SDSS 2007kw II 0.070 I P SDSS 2007ky II 0.070 I P SDSS 2007lb II 0.060 I P SDSS 2007ld II 0.030 I P SDSS 2007lj II 0.040 I P SDSS 2007lx II 0.057 I P SDSS 2007md II 0.050 I P SDSS 2007sz II 0.020 I P ESSENCE 2007tn II 0.050 I P ESSENCE 2008N II 0.008 T P Winslow, Li, Filippenko (LOSS) 2008aa II 0.022 T P Madison, Li, Filippenko (LOSS) 2008ak II 0.008 T A Boles; Londero 2008bh II 0.015 T P Pignata et al. (CHASE); Narla, Li, Filippenko (LOSS) 2008bj II 0.019 I P Yuan et al. (ROTSE) 2008bl II 0.015 T A Duszanowicz 2008bx II 0.008 T A Puckett, Gagliano 2008ch II 0.013 T P LOSS 2008dw II 0.013 T P LOSS 2008ej II 0.021 T P LOSS 2008gd II 0.059 I P Yuan et al. (ROTSE) TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am 2008gz II 0.006 T A Itagaki 2009H II 0.005 T P LOSS 2009af II 0.009 T A Cortini 2009at II 0.005 T A Noguchi 2009ay II 0.022 T A Puckett, Peoples 2009bj II 0.027 I P Palomar TF 2009bk II 0.039 I P Palomar TF 2009bl II 0.040 I P Palomar TF 2009ct II 0.060 I P Palomar Transient Factory 2009dd II 0.002 T A Cortini 2009fe II 0.047 I P Kasliwal et al. (PTF) 2009hd II 0.002 T A Monard 2009jd II 0.025 I P Catelan, Drake et al. (CRTS) 2009jw II 0.020 T P LOSS 2009ls II 0.003 T A Nishiyama, Kabashima 2009nu II 0.040 I P Prieto, Drake et al. (CRTS) 2010K II 0.020 I P Prieto, Drake et al. (CRTS) 2010aw II 0.023 T P LOSS 2010gq II 0.018 I P Novoselnik et al. (La Sagra Sky Survey) 2010gs II 0.027 I P Novoselnik et al. (La Sagra Sky Survey) 2010ib II 0.019 T P Cenko et al. (LOSS) 2010id II 0.017 T P Cenko et al. (LOSS) 1993G II L 0.010 T P Treffers, Leibundgut, Filippenko, Richmond 2006W II L 0.016 T P LOSS 1990H II P 0.005 T P Perlmutter, Pennypacker, et al. 1991G II P 0.002 T P Mueller 1998bv II P 0.005 T P Kniazev et al. 
1998dl II P 0.005 T P LOSS 1999ev II P 0.003 T A Boles 1999gi II P 0.002 T A Kushida 1999gn II P 0.005 T A Dimai 1999gq II P 0.001 T P LOSS 2000db II P 0.002 T A Aoki 2001R II P 0.014 T P LOTOSS 2001X II P 0.005 T P BAO 2001dc II P 0.007 T A Armstrong 2001dk II P 0.018 T A Boles 2001fv II P 0.005 T A Armstrong 2001ij II P 0.038 I P SDSS 2002ik II P 0.032 I P SDSS 2003J II P 0.003 T A Puckett, Newton; Kushida 2003Z II P 0.004 T P Qiu, Hu 2003aq II P 0.018 T A Boles 2003gd II P 0.002 T A Evans 2003ie II P 0.002 T A Arbour 2004A II P 0.003 T A Itagaki 2004am II P 0.001 T P LOSS 2004cm II P 0.004 I P SDSS 2004dd II P 0.013 T P LOSS 2004dg II P 0.005 T A Vagnozzi et al. 2004dj II P 0.000 T A Itagaki 2004du II P 0.017 T P LOSS 2004ez II P 0.005 T A Itagaki 2004fc II P 0.006 T P LOSS 2005ad II P 0.005 T A Itagaki 2005ay II P 0.003 T A Rich 2005cs II P 0.002 T A Kloehr 2006bp II P 0.003 T A Itagaki 2006fq II P 0.070 I P SDSS II 2006my II P 0.003 T A Itagaki 2006ov II P 0.005 T A Itagaki 2007aa II P 0.005 T A Doi 2007aq II P 0.021 T P Winslow, Li (LOSS) 2007av II P 0.005 T A Arbour 2007bf II P 0.018 T A Puckett, Guido 2007jf II P 0.070 I P SDSS 2007nw II P 0.060 I P SDSS 2007od II P 0.006 T A Maticic (PIKA) 2008F II P 0.018 T A Puckett, Sostero 2008X II P 0.007 T P Boles; Winslow, Li, Filippenko (LOSS) 2008az II P 0.010 T A Newton, Gagliano, Puckett 2008ea II P 0.015 T P Martinelli, Biagietti, Iafrate; LOSS 2008hx II P 0.022 T P LOSS 2008in II P 0.005 T A Itagaki 2009A II P 0.017 T P Pignata et al. (CHASE) TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am 2009E II P 0.007 T A Boles 2009W II P 0.017 I P Drake et al. (CRTS) 2009am II P 0.012 T P LOSS 2009ao II P 0.011 T P Pignata et al. (CHASE) 2009bz II P 0.011 T P LOSS 2009dh II P 0.060 I P Drake et al. (CRTS) 2009ga II P 0.011 T A Itagaki 2009hf II P 0.013 T A Monard 2009hq II P 0.007 T A Monard 2009ie II P 0.018 T P LOSS 2009lx II P 0.027 I P Drake et al. 
(CRTS) 2009md II P 0.004 T A Itagaki 2009my II P 0.011 T P LOSS 2010aj II P 0.021 T A Newton, Puckett 2010fx II P 0.017 T A Newton, Puckett 2010gf II P 0.019 T P Cenko et al. (LOSS) 2010hm II P 0.020 T P Cenko et al. (LOSS) 2010jc II P 0.024 I P Howerton et al. (CRTS); Puckett, Newton 1999br II P pec 0.003 T P LOSS 2000em II pec 0.019 T P Nearby Galaxies Supernova Search 2001dj II pec 0.018 T P LOTOSS 2003cv II pec 0.028 I P NEAT/Wood-Vasey et al. 2004by II pec 0.012 T A Armstrong 2004gg II pec 0.020 T P LOSS 2007ms II pec 0.040 I P SDSS 2006G II/IIb 0.017 T P LOSS PTF09dxv IIb 0.033 I P PTF PTF09fae IIb 0.067 I P PTF 2005U IIb 1 0.010 T P Mattila et al., Nuclear Supernova Search 1996cb IIb 0.002 T A Aoki 1997dd IIb 0.015 T A Aoki 2001ad IIb 0.011 T P BAO 2001gd IIb 0.003 T A Itagaki; Dimai 2003ed IIb 0.004 T A Itagaki 2004ex IIb 0.017 T P LOSS 2004gj IIb 0.021 T P LOSS 2005la IIb 1 0.019 I P Quimby, Mondol 2006dl IIb 0.022 T P LOSS 2006iv IIb 0.008 T A Duszanowicz 2006qp IIb 0.012 T A Itagaki 2006ss IIb 0.012 T A Boles 2007ay IIb 0.015 T P Mostardi, Li (LOSS) 2008ax IIb 0.002 T P Mostardi, Li, Filippenko (LOSS); Itagaki 2008cw IIb 0.032 I P Yuan et al. (ROTSE) 2008cx IIb 0.019 T A Monard 2008ie IIb 0.014 T P Pignata et al. (CHASE); Hirose 2009K IIb 0.012 T P Pignata et al. (CHASE) 2009ar IIb 0.026 I P Mahabal, Drake et al. (CRTS) 2009fi IIb 0.016 T A Boles 2009jv IIb 0.016 T A Gorelli, Newton, Puckett 2010am IIb 0.020 I P Drake et al. 
(CRTS) 1994Y IIn 0.009 T P Wren 1995G IIn 0.016 T A Evans, Shobbrook, Beaman 1996ae IIn 0.005 T A Vagnozzi, Piermarini, Russo 1996bu IIn 0.004 T A Kushida 1997ab IIn 0.013 T P Hagen, Reimers 1998S IIn 0.003 T P BAO Supernova Survey 1999eb IIn 0.018 T P LOSS 1999gb IIn 0.017 T P LOSS 2000ev IIn 0.015 T A Manzini 2001I IIn 0.017 T P LOTOSS 2001fa IIn 0.017 T P LOTOSS 2002ea IIn 0.014 T A Puckett, Newton 2002fj IIn 0.015 T A Monard 2003G IIn 0.011 T P LOTOSS 2003dv IIn 0.008 T P LOTOSS 2003ke IIn 0.021 T P LOSS 2004F IIn 0.017 T P LOSS 2005cp IIn 0.022 T P LOSS 2005db IIn 0.015 T A Monard 2005gl IIn 0.016 T A Puckett, Ceravolo 2006aa IIn 0.021 T P LOSS 2006am IIn 0.009 T P LOSS 2006bo IIn 0.015 T A Boles 2006cy IIn 0.036 I P Quimby, Mondol TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am 2006db IIn 0.023 I P Quimby, Mondol 2006gy IIn 0.019 I P Quimby 2006jd IIn 1 0.019 T P LOSS 2006tf IIn 0.074 I P Quimby, Castro, Mondol 2007K IIn 0.022 T P Madison, Li (LOSS) 2007cm IIn 0.016 T A Kloehr 2007rt IIn 0.022 T P Li (LOSS) 2008B IIn 0.019 T P Itagaki 2008fm IIn 0.039 I P Yuan et al. (ROTSE) 2008gm IIn 0.012 T P Pignata et al. (CHASE) 2008ip IIn 0.015 T A Kobayashi 2008ja IIn 0.069 I P Catelan, Drake et al. (CRTS) 2009nn IIn 0.046 I P Zheng, Yuan et al. (ROTSE) 2010jl IIn 0.011 T A Newton, Puckett 1994W IIn P 0.004 T A Cortini, Villi 2007pk IIn pec 0.017 T P Parisky, Li (LOSS) 2010al IIn pec 0.017 T A Rich PTF09awk Ib 0.062 I P PTF PTF09dfk Ib 0.016 I P PTF 1995F Ib 1 0.005 T P Lane, Gray 1997X Ib 1 0.004 T A Aoki 2005bf Ib 1 0.019 T A LOSS 2006fo Ib 1 0.021 I P SDSS II 2006lc Ib 2 0.016 I P SDSS 2007kj Ib 1 0.018 T A Itagaki 1991ar Ib 0.015 T P McNaught, Russell 1997dc Ib 0.011 T P BAO Supernova Survey 1998T Ib 0.010 T P BAO Supernova Survey 1998cc Ib 0.014 T P LOSS 1999di Ib 0.016 T A Puckett, Langoussis 1999dn Ib 0.009 T P Beijing Observatory Supernova Survey (Y. L. Qiu et al.) 
1999eh Ib 0.007 T A Armstrong 2000de Ib 0.008 T A Migliardi 2000dv Ib 0.014 T P LOSS 2000fn Ib 0.016 T A Holmes 2002dg Ib 0.047 I P NEAT/Wood-Vasey et al. 2002hz Ib 0.018 T P LOTOSS 2003I Ib 0.018 T A Puckett, Langoussis 2003bp Ib 0.020 T P LOTOSS 2003gk Ib 0.011 T P LOTOSS 2004ao Ib 0.006 T P LOSS 2004bs Ib 0.017 T A Armstrong 2004gv Ib 0.020 T P Chen 2005O Ib 0.019 T P Chen 2005hl Ib 0.020 I P SDSS 2005hm Ib 0.030 I P SDSS 2005mn Ib 0.050 I P SDSS 2006ep Ib 0.015 T P LOSS; Itagaki 2007ag Ib 0.021 T A Puckett, Gagliano 2007ke Ib 0.017 T P Chu, Li (LOSS) 2007qx Ib 0.060 I P SDSS 2007uy Ib 0.007 T A Hirose 2008D Ib 0.007 T P Soderberg,Berger,Page 2008ht Ib 0.022 T P LOSS 2009ha Ib 0.015 T A Monard 2009jf Ib 0.008 T P LOSS 2010O Ib 0.010 T A Newton, Puckett 2002hy Ib pec 0.013 T A Monard 2006jc Ib pec 1 0.006 T A Itagaki; Puckett, Gorelli 2009lw Ib/IIb 0.016 T P LOSS 2010P Ib/IIb 0.010 T P Mattila, Kankare 2002dz Ib/c 0.018 T P LOTOSS 2003A Ib/c 0.022 T P LOTOSS 2003ih Ib/c 0.017 T A Armstrong 2006lv Ib/c 1 0.008 T A Duszanowicz 2007sj Ib/c 0.040 I P SDSS 2008fn Ib/c 0.030 I P Yuan et al. (ROTSE) 2008fs Ib/c 0.039 I P Yuan et al. (ROTSE) 2010br Ib/c 0.002 T A Nevski 2010gr Ib/c 0.017 T P Cenko et al. (LOSS) 2010is Ib/c 0.021 T P Cenko et al. (LOSS) 2001co Ib/c pec 0.017 T P LOTOSS PTF10bhu Ic 0.036 I P PTF PTF10bip Ic 0.051 I P PTF TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am 2004ib Ic 2 0.056 I P Frieman, SDSS Collaboration 2005az Ic 3 0.009 I P Quimby et al. 1990B Ic 0.008 T P Perlmutter, Pennypacker 1990U Ic 0.008 T P Pennypacker, Perlmutter, Marvin 1991N Ic 0.003 T P Perlmutter, Pennypacker, et al. 
1994I Ic 0.002 T A Puckett, Armstrong; Johnson, Millar; Berry; Kushida 1995bb Ic 1 0.006 T P Tokarz, Garnavich 1996D Ic 0.016 T P Drissen, Robert, Dutil, Roy 1996aq Ic 0.005 T A Aoki 1997ei Ic 0.011 T A Aoki 1999bc Ic 0.021 I P SCP 1999bu Ic 0.009 T P LOSS 2000C Ic 0.013 T A Foulkes; Migliardi 2000cr Ic 0.012 T A Migliardi, Dimai 2000ew Ic 0.003 T A Puckett, Langoussis 2001ch Ic 0.010 T P LOTOSS 2001ci Ic 0.004 T P LOTOSS 2002J Ic 0.013 T P LOTOSS 2002ao Ic 0.005 T P LOTOSS 2002hn Ic 0.017 T P LOTOSS 2002ho Ic 0.008 T A Boles 2002jj Ic 0.014 T P LOTOSS 2003L Ic 0.021 T A Boles; LOTOSS 2003el Ic 0.019 T P LOTOSS 2003hp Ic 0.021 T P LOTOSS 2004C Ic 0.006 T P Dudley, Fischer 2004aw Ic 0.016 T A Boles; Itagaki 2004bf Ic 0.017 T P LOSS 2004bm Ic 0.004 T P LOSS 2004cc Ic 0.008 T P LOSS 2004dc Ic 0.021 T P LOSS 2004fe Ic 0.018 T P LOSS 2004gn Ic 0.006 T P LOSS 2005aj Ic 0.008 T A Puckett, Newton 2005eo Ic 0.017 T A LOSS 2005kl Ic 0.003 T A Migliardi 2006dg Ic 0.014 T P LOSS 2007cl Ic 0.022 T A Puckett, Sostero 2007nm Ic 0.046 I P Djorgovski et al. (Palomar-Quest) 2007rz Ic 0.013 T P Parisky, Li (LOSS) 2008ao Ic 0.015 T A Migliardi, Londero 2008du Ic 0.016 T P LOSS 2008ew Ic 0.020 T P LOSS 2008fo Ic 0.030 I P Yuan et al. (ROTSE) 2008hh Ic 0.019 T A Puckett, Crowley 2008hn Ic 0.011 T P LOSS 2009em Ic 0.006 T A Monard 2009lj Ic 0.015 T P LOSS 2010Q Ic 0.055 I P Graham, Drake et al. (CRTS) 2010do Ic 0.014 T P Monard; Cenko et al. (LOSS) 2010gk Ic 0.014 T P Li, Cenko et al. (LOSS) 2010io Ic 0.007 T A Duszanowicz 2003id Ic pec 0.008 T P LOTOSS PTF09sk Ic-bl 0.035 I P PTF 1997dq Ic-bl 0.003 T A Aoki 1997ef Ic-bl 0.012 T A Sano 1998ey Ic-bl 0.016 T A Arbour 2002ap Ic-bl 0.002 T A Hirose 2002bl Ic-bl 0.016 T A Armstrong 2003jd Ic-bl 0.019 T P LOSS 2004bu Ic-bl 0.018 T A Boles 2005nb Ic-bl 0.024 I P Quimby et al. 2006aj Ic-bl 0.033 I P Cusumano et al. 
2006nx Ic-bl 0.050 I P SDSS 2006qk Ic-bl 0.060 I P SDSS 2007I Ic-bl 0.022 T P Lee, Li (LOSS) 2007bg Ic-bl 0.034 I P Quimby, Rykoff, Yuan 2007ce Ic-bl 0.046 I P Quimby 2010ah Ic-bl 0.050 I P Ofek et al. (Palomar Transient Factory ) 2010ay Ic-bl 0.067 I P Drake et al. (CRTS) TABLE 7 - 7ContinuedSN Type z Discov. Discov. Discoverer Method Pro/Am TABLE 8 Host 8Galaxy MeasurementsSN Type Offset Log M Fib. Off. SSFR Fib. Off. T04 PP04 A V u ′ -z ′ Norm. (dex) (AGN) (Metal) (dex) (dex) (mag) (local) PTF09axi II 0.63 9.29 +0.09 −0.09 1.56 PTF09bce II 0.95 10.59 +0.31 −0.11 2.47 PTF09cjq II 0.69 10.76 +0.18 −0.47 0.08 -12.09 +0.86 −1.20 -0.36 +0.23 −0.23 2.63 PTF09cvi II 1.61 7.64 +0.29 −0.35 1.81 PTF09dra II 0.96 10.59 +0.07 −0.10 0.18 -10.33 +0.25 −0.24 0.18 8.98 +0.04 −0.04 8.71 +0.04 −0.04 0.98 +0.12 −0.12 1.57 PTF09ebq II 0.21 PTF09ecm II 0.76 PTF09fbf II 0.65 10.11 +0.32 −0.09 1.08 -7.82 +0.12 −0.17 1.08 8.73 +0.01 −0.01 8.24 +0.01 −0.01 0.26 +0.03 −0.03 1.76 PTF09fmk II 1.00 10.39 +0.10 −0.11 2.15 PTF09fqa II 0.49 PTF09hdo II 1.04 PTF09hzg II 0.56 10.49 +0.06 −0.33 3.04 PTF09iex II 0.66 8.33 +0.37 −0.58 1.27 PTF09ige II 1.31 9.79 +0.09 −0.08 0.25 -9.85 +0.13 −0.12 0.25 8.85 +0.02 −0.03 8.63 +0.02 −0.02 0.56 +0.05 −0.05 1.15 PTF09ism II 2.14 9.50 +0.08 −0.31 0.83 -10.19 +0.24 −0.19 0.83 8.84 +0.07 −0.07 8.61 +0.06 −0.06 -0.13 +0.15 −0.15 2.40 PTF09sh II 1.25 9.97 +0.18 −0.11 0.12 -9.86 +0.12 −0.15 0.12 9.01 +0.01 −0.03 8.73 +0.01 −0.01 0.65 +0.04 −0.04 1.54 PTF09tm II 0.55 10.41 +0.18 −0.14 2.76 PTF09uj II 0.54 9.79 +0.09 −0.08 0.26 -10.14 +0.29 −0.26 0.26 8.83 +0.06 −0.09 8.63 +0.06 −0.06 1.13 +0.21 −0.21 1.74 PTF10bau II 0.80 10.35 +0.28 −0.11 1.22 -9.14 +0.16 −0.20 1.22 9.11 +0.01 −0.04 8.77 +0.02 −0.02 0.57 +0.04 −0.04 2.34 PTF10bgl II 0.83 10.69 +0.19 −0.11 0.94 -8.61 +0.26 −0.23 0.94 8.91 +0.01 −0.01 8.54 +0.01 −0.01 0.86 +0.02 −0.02 1.74 PTF10cd II 0.70 8.75 +0.12 −0.14 0.57 PTF10con II 0.17 0.05 -10.29 +0.24 −0.24 0.05 8.86 +0.05 −0.05 8.64 +0.04 −0.04 0.72 
+0.12 −0.12 PTF10cqh II 1.32 10.83 +0.12 −0.12 1.83 PTF10cwx II 0.70 9.66 +0.08 −0.08 1.26 PTF10cxq II 0.39 8.95 +0.15 −0.08 1.65 PTF10cxx II 0.58 10.29 +0.17 −0.14 0.28 -9.84 +0.15 −0.15 0.28 9.05 +0.01 −0.01 8.78 +0.02 −0.02 1.19 +0.05 −0.05 2.28 PTF10czn II 1.63 10.51 +0.16 −0.17 0.63 PTF10dk II 0.66 8.32 +0.16 −0.17 1.40 PTF10hv II 0.09 10.30 +0.13 −0.06 0.08 -10.17 +0.17 −0.17 0.08 8.84 +0.04 −0.07 8.61 +0.03 −0.03 0.59 +0.09 −0.09 1.61 2007rw II 1 0.99 9.54 +0.05 −0.04 0.15 -9.39 +0.25 −0.41 0.15 8.69 +0.01 −0.01 8.47 +0.01 −0.01 0.08 +0.04 −0.04 0.87 1990ah II 0.67 9.87 +0.08 −0.55 1.21 1991ao II 1.42 9.59 +0.07 −0.56 2.15 1992I II 3.71 10.47 +0.53 −0.10 3.57 1992ad II 1.12 10.17 +0.07 −0.17 0.97 1993W II 0.71 9.62 +0.19 −0.50 1.97 1993ad II 1.48 10.34 +0.13 −0.53 1.78 1994P II 2.01 9.94 +0.04 −0.04 1.70 1994ac II 0.29 9.01 +0.55 −0.09 1.45 1995H II 0.66 9.96 +0.05 −0.05 1.20 1995J II 1.58 9.50 +0.04 −0.04 0.12 -10.17 +0.21 −0.19 0.12 8.64 +0.05 −0.06 8.58 +0.03 −0.03 0.26 +0.09 −0.09 0.67 1995V II 0.76 10.84 +0.04 −0.04 0.30 -8.42 +0.25 −0.22 0.30 9.11 +0.01 −0.01 8.80 +0.01 −0.01 0.92 +0.02 −0.02 1.77 1995Z II 0.96 10.05 +0.13 −0.54 2.45 1995ab II 1.48 8.80 +0.56 −0.08 0.56 1995ag II 0.53 10.76 +0.04 −0.05 1.61 1995ah II 0.86 8.72 +0.10 −0.54 0.35 -9.57 +0.13 −0.13 0.35 8.27 +0.14 −0.12 8.25 +0.02 −0.02 0.18 +0.05 −0.05 1.12 1995ai II 1.44 10.77 +0.12 −0.55 1.54 1996B II 0.60 10.90 +0.15 −0.52 2.34 1996an II 0.77 11.04 +0.05 −0.05 0.88 -8.58 +0.21 −0.27 0.88 8.91 +0.01 −0.01 8.63 +0.01 −0.01 0.83 +0.02 −0.02 1.30 1996bw II 1.50 10.61 +0.18 −0.50 1.90 1996cc II 0.87 9.80 +0.05 −0.05 1.63 1997W II 1.55 10.61 +0.18 −0.50 1.69 1997aa II 1.91 9.47 +0.57 −0.05 1.08 1997bn II 0.36 10.31 +0.06 −0.57 2.07 1997bo II 0.80 8.23 +0.54 −0.11 0.37 -8.50 +0.14 −0.20 0.37 7.92 +0.06 −0.06 7.99 +0.01 −0.01 0.04 +0.02 −0.02 1.78 1997co II 0.64 11.16 +0.25 −0.25 0.15 -11.72 +0.65 −1.30 1.28 +0.23 −0.23 2.50 1997cx II 0.60 9.70 +0.04 −0.04 1.26 TABLE 8 - 8ContinuedSN Type 
TABLE 8 -- Continued

SN | Type | Offset Norm. | Log M (dex) | Fib. Off. (AGN) | SSFR (dex) | Fib. Off. (Metal) | T04 (dex) | PP04 (dex) | A_V (mag) | u'-z' (local)

1997db | II | 0.92 | 10.75 +0.04/-0.04 | | | | | | | 0.97
1997dn | II | 1.36 | 10.05 +0.04/-0.05 | 0.88 | -7.76 +0.33/-0.32 | 0.88 | 8.98 +0.04/-0.01 | 8.52 +0.01/-0.01 | 1.28 +0.02/-0.02 | 1.77
[Per-object rows for the remaining Type II, II L, II P, II pec, and IIb supernova hosts (1997ds through 2010id, 1993G, 2006W, 1990H through 2010jc, 1999br, 2000em, 2001dj, 2003cv, 2004by, 2004gg, 2007ms, 2006G, PTF09dxv, PTF09fae, 2005U, 1996cb, 1997dd); the column alignment of these flattened rows cannot be recovered unambiguously here.]

secs assuming H0 = 73 km s^-1 Mpc^-1. We use images from SDSS Data Release 8 (DR8) to measure the color at the supernova sites and to estimate the hosts' stellar masses, and Sloan spectra to determine the hosts' oxygen abundances, specific star formation rate (SFR), and the interstellar reddening.
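As context for the table's PP04 column: it refers to the Pettini & Pagel (2004) O3N2 strong-line abundance calibration, 12 + log(O/H) = 8.73 - 0.32 * O3N2, with O3N2 = log10(([OIII]5007/Hbeta)/([NII]6583/Halpha)). A minimal sketch of that relation follows; the function name and the emission-line fluxes are hypothetical, for illustration only:

```python
import math

def pp04_o3n2(f_oiii5007, f_hbeta, f_halpha, f_nii6583):
    """Oxygen abundance 12 + log(O/H) from the Pettini & Pagel (2004)
    O3N2 calibration: 12 + log(O/H) = 8.73 - 0.32 * O3N2, where
    O3N2 = log10(([OIII]5007 / Hbeta) / ([NII]6583 / Halpha))."""
    o3n2 = math.log10((f_oiii5007 / f_hbeta) / (f_nii6583 / f_halpha))
    return 8.73 - 0.32 * o3n2

# Hypothetical emission-line fluxes (arbitrary but consistent units):
print(round(pp04_o3n2(1.2, 1.0, 2.86, 0.35), 2))  # -> 8.41
```

Pettini & Pagel quote the fit as valid only for -1 < O3N2 < 1.9, so abundances derived outside that range should not be trusted.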
Although these are blunt

http://www.mpa-garching.mpg.de/SDSS/DR8/

Note. - Offset Norm. is the deprojected offset normalized by the host galaxy half-light radius. Fib. Off. (AGN) is the normalized offset of the SDSS spectroscopic fiber with offset most similar to the SN offset that has sufficient S/N to estimate dust extinction and SSFR. Fib. Off. (Metal) is the normalized offset of the SDSS spectroscopic fiber with offset most similar to the SN offset that is classified as star-forming, which enables an oxygen abundance estimate. A superscript number next to a type provides the origin of any spectroscopic reclassification (1: Modjaz et al. 2012, in prep.; 2: Sanders et al. 2012a; 3: this work).

REFERENCES

Aldering, G. et al. 2002, SPIE, 4836, 61
Aldering, G., Humphreys, R. M., & Richmond, M. 1994, AJ, 107, 662
Aldering, G. et al. 2005, The Astronomer's Telegram, 451, 1
Anderson, J. P., Covarrubias, R. A., James, P. A., Hamuy, M., & Habergham, S. M. 2010, MNRAS, 407, 2660
Anderson, J. P., Habergham, S. M., & James, P. A. 2011, MNRAS, 416, 567
Anderson, J. P., & James, P. A. 2008, MNRAS, 390, 1527
--. 2009, MNRAS, 399, 559
Arcavi, I. et al. 2010, ApJ, 721, 777
--. 2011, ApJ, 742, L18, 1106.3551
Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
Astier, P. et al. 2006, A&A, 447, 31
Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
Barbon, R., Buondí, V., Cappellaro, E., & Turatto, M. 1999, A&AS, 139, 531, arXiv:astro-ph/9908046
Barth, A. J., van Dyk, S. D., Filippenko, A. V., Leibundgut, B., & Richmond, M. W. 1996, AJ, 111, 2047
Bertin, E., Mellier, Y., Radovich, M., Missonnier, G., Didelon, P., & Morin, B. 2002, in Astronomical Data Analysis Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H. Handley (San Francisco: ASP), 228
Blanton, M. R., & Roweis, S. 2007, AJ, 133, 734
Blanton, M. R. et al. 2005, AJ, 129, 2562
Blondin, S. et al. 2012, AJ, 143, 126
Blondin, S., & Tonry, J. L. 2007, ApJ, 666, 1024
Boissier, S., & Prantzos, N. 2009, A&A, 503, 137
Brinchmann, J., Charlot, S., White, S. D. M., Tremonti, C., Kauffmann, G., Heckman, T., & Brinkmann, J. 2004, MNRAS, 351, 1151
Cappellaro, E., Evans, R., & Turatto, M. 1999, A&A, 351, 459
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
Charlot, S., & Longhetti, M. 2001, MNRAS, 323, 887
Chevalier, R. A., & Soderberg, A. M. 2010, ApJ, 711, L40
Chugai, N. N., & Danziger, I. J. 1994, MNRAS, 268, 173
Cool, R. J. et al. 2012, ApJ, 748, 10
Crockett, R. M. et al. 2008, MNRAS, 391, L5
Delahaye, F., Pinsonneault, M. H., Pinsonneault, L., & Zeippen, C. J. 2010, ArXiv e-prints, 1005.0423
Dickinson, M., Giavalisco, M., & GOODS Team. 2003, in The Mass of Galaxies at Low and High Redshift, ed. R. Bender & A. Renzini, 324
Dilday, B. et al. 2010, ApJ, 715, 1021
Djorgovski, S. G. et al. 2008, Astronomische Nachrichten, 329, 263
--. 2011, ArXiv e-prints, 1102.5004
Drout, M. R. et al. 2011, ApJ, 741, 97
Dwarkadas, V. V. 2011, MNRAS, 412, 1639
Eldridge, J. J., Izzard, R. G., & Tout, C. A. 2008, MNRAS, 384, 1109
Eldridge, J. J., Langer, N., & Tout, C. A. 2011, MNRAS, 414, 3501
Eldridge, J. J., & Tout, C. A. 2004, MNRAS, 353, 87
Filippenko, A. V. 1997, ARA&A, 35, 309
Filippenko, A. V. 2003, in From Twilight to Highlight: The Physics of Supernovae, ed. W. Hillebrandt & B. Leibundgut, 171
Filippenko, A. V., & Chornock, R. 2003, IAU Circ., 8042, 4
Filippenko, A. V., Li, W. D., Treffers, R. R., & Modjaz, M. 2001, in Astronomical Society of the Pacific Conference Series, Vol. 246, IAU Colloq. 183: Small Telescope Astronomy on Global Scales, ed. B. Paczynski, W.-P. Chen, & C. Lemme, 121
Filippenko, A. V., Matheson, T., & Ho, L. C. 1993, ApJ, 415, L103
Fioc, M., & Rocca-Volmerange, B. 1997, A&A, 326, 950
--. 1999, ArXiv e-prints, 9912.0179
Foley, R. J., Smith, N., Ganeshalingam, M., Li, W., Chornock, R., & Filippenko, A. V. 2007, ApJ, 657, L105
Fruchter, A. S. et al. 2006, Nature, 441, 463
Fryer, C. L., Woosley, S. E., & Hartmann, D. H. 1999, ApJ, 526, 152
Gal-Yam, A., & Leonard, D. C. 2009, Nature, 458, 865
Gal-Yam, A. et al. 2007, ApJ, 656, 372
Galama, T. J. et al. 1998, Nature, 395, 670
Georgy, C., Meynet, G., Walder, R., Folini, D., & Maeder, A. 2009, A&A, 502, 611
Habergham, S. M., Anderson, J. P., & James, P. A. 2010, ApJ, 717, 342
Hakobyan, A. A., Mamon, G. A., Petrosian, A. R., Kunth, D., & Turatto, M. 2009, A&A, 508, 1259
Hardin, D. et al. 2000, A&A, 362, 419
Heger, A., Fryer, C. L., Woosley, S. E., Langer, N., & Hartmann, D. H. 2003, ApJ, 591, 288
Hicken, M. et al. 2012, ApJS, 200, 12
Hirschi, R., Meynet, G., & Maeder, A. 2005, A&A, 443, 581
Hjorth, J. et al. 2003, Nature, 423, 847
Kaiser, N. et al. 2010, SPIE, 7733
Kauffmann, G. et al. 2003, MNRAS, 346, 1055
Kelly, P. L., Hicken, M., Burke, D. L., Mandel, K. S., & Kirshner, R. P. 2010, ApJ, 715, 743
Kelly, P. L., Kirshner, R. P., & Pahre, M. 2008, ApJ, 687, 1201
Kewley, L. J., & Ellison, S. L. 2008, ApJ, 681, 1183
Kewley, L. J., Geller, M. J., & Barton, E. J. 2006, AJ, 131, 2004
Kron, R. G. 1980, ApJS, 43, 305
Kudritzki, R. P. 2002, ApJ, 577, 389
Langer, N. 1993, Space Sci. Rev., 66, 365
Law, N. M. et al. 2009, PASP, 121, 1395
Leaman, J., Li, W., Chornock, R., & Filippenko, A. V. 2011, MNRAS, 412, 1419
Leloudas, G. et al. 2011, A&A, 530, A95
Leloudas, G., Sollerman, J., Levan, A. J., Fynbo, J. P. U., Malesani, D., & Maund, J. R. 2010, A&A, 518, A29
Levesque, E. M., Kewley, L. J., Graham, J. F., & Fruchter, A. S. 2010, ApJ, 712, L26
Li, W., Chornock, R., Leaman, J., Filippenko, A. V., Poznanski, D., Wang, X., Ganeshalingam, M., & Mannucci, F. 2011a, MNRAS, 412, 1473
Li, W. et al. 2011b, MNRAS, 412, 1441
Li, W., Van Dyk, S. D., Filippenko, A. V., & Cuillandre, J.-C. 2005, PASP, 117, 121
Li, W., Wang, X., Van Dyk, S. D., Cuillandre, J.-C., Foley, R. J., & Filippenko, A. V. 2007, ApJ, 661, 1013
Maeder, A., Meynet, G., & Hirschi, R. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 336, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, ed. T. G. Barnes III & F. N. Bash, 79
Malesani, D. et al. 2004, ApJ, 609, L5
Marion, G. H. et al. 2011, The Astronomer's Telegram, 3435, 1
Matheson, T., Filippenko, A. V., Ho, L. C., Barth, A. J., & Leonard, D. C. 2000, AJ, 120, 1499
Matheson, T., Filippenko, A. V., Li, W., Leonard, D. C., & Shields, J. C. 2001, AJ, 121, 1648
Matheson, T. et al. 2003, ApJ, 599, 394
--. 2008, AJ, 135, 1598
Maund, J. R. et al. 2011, ApJ, 739, L37
Maund, J. R., & Smartt, S. J. 2005, MNRAS, 360, 288
Maund, J. R. et al. 2006, MNRAS, 369, 390
Maund, J. R., Smartt, S. J., Kudritzki, R. P., Podsiadlowski, P., & Gilmore, G. F. 2004, Nature, 427, 129
Mazzali, P. A., Deng, J., Maeda, K., Nomoto, K., Filippenko, A. V., & Matheson, T. 2004, ApJ, 614, 858
Miknaitis, G. et al. 2007, ApJ, 666, 674
Modjaz, M. 2011, Astronomische Nachrichten, 332, 434, 1105.5297
Modjaz, M., Kewley, L., Bloom, J. S., Filippenko, A. V., Perley, D., & Silverman, J. M. 2011, ApJ, 731, L4
Modjaz, M. et al. 2008, AJ, 135, 1136
--. 2006, ApJ, 645, L21
Moustakas, J., Kennicutt, Jr., R. C., Tremonti, C. A., Dale, D. A., Smith, J., & Calzetti, D. 2010, ApJS, 190, 233
Moustakas, J. et al. 2011, ArXiv e-prints, 1112.3300
Nomoto, K. I., Iwamoto, K., & Suzuki, T. 1995, Phys. Rep., 256, 173
Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei
Pastorello, A. et al. 2007, Nature, 447, 829
Perets, H. B. et al. 2010, Nature, 465, 322
Perlmutter, S. et al. 1999, ApJ, 517, 565
Pettini, M., & Pagel, B. E. J. 2004, MNRAS, 348, L59
Podsiadlowski, P., Ivanova, N., Justham, S., & Rappaport, S. 2010, MNRAS, 406, 840
Podsiadlowski, P., Joss, P. C., & Hsu, J. J. L. 1992, ApJ, 391, 246
Prantzos, N., & Boissier, S. 2003, A&A, 406, 259
Pravdo, S. H. et al. 1999, AJ, 117, 1616
Prieto, J. L., Stanek, K. Z., & Beacom, J. F. 2008, ApJ, 673, 999
Quimby, R., Mondol, P., Hoeflich, P., Wheeler, J. C., & Gerardy, C. 2005a, IAU Circ., 8503, 1
Quimby, R. M., Castro, F., Gerardy, C. L., Hoeflich, P., Kannappan, S. J., Mondol, P., Sellers, M., & Wheeler, J. C. 2005b, in BAAS, Vol. 37, American Astronomical Society Meeting Abstracts, 1431
Raskin, C., Scannapieco, E., Rhoads, J., & Della Valle, M. 2008, ApJ, 689, 358
Riess, A. G. et al. 1998, AJ, 116, 1009, arXiv:astro-ph/9805201
Sako, M. et al. 2005, in 22nd Texas Symposium on Relativistic Astrophysics, ed. P. Chen, E. Bloom, G. Madejski, & V. Patrosian, 415-420
Sanders, N. E. et al. 2012a, ArXiv e-prints, 1206.2643
--. 2012b, ApJ, 756, 184, 1110.2363
Schechter, P. 1976, ApJ, 203, 297
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Schlegel, E. M. 1990, MNRAS, 244, 269
Schmitt, H. R., Storchi-Bergmann, T., & Cid Fernandes, R. 1999, MNRAS, 303, 173
Shapiro, S. L., & Teukolsky, S. A. 1983, Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects
Smartt, S. J. 2009, ARA&A, 47, 63
Smartt, S. J., Eldridge, J. J., Crockett, R. M., & Maund, J. R. 2009, MNRAS, 395, 1409
Smartt, S. J., Gilmore, G. F., Trentham, N., Tout, C. A., & Frayn, C. M. 2001, ApJ, 556, L29
Smartt, S. J., Maund, J. R., Gilmore, G. F., Tout, C. A., Kilkenny, D., & Benetti, S. 2003, MNRAS, 343, 735
Smartt, S. J., Maund, J. R., Hendry, M. A., Tout, C. A., Gilmore, G. F., Mattila, S., & Benn, C. R. 2004, Science, 303, 499
Smith, N., Li, W., Filippenko, A. V., & Chornock, R. 2011, MNRAS, 412, 1522
Soderberg, A. M. et al. 2012, ApJ, 752, 78
Stanek, K. Z. et al. 2003, ApJ, 591, L17
Strauss, M. A. et al. 2002, AJ, 124, 1810
Svensson, K. M., Levan, A. J., Tanvir, N. R., Fruchter, A. S., & Strolger, L.-G. 2010, MNRAS, 405, 57, 1001.5042
Thompson, T. A., Chang, P., & Quataert, E. 2004, ApJ, 611, 380
Tremonti, C. A. et al. 2004, ApJ, 613, 898
Tsvetkov, D. Y., Pavlyuk, N. N., & Bartunov, O. S. 2004, Astronomy Letters, 30, 729
van den Bergh, S. 1997, AJ, 113, 197
van den Bergh, S., Li, W., & Filippenko, A. V. 2005, PASP, 117, 773
van den Bergh, S., & Tammann, G. A. 1991, ARA&A, 29, 363
Van Dyk, S. D. et al. 2012, AJ, 143, 19
Van Dyk, S. D., Garnavich, P. M., Filippenko, A. V., Höflich, P., Kirshner, R. P., Kurucz, R. L., & Challis, P. 2002, PASP, 114, 1322
van Dyk, S. D., Hamuy, M., & Filippenko, A. V. 1996, AJ, 111, 2017
Van Dyk, S. D. et al. 2011, ArXiv e-prints, 1106.2897
Van Dyk, S. D., Li, W., & Filippenko, A. V. 2003a, PASP, 115, 448
--. 2003b, PASP, 115, 1289
Van Dyk, S. D., Peng, C. Y., Barth, A. J., & Filippenko, A. V. 1999, AJ, 118, 2331
Van Dyk, S. D., Peng, C. Y., King, J. Y., Filippenko, A. V., Treffers, R. R., Li, W., & Richmond, M. W. 2000, PASP, 112, 1532
van Marle, A. J., Smith, N., Owocki, S. P., & van Veelen, B. 2010, MNRAS, 407, 2305
van Zee, L., Salzer, J. J., Haynes, M. P., O'Donoghue, A. A., & Balonek, T. J. 1998, AJ, 116, 2805
Vink, J. S., & de Koter, A. 2005, A&A, 442, 587
Walmswell, J. J., & Eldridge, J. J. 2011, ArXiv e-prints, 1109.4637
Wittman, D. M. et al. 2002, SPIE, 4836, 73
Wood-Vasey, W. M. et al. 2004, New Astronomy Reviews, 48, 637
Woosley, S. E., Blinnikov, S., & Heger, A. 2007, Nature, 450, 390
Woosley, S. E., & Bloom, J. S. 2006, ARA&A, 44, 507
Woosley, S. E., Langer, N., & Weaver, T. A. 1993, ApJ, 411, 823
Yoon, S., & Cantiello, M. 2010, ApJ, 717, L62
Yoon, S.-C., Woosley, S. E., & Langer, N. 2010, ApJ, 725, 940
Yost, S. A. et al. 2006, Astronomische Nachrichten, 327, 803
[]
[ "Conformal Motions and the Duistermaat-Heckman Integration Formula 1", "Conformal Motions and the Duistermaat-Heckman Integration Formula 1" ]
[ "Lori D Paniak \nDepartment of Physics\nDepartment of Theoretical Physics\nUniversity of British Columbia Vancouver\nBritish Columbia\nV6T 1Z1Canada\n", "Gordon W Semenoff \nDepartment of Physics\nDepartment of Theoretical Physics\nUniversity of British Columbia Vancouver\nBritish Columbia\nV6T 1Z1Canada\n", "Richard J Szabo \nUniversity of Oxford\n1 Keble RoadOX1 3NPOxfordU.K\n" ]
[ "Department of Physics\nDepartment of Theoretical Physics\nUniversity of British Columbia Vancouver\nBritish Columbia\nV6T 1Z1Canada", "Department of Physics\nDepartment of Theoretical Physics\nUniversity of British Columbia Vancouver\nBritish Columbia\nV6T 1Z1Canada", "University of Oxford\n1 Keble RoadOX1 3NPOxfordU.K" ]
[]
We derive a geometric integration formula for the partition function of a classical dynamical system and use it to show that corrections to the WKB approximation vanish for any Hamiltonian which generates conformal motions of some Riemannian geometry on the phase space. This generalizes previous cases where the Hamiltonian was taken as an isometry generator. We show that this conformal symmetry is similar to the usual formulations of the Duistermaat-Heckman integration formula in terms of a supersymmetric Ward identity for the dynamical system. We present an explicit example of a localizable Hamiltonian system in this context and use it to demonstrate how the dynamics of such systems differ from previous examples of the Duistermaat-Heckman theorem.
10.1016/0370-2693(96)00086-x
[ "https://arxiv.org/pdf/hep-th/9511199v1.pdf" ]
15,775,384
hep-th/9511199
6bb9f92a9711ad849ecaf578b9ee673b440bd285
Conformal Motions and the Duistermaat-Heckman Integration Formula

arXiv:hep-th/9511199v1, 28 Nov 1995

Lori D. Paniak and Gordon W. Semenoff
Department of Physics, University of British Columbia, Vancouver, British Columbia V6T 1Z1, Canada

Richard J. Szabo
Department of Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP, U.K.

We derive a geometric integration formula for the partition function of a classical dynamical system and use it to show that corrections to the WKB approximation vanish for any Hamiltonian which generates conformal motions of some Riemannian geometry on the phase space. This generalizes previous cases where the Hamiltonian was taken as an isometry generator. We show that this conformal symmetry is similar to the usual formulations of the Duistermaat-Heckman integration formula in terms of a supersymmetric Ward identity for the dynamical system. We present an explicit example of a localizable Hamiltonian system in this context and use it to demonstrate how the dynamics of such systems differ from previous examples of the Duistermaat-Heckman theorem.

Understanding the circumstances under which a partition function is given exactly by its semi-classical approximation reveals connections between classical mechanics and the geometry and topology of the associated phase space.
The Duistermaat-Heckman theorem [1]-[3] has been the basis of so-called localization theory in both mathematics and physics [4]-[12] (see [13] for a recent review), and it provides geometric criteria for the exactness of the semiclassical approximation to the finite-dimensional partition function

$$Z(T) = \int_M d^{2n}x \, \sqrt{\det\omega(x)}\; e^{iTH(x)} \qquad (1)$$

which describes the statistical dynamics (with imaginary temperature) of a classical Hamiltonian system. Here M is a 2n-dimensional phase space and ω_{μν}(x) is an antisymmetric tensor field on M which is non-degenerate, det ω(x) ≠ 0 for all x ∈ M, and whose matrix inverse ω^{μν} defines the Poisson brackets {x^μ, x^ν} = ω^{μν}(x) of the dynamical system. H is a smooth Hamiltonian function on M which for simplicity we assume has a finite set of critical points I(H) = {p ∈ M : dH(p) = 0}, each of which is non-degenerate.

The partition function (1) has an asymptotic expansion for large T with coefficients determined by the method of stationary-phase approximation [14]. The Duistermaat-Heckman theorem states that if the phase space M is compact and closed and the classical time-evolution x(t) of the dynamical system traces out a torus (S^1)^m in M, then the partition function (1) is given exactly by the leading order term

$$Z(T) = \left(\frac{2\pi}{T}\right)^{n} \sum_{p\in I(H)} e^{\frac{i\pi}{4}\eta_H(p)}\, \sqrt{\frac{\det\omega(p)}{\det H(p)}}\; e^{iTH(p)} \qquad (2)$$

of its stationary-phase loop-expansion [1]. Here H(p)_{μν} ≡ ∂_μ∂_ν H(p) is the non-degenerate Hessian matrix of H at p, and η_H(p) is the spectral asymmetry of the Hessian at p (the difference between the number of positive and negative eigenvalues of H(p)). A fundamental assumption that leads to the Duistermaat-Heckman theorem and its generalizations [3] is the existence of a globally-defined metric tensor g = ½ g_{μν}(x) dx^μ dx^ν on M which is invariant under the classical flows x(t) ∈ M of the Hamiltonian system, i.e.

$$g(x(t)) = g(x(0)) \qquad (3)$$

The condition (3) is a very restrictive one on the Hamiltonian dynamics, as it implies that H must generate a global U(1)-action on M [10]. The set of Hamiltonian systems which obey these constraints has been examined in [9, 11, 12].
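A quick consistency check of the fixed-point formula (2) is provided by the planar harmonic oscillator H = (q² + p²)/2 with ω = dp ∧ dq; the plane is not compact, so this is only an illustration of the n = 1, single-critical-point case, but the Fresnel integral can be computed directly:

```latex
% Sanity check of (2) for H = (q^2+p^2)/2, omega = dp ^ dq on the plane:
% one non-degenerate critical point at the origin, det omega = det H = 1,
% eta_H(0) = 2 (both Hessian eigenvalues positive), H(0) = 0.
Z(T) = \int_{\mathbb{R}^2} dq\, dp\; e^{\frac{iT}{2}(q^2+p^2)}
     = \left(\int_{-\infty}^{\infty} dx\; e^{\frac{iT}{2}x^2}\right)^{2}
     = \frac{2\pi i}{T}\,,
\qquad
\frac{2\pi}{T}\, e^{\frac{i\pi}{4}\eta_H(0)}
\sqrt{\frac{\det\omega(0)}{\det H(0)}}\; e^{iTH(0)}
     = \frac{2\pi}{T}\, e^{\frac{i\pi}{2}}
     = \frac{2\pi i}{T}\,.
```

The two expressions agree, as they must: for a purely quadratic Hamiltonian the stationary-phase approximation is trivially exact.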
In this Letter we will show that the Duistermaat-Heckman integration formula (2) for the partition function (1) still holds when the geometric assumption (3) is replaced by the weaker condition that there exists a metric tensor g on M which is invariant under the classical time evolution of the dynamical system up to a change of scale,

$$g(x(t)) = e^{t\Lambda(x(t))}\, g(x(0)) \qquad (4)$$

for some smooth function Λ(x) on M. We shall argue that this extended geometric requirement is similar to the isometry condition (3) from the point of view of localizing (1) onto the critical point set I(H), except that the classical dynamics now possess behaviour not normally observed for systems whose partition functions can be localized. We illustrate these features for some explicit examples which show how the extension (4) expands the set of previously studied Hamiltonian systems for which the Duistermaat-Heckman formula holds.

First, we discuss some features of the integration in (1). We describe the exterior differential calculus of the manifold M by introducing a set of anticommuting Grassmann variables ψ^μ which are to be identified locally with the basis elements ψ^μ ~ dx^μ of the cotangent bundle T*M of M. A differential m-form is represented by contracting a rank-m antisymmetric tensor function on M with ψ^{μ₁}···ψ^{μ_m}, and it can be regarded as a function η(x, ψ) = (1/m!) η_{μ₁···μ_m}(x) ψ^{μ₁}···ψ^{μ_m} on the super-manifold M ⊗ T*M. The integration of differential forms is defined on M ⊗ T*M by introducing the usual Berezin rules for integrating Grassmann variables, ∫dψ^μ ψ^μ = 1, ∫dψ^μ 1 = 0. With these rules we can absorb the determinant of the symplectic 2-form ω ≡ ½ ω_{μν}(x) ψ^μ ψ^ν into the exponential in (1) and write the classical partition function as

$$Z(T) = \frac{1}{(iT)^n} \int_{M\otimes T^*M} d^{2n}x\, d^{2n}\psi\; e^{iTS(x,\psi)} \equiv \int_M \alpha \qquad (5)$$

where we have introduced the inhomogeneous differential form

$$\alpha = \frac{1}{(iT)^n}\, e^{iT(H+\omega)} = e^{iTH} \sum_{k=0}^{n} \frac{(iT)^{k-n}}{k!}\, \omega^k \qquad (6)$$

From the Berezin rules, the integration in (5) is non-zero only on the top-form (degree 2n) component ω^n of α, and all forms in (6) of Grassmann-degree higher than 2n vanish because of the fermionic nature of the variables ψ^μ. The integral in (5) can be thought of as the partition function of a zero-dimensional quantum field theory with bosonic fields x^μ, fermion fields ψ^μ, and action S(x, ψ) ≡ H(x) + ω(x, ψ). The Hamiltonian vector field is defined by the equation

$$V^\mu(x) \equiv \omega^{\mu\nu}(x)\,\partial_\nu H(x) \qquad\text{or}\qquad dH = -i_V\,\omega \qquad (7)$$

where the exterior derivative operator d = ψ^μ ∂/∂x^μ maps m-forms to (m+1)-forms, and i_V = V^μ(x) ∂/∂ψ^μ is the interior multiplication operator which contracts differential forms to one lower degree with the vector field V. Both d and i_V are nilpotent, d² = (i_V)² = 0, and are graded derivations, i.e. they define operators Q whose action on differential forms obeys the graded Leibniz rule Q(ηβ) = (Qη)β + (−1)^m η(Qβ), where η is an m-form. The flows

$$\dot{x}^\mu(t) = V^\mu(x(t)) \qquad (8)$$

of the Hamiltonian vector field define the classical equations of motion of the dynamical system. Note that by definition the symplectic 2-form ω is closed, i.e. dω = 0.

We now introduce the Cartan equivariant exterior derivative operator [3]

$$Q_V = d + i_V \qquad (9)$$

which is a graded derivation that maps m-forms into the sum of (m−1)- and (m+1)-forms. If we think of commuting, even-degree forms as representing bosons and anticommuting, odd-degree forms as representing fermions, then this suggests that Q_V represents some sort of supersymmetry operator in the dynamical theory. However, unlike the operators d and i_V, Q_V is not nilpotent in general. The square of Q_V is given by the Weil identity

$$Q_V^2\,\beta = (d\, i_V + i_V\, d)\,\beta = L_V\,\beta \qquad (10)$$

for the Lie derivative L_V along V acting on differential forms. The form L_V β represents the infinitesimal (t → 0) variation as x(0) → x(t) of the form β under the flows (8) of V.
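The equivalence of (5) with (1) can be made explicit by carrying out the Berezin integration: only the top-degree term k = n of (6) survives, and it produces the Pfaffian of ω (the overall sign depends on the ordering convention chosen for the Grassmann measure):

```latex
% Berezin integration of the fermionic Gaussian in (5): the expansion of the
% exponential truncates, and the top term omega^n/n! yields Pfaff(omega).
\int d^{2n}\psi\; e^{\frac{iT}{2}\,\omega_{\mu\nu}\psi^\mu\psi^\nu}
  = (iT)^n \int d^{2n}\psi\; \frac{\omega^n}{n!}
  = (iT)^n\, \mathrm{Pfaff}\,\omega(x)
  = \pm\, (iT)^n \sqrt{\det\omega(x)}\,.
```

Inserting this into (5) cancels the prefactor (iT)^{-n} and recovers Z(T) = ∫_M d^{2n}x √det ω(x) e^{iTH(x)}, which is (1).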
The operator Q_V is therefore nilpotent on the kernel ker L_V = {β ∈ Λ*M : L_V β = 0} of the linear derivation L_V, which represents the subspace of differential forms that are invariant under the classical dynamics of the Hamiltonian system, i.e. for which β(x(t)) = β(x(0)). Such forms are known as equivariant differential forms [3]. From the definition (7) and the fact that ω is closed, it follows that the action in (5) satisfies

$$Q_V\, S(x,\psi) = (d + i_V)(H + \omega) = dH + i_V\,\omega \equiv 0 \qquad (11)$$

This means that, if we interpret Q_V as a supersymmetry charge, then the action S is supersymmetric and the partition function (5) determines a supersymmetric quantum field theory. In this setting, Q_V determines an N = ½ supersymmetry algebra Q_V² = L_V, and the BRST complex of physical states is the space ker L_V of equivariant differential forms. In the mathematics literature the BRST cohomology of the charge Q_V (i.e. the space of Q_V-closed forms η, Q_V η = 0, modulo Q_V-exact forms η = Q_V β) is called the U(1)-equivariant cohomology of M generated by the action of V on M [2, 3, 13]. Notice also that, because of the Leibniz rule for Q_V, the differential form (6) is also supersymmetric, Q_V α = 0.

We shall now derive a general integration formula for the partition function (1) in terms of geometrical objects on the phase space which will allow us to examine its localization features explicitly. Consider the integral

$$Z(s) = \int_M \alpha\; e^{-sQ_V\beta} \qquad (12)$$

where β is an arbitrary globally defined differential form on M. We assume that Z(s) is a regular function of s ∈ ℝ₊ and that its s → 0 and s → ∞ limits exist. Its s → 0 limit is just the integral Z(T) = ∫_M α of interest, while its s → ∞ limit represents a localization of (5) onto the smaller subspace of M where Q_V β = 0.
Then (12) and the identity

$$Z(0) = \lim_{s\to\infty} Z(s) - \int_0^\infty ds\, \frac{d}{ds}\, Z(s) \qquad (13)$$

imply that the partition function (5) can be determined as

$$Z(T) = \lim_{s\to\infty} \int_M \alpha\; e^{-sQ_V\beta} + \int_0^\infty ds \int_M Q_V(\alpha\beta)\; e^{-sQ_V\beta} \qquad (14)$$

where we have used the fact that α is supersymmetric. Consider first the last integration in (14). Since Q_V is a graded derivation, we can integrate by parts to get

$$\int_M Q_V(\alpha\beta)\; e^{-sQ_V\beta} = \int_M Q_V\!\left[\alpha\beta\; e^{-sQ_V\beta}\right] + \int_M \alpha\beta\; Q_V\, e^{-sQ_V\beta} \qquad (15)$$

In the first integral on the right-hand side of (15) there is an i_V-exact integrand. Since interior multiplication reduces the order of a form by one and integration over the manifold is non-zero only on the top-degree component of any differential form, it follows that this integral vanishes. As for the d-exact integration in this same integral, we can use Stokes' theorem to write it as an integral over the (2n−1)-dimensional boundary ∂M of the manifold M. Finally, in the last integral we recognize the Lie derivative from (10), and hence

$$\int_0^\infty ds \int_M Q_V(\alpha\beta)\; e^{-sQ_V\beta} = \int_0^\infty ds \left[\oint_{\partial M} \alpha\beta\; e^{-sQ_V\beta} - s \int_M \alpha\beta\,(L_V\beta)\; e^{-sQ_V\beta}\right] \qquad (16)$$

To carry out the first integration in (14), we must explicitly specify the form β. In principle there are many possibilities for β [10, 11, 13], but in order to obtain finite results in the limit s → ∞ we need to ensure that the form Q_V β has a 0-form component to produce an exponential damping factor, since higher order forms contribute only polynomially due to antisymmetry (see (6)). This is guaranteed only if β has a 1-form component. Thus it is only the 1-form part of β that will be relevant in the following, and so without loss of generality we assume that β ≡ B_μ ψ^μ. Furthermore, we need the 0-form part V^μ B_μ of Q_V β to attain its global minimum at zero so that the large-s limit of (14) yields a non-zero result. This boundedness requirement is equivalent to the condition that the component of B along V has the same orientation as V.
In order to implement such a condition we need to introduce a globally defined Euclidean-signature metric tensor g_{μν}(x) on the phase space. Then the most general form of β, up to components orthogonal to V, is given by

$$\beta(x,\psi) = f(x)\, g(V,\cdot) \equiv f(x)\, g_{\mu\nu}(x)\, V^\mu(x)\, \psi^\nu \qquad (17)$$

where f(x) is any strictly-positive smooth function on M. With this choice for β we have Q_V β = K_V + Ω_V, where

$$K_V = f\cdot g(V,V) \equiv f\cdot g_{\mu\nu} V^\mu V^\nu\,, \qquad \Omega_V = d\!\left[f\cdot g(V,\cdot)\right] \equiv f\left(2g\cdot\nabla V - L_V g\right) + (df)\, g(V,\cdot) \qquad (18)$$

Here ∇ ≡ d + Γ is the usual covariant derivative, with Γ the Levi-Civita-Christoffel (affine) connection associated with the Riemannian metric g, and

$$(L_V g)_{\mu\nu} = g_{\mu\lambda}\,\nabla_\nu V^\lambda + g_{\nu\lambda}\,\nabla_\mu V^\lambda \qquad (19)$$

are the components of the Lie derivative of g along V. We now substitute these identities into (14) to write the first integration there as a sum over the critical point set I(H), which coincides with the zero locus of the Hamiltonian vector field V:

$$\lim_{s\to\infty}\int_M \alpha\; e^{-sQ_V\beta} = \lim_{s\to\infty}\int_{M\otimes T^*M} d^{2n}x\, d^{2n}\psi\; \frac{e^{iT\left(H + \frac{1}{2}\omega_{\mu\nu}\psi^\mu\psi^\nu\right)}}{(iT)^n}\; e^{-s f\cdot g_{\mu\nu}V^\mu V^\nu - \frac{s}{2}(\Omega_V)_{\mu\nu}\psi^\mu\psi^\nu}$$
$$= \left(\frac{2\pi i}{T}\right)^{n} \int_{M\otimes T^*M} d^{2n}x\, d^{2n}\psi\; e^{iT\left(H(x) + \frac{1}{2}\omega_{\mu\nu}(x)\psi^\mu\psi^\nu\right)}\, f^{-n}(x)\, \frac{\delta(V(x))}{\sqrt{\det g(x)}}\; \mathrm{Pfaff}\,\Omega_V(x)\; \delta(\psi)$$
$$= \left(\frac{2\pi i}{T}\right)^{n} \sum_{p\in I(H)} \frac{f^{-n}(p)\; e^{iTH(p)}}{|\det dV(p)|}\; \frac{\mathrm{Pfaff}\,\Omega_V(p)}{\sqrt{\det g(p)}} \qquad (20)$$

Finally, we can rewrite the determinants in (20) using the fact that at a critical point p ∈ I(H) the Hessian of H can be written, using the Hamilton equations (7), as

$$H(p)_{\mu\nu} \equiv \partial_\mu\partial_\nu H(p) = -\left(\partial_\mu V^\lambda\right)(p)\, \omega_{\lambda\nu}(p) \qquad (21)$$

and likewise from the definition of the 2-form Ω_V in (18) we have

$$g^{\mu\lambda}(p)\,(\Omega_V)_{\lambda\nu}(p) = f(p)\left[2\,\omega^{\mu\lambda}(p)\, H(p)_{\lambda\nu} - g^{\mu\lambda}(p)\,(L_V g)_{\lambda\nu}(p)\right] \qquad (22)$$

where g^{μν} is the matrix inverse of g_{μν}.
Substituting (21) and (22) into the large-s limit integral (20) in (14), and combining this with (16) and the choice of β in (17), we arrive at our final expression for the integral (1) in terms of geometrical characteristics of the phase space:

$$Z(T) = \left(\frac{2\pi}{T}\right)^{n} \sum_{p\in I(H)} e^{\frac{i\pi}{4}\eta_H(p)}\, \sqrt{\frac{\det\omega(p)}{\det H(p)}}\; \frac{e^{iTH(p)}}{\sqrt{\det\left(1 - H^{-1}\omega\, g^{-1} L_V g/2\right)(p)}}$$
$$+ \frac{1}{(iT)^n}\int_0^\infty ds \oint_{\partial M} \frac{e^{iTH - sK_V}}{(n-1)!}\; f\cdot g(V,\cdot)\,\left(iT\omega - s\,\Omega_V\right)^{n-1}$$
$$- \frac{1}{(iT)^n}\int_0^\infty ds\; s \int_M \frac{e^{iTH - sK_V}}{(n-1)!}\; f^2\cdot g(V,\cdot)\,(L_V g)(V,\cdot)\,\left(iT\omega - s\,\Omega_V\right)^{n-1} \qquad (23)$$

where we have used the fact that the fermion field g(V, ·) is nilpotent, and the factor e^{iπη_H(p)/4} arises from a proper account of the sign of the Pfaffian, Pfaff dV(p) ∼ [det(ω^{-1}H)]^{1/2}(p), in (20) at each critical point p ∈ I(H).

The expression (23) represents an alternative to the conventional loop-expansion [14] which explicitly takes into account the geometric symmetries that make the 1-loop approximation exact. It is readily seen that, for closed phase spaces (∂M = ∅), (23) reduces to the Duistermaat-Heckman integration formula (2) whenever the Hamiltonian vector field V is a conformal Killing vector of a metric g, i.e.

$$L_V\, g = \Lambda\, g \qquad (24)$$

which in local coordinates reads

$$g_{\mu\lambda}\,\nabla_\nu V^\lambda + g_{\nu\lambda}\,\nabla_\mu V^\lambda = \frac{1}{n}\left(\nabla_\lambda V^\lambda\right) g_{\mu\nu} = \frac{1}{n}\,\nabla_\lambda\!\left(\omega^{\lambda\rho}\,\partial_\rho H\right) g_{\mu\nu} \qquad (25)$$

where the smooth function Λ(x) = tr ∇V(x)/n is fixed by contracting both sides of (24) with g^{μν}. Notice that the scaling function Λ(x) in (24) vanishes at the critical points of H, so that the only possible global Hamiltonian conformal Killing vectors are those which generate global isometries of g, Λ(x) ≡ 0 almost everywhere on M, or non-homothetic transformations for which Λ(x) is a globally-defined non-constant function on M. The former case, wherein the Hamiltonian vector field is a Killing vector of some globally-defined Riemannian metric tensor on M, is well-known to represent a quite general class of dynamical systems for which the Duistermaat-Heckman formula is exact [3], [5]-[13]. In that case, we set f = 1 in (17), so that L_V β = (L_V g)(V, ·) = 0, i.e. β ∈ ker L_V is a supersymmetric fermion field.
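The vanishing of Λ on I(H), asserted below (25), can be verified directly: at a critical point p the Christoffel contribution to ∇_λV^λ drops out since V(p) = 0, and differentiating V^μ = ω^{μν}∂_νH at p gives

```latex
% Why Lambda(p) = 0 for p in I(H): the trace of the product of an
% antisymmetric matrix (omega) with a symmetric one (the Hessian) vanishes.
\partial_\lambda V^\mu(p)
  = \left[\partial_\lambda \omega^{\mu\nu}\right](p)\,
    \underbrace{\partial_\nu H(p)}_{=\,0}
    + \omega^{\mu\nu}(p)\, H(p)_{\nu\lambda}
\quad\Longrightarrow\quad
n\,\Lambda(p) = \nabla_\lambda V^\lambda(p)
  = \omega^{\lambda\nu}(p)\, H(p)_{\nu\lambda} = 0\,.
```

This is consistent with the reduction of (23) to (2) for conformal Killing vectors: with L_V g = Λ g, the extra determinant factor evaluated at a critical point becomes det(1 − Λ(p) H^{-1}ω/2) = 1.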
Then the derivation of (23) also serves to show that the localization integral (12) coincides with the partition function (1) for all s. This is because the supersymmetric action S − sQ_V β in Z(s) is cohomologous under Q_V to the supersymmetric action S, and the partition function (5) depends only on the BRST cohomology class of S, and not on its particular representative. This feature is often referred to as the equivariant localization principle [2, 8, 9, 11, 13]: we can topologically renormalize the action S(x, ψ) without changing the value of the integral Z(T) in (5). This can be thought of as a Ward identity associated with the "hidden" supersymmetry of the dynamical theory. Notice that if H has no stationary points on the boundary ∂M of M, then the first s-integration on the right-hand side of (16) can be carried out explicitly and yields

$$Z^{(0)}_{\partial M}(T) = \oint_{\partial M} \frac{e^{iTH}}{g(V,V)}\; g(V,\cdot) \sum_{k=0}^{n-1} \frac{(-1)^k}{(n-k-1)!} \left(\frac{\Omega_V}{K_V}\right)^{k} \frac{\omega^{n-k-1}}{(iT)^{k+1}} \qquad (26)$$

which represents the additional contribution to the Duistermaat-Heckman integration formula (2) for manifolds with boundary [3].

The case of a non-vanishing Λ(x) is similar to the isometry case from the point of view of the localization mechanism discussed above. Note that away from the critical points of H we can choose the function f(x) in (17) so that β = g(V,·)/g(V,V). With this choice for the localization 1-form β, it is easy to show that away from the critical point set of the Hamiltonian it satisfies L_V β = 0. Thus away from the subset I(H) ⊂ M the conformal Killing condition can be cast into the same supersymmetric context as the isometry condition by a rescaling of the metric tensor in (24), g_{μν} → G_{μν} = g_{μν}/g(V,V), for which L_V G = 0. Of course the rescaled metric G_{μν}(x) is only defined on M − I(H), but all that is needed to establish the localization of (5) onto the zeroes of the vector field V (i.e.
the equivariant localization principle) is an invariant metric tensor (or equivalently an equivariant differential form β) which is defined everywhere on M except possibly in an arbitrarily small neighbourhood of I(H) [7, 13]. However, in contrast to isometry generators, the generators of conformal transformations need not correspond to a global U(1)-action on M. This is because although the isometry group of a compact space is itself compact, the conformal group need not be. We might therefore expect that globally the case of a non-vanishing scaling function Λ(x) in (24) represents a new sort of localizable Hamiltonian dynamics.

To explore this possibility, we now turn to an explicit example of a Hamiltonian system which generates non-zero conformal motions of a Riemannian metric. We consider the plane M = ℝ² with its usual flat Euclidean metric, which in complex coordinates is ds² = dz dz̄. In this case the conformal Killing equations (25) become simply ∂_z̄ V^z = ∂_z V^z̄ = 0, and thus the conformal group of the flat plane is generated by arbitrary holomorphic vector fields V^z = F(z), V^z̄ = F̄(z̄) (these generate the infinite-dimensional classical Virasoro algebra). The classical equations of motion determined by the Hamiltonian flows of these vector fields are therefore the arbitrary analytic coordinate transformations

$$\dot{z}(t) = F(z(t))\,, \qquad \dot{\bar z}(t) = \bar F(\bar z(t)) \qquad (27)$$

We shall now explicitly construct a Hamiltonian system associated with such a vector field. For definiteness, we consider the conformal Killing vector which describes a Hamiltonian system with n + 1 distinct stationary points,

$$V^z = i\beta z\,(1 - \alpha_1 z)\cdots(1 - \alpha_n z) \qquad (28)$$

at z = 0 and z = 1/α_i, where β, α_i ∈ ℂ. The associated scaling function in (24) is then

$$\Lambda(z,\bar z) = \partial_z V^z + \partial_{\bar z} V^{\bar z} \qquad (29)$$

The integrability of the Hamiltonian equations (7) requires that the symplectic 2-form be invariant under the flows of V, i.e. L_V ω = d i_V ω = 0.
This leads to the first-order linear partial differential equation

$$V^z\, \partial_z\, \omega_{z\bar z} + V^{\bar z}\, \partial_{\bar z}\, \omega_{z\bar z} = -\Lambda(z,\bar z)\; \omega_{z\bar z} \qquad (30)$$

where ω ≡ ω_{z z̄} ψ^z ψ^z̄. The equation (30) is easily solved by separation of variables, and the solution for the symplectic 2-form with arbitrary separation parameter λ ∈ ℝ is

$$\omega^{(\lambda)}_{z\bar z}(z,\bar z) = w_\lambda(z)\, \bar w_\lambda(\bar z)\,/\,V^z V^{\bar z} \qquad (31)$$

where

$$w_\lambda(z) = e^{i\lambda \int dz/V^z} = \left[z\,(1-\alpha_1 z)^{A_1}\cdots(1-\alpha_n z)^{A_n}\right]^{\lambda/\beta} \qquad (32)$$

and the constants

$$A_i(\alpha_1,\ldots,\alpha_n) = (\alpha_i)^{n-1} \prod_{j\neq i} \frac{1}{\alpha_i - \alpha_j} \qquad (33)$$

are the coefficients of the partial fraction decomposition

$$(V^z)^{-1} = \frac{1}{i\beta}\left[\frac{1}{z} + \sum_{i=1}^{n} \frac{A_i}{1-\alpha_i z}\right] \qquad (34)$$

To ensure that (31) is a single-valued function on ℂ, we restrict the α_k's to all have the same phase, so that the A_i(α_1,…,α_n) are real, and the parameter β to be real-valued. The Hamiltonian equations (7) can now be integrated up with the vector field (28) and the symplectic 2-form (31), from which we find the family of Hamiltonians

$$H^{(\lambda)}_{\beta,\alpha_i}(z,\bar z) = \frac{1}{\lambda}\left[z\,(1-\alpha_1 z)^{A_1}\cdots(1-\alpha_n z)^{A_n}\right]^{\lambda/\beta} \left[\bar z\,(1-\bar\alpha_1 \bar z)^{A_1}\cdots(1-\bar\alpha_n \bar z)^{A_n}\right]^{\lambda/\beta} \qquad (35)$$

To ensure that this Hamiltonian has only non-degenerate critical points we set λ = β. This also guarantees that the level (constant energy) curves of this Hamiltonian coincide with the curves which are the solutions of the equations of motion (27). Since the Hamiltonian (35) either vanishes or is infinite on its critical point set, it is easy to show that the partition function (1) is independent of the α_k and coincides with the anticipated result from the Duistermaat-Heckman integration formula (2),

$$Z(T) = \int dz\, d\bar z\; \omega^{(\beta)}_{z\bar z}(z,\bar z)\; e^{iT H^{(\beta)}_{\beta,\alpha_i}(z,\bar z)} = \frac{2\pi i\beta}{T} \qquad (36)$$

This partition function coincides with that of the simple harmonic oscillator Hamiltonian H = (1/β)(p² + q²) ∼ z z̄/β. Notice that this particular example is also applicable to the compactified case where M is the Riemann sphere S² ≃ ℂ ∪ {∞}.
There the conformal group is the finite-dimensional Lie group SL(2,ℂ)/ℤ₂ ≃ SO(3,1) of projective conformal (Möbius) transformations (i.e. for which V^z(z) is at most quadratic in z), and volume forms contain an additional factor (1 + w w̄)^{-2} associated with the compactness of S². In this case the Möbius transformation z → w_β(z) above is a diffeomorphism of the Riemann sphere. As for the plane, however, the conformal dynamics (39) are quite different from the isometric dynamics generated by the usual height function [6, 7, 11], and this is related to the fact that while the isometry group SO(3) of S² is compact, its conformal group is not. The conformal group structures on spaces like S² give novel generalizations of the localizable systems which are usually associated with coadjoint orbits of the appropriate isometry groups and the quantization of spin systems [6, 7, 11, 12].

Next, we consider the more complicated example of the cubic conformal Killing vector V^z = iβz(1 − α²z²), for which the classical trajectories are determined by the equation

$$e^{i\beta(t-t_0)} = \frac{z(t)}{\sqrt{1 - \alpha^2 z^2(t)}} \qquad (40)$$

Here the more complicated trajectories, which coincide with the level curves of the Hamiltonian, exhibit a distinct difference from the usual circular orbits of the linear (harmonic oscillator) and quadratic vector fields discussed above. Additionally, in this case for energies above E_c = 1/β|α|² we realize a one-to-two mapping (37) of the plane, as the point at w = ∞ is now mapped to z = ±1/α. For the example α = β = 1 depicted in Fig. 2 there is a central hour-glass shaped region which can be seen to be in a one-to-one correspondence with the domain orbits. For energies greater than E_c = 1 in this case, the classical trajectories of the system depend crucially on initial conditions, as there are equivalent orbits about each singularity at z = ±1.
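The form of (40) can be checked by direct integration of the cubic flow ż = V^z, using the partial-fraction decomposition of 1/V^z (a short verification, with β and α as above):

```latex
% Integrating zdot = i beta z (1 - alpha^2 z^2) by partial fractions:
% 1/(z(1-alpha^2 z^2)) = 1/z + alpha^2 z/(1 - alpha^2 z^2).
\frac{dz}{z\,(1-\alpha^2 z^2)}
  = \left[\frac{1}{z} + \frac{\alpha^2 z}{1-\alpha^2 z^2}\right] dz
  = d\left[\ln z - \tfrac{1}{2}\ln\!\left(1-\alpha^2 z^2\right)\right]
  = i\beta\, dt
\quad\Longrightarrow\quad
\frac{z(t)}{\sqrt{1-\alpha^2 z^2(t)}} = e^{i\beta(t-t_0)}\,.
```

The square root here is the origin of the one-to-two character of the map at high energies: the two preimages ±z of a given value of the left-hand side correspond to the two singularities z = ±1/α.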
The apparent equivalence between localizable Hamiltonian systems and harmonic oscillator Hamiltonians is also observed for those which generate isometries [11]. It is a consequence of the fact that these Hamiltonians generate circle actions, which is the basic ingredient in the Duistermaat-Heckman theorem. This large degree of symmetry in the theory is precisely what is required to reduce the complicated integrations in (1) to Gaussian (harmonic oscillator) ones and hence render the semi-classical approximation to the partition function exact. It would be interesting to construct conformal integrable models on more general spaces other than the ones we have considered above where the equivalence with the harmonic oscillator dynamics was quite transparent because of the holomorphic-antiholomorphic decomposition of the Hamiltonian system. In the general case we expect that more complicated Hamiltonians which generate conformal motions will share common features with those which are isometry generators, but the dynamics of these systems will be quite different. These different dynamical structures could play an important role in path integral localizations which are expressed in terms of trajectories on the phase space [5,10]. It would be very interesting to see if these general conformal symmetries of the classical theory remain unbroken by quantum corrections in a quantum mechanical path integral generalization. The absence of such a conformal anomaly could then lead to a generalization of the above extended localizations to path integral localization formulas. The appearance of the larger (non-compact) conformal group in certain settings may also lead to interesting new structures, such as in coadjoint orbit quantization [6,7,11,12] or the nonabelian generalizations of the Duistermaat-Heckman theorem [8] which employ the full isometry group of the phase space. We wish to thank D. Austin, R. Froese, I. Kogan, A. Polyakov, A. Niemi and O. 
Tirkkonen for helpful discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.

Figure 1: Conformal flows for the quadratic generator V^z = iz(1 − z).

Figure 2: Conformal flows for the cubic generator V^z = iz(1 − z²).

Indeed, if we set α_i = 0, then ω^{(β)} becomes the Darboux 2-form and H^{(β)}_{β,0} the harmonic oscillator Hamiltonian. Furthermore, the scaling function (29) vanishes, the Killing vector V^z = iβz generates rotations of the plane, and the Hamiltonian flows (27) are the circular orbits z(t) = e^{iβ(t−t₀)} about the origin in the complex plane, of period 2π/β. This is the classic example of a dynamical system with WKB-exact partition function, and moreover it is the unique localizable system on a homogeneous phase space [11] (i.e. one with ∇ω = 0, for which the only possible conformal motions (25) are isometries). In fact, we can integrate up the flow equation (27) in the general case, and we find that the classical trajectories z(t) are determined by the equation w_β(z(t)) = e^{iβ(t−t₀)} (37). The coordinate change z → w_β(z) is just the finite conformal transformation generated by the vector field (28), and it maps the dynamical system (ω^{(β)}, H^{(β)}_{β,α_i}) onto the harmonic oscillator H ∝ w w̄, ω ∝ ψ^w ψ^{w̄}, with circular classical trajectories w(t) = e^{iβ(t−t₀)}. This transformation is in general multi-valued and has singularities at the critical points z = 1/α_i of the Hamiltonian H^{(β)}_{β,α_i}. It is therefore not a diffeomorphism of the plane for α_i ≠ 0, and the Hamiltonian system (ω^{(β)}, H^{(β)}_{β,α_i}) is not globally isomorphic to the simple harmonic oscillator.
The transformations (37) are one-to-one in a neighbourhood of the origin, but are one-to-many when the energy of the system is above a critical value E_c. Asymptotically, the Hamiltonian H^{(β)}_{β,α_i} tends to a finite value, which is given by (38). If we consider the effect of the change of variable (37) on the Hamiltonian system with H = w_β w̄_β, then circular orbits of energy less than E_c are mapped into closed orbits which are contractible to the origin and are in a one-to-one correspondence with the domain orbits in the complex z-plane. As the energy tends towards E_c these orbits grow larger, and at the critical energy they actually reach infinity and return in a finite time determined by the frequency β. Above the critical energy the transformation (37) is in general one-to-many, whereby single orbits are mapped to distinct orbits about each of the n singular points {z_k = 1/α_k}_{k=1}^n of the Hamiltonian. This complicated behaviour of the conformal flows (37) is in marked contrast to the nature of the harmonic oscillator orbits, which always just encircle the origin.

The orbit (39) describes a circle in the complex plane centered, for total energy βH = E, at the point Eᾱ(E|α|² − 1)^{-1} and of radius √E |E|α|² − 1|^{-1} (Fig. 1). In this case the (invertible) Möbius transformation (37) effectively maps the point at w_β = ∞ to z = 1/α, and the flows (39) are unbounded and go out to infinity as E → 1/αᾱ.

References

[1] J. J. Duistermaat and G. J. Heckman, Invent. Math. 69 (1982), 259; 72 (1983), 153
[2] M. F. Atiyah and R. Bott, Topology 23 (1984), 1
[3] N. Berline and M. Vergne, C. R. Acad. Sci. Paris 295 (1982), 539; Duke Math. J. 50 (1983), 539
[4] M. F. Atiyah, Asterisque 131 (1985), 43; J.-M. Bismut, Commun. Math. Phys.
98 (1985), 213; 103 (1986), 127; A. Hietämaki, A. Yu. Morozov, A. J. Niemi and K. Palo, Phys. Lett. B263 (1991), 417; T. Kärki and A. J. Niemi, in Proc. XXVII Intern. Ahrenshoop Symp., eds. D. Lüst and G. Weigt, DESY 94-053 (1994), 175 . M Blau, E Keski-Vakkuri, A J Niemi, Phys. Lett. 24692M. Blau, E. Keski-Vakkuri and A. J. Niemi, Phys. Lett. B246 (1990), 92 . A J Niemi, P Pasanen, Phys. Lett. 253349A. J. Niemi and P. Pasanen, Phys. Lett. B253 (1991), 349 . E Keski-Vakkuri, A J Niemi, G W Semenoff, O Tirkkonen, Phys. Rev. 443899E. Keski-Vakkuri, A. J. Niemi, G. W. Semenoff and O. Tirkkonen, Phys. Rev. D44 (1991), 3899 . E Witten, J. Geom. Phys. 9303E. Witten, J. Geom. Phys. 9 (1992), 303 . H M Dykstra, J D Lykken, E J Raiten, Phys. Lett. 302223H. M. Dykstra, J. D. Lykken and E. J. Raiten, Phys. Lett. B302 (1993), 223 . A J Niemi, O Tirkkonen, Ann. Phys. 235318A. J. Niemi and O. Tirkkonen, Ann. Phys. 235 (1994), 318 . R J Szabo, G W Semenoff, Nucl. Phys. 421391R. J. Szabo and G. W. Semenoff, Nucl. Phys. B421 (1994), 391 . G W Semenoff, R J Szabo, Mod. Phys. Lett. 92705G. W. Semenoff and R. J. Szabo, Mod. Phys. Lett. A9 (1994), 2705 . M Blau, G Thompson, J. Math. Phys. 362192M. Blau and G. Thompson, J. Math. Phys. 36 (1995), 2192 L Hörmander, The Analysis of Linear Partial Differential Operators. BerlinSpringer-VerlagL. Hörmander, The Analysis of Linear Partial Differential Operators, Springer-Verlag (Berlin) (1983)
Title: Statistical and knowledge supported visualization of multivariate data
Author: Magnus Fontes ([email protected])
Affiliation: Centre for Mathematical Sciences, Lund University, Box 118, SE-22100 Lund, Sweden
DOI: 10.1007/978-3-642-20236-0_6
arXiv: 1008.5374
PDF: https://arxiv.org/pdf/1008.5374v1.pdf
Corpus ID: 59067628
SHA: 36716a1de4d1e6cb47a1a5a05e4477aca633ec0a

Abstract: In the present work we have selected a collection of statistical and mathematical tools useful for the exploration of multivariate data and we present them in a form that is meant to be particularly accessible to a classically trained mathematician. We give self-contained and streamlined introductions to principal component analysis, multidimensional scaling and statistical hypothesis testing. Within the presented mathematical framework we then propose a general exploratory methodology for the investigation of real world high dimensional datasets that builds on statistical and knowledge supported visualizations. We exemplify the proposed methodology by applying it to several different genomewide DNA-microarray datasets. The exploratory methodology should be seen as an embryo that can be expanded and developed in many directions. As an example we point out some recent promising advances in the theory for random matrices that, if further developed, potentially could provide practically useful and theoretically well founded estimations of information content in dimension reducing visualizations. We hope that the present work can serve as an introduction to, and help to stimulate more research within, the interesting and rapidly expanding field of data exploration.
1 Introduction.

In the scientific exploration of some real world phenomena, a lack of detailed knowledge about governing first principles makes it hard to construct well-founded mathematical models for describing and understanding observations. In order to gain some preliminary understanding of the mechanisms involved, and to be able to make some reasonable predictions, we then often have to resort to purely statistical models.
Sometimes, though, a stand-alone and very general statistical approach fails to exploit the full exploratory potential of a given dataset. In particular, a general statistical model often does not a priori incorporate all the accumulated field-specific expert knowledge that might exist concerning a dataset under consideration. In the present work we argue for the use of a set of statistical and knowledge supported visualizations as the backbone of the exploration of high dimensional multivariate datasets that are otherwise hard to model and analyze. The exploratory methodology we propose is generic, but we exemplify it by applying it to several different datasets coming from the field of molecular biology. Our choice of example application field is in principle anyhow only meant to be reflected in the list of references, where we have consciously striven to give references that should be particularly useful and relevant for researchers interested in bioinformatics. The generic case we have in mind is that we are given a set of observations of several different variables that presumably have some interrelations that we want to uncover. There exist many rich real world sources giving rise to interesting examples of such datasets within the fields of e.g. finance, astronomy, meteorology or life science, and the reader should without difficulty be able to pick a favorite example to bear in mind. We will use separate but synchronized Principal Component Analysis (PCA) plots of both variables and samples to visualize datasets. The use of separate but synchronized PCA-biplots that we argue for is not standard, and we claim that it is particularly advantageous, compared to using traditional PCA-biplots, when the datasets under investigation are high dimensional.
A traditional PCA-biplot depicts both the variables and the samples in the same plot, and if the dataset under consideration is high dimensional such a joint variable/sample plot can easily become overloaded and hard to interpret. In the present work we give a presentation of the linear algebra of PCA accentuating the natural inherent duality of the underlying singular value decomposition. In addition we point out how the basic algorithms easily can be adapted to produce nonlinear versions of PCA, so called multidimensional scaling, and we illustrate how these different versions of PCA can reveal relevant structure in high dimensional and complex real world datasets. Whether an observed structure is relevant or not will be judged by knowledge supported and statistical evaluations. Many present day datasets, coming from the application fields mentioned above, share the statistically challenging peculiarity that the number of measured variables (p) can be very large (10^4 ≤ p ≤ 10^10), while at the same time the number of observations (N) sometimes can be considerably smaller (10^1 ≤ N ≤ 10^3). In fact all our example datasets will share this so called "large p small N" characteristic, and our exploratory scheme, in particular the statistical evaluation, is well adapted to cover also this situation. In traditional statistics one is usually presented with the reverse situation, i.e. "large N small p", and if one tries to apply traditional statistical methods to "large p small N" datasets one sometimes runs into difficulties. To begin with, in the applications we have in mind here, the underlying probability distributions are often unknown, and if the number of observations is relatively small they are consequently hard to estimate. This makes robustness of the employed statistical methods a key issue.
Even in cases when we assume that we know the underlying probability distributions, or when we use very robust statistical methods, the "large p small N" case presents difficulties. One focus of statistical research during the last few decades has in fact been driven by these "large p small N" datasets and the possibility of fast implementations of statistical and mathematical algorithms. An important example of these new trends in statistics is multiple hypothesis testing on a huge number of variables. High dimensional multiple hypothesis testing has stimulated the creation of new statistical tools, such as the replacement of the standard concept of p-value in hypothesis testing with the corresponding q-value, connected with the notion of false discovery rate; see [12], [13], [48], [49] for the seminal ideas. As a remark we point out that multivariate statistical analogues of classical univariate statistical tests can sometimes perform better in multiple hypothesis testing, but then a relatively small number of samples normally makes it necessary to first reduce the dimensionality of the data, for instance by using PCA, in order to be able to apply the multivariate tests; see e.g. [14], [33], [32] for ideas in this direction. In the present work we give an overview of and an introduction to the above mentioned statistical notions. The present work is in general meant to be an introduction to, and to help stimulate more research within, the field of data exploration. We also hope to convince the reader that statistical and knowledge supported visualization already is a versatile and powerful tool for the exploration of high dimensional real world datasets. Finally, "knowledge supported" should here be interpreted as "any use of some extra information concerning a given dataset that the researcher might possess, have access to or gain during the exploration" when analyzing the visualization.
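To make the false discovery rate machinery mentioned above concrete, the following sketch implements the classical Benjamini–Hochberg step-up procedure, one standard FDR-controlling method on which the q-value refinements build. The simulated p-values and the chosen FDR level are purely illustrative and not taken from the text.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejected null hypotheses at false discovery rate level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m        # alpha * k / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])            # largest passing rank
        reject[order[:k + 1]] = True                # reject all up to rank k
    return reject

# 90 null p-values (uniform) and 10 strong planted signals
rng = np.random.default_rng(6)
pvals = np.concatenate([rng.uniform(size=90),
                        rng.uniform(0, 1e-4, size=10)])
rejected = benjamini_hochberg(pvals, alpha=0.05)
assert rejected.sum() >= 10    # at least the planted signals are found
```

The step-up structure (reject everything up to the largest rank passing its threshold) is what distinguishes this from a naive per-test cutoff.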
We illustrate this knowledge supported approach by using knowledge based annotations coming with our example datasets. We also briefly comment on how to use information collected from available databases to evaluate or preselect groups of significant variables; see e.g. [11], [16], [31], [42] for some more far reaching suggestions in this direction.

2 Singular value decomposition and principal component analysis.

Singular value decomposition (SVD) was discovered independently by several mathematicians towards the end of the 19th century. See [47] for an account of the early history of SVD. Principal component analysis (PCA) for data analysis was then introduced by Pearson [25] in 1901 and independently later developed by Hotelling [24]. The central idea in classical PCA is to use an SVD on the column averaged sample matrix to reduce the dimensionality of the data set while retaining as much variance as possible. PCA is also closely related to the Karhunen–Loève expansion (KLE) of a stochastic process [29], [34]. The KLE of a given centered stochastic process is an orthonormal L²-expansion of the process with coefficients that are uncorrelated random variables. PCA corresponds to the empirical or sample version of the KLE, i.e. when the expansion is inferred from samples. Noteworthy here is the Karhunen–Loève theorem, stating that if the underlying process is Gaussian, then the coefficients in the KLE will be independent and normally distributed. This is e.g. the basis for showing results concerning the optimality of the KLE for filtering out Gaussian white noise. PCA was proposed as a method to analyze genomewide expression data by Alter et al. [1] and has since then become a standard tool in the field. Supervised PCA was suggested by Bair et al. as a regression and prediction method for genomewide data [8], [9], [17].
Supervised PCA is similar to normal PCA, the only difference being that the researcher preconditions the data by using some kind of external information. This external information can come from e.g. a regression analysis with respect to some response variable or from some knowledge based considerations. We will here give an introduction to SVD and PCA that focuses on visualization and the notion of using separate but synchronized biplots, i.e. plots of both samples and variables. Biplots displaying samples and variables in the same, usually two-dimensional, diagram have been used frequently in many different types of applications, see e.g. [20], [21], [22] and [15], but the use of separate but synchronized biplots that we present is not standard. We finally describe the method of multidimensional scaling, which builds on standard PCA, but we start by describing SVD for linear operators between finite dimensional euclidean spaces with a special focus on duality.

Dual singular value decomposition

Singular value decomposition is a decomposition of a linear mapping between euclidean spaces. We will discuss the finite dimensional case and we consider a given linear mapping L : R^N → R^p. Let e_1, e_2, ..., e_N be the canonical basis in R^N and let f_1, f_2, ..., f_p be the canonical basis in R^p. We regard R^N and R^p as euclidean spaces equipped with their respective canonical scalar products, (·,·)_{R^N} and (·,·)_{R^p}, in which the canonical bases are orthonormal. Let L* : R^p → R^N denote the adjoint operator of L defined by

(L(u), v)_{R^p} = (u, L*(v))_{R^N} ;  u ∈ R^N , v ∈ R^p .   (2.1)

Observe that in applications L(e_k), k = 1, 2, ..., N, normally represent the arrays of observed variable values for the different samples and that L*(f_j), j = 1, 2, ..., p, then represent the observed values of the variables. In our example data sets, the unique p × N matrix X representing L in the canonical bases, i.e.

X_jk = (f_j, L(e_k))_{R^p} ;  j = 1, 2, ..., p ;  k = 1, 2, ..., N ,

contains measurements for all variables in all samples. The transposed N × p matrix X^T contains the same information and represents the linear mapping L* in the canonical bases. The goal of a dual SVD is to find orthonormal bases in R^N and R^p such that the matrices representing the linear operators L and L* have particularly simple forms. We start by noting that directly from (2.1) we get the following direct sum decompositions into orthogonal subspaces: R^N = Ker L ⊕ Im L* (where Ker L denotes the kernel of L and Im L* denotes the image of L*) and R^p = Im L ⊕ Ker L*. We will now make a further dual decomposition of Im L and Im L*. Let r denote the rank of L : R^N → R^p, i.e. r = dim(Im L) = dim(Im L*). The rank of the positive and selfadjoint operator L* ∘ L : R^N → R^N is then also equal to r, and by the spectral theorem there exist values λ_1 ≥ λ_2 ≥ ··· ≥ λ_r > 0 and corresponding orthonormal vectors u_1, u_2, ..., u_r, with u_k ∈ R^N, such that

L* ∘ L(u_k) = λ_k² u_k ;  k = 1, 2, ..., r .   (2.2)

If r < N, i.e. dim(Ker L) > 0, then zero is also an eigenvalue of L* ∘ L : R^N → R^N, with multiplicity N − r. Using the orthonormal set of eigenvectors {u_1, u_2, ..., u_r} for L* ∘ L spanning Im L*, we define a corresponding set of dual vectors v_1, v_2, ..., v_r in R^p by

L(u_k) =: λ_k v_k ;  k = 1, 2, ..., r .   (2.3)

From (2.2) it follows that

L*(v_k) = λ_k u_k ;  k = 1, 2, ..., r   (2.4)

and that

L ∘ L*(v_k) = λ_k² v_k ;  k = 1, 2, ..., r .   (2.5)

The set of vectors {v_1, v_2, ..., v_r} defined by (2.3) spans Im L and is an orthonormal set of eigenvectors for the selfadjoint operator L ∘ L* : R^p → R^p. We thus have a completely dual setup and canonical decompositions of both R^N and R^p into direct sums of subspaces spanned by eigenvectors corresponding to the distinct eigenvalues. We make the following definition.
Definition 2.1 A dual singular value decomposition system for an operator pair (L, L*) is a system consisting of numbers λ_1 ≥ λ_2 ≥ ··· ≥ λ_r > 0 and two sets of orthonormal vectors, {u_1, u_2, ..., u_r} and {v_1, v_2, ..., v_r} with r = rank(L) = rank(L*), satisfying (2.2)–(2.5) above. The positive values λ_1, λ_2, ..., λ_r are called the singular values of (L, L*). We will call the vectors u_1, u_2, ..., u_r principal components for Im L* and the vectors v_1, v_2, ..., v_r principal components for Im L.

Given a dual SVD system we now complement the principal components for Im L*, u_1, u_2, ..., u_r, to an orthonormal basis u_1, u_2, ..., u_N in R^N, and the principal components for Im L, v_1, v_2, ..., v_r, to an orthonormal basis v_1, v_2, ..., v_p in R^p. In these bases we have that

(v_j, L(u_k))_{R^p} = (L*(v_j), u_k)_{R^N} = λ_k δ_jk if j, k ≤ r, and 0 otherwise .   (2.6)

This means that in these ON-bases L : R^N → R^p is represented by the diagonal p × N matrix

[ D 0 ; 0 0 ]   (2.7)

where D is the r × r diagonal matrix having the singular values of (L, L*) in descending order on the diagonal. The adjoint operator L* is represented in the same bases by the transposed matrix, i.e. a diagonal N × p matrix. We translate this to operations on the corresponding matrices as follows. Let U denote the N × r matrix having the coordinates, in the canonical basis in R^N, of u_1, u_2, ..., u_r as columns, and let V denote the p × r matrix having the coordinates, in the canonical basis in R^p, of v_1, v_2, ..., v_r as columns. Then (2.6) is equivalent to X = V D U^T and X^T = U D V^T. This is called a dual singular value decomposition for the pair of matrices (X, X^T). Notice that the singular values and the corresponding separate eigenspaces for L* ∘ L as described above are canonical, but that the set {u_1, u_2, ..., u_r} (and thus also the connected set {v_1, v_2, ..., v_r}) is not canonically defined by L* ∘ L. This set is only canonically defined up to actions of the appropriate orthogonal groups on the separate eigenspaces.

Dual principal component analysis

We will now discuss how to use a dual SVD system to obtain optimal approximations of a given operator L : R^N → R^p by operators of lower rank. If our goal is to visualize the data, then it is natural to measure the approximation error using a unitarily invariant norm, i.e. a norm ∥·∥ that is invariant with respect to unitary transformations of the variables or of the samples:

∥L∥ = ∥V ∘ L ∘ U∥ for all V and U s.t. V*V = Id and U*U = Id .   (2.8)

Using an SVD, directly from (2.8) we conclude that such a norm is necessarily a symmetric function of the singular values of the operator. We will present results for the L²-norm of the singular values, but the results concerning optimal approximations are actually valid with respect to any unitarily invariant norm, see e.g. [35] and [40] for information in this direction. We omit proofs, but all the results in this section are proved using SVDs for the involved operators. The Frobenius (or Hilbert–Schmidt) norm of an operator L : R^N → R^p of rank r is defined by

∥L∥_F² := Σ_{k=1}^r λ_k² ,

where λ_k, k = 1, 2, ..., r, are the singular values of (L, L*). Now let M_{n×n} denote the set of real n × n matrices. We then define the set of orthogonal projections in R^n of rank s ≤ n as

P^n_s := {Π ∈ M_{n×n} ; Π* = Π , Π ∘ Π = Π , rank(Π) = s} .

One important property of orthogonal projections is that they never increase the Frobenius norm, i.e.

Lemma 2.1 Let L : R^N → R^p be a given linear operator. Then ∥Π ∘ L∥_F ≤ ∥L∥_F for all Π ∈ P^p_s, and ∥L ∘ Π∥_F ≤ ∥L∥_F for all Π ∈ P^N_s.

Using this lemma one can prove the following approximation theorems.

Theorem 2.1 Let L : R^N → R^p be a given linear operator.
Then

sup_{Π_p ∈ P^p_s , Π_N ∈ P^N_s} ∥Π_p ∘ L ∘ Π_N∥_F = sup_{Π ∈ P^p_s} ∥Π ∘ L∥_F = sup_{Π ∈ P^N_s} ∥L ∘ Π∥_F = ( Σ_{k=1}^{min(s,r)} λ_k² )^{1/2}   (2.9)

and equality is attained in (2.9) by projecting onto the min(s, r) first principal components for Im L and Im L*.

Theorem 2.2 Let L : R^N → R^p be a given linear operator. Then

inf_{Π_p ∈ P^p_s , Π_N ∈ P^N_s} ∥L − Π_p ∘ L ∘ Π_N∥_F = inf_{Π ∈ P^p_s} ∥L − Π ∘ L∥_F = inf_{Π ∈ P^N_s} ∥L − L ∘ Π∥_F = ( Σ_{k=min(s,r)+1}^{max(s,r)} λ_k² )^{1/2}   (2.10)

and equality is attained in (2.10) by projecting onto the min(s, r) first principal components for Im L and Im L*. We loosely state these results as follows.

Projection dictum. Projecting onto a set of first principal components maximizes average projected vector length and also minimizes average projection error.

We will briefly discuss interpretation for applications. In fact in applications the representation of our linear mapping L normally has a specific interpretation in the original canonical bases. Assume that L(e_k), k = 1, 2, ..., N, represent samples and that L*(f_j), j = 1, 2, ..., p, represent variables. To begin with, if the samples are centered, i.e.

Σ_{k=1}^N L(e_k) = 0 ,

then ∥L∥_F² corresponds to the statistical variance of the sample set. The basic projection dictum can thus be restated for sample-centered data as follows.

Projection dictum for sample-centered data. Projecting onto a set of first principal components maximizes the variance in the set of projected data points and also minimizes average projection error.

In applications we are also interested in keeping track of the value

X_jk = (f_j, L(e_k)) .   (2.11)

It represents the j'th variable's value in the k'th sample. Computing in a dual SVD system for (L, L*) in (2.11) we get

X_jk = λ_1 (e_k, u_1)(f_j, v_1) + ··· + λ_r (e_k, u_r)(f_j, v_r) .   (2.12)

Now using (2.3) and (2.4) we conclude that

X_jk = (1/λ_1) (e_k, L*(v_1))(f_j, L(u_1)) + ··· + (1/λ_r) (e_k, L*(v_r))(f_j, L(u_r)) .
Finally this implies the fundamental biplot formula

X_jk = (1/λ_1) (L(e_k), v_1)(L*(f_j), u_1) + ··· + (1/λ_r) (L(e_k), v_r)(L*(f_j), u_r) .   (2.13)

We now introduce the following scalar product in R^r:

(a, b)_λ := (1/λ_1) a_1 b_1 + ··· + (1/λ_r) a_r b_r ;  a, b ∈ R^r .

Equation (2.13) thus means that if we express the sample vectors in the basis v_1, v_2, ..., v_r for Im L and the variable vectors in the basis u_1, u_2, ..., u_r for Im L*, then we get the value of X_jk simply by taking the (·,·)_λ-scalar product in R^r between the coordinate sequence for the k'th sample and the coordinate sequence for the j'th variable. This means that if we work in a synchronized way in R^r with the coordinates for the samples (with respect to the basis v_1, v_2, ..., v_r) and with the coordinates for the variables (with respect to the basis u_1, u_2, ..., u_r), then the relative positions of the coordinate sequence for a variable and the coordinate sequence for a sample in R^r have a very precise meaning given by (2.13).

Now let S ⊂ {1, 2, ..., r} be a subset of indices and let |S| denote the number of elements in S. Then let Π^p_S : R^p → R^p be the orthogonal projection onto the subspace spanned by the principal components for Im L whose indices belong to S. In the same way let Π^N_S : R^N → R^N be the orthogonal projection onto the subspace spanned by the principal components for Im L* whose indices belong to S. If L(e_k), k = 1, 2, ..., N, represent samples, then Π^p_S ∘ L(e_k), k = 1, 2, ..., N, represent S-approximative samples, and correspondingly if L*(f_j), j = 1, 2, ..., p, represent variables, then Π^N_S ∘ L*(f_j), j = 1, 2, ..., p, represent S-approximative variables. We will interpret the matrix element

X^S_jk := (f_j, Π^p_S ∘ L(e_k))   (2.14)

as representing the j'th S-approximative variable's value in the k'th S-approximative sample.
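The dual SVD relations (2.2)–(2.5) and the biplot formula (2.13) are easy to check numerically. The following sketch uses a small random matrix and NumPy's SVD routine, whose conventions happen to match the ones above; it is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 6, 4                      # p variables, N samples
X = rng.standard_normal((p, N))  # the matrix representing L

# NumPy returns X = V @ diag(lam) @ U.T with lam in descending order
V, lam, Ut = np.linalg.svd(X, full_matrices=False)
U = Ut.T
r = len(lam)                     # generically r = min(p, N)

for k in range(r):
    # (2.2): u_k is an eigenvector of L* o L with eigenvalue lam_k^2
    assert np.allclose(X.T @ (X @ U[:, k]), lam[k] ** 2 * U[:, k])
    # (2.3) and (2.4): duality between the two sets of principal components
    assert np.allclose(X @ U[:, k], lam[k] * V[:, k])
    assert np.allclose(X.T @ V[:, k], lam[k] * U[:, k])

# Biplot formula (2.13): sample coordinates (L(e_k), v_m), variable
# coordinates (L*(f_j), u_m), combined with the (.,.)_lambda scalar product
sample_coords = V.T @ X        # r x N, row m holds (L(e_k), v_m)
variable_coords = U.T @ X.T    # r x p, row m holds (L*(f_j), u_m)
X_rec = (variable_coords.T / lam) @ sample_coords
assert np.allclose(X_rec, X)
```

The last three lines are exactly the synchronized-biplot reading of the data: each matrix entry is recovered as a λ-weighted scalar product of a sample coordinate sequence and a variable coordinate sequence.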
By the biplot formula (2.13) for the operator Π^p_S ∘ L we actually have

X^S_jk = Σ_{m∈S} (1/λ_m) (L(e_k), v_m)(L*(f_j), u_m) .   (2.15)

If |S| ≤ 3 we can visualize our approximative samples and approximative variables by working in a synchronized way in R^{|S|} with the coordinates for the approximative samples and with the coordinates for the approximative variables. The relative positions of the coordinate sequence for an approximative variable and the coordinate sequence for an approximative sample in R^{|S|} then have the very precise meaning given by (2.15). Naturally the information content of a biplot visualization depends in a crucial way on the approximation error we make. The following result gives the basic error estimates.

Theorem 2.3 With notations as above we have the following projection error estimates:

Σ_{j=1}^p Σ_{k=1}^N |X_jk − X^S_jk|² = Σ_{i∉S} |λ_i|²   (2.16)

sup_{j=1,...,p ; k=1,...,N} |X_jk − X^S_jk| ≤ sup_{i∉S} |λ_i| .   (2.17)

We will use the following statistic for measuring projection content.

Definition 2.2 With notations as above, the L²-projection content connected with the subset S is by definition

α_2(S) := Σ_{i∈S} |λ_i|² / Σ_{i=1}^r |λ_i|² .

We note that, in the case when we have sample centered data, α_2(S) is precisely the quotient between the amount of variance that we have "captured" in our projection and the total variance. In particular, if α_2(S) = 1 then we have captured all the variance. Theorem 2.3 shows that we would like to have good control of the distributions of eigenvalues for general covariance matrices. We will address this issue for random matrices below, but we already here point out that we will estimate the projection information content, or signal to noise ratio, in a projection of real world data by comparing the observed L²-projection content with the L²-projection contents for corresponding randomized data.
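The error identity (2.16), the projection content α_2(S), and the suggested comparison with randomized data can be sketched as follows. The planted rank-2 structure and the within-row permutation scheme used for the randomization are illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 50, 10
# planted rank-2 structure plus noise, then sample centering
X = 3.0 * rng.standard_normal((p, 2)) @ rng.standard_normal((2, N))
X = X + rng.standard_normal((p, N))
Xc = X - X.mean(axis=1, keepdims=True)

V, lam, Ut = np.linalg.svd(Xc, full_matrices=False)
S = [0, 1]                         # keep the two first principal components

# Theorem 2.3 / (2.16): squared Frobenius error of the S-projection
XS = V[:, S] @ np.diag(lam[S]) @ Ut[S, :]
err2 = np.sum((Xc - XS) ** 2)
assert np.isclose(err2, np.sum(np.delete(lam, S) ** 2))

# Definition 2.2: the L2-projection content alpha_2(S)
alpha2 = np.sum(lam[S] ** 2) / np.sum(lam ** 2)

# randomized comparison: permute each row independently, destroying the
# variable/sample correlation structure while keeping the marginal values
Xp = np.array([rng.permutation(row) for row in Xc])
Xp = Xp - Xp.mean(axis=1, keepdims=True)
lam_p = np.linalg.svd(Xp, compute_uv=False)
alpha2_rand = np.sum(lam_p[S] ** 2) / np.sum(lam_p ** 2)
assert alpha2_rand < alpha2 <= 1.0   # real structure beats randomized data
```

A large gap between the observed and the randomized projection content is the signal-to-noise indication described in the text.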
Nonlinear PCA and multidimensional scaling

We begin our presentation of multidimensional scaling by looking at the reconstruction problem, i.e. how to reconstruct a dataset given only a proposed covariance or distance matrix. In the case of a covariance matrix, the basic idea is to try to factor a corresponding sample centered SVD or, slightly rephrased, to take the square root of the covariance matrix. Once we have established a reconstruction scheme we note that we can apply it to any proposed "covariance" or "distance" matrix, as long as it has the correct structure, even if it is artificial and a priori not constructed using euclidean transformations on an existing data matrix. This opens up the possibility of using "any type" of similarity measure between samples or variables to construct artificial covariance or distance matrices.

We consider a p × N matrix X where the N columns {x_1, ..., x_N} consist of values of measurements of N samples of p variables. We will throughout this section assume that p ≥ N. We introduce the N × 1 vector 1 = [1, 1, ..., 1]^T, and we recall that the N × N covariance matrix of the data matrix X = [x_1, ..., x_N] is given by

C(x_1, ..., x_N) = (X − (1/N) X 1 1^T)^T (X − (1/N) X 1 1^T) .

We will also need the (squared) distance matrix defined by

D_jk(x_1, ..., x_N) := |x_j − x_k|² ;  j, k = 1, 2, ..., N .

We will now consider the problem of reconstructing a data matrix X given only the corresponding covariance matrix or the corresponding distance matrix. We first note that since the covariance and the distance matrix of a data matrix X both are invariant under euclidean transformations in R^p of the columns of X, it will, if at all, only be possible to reconstruct the p × N matrix X modulo euclidean transformations in R^p of its columns. We next note that the matrices C and D are connected. In fact we have:

Proposition 1 Given data points x_1, x_2, ..., x_N in R^p and the corresponding covariance and distance matrices, C and D, we have that

D_jk = C_jj + C_kk − 2 C_jk .   (2.18)

Furthermore,

C_jk = (1/(2N)) Σ_{i=1}^N (D_ij + D_ik) − (1/2) D_jk − (1/(2N²)) Σ_{i,m=1}^N D_im ,   (2.19)

or in matrix notation,

C = −(1/2) (I − 11^T/N) D (I − 11^T/N) .   (2.20)

Proof. Let

y_i := x_i − (1/N) Σ_{j=1}^N x_j ;  i = 1, 2, ..., N .

Note that C_jk = y_j^T y_k and D_jk = |y_j − y_k|², and that Σ_{j=1}^N y_j = 0. Equality (2.18) above is simply the polarity condition

D_jk = |y_j|² + |y_k|² − 2 C_jk .   (2.21)

Moreover, since Σ_{j=1}^N C_jk = 0 and Σ_{k=1}^N C_jk = 0, by summing over both j and k in (2.21) above we get

Σ_{j=1}^N |y_j|² = (1/(2N)) Σ_{j,k=1}^N D_jk .   (2.22)

On the other hand, by summing only over j in (2.21) we get

Σ_{j=1}^N D_jk = N |y_k|² + Σ_{j=1}^N |y_j|² .   (2.23)

Combining (2.22) and (2.23) we get

|y_k|² = (1/N) Σ_{j=1}^N D_jk − (1/(2N²)) Σ_{j,k=1}^N D_jk .

Plugging this into (2.21) we finally conclude that

D_jk = (1/N) Σ_{i=1}^N (D_ij + D_ik) − (1/N²) Σ_{i,j=1}^N D_ij − 2 C_jk .

This is (2.19). q.e.d.

Now let M_{N×N} denote the set of all real N × N matrices. To reconstruct a p × N data matrix X = [x_1, ..., x_N] from a given N × N covariance or N × N distance matrix amounts to inverting the mappings

Φ : R^p × ··· × R^p ∋ (x_1, x_2, ..., x_N) → C(x_1, x_2, ..., x_N) ∈ M_{N×N}

and

Ψ : R^p × ··· × R^p ∋ (x_1, x_2, ..., x_N) → D(x_1, x_2, ..., x_N) ∈ M_{N×N} .

In general it is of course impossible to invert these mappings since both Φ and Ψ are far from surjectivity and injectivity. Concerning injectivity, it is clear that both Φ and Ψ are invariant under the euclidean group E(p) acting on R^p × ··· × R^p, i.e. under transformations

(x_1, x_2, ..., x_N) → (S x_1 + b, S x_2 + b, ..., S x_N + b) ,

where b ∈ R^p and S ∈ O(p).
This makes it natural to introduce the quotient manifold (R^p × ··· × R^p)/E(p), and possible to define the induced mappings Φ̄ and Ψ̄, well defined on the equivalence classes and factoring the mappings Φ and Ψ through the quotient mapping. We will write

Φ̄ : ([x_1, x_2, ..., x_N]) → C(x_1, x_2, ..., x_N)

and

Ψ̄ : ([x_1, x_2, ..., x_N]) → D(x_1, x_2, ..., x_N) .

We shall show below that both Φ̄ and Ψ̄ are injective. Concerning surjectivity of the maps Φ and Ψ, or Φ̄ and Ψ̄, we will first describe the image of Φ. Since the images of Φ and Ψ are connected through Proposition 1 above, this implicitly describes the image also of Ψ. It is theoretically important that both these image sets turn out to be closed and convex subsets of M_{N×N}. In fact we claim that the following set is the image set of Φ:

P_{N×N} := {A ∈ M_{N×N} ; A^T = A , A ≥ 0 , A 1 = 0} .

To begin with it is clear that the image of Φ is equal to the image of Φ̄ and that it is included in P_{N×N}, i.e.

Φ̄ : (R^p × ··· × R^p)/E(p) ∋ ([x_1, x_2, ..., x_N]) → C(x_1, x_2, ..., x_N) ∈ P_{N×N} .

The following proposition implies that P_{N×N} is the image set of Φ̄, and it is the main result of this subsection.

Proposition 2 The mapping

Φ̄ : (R^p × ··· × R^p)/E(p) ∋ ([x_1, x_2, ..., x_N]) → C(x_1, x_2, ..., x_N) ∈ P_{N×N}

is a bijection.

Proof. If A ∈ P_{N×N} we can, by the spectral theorem, find a unique symmetric and positive N × N matrix B = [b_1, b_2, ..., b_N] (the square root of A) with rank(B) = rank(A) such that B² = A and B 1 = 0. We now map the points (b_1, b_2, ..., b_N), lying in R^N, isometrically to points (x_1, x_2, ..., x_N) in R^p. This is trivially possible since p ≥ N. The corresponding covariance matrix C(x_1, ..., x_N) will be equal to A. This proves surjectivity. That the mapping Φ̄ is injective follows directly from the following lemma.

Proof. Use the Gram–Schmidt orthogonalization procedure on both sets at the same time. q.e.d.

q.e.d.
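Propositions 1 and 2 together give a concrete reconstruction algorithm, classical multidimensional scaling: double-center the squared distance matrix as in (2.20) and take the symmetric square root of the result. A minimal numerical sketch (this double-centering plus square-root step is also the computational core of the graph-distance visualizations discussed below):

```python
import numpy as np

rng = np.random.default_rng(2)
p, N = 3, 6
X = rng.standard_normal((p, N))                 # original configuration

# squared distance matrix D and the double-centering formula (2.20)
G = X.T @ X
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G
J = np.eye(N) - np.ones((N, N)) / N
C = -0.5 * J @ D @ J

# C equals the covariance matrix of the sample-centered data
Y = X - X.mean(axis=1, keepdims=True)
assert np.allclose(C, Y.T @ Y)

# Proposition 2: the symmetric square root B of C gives a configuration
# (the columns of B, viewed as points in R^N) with the prescribed distances
w, Q = np.linalg.eigh(C)
B = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T   # B @ B = C
G2 = B @ B
D2 = np.diag(G2)[:, None] + np.diag(G2)[None, :] - 2.0 * G2
assert np.allclose(D, D2)
```

The clipping of tiny negative eigenvalues is a numerical safeguard; for an exact member of P_{N×N} all eigenvalues are nonnegative.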
We will in our explorations of high dimensional real world datasets below use "artificial" distance matrices constructed from geodesic distances on carefully created graphs connecting the samples or the variables. These distance matrices are converted to unique corresponding covariance matrices which in turn, as described above, give rise to canonical elements in (R p × · · · × R p )/E(p). We then pick sample centered representatives on which we perform PCA. In this way we can visualize low dimensional "approximative graph distances" in the dataset. Using graphs in the sample set constructed from a k nearest neighbors or a locally euclidean approximation procedure, this approach corresponds to the ISOMAP algorithm introduced by Tenenbaum et. al. [52]. The ISOMAP algorithm can as we will see below be very useful in the exploration of DNA microarray data, see Nilsson et. al. [38] for one of the first applications of ISOMAP in this field. We finally remark that if a proposed artificial distance or covariance matrix does not have the correct structure, i.e. if for example a proposed covariance matrix does not belong to P N×N , we begin by projecting the proposed covariance matrix onto the unique nearest point in the closed and convex set P N×N and then apply the scheme presented above to that point. The basic statistical framework. We will here fix some notation and for the non-statistician reader's convenience at the same time recapitulate some standard multivariate statistical theory. In particular we want to stress some basic facts concerning robustness of statistical testing. Let S be the sample space consisting of all possible samples (in our example datasets equal to all trials of patients) equipped with a probability measure P : 2 S −→ [0, +∞] and let X = (X 1 , . . . , X p ) T be a random vector from S into R p . The coordinate functions X i : S −→ R, i = 1, 2, . . . 
, p are random variables and in our example datasets they represent the expression levels of the different genes. We will be interested in the law of X, i.e. the induced probability measure P(X^{-1}(·)) defined on the measurable subsets of R^p. If it is absolutely continuous with respect to Lebesgue measure then there exists a probability density function (pdf) f_X(·) : R^p → [0, ∞) that belongs to L^1(R^p) and satisfies

P({s ∈ S ; X(s) ∈ A}) = ∫_A f_X(x) dx        (3.1)

for all events (i.e. all Lebesgue measurable subsets) A ⊂ R^p. This means that the pdf f_X(·) contains all necessary information in order to compute the probability that an event has occurred, i.e. that the values of X belong to a certain given set A ⊂ R^p. All statistical inference procedures are concerned with trying to learn as much as possible about an at least partly unknown induced probability measure P(X^{-1}(·)) from a given set of N observations, {x_1, x_2, . . . , x_N} (with x_i ∈ R^p for i = 1, 2, . . . , N), of the underlying random vector X. Often we then assume that we know something about the structure of the corresponding pdf f_X(·) and we try to make statistical inferences about the detailed form of the function f_X(·). The most important probability distribution in multivariate statistics is the multivariate normal distribution. In R^p it is given by the p-variate pdf n : R^p → (0, ∞) where

n(x) := (2π)^{-p/2} |Γ|^{-1/2} e^{-(1/2)(x−µ)^T Γ^{-1} (x−µ)} ;   x ∈ R^p .        (3.2)

It is characterized by the symmetric and positive definite p × p matrix Γ and the p-column vector µ, and |Γ| stands for the absolute value of the determinant of Γ. If a random vector X : S → R^p has the p-variate normal pdf (3.2) we say that X has the N(µ, Γ) distribution. If X has the N(µ, Γ) distribution then the expected value of X is equal to µ, i.e.

E(X) := ∫_S X(s) dP = µ ,        (3.3)

and the covariance matrix of X is equal to Γ, i.e.

C(X) := ∫_S (X − E(X))(X − E(X))^T dP = Γ .        (3.4)

Assume now that X_1, X_2, . . .
, X_N are given independent and identically distributed (i.i.d.) random vectors. A test statistic T is then a function (X_1, X_2, . . . , X_N) → T(X_1, X_2, . . . , X_N). Two important test statistics are the sample mean vector of a sample of size N,

X̄_N := (1/N) Σ_{i=1}^N X_i ,

and the sample covariance matrix of a sample of size N,

S_N := (1/(N−1)) Σ_{i=1}^N (X_i − X̄_N)(X_i − X̄_N)^T .

If X_1, X_2, . . . , X_N are independent and N(µ, Γ) distributed, then the mean X̄_N has the N(µ, (1/N)Γ) distribution. In fact this result is asymptotically robust with respect to the underlying distribution. This is a consequence of the well known and celebrated central limit theorem:

Theorem 3.1. If the random p-vectors X_1, X_2, X_3, . . . are independent and identically distributed with means µ ∈ R^p and covariance matrices Γ, then the limiting distribution of N^{1/2}(X̄_N − µ) as N → ∞ is N(0, Γ).

The central limit theorem tells us that, if we know nothing and still need to assume some structure on the underlying pdf, then asymptotically the N(µ, Γ) distribution is the only reasonable assumption. The distributions of different statistics are of course more or less sensitive to the underlying distribution. In particular the standard univariate Student t-statistic, used to draw inferences about a univariate sample mean, is very robust with respect to the underlying probability distribution. In for example the study on statistical robustness [41] the authors conclude that: "...the two-sample t-test is so robust that it can be recommended in nearly all applications." This is in contrast with many statistics connected with the sample covariance matrix. A central example in multivariate analysis is the set of eigenvalues of the sample covariance matrix. These statistics have a more complicated behavior. First of all, if X_1, X_2, . . . , X_N with values in R^p are independent and N(µ, Γ) distributed, then the sample covariance matrix is said to have a Wishart distribution W_p(N, Γ).
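The two statistics just defined can be computed directly; a minimal pure-Python sketch (our own illustration, not code from the paper):

```python
def sample_mean(xs):
    """Sample mean vector of a list of p-vectors (componentwise mean)."""
    n, p = len(xs), len(xs[0])
    return [sum(x[d] for x in xs) / n for d in range(p)]

def sample_covariance(xs):
    """Sample covariance matrix S_N with the unbiased 1/(N-1) factor."""
    n, p = len(xs), len(xs[0])
    m = sample_mean(xs)
    return [[sum((x[i] - m[i]) * (x[j] - m[j]) for x in xs) / (n - 1)
             for j in range(p)] for i in range(p)]

# Three observations in R^2 with perfectly correlated coordinates.
data = [(1.0, 2.0), (3.0, 6.0), (5.0, 10.0)]
mu_hat = sample_mean(data)        # (3, 6)
S = sample_covariance(data)       # rank-one covariance matrix
```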
If N > p the Wishart distribution is absolutely continuous with respect to Lebesgue measure and the probability density function is explicitly known, see e.g. Theorem 7.2.2 in [3]. If N ≫ p then the eigenvalues of the sample covariance matrix are good estimators for the corresponding eigenvalues of the underlying covariance matrix Γ, see [2] and [3]. In the applications we have in mind we often have the reverse situation, i.e. p ≫ N, and then the eigenvalues of the sample covariance matrix are far from consistent estimators for the corresponding eigenvalues of the underlying covariance matrix. In fact, if the underlying covariance matrix is the identity matrix, it is known (under certain growth conditions on the underlying distribution) that if we let p depend on N and if p/N → γ ∈ (0, ∞) as N → ∞, then the largest eigenvalue of the sample covariance matrix tends to (1 + √γ)^2, see e.g. [55], and not to 1 as one maybe could have expected. This result is interesting and can be useful, but there are many open questions concerning the asymptotic theory for the "large p, large N case", in particular if we go beyond the case of normally distributed data; see e.g. [7], [18], [26], [27] and [30] for an overview of the current state of the art. To estimate the information content or signal to noise ratio in our PCA plots we will therefore rely mainly on randomization tests and not on the (not well enough developed) asymptotic theory for the distributions of eigenvalues of random matrices.

Controlling the false discovery rate

When we perform for example a Student t-test to estimate whether or not two groups of samples have the same mean value for a specific variable, we are performing a hypothesis test. When we do the same thing for a large number of variables at the same time we are testing one hypothesis for each and every variable.
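The (1 + √γ)^2 limit for the largest sample-covariance eigenvalue quoted above can be illustrated numerically. The sketch below (our own; plain power iteration is just a convenient way to extract the top eigenvalue) draws a 160 × 40 matrix of independent standard normals, so γ = p/N = 0.25 and (1 + √γ)^2 = 2.25, even though every underlying eigenvalue is 1:

```python
import random

def top_eigenvalue(a, iters=200):
    """Largest eigenvalue of a symmetric positive semidefinite matrix
    via power iteration (sufficient for this illustration)."""
    n = len(a)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

random.seed(0)
p, n = 40, 160          # gamma = p/n = 0.25, so (1 + sqrt(gamma))**2 = 2.25
x = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
mean = [sum(row[d] for row in x) / n for d in range(p)]
s = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in x) / (n - 1)
      for j in range(p)] for i in range(p)]
lam_max = top_eigenvalue(s)   # lands near 2.25, clearly away from 1
```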
It is often the case in the applications we have in mind that tens of thousands of features are tested at the same time against some null hypothesis H_0, e.g. that the mean values in two given groups are identical. To account for this multiple hypotheses testing, several methods have been proposed; see e.g. [54] for an overview and comparison of some existing methods. We will give a brief review of some basic notions. Following the seminal paper by Benjamini and Hochberg [12], we introduce the following notation. We consider the problem of testing m null hypotheses H_0 against the alternative hypothesis H_1. We let m_0 denote the number of true nulls. We then let R denote the total number of rejections, which we will call the total number of statistical discoveries, and let V denote the number of false rejections. In addition we introduce stochastic variables U and T according to Table 1.

Table 1: Test statistics

            Accept H_0   Reject H_0   Total
H_0 true        U            V         m_0
H_1 true        T            S         m_1
Total         m − R          R          m

The false discovery rate was loosely defined by Benjamini and Hochberg as the expected value E(V/R). More precisely the false discovery rate is defined as

FDR := E(V/R | R > 0) P(R > 0) .        (4.1)

The false discovery rate measures the proportion of Type I errors among the statistical discoveries. Analogously we define corresponding statistics according to Table 2.

Table 2: Statistical discovery rates

Expected value    Name
E(V/R)            False discovery rate (FDR)
E(T/(m − R))      False negative discovery rate (FNDR)
E(T/(T + S))      False negative rate (FNR)
E(V/(U + V))      False positive rate (FPR)

We note that the FNDR is precisely the proportion of Type II errors among the accepted null hypotheses, i.e. the non-discoveries. In the datasets that we encounter within bioinformatics we often suspect m_1 ≪ m, and so if R, which we can observe, is relatively small, then the FNDR is controlled at a low level. As pointed out in [39], apart from the FDR, which measures the proportion of false positive discoveries, we usually are interested in also controlling the FNR, i.e.
we do not want to miss too many true statistical discoveries. We will address this by using visualization and knowledge based evaluation to support the statistical analysis. In our exploration scheme presented in the next section we will use the step down procedure suggested by Benjamini and Hochberg in [12], applied to the entire list of p-values for the statistical test under consideration, to control the FDR. We will also use the q-value, computable for each separate variable, introduced by Storey, see [48] and [49]. The q-value in our analyses is defined as the lowest FDR for which the particular hypothesis under consideration would be rejected under the Benjamini-Hochberg step down procedure. A practical and reasonable threshold level for the q-value to be informative that we will use is q < 0.2.

The basic exploration scheme

We will look for significant signals in our data set in order to use them e.g. as a basis for variable selection, sample classification and clustering. If there are enough samples one should first of all randomly partition the set of samples into a training set and a testing set, perform the analysis with the training set and then validate findings using the testing set. This should then ideally be repeated several times with different partitions. With very few samples this is not always feasible, and then, in addition to statistical measures, one is left with using knowledge based evaluation. One should then remember that the ultimate goal of the entire exploration is to add pieces of new knowledge to an already existing knowledge structure. To facilitate knowledge based evaluation, the entire exploration scheme is throughout guided by visualization using PCA biplots. When looking for significant signals in the data, one overall rule that we follow is:

• Detect and then remove the strongest present signal.

To detect a signal can e.g. mean to find a sample cluster and a connected list of variables that discriminate the cluster.
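The FDR control and q-values described above can be sketched in a few lines (our own illustration; the function names are ours, and the rule implemented is the standard Benjamini-Hochberg procedure of [12]):

```python
def bh_qvalues(pvals):
    """q-values for the Benjamini-Hochberg procedure: the smallest FDR
    level at which each hypothesis would still be rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):      # walk from the largest p downwards
        i = order[rank - 1]
        prev = min(prev, m * pvals[i] / rank)
        q[i] = prev
    return q

def bh_reject(pvals, fdr):
    """Indices of hypotheses rejected when controlling the FDR at `fdr`."""
    return [i for i, qv in enumerate(bh_qvalues(pvals)) if qv <= fdr]

q = bh_qvalues([0.01, 0.5, 0.02, 0.03])
hits = bh_reject([0.01, 0.5, 0.02, 0.03], fdr=0.2)
```

With the q < 0.2 threshold used in the text, the three small p-values above are all kept while the large one is discarded.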
We can then for example (re)classify the sample cluster in order to use the new classification to perform more statistical tests. After some statistical validation we then often remove a detected signal, e.g. a sample cluster, in order to avoid that a strong signal obscures a weaker but still detectable signal in the data. Sometimes it is of course convenient to add the strong signal again at a later stage in order to use it as a reference. We must constantly be aware of the possibility of outliers or artifacts in our data and so we must:

• Detect and remove possible artifacts or outliers.

An artifact is by definition a detectable signal that is unrelated to the basic mechanisms that we are exploring. An artifact can e.g. be created by different experimental setups, resulting in a signal in the data that represents different experimental conditions. Normally if we detect a suspected artifact we want to, as far as possible, eliminate the influence of the suspected artifact on our data. When we do this we must be aware that we normally reduce the degrees of freedom in our data. The most common case is to eliminate a single nominal factor resulting in a splitting of our data in subgroups. In this case we will mean-center each group discriminated by the nominal factor, and then analyze the data as usual, with an adjusted number of degrees of freedom. The following basic exploration scheme is used:

• Reduce noise by PCA and variance filtering. Assess the signal/noise ratio in various low dimensional PCA projections and estimate the projection information contents by randomization.

• Perform statistical tests. Evaluate the statistical tests using the FDR, randomization and permutation tests.

• Use graph-based multidimensional scaling (ISOMAP) to search for signals/clusters.

The above scheme is iterated until "all" significant signals are found, and it is guided and coordinated by synchronized PCA-biplot visualizations.
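Eliminating a nominal (artifact) factor by mean-centering each group, as described above, can be sketched as follows for a single variable (our own illustration, not code from the paper):

```python
def mean_center_groups(values, groups):
    """Subtract the group mean from each sample, removing a nominal
    (artifact) factor from one variable's values."""
    sums = {}
    for v, g in zip(values, groups):
        s, c = sums.get(g, (0.0, 0))
        sums[g] = (s + v, c + 1)
    means = {g: s / c for g, (s, c) in sums.items()}
    return [v - means[g] for v, g in zip(values, groups)]

# A variable whose level differs sharply between two artifact groups:
centered = mean_center_groups(
    [1.0, 2.0, 3.0, 11.0, 12.0, 13.0],
    ["low", "low", "low", "high", "high", "high"])
```

After centering, the within-group variation remains while the between-group (artifact) offset is gone; in practice this is applied to every variable, with degrees of freedom adjusted accordingly.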
6 Some biological background concerning the example datasets

The rapid development of new biological measurement methods makes it possible to explore several types of genetic alterations in a high-throughput manner. Different types of microarrays enable researchers to simultaneously monitor the expression levels of tens of thousands of genes. The available information content concerning genes, gene products and regulatory pathways is accordingly growing steadily. Useful bioinformatics databases today include the Gene Ontology project (GO) [5] and the Kyoto Encyclopedia of Genes and Genomes (KEGG) [28], which are initiatives with the aim of standardizing the representation of genes, gene products and pathways across species and databases. A substantial collection of functionally related gene sets can also be found at the Broad Institute's Molecular Signatures Database (MSigDB) [36], together with the implemented computational method Gene Set Enrichment Analysis (GSEA) [37], [51]. The method GSEA is designed to determine whether an a priori defined set of genes shows statistically significant and concordant differences between two biological states in a given dataset. Bioinformatic data sets are often uploaded by researchers to sites such as the National Centre for Biotechnology Information's database Gene Expression Omnibus (GEO) [19] or to the European Bioinformatics Institute's database ArrayExpress [4]. In addition data are often made available at local sites maintained by separate institutes or universities.

Analysis of microarray data sets

There are many bioinformatic and statistical challenges that remain unsolved or are only partly solved concerning microarray data. As explained in [43], these include normalization, variable selection, classification and clustering. This state of affairs is partly due to the fact that we know very little in general about underlying statistical distributions.
This makes statistical robustness a key issue concerning all proposed statistical methods in this field, and at the same time shows that new methodologies must always be evaluated using a knowledge based approach and supported by accompanying new biological findings. We will not comment on the important problems of normalization in what follows, but refer to e.g. [6], where different normalization procedures for the Affymetrix platforms are compared. In addition, microarray data often have a non negligible amount of missing values. In our example data sets we will, when needed, impute missing values using the K-nearest neighbors method as described in [53]. All visualizations and analyses are performed using the software Qlucore Omics Explorer [45].

Effects of cigarette smoke on the human epithelial cell transcriptome

We begin by looking at a gene expression dataset coming from the study by Spira et al. [46] of effects of cigarette smoke on the human epithelial cell transcriptome. It can be downloaded from the National Center for Biotechnology Information's (NCBI) Gene Expression Omnibus (DataSet GDS534, accession no. GSE994). It contains measurements from 75 subjects consisting of 34 current smokers, 18 former smokers and 23 healthy never smokers. The platform used to collect the data was the Affymetrix HG-U133A chip, using the Affymetrix Microarray Suite to select, prepare and normalize the data; see [46] for details. One of the primary goals of the investigation in [46] was to find genes that are responsible for distinguishing between current smokers and never smokers, and also to investigate how these genes behaved when a subject quit smoking, by looking at the expression levels for these genes in the group of former smokers. We will here focus on finding genes that discriminate the groups of current smokers and never smokers.

• We begin our exploration scheme by estimating the signal/noise ratio in a sample PCA projection based on the three first principal components.
We use an SVD on the data correlation matrix, i.e. the covariance matrix for the variance normalized variables. In Figure 1 we see the first three principal components for Im L and the 75 patients plotted. The first three principal components contain 25% of the total variance in the dataset, and so for this 3-D projection α^2({1, 2, 3}, obsr) = 0.25. Using randomization we estimate the expected value for a corresponding dataset (i.e. a dataset containing the same number of samples and variables) built on independent and normally distributed variables to be approximately α^2({1, 2, 3}, rand) = 0.035. We have thus captured around 7 times more variation than what we would have expected if the variables were independent and normally distributed. This indicates that we do have strong signals present in the dataset.

• Following our exploration scheme we now look for possible outliers and artifacts. The projected subjects are colored according to smoking history, but it is clear from Figure 1 that most of the variance in the first principal component (containing 13% of the total variance in the data) comes from a signal that has a very weak association with smoking history. We nevertheless see a clear splitting into two subgroups. Looking at supplied clinical annotations one can conclude that the two groups are not associated to gender, age or race traits. Instead one finds that all the subjects in one of the groups have low subject description numbers, whereas all the subjects except one in the other group have high subject description numbers. In Figure 2 we have colored the subjects according to description number. This suspected artifact signal does not correspond to any clinical annotation supplied in the dataset (DataSet GDS534, NCBI's GEO). One can hypothesize that the description number could reflect for instance the order in which the samples were gathered, and thus could be an artifact.
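The projection information content α^2(S) used above is simply the fraction of total variance carried by the selected principal components; a minimal sketch (our own, with our own function name):

```python
def projection_content(singular_values, s_indices):
    """L2-projection content alpha^2(S): fraction of total variance
    captured by the principal components indexed by s_indices."""
    sq = [sv * sv for sv in singular_values]
    return sum(sq[i] for i in s_indices) / sum(sq)

# Singular values from an SVD of the (centered, normalized) data matrix:
alpha2 = projection_content([3.0, 2.0, 1.0, 1.0, 1.0], [0, 1, 2])
```

The randomization baseline α^2(S, rand) used in the text is obtained by applying the same computation to datasets of the same shape filled with independent normally distributed variables and averaging.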
Even if the two groups actually correspond to some interesting clinical variable, like disease state, that we should investigate separately, we will consider the splitting to be an artifact in our investigation. We are interested in using all the assembled data to look for genes that discriminate between current smokers and never smokers. We thus eliminate the suspected artifact by mean-centering the two main (artifact) groups. After elimination of the strong artifact signal, the first three principal components contain "only" 17% of the total variation.

• Following our exploration scheme we filter the genes with respect to variance, visually searching for a possibly informative three dimensional projection. When we filter down to the 630 most variable genes, the three first principal components have an L^2-projection content of α^2({1, 2, 3}) = 0.42, whereas by estimation using randomization we would have expected it to be 0.065. The projection in Figure 3 is thus probably informative. We have again colored the samples according to smoking history as above. The third principal component, containing 9% of the total variance, can be seen to quite decently separate the current smokers from the never smokers. We note that this was impossible to achieve without removing the artifact signal, since the artifact signal completely obscured this separation.

• Using the variance filtered list of 630 genes as a basis, following our exploration scheme, we now perform a series of Student t-tests between the groups of current smokers and never smokers, i.e. 34 + 23 = 57 different subjects. For a specific level of significance we compute the 3-dimensional (i.e. we let S = {1, 2, 3}) L^2-projection content resulting when we keep all the rejected null hypotheses, i.e. statistical discoveries.
For a sequence of t-tests parameterized by the level of significance we now try to find a small level of significance and at the same time an observed L^2-projection content with a large quotient compared to the expected projection content estimated by randomization. We supervise this procedure visually using three dimensional PCA-projections, looking for visually clear patterns. For a level of significance of 0.00005, leaving a total of 43 genes (rejected nulls) and an FDR of 0.0007, we have α^2({1, 2, 3}, obsr) = 0.71, whereas the expected projection content for randomized data is α^2({1, 2, 3}, rand) = 0.21. We have thus captured more than 3 times the expected projection content, and at the same time approximately 0.0007 × 43 = 0.0301 genes are expected to be false discoveries, so with high probability we have found 43 potentially important biomarkers. We now visualize all 75 subjects using these 43 genes as variables. In Figure 4 we see a synchronized biplot with samples to the left and variables to the right. The sample plot shows a perfect separation of current smokers and never smokers. In the variable plot we see genes (green) that are upregulated in the current smokers group to the far right. The top genes according to q-value for the Student t-test between current smokers and never smokers that are upregulated in the current smokers group and downregulated in the never smokers group are given in Table 3. In Table 4 we list the top genes that are downregulated in the current smokers group and upregulated among the never smokers.

Analysis of various muscle diseases

In the study by Bakay et al. [10] the authors studied 125 human muscle biopsies from 13 diagnostic groups suffering from various muscle diseases. The platforms used were Affymetrix U133A and U133B chips. The dataset can be downloaded from NCBI's Gene Expression Omnibus (DataSet GDS2855, accession no. GSE3307).
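The gene-wise Student t-statistic used throughout these analyses can be computed as follows (a minimal equal-variance sketch of our own; the actual software used in the studies may differ in details):

```python
import math

def student_t(a, b):
    """Pooled-variance two-sample Student t statistic for one variable,
    comparing the values in group a with those in group b."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)          # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = student_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

Each gene's t-statistic is converted to a p-value (with n_a + n_b − 2 degrees of freedom), and the resulting list of p-values feeds the FDR/q-value machinery described earlier.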
We will analyze the dataset looking for phenotypic classifications and also looking for biomarkers for the different phenotypes.

• We first use 3-dimensional PCA-projections of the samples of the data correlation matrix, filtering the genes with respect to variance and visually searching for clear patterns. When filtering out the 300 genes having most variability over the sample set, we see several samples clearly distinguishing themselves, and we capture 46% of the total variance compared to the, by randomization estimated, expected 6%. The plot in Figure 5 thus contains strong signals. Comparing with the color legend we conclude that the patients suffering from spastic paraplegia (Spg) contribute a strong signal. More precisely, three of the subjects suffering from the variant Spg-4 clearly distinguish themselves, while the remaining patient in the Spg-group, suffering from Spg-7, falls close to the rest of the samples.

• We perform Student t-tests between the spastic paraplegia group and the normal group. Three out of four subjects in the spastic paraplegia (Spg) group clearly distinguish themselves; these three suffer from Spg-4, while the remaining Spg-patient suffers from Spg-7. As before we now, in three dimensional PCA-projections, visually search for clearly distinguishable patterns in a sequence of Student t-tests parametrized by level of significance, while at the same time trying to obtain a small FDR. At a level of significance of 0.00001, leaving a total of 37 genes (rejected nulls) with an FDR of 0.006, the first three principal components capture 81% of the variance compared to the, by randomization, expected 31%. Table 5 lists the top genes upregulated in the group spastic paraplegia. We can add that these genes are all strongly upregulated for the three particular subjects suffering from Spg-4, while that pattern is less clear for the patient suffering from Spg-7.
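The variance filtering step used above (keeping the k most variable genes over the sample set) can be sketched as follows (our own illustration):

```python
def top_variance_rows(matrix, k):
    """Indices of the k rows (variables/genes) with largest sample
    variance across the columns (samples)."""
    def var(row):
        n = len(row)
        m = sum(row) / n
        return sum((x - m) ** 2 for x in row) / (n - 1)
    return sorted(range(len(matrix)), key=lambda i: var(matrix[i]),
                  reverse=True)[:k]

# Three toy genes measured over three samples; keep the two most variable.
keep = top_variance_rows([[1.0, 1.0, 1.0],
                          [0.0, 5.0, 10.0],
                          [2.0, 3.0, 4.0]], k=2)
```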
• In order to find possibly obscured signals, we now remove the spastic paraplegia group from the analysis. We also remove the normal group from the analysis, since we really want to compare the different diseases. Starting anew with the entire set of genes, filtering with respect to variance, we visually obtain clear patterns for the 442 most variable genes. The first three principal components capture 46% of the total variance compared to the, by randomization estimated, expected 6%.

• Using these 442 most variable genes as a basis for the analysis, we now construct a graph connecting every sample with its two nearest (using euclidean distances in the 442-dimensional space) neighbors. As described in the section on multidimensional scaling above, we now compute geodesic distances in the graph between samples, and construct a resulting distance (between samples) matrix. We then convert this distance matrix to a corresponding covariance matrix and finally perform a PCA on this covariance matrix. The resulting plot (together with the used graph) of the so constructed three dimensional PCA-projection is depicted in Figure 6.

Figure 6: Effect of the ISOMAP-algorithm. We can identify a couple of clusters corresponding to the groups juvenile dermatomyositis, amyotrophic lateral sclerosis, acute quadriplegic myopathy and Emery-Dreifuss FSHD.

Comparing with the color legend in Figure 5, we clearly see that the groups juvenile dermatomyositis, amyotrophic lateral sclerosis, acute quadriplegic myopathy and also Emery-Dreifuss FSHD distinguish themselves. One should now go on using Student t-tests to find biomarkers (i.e. genes) distinguishing these different groups of patients, then eliminate these distinct groups and go on searching for more structure in the dataset.
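The graph construction and geodesic distance computation used in the ISOMAP step above can be sketched as follows (our own pure-Python illustration; Floyd-Warshall is our choice of shortest-path algorithm, equivalent in spirit to the procedure of [52]):

```python
import math

def knn_graph_geodesics(points, k=2):
    """Symmetric k-nearest-neighbour graph with euclidean edge weights,
    then all-pairs geodesic distances via Floyd-Warshall."""
    n = len(points)
    d = [[math.dist(a, b) for b in points] for a in points]
    inf = float("inf")
    g = [[inf] * n for _ in range(n)]
    for i in range(n):
        g[i][i] = 0.0
        # skip index 0 of the sorted list (the point itself)
        for j in sorted(range(n), key=lambda j: d[i][j])[1:k + 1]:
            g[i][j] = g[j][i] = d[i][j]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][m] + g[m][j] < g[i][j]:
                    g[i][j] = g[i][m] + g[m][j]
    return g

# Points along a bent curve: the endpoint geodesic follows the graph
# and is longer than the straight-line euclidean distance.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
geo = knn_graph_geodesics(pts, k=1)
```

The resulting geodesic distance matrix is then converted to a covariance matrix (double centering) and decomposed by PCA, exactly as described in the text.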
Pediatric Acute Lymphoblastic Leukemia (ALL)

We will finally analyze a dataset consisting of gene expression profiles from 132 different patients, all suffering from some type of pediatric acute lymphoblastic leukemia (ALL). For each patient the expression levels of 22282 genes are analyzed. The dataset comes from the study by Ross et al. [44] and the primary data are available at the St. Jude Children's Research Hospital's website [50]. The platform used to collect this example data set was the Affymetrix HG-U133 chip, using the Affymetrix Microarray Suite to select, prepare and normalize the data. As before we start by performing an SVD on the data correlation matrix, visually searching for interesting patterns and assessing the signal to noise ratio by comparing the actual L^2-projection content in the real world data projection with the expected L^2-projection content in corresponding randomized data.

• We filter the genes with respect to variance, looking for strong signals. In Figure 7 we see a plot of a three dimensional projection using the 873 most variable genes as a basis for the analysis. We clearly see that the group T-ALL is mainly responsible for the signal resulting in the first principal component, occupying 18% of the total variance. In fact, by looking at supplied annotations we can conclude that all of the other subjects in the dataset are suffering from B-ALL, the other main ALL type.

• We now perform Student t-tests between the group T-ALL and the rest. We parametrize by level of significance and visually search for clear patterns. In Figure 8 we see a biplot based on the 70 genes that best discriminate between T-ALL and the rest. The FDR is extremely low, FDR = 1.13e−24, telling us that with a very high probability the genes found are relevant discoveries. The most significantly upregulated genes in the T-ALL group are CD3D, CD7, TRD@, CD3E, SH2D1A and TRA@.
The most significantly downregulated genes in the T-ALL group are CD74, HLA-DRA, HLA-DRB, HLA-DQB and BLNK. By comparing with gene-lists from the MSig Data Base (see [36]) we can see that the genes that are upregulated in the T-ALL group (CD3D, CD7 and CD3E) are represented in lists of genes connected to lymphocyte activation and lymphocyte differentiation.

• We now remove the group T-ALL from the analysis and search for visually clear patterns among three dimensional PCA-projections, filtering the genes with respect to variance. Starting anew with the entire list of genes, a clear pattern is obtained for the 226 most variable genes. We capture 43% of the total variance as compared to the expected 6.5%. We thus have strong signals present.

• Using these 226 most variable genes as a basis for the analysis, we now construct a graph connecting every sample with its two nearest neighbors. We now perform the ISOMAP-algorithm with respect to this graph. The resulting plot (together with the used graph) of the so constructed three dimensional PCA-projection is depicted in Figure 9.

Figure 9: Effect of the ISOMAP-algorithm. We can clearly see the groups E2A-PBX1, MLL and TEL-AML1. The group TEL-AML1 is connected to a subgroup of the group called Other (white).

We can clearly distinguish the groups E2A-PBX1, MLL and TEL-AML1. The group TEL-AML1 is connected to a subgroup of the group called Other. This subgroup actually corresponds to the Novel Group discovered in the study by Ross et al. [44]. Note that by using ISOMAP we discovered this Novel subgroup only by variance filtering the genes, showing that ISOMAP is a useful tool for visually supervised clustering.

Acknowledgement

I dedicate this review article to Professor Gunnar Sparr. Gunnar has been a role model for me, and many other young mathematicians, of a pure mathematician that evolved into contributing serious applied work.
Gunnar's help, support and general encouragement have been very important during my own development within the field of mathematical modeling. Gunnar has also been one of the ECMI pioneers introducing Lund University to the ECMI network. I sincerely thank Johan Råde for helping me to learn almost everything I know about data exploration. Without him the here presented work would truly not have been possible. Applied work is best done in collaboration and I am blessed with Thoas Fioretos as my long term collaborator within the field of molecular biology. I am grateful for what he has tried to teach me and I hope he is willing to continue to try. Finally I thank Charlotte Soneson for reading this work and, as always, giving very valuable feedback.

Keywords: multivariate statistical analysis, principal component analysis, biplots, multidimensional scaling, multiple hypothesis testing, false discovery rate, microarray, bioinformatics

Lemma 2.2. Let {y_k}_{k=1}^N and {ỹ_k}_{k=1}^N be two sets of vectors in R^p. If y_k^T y_j = ỹ_k^T ỹ_j for j, k = 1, 2, . . . , N, then there exists an S ∈ O(p) such that S(y_k) = ỹ_k for k = 1, 2, . . . , N.

Figure 1: The 34 current smokers (red), 18 former smokers (blue) and 23 never smokers (green) projected onto the three first principal components. The separation into two groups is not associated with any supplied clinical annotation and is thus a suspected artifact.

Figure 2: The red samples have high description numbers (≥ 58) and the green samples have low description numbers (≤ 54). The blue sample has number 5.

Figure 3: We have filtered by variance keeping the 630 most variable genes. It is interesting to see that the third principal component, containing 9% of the total variance, separates the current smokers (red) from the never smokers (green) quite well.

Figure 4: A synchronized biplot showing samples to the left and variables to the right.
Figure 5 : 5PCA-projection of samples based on the variance filtered top 300 genes. Figure 7 : 7Variance filtered PCA-projection of the correlation datamatrix based on 873 genes. The group T-ALL clearly distinguish itself. Figure 8 : 8FDR = 1.13e − 24. A synchronized biplot showing samples to the left and genes to the right. The genes are colored according to their expression level in the T-ALL group. Red = upregulated and green = downregulated. Table 1 : 1Test statistics Table 2 : 2Statistical discovery ratesExpected value Name Table 3 : 3Top genes upregulated in the current smokers group and downregulated in the never smokers group.Gene symbol q-value NQO1 5.59104067771824e-08 GPX2 2.31142232391279e-07 ALDH3A1 2.31142232391279e-07 CLDN10 3.45691439169953e-06 FTH1 4.72936617815058e-06 TALDO1 4.72936617815058e-06 TXN 4.72936617815058e-06 MUC5AC 3.77806345774405e-05 TSPAN1 4.50425200297664e-05 PRDX1 4.58227420582093e-05 MUC5AC 4.99131989472012e-05 AKR1C2 5.72678146958168e-05 CEACAM6 0.000107637125805187 AKR1C1 0.000195523829628407 TSPAN8 0.000206106293159401 AKR1C3 0.000265342898771159 Table 4 : 4Top genes downregulated in the current smokers group and upregulated in the never smokers group.Gene symbol q-value MT1G 4.03809377378893e-07 MT1X 4.72936617815058e-06 MUC5B 2.38198903402317e-05 CD81 3.1605221864278e-05 MT1L 3.1605221864278e-05 MT1H 3.1605221864278e-05 SCGB1A1 4.50425200297664e-05 EPAS1 4.63861480935914e-05 FABP6 0.00017793865432854 MT2A 0.000236481909692626 MT1P2 0.000251264650053933 Table 5 : 5Top genes upregulated in the Spastic paraplegia group (Spg-4).Gene symbol q-value RAB40C 0.0000496417 SFXN5 0.000766873 CLPTM1L 0.00144164 FEM1A 0.0018485 HDGF2 0.00188435 WDR24 0.00188435 NAPSB 0.00188435 ANKRD23 0.00188435 Botstein Singular value decomposition for genome-wide expression data processing and modeling. O Alter, P Brown, D , Proceedings of the National Academy of Science. 9718O. Alter, P. Brown, D. 
DOI: 10.5753/sbseg.2020.19257
arXiv: 2011.03113 (https://arxiv.org/pdf/2011.03113v1.pdf)
Evaluating the Performance of Twitter-based Exploit Detectors

Daniel Alves De Sousa, Elaine Ribeiro De Faria, Rodrigo Sanches Miani
School of Computer Science (FACOM), Federal University of Uberlândia (UFU), Uberlândia, MG, Brazil

Abstract. Patch prioritization is a crucial aspect of information systems security, and knowledge of which vulnerabilities were exploited in the wild is a powerful tool to help systems administrators accomplish this task. The analysis of social media for this specific application can enhance the results and bring more agility by collecting data from online discussions and applying machine learning techniques to detect real-world exploits. In this paper, we use a technique that combines Twitter data with public database information to classify vulnerabilities as exploited or not exploited. We analyze the behavior of different classification algorithms, investigate the influence of different antivirus data as ground truth, and experiment with various time window sizes. Our findings suggest that using a Light Gradient Boosting Machine (LightGBM) can benefit the results and that, in most cases, the statistics related to a tweet and the users who tweeted it are more meaningful than the text tweeted. We also demonstrate the importance of using ground-truth data from security companies not mentioned in previous works.

1. Introduction

Exploit detection is an essential application for system administrators, as system updates sometimes involve rigorous impact analysis and even severe adaptations or migrations.
Given this scenario and the fact that many vulnerabilities might never be exploited in real-world attacks [Nayak et al. 2014], knowledge of which vulnerabilities are more likely to be exploited in the wild can be an excellent tool for system administrators to prioritize patch deployments. A few metrics could be used for this purpose (such as the Common Vulnerability Scoring System (CVSS) base scores and the Microsoft Update Severity Rating System), but they err on the side of caution [Younis and Malaiya 2015]. The analysis of social media data can leverage this process by taking advantage of the community's discussions on the topic [Shrestha et al. 2020].

The work presented in [Sabottke et al. 2015] has shown that hackers, system administrators, and software vendors discuss vulnerabilities on social media like Twitter. The authors also showed, for the first time, that this information could be used to create a framework for predicting exploits using machine learning techniques. Several works published after that investigated the feasibility of such Twitter-based early exploit detectors ([Bullough et al. 2017], [Queiroz et al. 2017], and [Chen et al. 2019]).

Although these works have shown the potential of using Twitter data to detect exploits, they still have limitations. References [Bullough et al. 2017] and [Queiroz et al. 2017], for example, use only Support Vector Machines (SVM) as their classifier algorithm. [Chen et al. 2019] partially solves this issue by adding several other classifiers to their study. However, they do not provide a performance comparison using the original dataset described in [Sabottke et al. 2015]. Another issue is the use of a single source, Symantec's Intrusion Protection Signatures, for building the ground truth of real-world exploits. As discussed in [Sabottke et al. 2015], this is a notable limitation since Symantec does not cover all platforms and products uniformly.
Moreover, none of the previous works evaluates the impact of training Twitter-based exploit detectors using past data to predict the future. For example, suppose that an exploit detector was trained using data from 2017. What happens if only data from 2018 were presented to this model? Would adding more training data give better performance? This discussion is important to evaluate the practical implications of using Twitter data to build exploit detectors and, consequently, to help prioritize which vulnerabilities to patch.

Based on these analyses, we propose to evaluate the extent to which different factors influence the performance of Twitter-based exploit detectors. We focus on exploring some machine learning characteristics, training classifiers with different time-window sizes, and evaluating the impact of ground-truth labels from different sources.

The paper has four main contributions. First, we identify a suitable classifier for building Twitter-based exploit detectors using a five-year dataset composed of tweets and vulnerability information. Second, we develop a ground truth for labeling real-world exploits using data from sources other than Symantec. Third, we provide empirical evidence that using ground-truth information from a single vendor can bias the model toward certain vulnerabilities and lead to non-optimal performance in real-world scenarios. Fourth, we examine whether the performance of an exploit detector model is affected over time. Our results suggest that models trained and tested using data from a single calendar year outperform those trained with data from previous years. This indicates that selecting the right amount of past information to feed the model is decisive for improving its performance.

The rest of this paper is organized as follows. Section 2 presents the basic terminology about security vulnerabilities. Section 3 reviews related studies and compares them with this work.
Section 4 details the dataset, features, and classifiers that are used in our system architecture. Section 5 presents the results and discusses some threats to validity. Finally, Section 6 concludes the paper and suggests future work.

2. Terminology

Common Vulnerabilities and Exposures (CVE) is a list maintained by MITRE which assigns a unique number to each disclosed vulnerability. The Meltdown vulnerability, for instance, is identified by the number CVE-2017-5754. Aside from the official channels, some vulnerabilities may also be disclosed through forums, social media, or blogs, which may lead to a situation where the CVE of a non-patched flaw can be published. In either scenario, official or non-official disclosure, malicious hackers can take advantage of a vulnerability to harm unpatched systems. Exploit is the term used for the techniques or tools developed with this goal.

Exploits can be divided into two categories: proof of concept (PoC) and real-world (RW) exploits [Sabottke et al. 2015]. While the former is developed as part of the disclosure process to demonstrate a particular vulnerability, the latter is created to perform real attacks. Although some PoCs may be used in real-world scenarios, others are too impractical for that. Therefore, vulnerabilities with exploits in the wild are a subset of the ones with PoC exploits.

3. Related Work

Many previous works have addressed the task of using machine learning to predict whether a vulnerability will be exploited or not. [Bozorgi et al. 2010] trained an SVM classifier using features extracted from the Open Source Vulnerability Database (OSVDB) and the National Vulnerability Database (NVD) to predict if a vulnerability is likely to be exploited. As ground truth, the authors used a metric called "Exploit Classification" from the OSVDB, which has no longer been available since 2016. Despite achieving nearly 90% accuracy, the ground truth used presents a very high positive rate (exploited vulnerabilities), contrasting with most related works ([Nayak et al.
2014], and [Bilge and Dumitras 2012], for example).

[Sabottke et al. 2015] introduced the use of Twitter to help classify vulnerabilities as exploited or not exploited. Like [Bozorgi et al. 2010], the authors acquired data from the NVD and the OSVDB, but they added an extra set of features extracted from Twitter, including text and statistics about tweets and the users who tweeted. The work divides the ground truth into two groups, PoC and RW exploits, using the Exploit Database (EDB) as a source for PoC and Symantec's antivirus and IPS signatures for RW. They collected tweets between February 2014 and January 2015 and found evidence that the use of Twitter data could increase the classifier's overall performance. The paper, however, does not explore other classifier options (only SVM was used) or methods to overcome dataset imbalance; it uses only one year's worth of data and relies on a single antivirus vendor for RW ground-truth information.

[Queiroz et al. 2017] used a similar approach to detect useful information about security vulnerabilities using Twitter data. They collected posts from security specialists from March 2016 to early March 2017 and manually labeled training data. Despite not being the focus of the paper, the approach was able to identify useful alerts about vulnerability exploits.

[Bullough et al. 2017] raised questions about prior work's methodology and highlighted how small changes in the use of the dataset could affect the performance of predictive models. The authors have been especially critical of the temporal intermixing caused by randomly splitting data into train and test sets. Such ideas are valuable and should be considered when planning new models, but their conclusion about using a temporal split may not be accurate. In Section 5.4, we reproduce this test in a variety of ways, and our results suggest that the performance differences may have other causes.
Furthermore, the work uses only PoC ground truth from the EDB and finds 18% of its CVEs exploited, a value significantly above those reported in works about RW exploits.

[Chen et al. 2019] used an ensemble of regression algorithms to predict when a vulnerability will be exploited, both for PoC and RW scenarios. As features, the authors created a graph-based model relating CVEs, authors, and tweets, and, for ground truth, only Symantec's data was used. The authors also approached the temporal intermixing issue, demonstrating how, in some cases, the CVSS is not available at the time of the vulnerability's disclosure. In our work, we will demonstrate in Section 5.1 that the CVSS plays a small part in the classifier's performance. Nonetheless, those issues may indicate that more relevant NVD data could be affected similarly and should also be studied.

4. Proposal and Experiments

In this work, we evaluate and propose improvements to the Twitter-based exploit detection method presented in [Sabottke et al. 2015]. We chose that paper as our baseline because, to the best of our knowledge, other related works were not entirely comparable, displaying very different rates of exploited vulnerabilities or using completely different data sources. We start by using the same dataset from the mentioned work, which contains messages posted on Twitter together with public data, and we experiment with different machine learning techniques to classify whether a vulnerability will be exploited. We then extend the ground truth with information from different anti-malware software and, finally, we extend the dataset with data from 2015 to 2018. In all cases, we only consider already cataloged vulnerabilities (those to which a CVE number was assigned).
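The requirement above (considering only vulnerabilities with an assigned CVE number) can be sketched as a simple pattern match over free text. The pattern and helper function below are ours, not from the paper, and in practice any extracted identifiers would still be validated against the official CVE list:

```python
import re

# "CVE-YYYY-NNNN...", where the sequence part has at least four digits
CVE_PATTERN = re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.IGNORECASE)

def extract_cve_ids(text):
    """Return the normalized CVE identifiers mentioned in a piece of text."""
    return sorted({match.upper() for match in CVE_PATTERN.findall(text)})

tweets = [
    "Meltdown (cve-2017-5754) mitigations are out, patch now",
    "New advisory, no CVE assigned yet",
]
# Keep only tweets that mention at least one CVE identifier.
mentioned = [t for t in tweets if extract_cve_ids(t)]
print(extract_cve_ids(tweets[0]))  # ['CVE-2017-5754']
print(len(mentioned))              # 1
```

Tweets that match no identifier are simply dropped, mirroring the filtering step described for the dataset.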
To summarize, this paper has four main goals:

• Compare classification algorithms: we want to compare the performance of four well-known algorithms on classifying vulnerabilities as "exploited" or "not exploited" based on data from Twitter and public vulnerability databases. Namely, we compare Support Vector Machines, Logistic Regression, XGBoost, and LightGBM. We also intend to evaluate how different groups of features impact each algorithm. In our method, we divide the features into four groups: Twitter text, Twitter metadata, CVSS score and subscores, and a set of data from public vulnerability databases (mainly from the NVD).

• Compare Multiple Ground Truths: previous works rely only on Symantec's antivirus and intrusion protection system (IPS) signatures to indicate real-world exploits. We want to evaluate whether other antivirus databases can provide useful insight and improve prediction performance.

• Class Balancing: we want to verify whether class balancing methods can improve the overall results, given that class imbalance is one of the main challenges of this task.

• Updated Data and Different Time Window Sizes: we want to evaluate how the method behaves on more recent data and understand how the classifier is affected by temporal splits and by changes in the volume of training and testing data. To do that, we create time windows of different sizes covering different periods to train and test our model.

Our classifier considers each CVE as an instance to which true should be assigned if the vulnerability was exploited, and false otherwise. The features used to characterize each instance summarize the data collected about that specific CVE. They contain a Bag-of-Words (BoW) representation of tweets mentioning the CVE, Twitter statistics and metadata related to those tweets, and public database information about the vulnerability. Section 4.3 details these features.

To train a classifier, we need a way to resolve whether a vulnerability has any known exploit.
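One simple way to resolve this is to mark a CVE as exploited when any exploit or signature source mentions it. The sketch below illustrates the idea with made-up placeholder sets; the source names and CVE entries are ours, not the actual feeds:

```python
def build_labels(cves, exploit_sources):
    """Label each CVE True (exploited) if any source mentions it, else False."""
    known_exploited = set().union(*exploit_sources.values())
    return {cve: cve in known_exploited for cve in cves}

# Placeholder sets; real sources would be parsed exploit/signature feeds.
exploit_sources = {
    "exploit_db": {"CVE-2014-6271"},
    "antivirus_a": {"CVE-2014-0160"},
    "antivirus_b": set(),
}
labels = build_labels(
    ["CVE-2014-0160", "CVE-2014-6271", "CVE-2014-9999"], exploit_sources
)
print(labels["CVE-2014-9999"])  # False
```

The experiments then vary which sources enter this union.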
On the first test, we use the same ground-truth data as defined in [Sabottke et al. 2015]: the ExploitDB (EDB) and Symantec's antivirus and intrusion protection system (IPS) signatures. For all remaining tests, we improve the ground truth with data from Avast, ESET, and Trend Micro. The EDB is an online resource of known exploits that also provides information about the affected vulnerabilities. While the EDB is an excellent resource for PoC exploits, an antivirus signature is probably the best indicator that an exploit has been spotted in the wild. Past works only included Symantec's database as a source for such information, but we were able to find similar data from other vendors. In Section 5.2, we analyze the quality of these data and how they can impact past results.

4.1. System's Architecture

Fig. 1 represents the system used in this work. The data gathering process and its sources (numbers 1 to 4) are detailed in Subsection 4.2. The feature extraction process is presented in 4.3, while the machine learning techniques and the balancing phase (number 5) are clarified in 4.4. We developed several Python scripts for supporting the data gathering and feature extraction processes. We also use the scikit-learn library [Pedregosa et al. 2011] for conducting the classification tasks.

4.2. Dataset

For the first part of this work, we used the same dataset described in [Sabottke et al. 2015], as it was a good baseline for comparison.
For the second part of our research, we collected tweets containing the word "CVE" from January 2015 to December 2018, filtering out those referring to vulnerabilities outside our test period and those not mentioning a valid CVE number. We were able to find 44,570 messages mentioning 6,643 vulnerabilities discussed by 4,033 users. Table 1 shows the number of CVEs mentioned on Twitter by year. In 2018, for example, 23% of all CVEs disclosed in that year were mentioned at least once on Twitter. Unlike [Sabottke et al. 2015], which collected tweets through the Twitter Stream API, we collected messages by searching old tweets using the GetOldTweets3 tool [Mottl 2018]. We believe this may be the cause of the difference in information volume, since deleted accounts and messages will not be found by our method. We also collected data from the NVD for feature extraction. Section 4.3 contains the list of features for each CVE.

As discussed in [Sabottke et al. 2015], we also used two different ground truths for the dataset: one for proof-of-concept (PoC) and another for real-world (RW) exploits. Therefore, some of the experiments are also divided into two categories, one for each type of exploit. For PoC, we used data from ExploitDB. For real-world exploits, most related studies use signatures from Symantec's antivirus and IPS. We included information from four other vendors: Avast, ESET, Trend Micro, and Kaspersky. We believe that relying on a single vendor can lead to biased results and less efficient learning by the model. In all cases, not all signatures mention the CVE exploited, imposing some limitations on the results.

Features

For all of our tests, we divided the features used by the classifier into four categories: Twitter text, Twitter statistics and metadata, CVSS score, and Public Vulnerabilities Databases. Twitter text represents a combination of all messages tweeted about a CVE. Table 2 shows a summary of our features.
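A minimal sketch of how the two ground truths could be assembled as label vectors; the CVE ids and vendor sets below are made up for illustration, with the real-world labels taken as the union over vendor signature mentions (as done in Section 5.2).

```python
# Hypothetical CVE sets standing in for the scraped databases.
edb_poc = {"CVE-2018-0001", "CVE-2018-0002"}              # ExploitDB PoCs
vendor_signatures = {
    "symantec":   {"CVE-2018-0002", "CVE-2018-0003"},
    "avast":      {"CVE-2018-0003", "CVE-2018-0004"},
    "eset":       set(),
    "trendmicro": {"CVE-2018-0004"},
}

# Real-world ground truth = union over all vendors' signature mentions.
rw_exploited = set().union(*vendor_signatures.values())

def labels(cves, positives):
    """True for each CVE present in the positive set, False otherwise."""
    return [cve in positives for cve in cves]

cves = sorted(edb_poc | rw_exploited | {"CVE-2018-0005"})
y_poc = labels(cves, edb_poc)
y_rw = labels(cves, rw_exploited)
print(cves)
print("PoC:", y_poc)
print("RW: ", y_rw)
```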
We represent the text with the Bag-of-Words (BoW) model, and the words chosen were the same as in [Sabottke et al. 2015]. The keyword set comprises 36 words; some examples include 0day, advisory, beware, ssl, and fix. Twitter statistics contain data such as the number of retweets related to a CVE and the combined number of followers of all users who tweeted a CVE. Public Vulnerabilities Databases originally included information from the NVD and the OSVDB, but we used only the former, since the latter is no longer available. The CVSS category contains features representing the CVSS vectors. For the 2015 to 2018 data, we included features for CVSS 3.0. We also included impact and exploitability subscores for both versions 2.0 and 3.0.

Classifiers

We used the SVM classifier as our baseline, since it is well suited for text categorization [Joachims 1998] and because it was used in most related works. We tested several supervised classification algorithms, but we will discuss the ones which stood out: Logistic Regression, XGBoost, and Light Gradient Boosting Machine (LightGBM). We also tried several class balancing algorithms available in Python's imbalanced-learn API [imbalanced-learn API documentation 2019]. In this paper, we will cover only the ones which performed best with our application: Synthetic Minority Over-sampling Technique (SMOTE), Adaptive Synthetic (ADASYN), Nearest-Neighbor (AllKNN), and the Random Under Sampler (RUS), the first two being over-sampling techniques and the other two under-sampling algorithms. We used stratified 10-fold cross-validation and averaged the results. When testing the balancing algorithms, we applied the methods to the training set of each fold. Because our dataset contains features with different data types, and the experimented algorithms also expect different types as input, we tested multiple scaling and feature representation methods for each algorithm and used the one which performed best.
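The keyword-based BoW features described above can be built with scikit-learn's CountVectorizer over a fixed vocabulary; the five keywords are the examples named in the text, and the per-CVE documents (concatenations of tweets) are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Five of the 36 keywords named in the text (the full list follows
# [Sabottke et al. 2015]); the per-CVE texts are invented examples.
keywords = ["0day", "advisory", "beware", "ssl", "fix"]

# One "document" per CVE: the concatenation of all tweets mentioning it.
cve_texts = {
    "CVE-2014-0160": "beware heartbleed ssl bug, apply the fix now",
    "CVE-2014-6271": "shellshock 0day advisory posted",
}

# A fixed vocabulary pins both the feature set and the column order.
vectorizer = CountVectorizer(vocabulary=keywords)
bow = vectorizer.fit_transform(list(cve_texts.values())).toarray()
print(bow)
```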
All categorical ordinal data (mostly related to the CVSS score) and binary features were first converted to numeric values. Standardization was done through sklearn's StandardScaler.

Experimental Results

To compare our results with the original work [Sabottke et al. 2015], we use as our baseline an SVM classifier with no class balancing and Symantec as the single source of information about real-world exploits. Results are reported using precision (TP / (TP + FP), the fraction of correct positive predictions), recall (TP / (TP + FN), the fraction of positive cases that were correctly predicted), and F-score (2 * precision * recall / (precision + recall), the harmonic mean of precision and recall). We use the precision-recall (PR) curve as the visual representation for all experiments, considering the database is highly imbalanced; Table 3 shows how imbalanced the classes are. Since we already have conservative exploit indicators in the CVSS scores, we prioritize increasing the precision over the recall without sacrificing the F-score.

For each algorithm, we tested two scenarios: PoC and RW exploits. To extract more significant conclusions, we plotted the classifier results when training and testing with each subset of features, together with a line for the results using all features combined. By adopting this strategy, it is possible to see how different features may favor a particular algorithm. Furthermore, we ran tests with combinations of subsets, e.g., CVSS and Twitter Statistics, and eventually concluded that certain features are not suitable for some of the algorithms. We have shortened the names of the categories: Words stands for Twitter text, Twitter Stats stands for Twitter statistics and metadata, CVSS stands for the CVSS score, and Database stands for Public Vulnerabilities Databases. A description of each subset can be found in Section 4.3.
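The three metrics just defined can be computed directly from the confusion counts; a minimal, dependency-free version:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN),
    F-score = harmonic mean of the two, as defined above."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Tiny imbalanced example: 2 exploited CVEs among 8 instances.
y_true = [1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 0]
print(precision_recall_f1(y_true, y_pred))  # (0.5, 0.5, 0.5)
```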
Table 4 shows the overall performance of the tested algorithms using the same dataset described in [Sabottke et al. 2015], which encompasses data from February 2014 to January 2015. Significance was calculated using a 10-fold cross-validated paired t-test between the algorithm's and the baseline's F-scores; p-values greater than 0.05 imply no significant improvement. We used the F-score because, in our tests, the SVM was the only algorithm whose recall was greater than its precision, so using either of those measures alone could lead to incorrect conclusions. The results reveal that, regardless of the algorithm, there is still room for improvement. We believe this can be achieved with modifications to the preprocessing and the data-gathering technique; we discuss that in Section 6. Next, we analyze each of the algorithms used.

Analyzing the Performance of Different Algorithms and Groups of Features

SVM (Baseline)

Fig. 2 shows the difference between the precision-recall curves for PoC and real-world exploits. The first characteristic we would like to highlight is how the classifier performs better with PoC data, probably because vulnerabilities with exploits published on the ExploitDB are likely to have references in their descriptions on the NVD or OSVDB. Therefore, the subset of features extracted from public vulnerabilities databases plays a big part in the overall performance for PoC (this holds for all other algorithms). Secondly, it is possible to notice how, in real-world scenarios, the Twitter Statistics subset makes an essential contribution to the overall result. As will be demonstrated here, this characteristic depends on the algorithm. However, our results indicate that "who said it" might be a more relevant question than "what was said" to determine if a tweet indicates an exploited vulnerability. Another characteristic is how the CVSS tends to be a conservative indicator.
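The significance test described above can be reproduced with SciPy's paired t-test on fold-wise F-scores; the per-fold values below are hypothetical.

```python
from scipy import stats

# Hypothetical per-fold F-scores from 10-fold cross-validation.
baseline_f1  = [0.35, 0.33, 0.36, 0.34, 0.35, 0.32, 0.37, 0.34, 0.33, 0.36]
candidate_f1 = [0.41, 0.38, 0.42, 0.40, 0.39, 0.37, 0.43, 0.40, 0.38, 0.42]

# Paired t-test on the fold-wise scores; p < 0.05 is treated as a
# significant improvement over the baseline.
t_stat, p_value = stats.ttest_rel(candidate_f1, baseline_f1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```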
In other words, using just that score leads to labeling most of the vulnerabilities as exploited, hence the low precision and high recall. On the other hand, the Words subset tends to be the other way around: higher precision but very low recall. This behavior reflects how certain words in the BoW representation, such as "exploit" or "beware", are particularly efficient at detecting exploits but are generally related to only a subset of the exploits.

Logistic Regression (LG)

Considering the precision, Logistic Regression (LG) was the best performing algorithm for the RW scenario. When looking at F-score, Logistic Regression was the second best. Fig. 2 c) shows the PR curves for our test. In it, we display different combinations of features. We can highlight how Twitter statistics was useful in all the tests it was included in, once again. It is worth mentioning that the model was one of the fastest to train and test.

XGBoost and LightGBM

Both of the ensembles tested had good results, but only one of them, the LightGBM, had a p-value of less than 0.05. Besides, the XGBoost demonstrated the second-longest execution time, ahead of only the SVM. The LightGBM, on the other hand, was the fastest algorithm to execute in our tests. Figure 3 shows the PR curves for both algorithms. In conclusion, the LightGBM was the best performing algorithm during our initial tests, considering the F-score, by a small margin, followed by Logistic Regression. Throughout our other tests, the LightGBM was able to improve more than the Logistic Regression, as will be shown in Section 5.3. Finally, we would like to point out how the CVSS subset of features plays a more significant part when testing with PoC exploits. For RW exploits, the Twitter statistics and the NVD data become more relevant.

Multiple Ground Truth

In this experiment, we investigate how using different antivirus signatures as ground truth interferes with the classifier's efficiency.
In our research, we were able to find public databases with lists and descriptions of signatures from the following vendors: Avast, ESET, Symantec, and Trend Micro. We then developed web crawlers to collect this information, searched for CVE mentions, and created an expanded ground truth that will be shared with the general public. To our knowledge, all previous works use only Symantec to tell if a vulnerability has been exploited in the wild. Our findings indicate that, from 2015 to 2018, at least 248 of the 1,338 real-world exploited vulnerabilities are not mentioned by Symantec. Furthermore, considering years before 2015, Symantec's database misses 518 real-world exploited vulnerabilities out of a total of 2,655 mentioned in signature descriptions. Notice that this does not necessarily mean their antivirus has no signature for those exploits, but it indicates that no reference was made in the description. Table 5 shows the number of exploited vulnerabilities mentioned by each vendor from 2015 to 2018. ESET and Trend Micro contain limited information compared to Avast and Symantec, so we combined their numbers and labeled them "Other". Also notice how the quantities change drastically over time; we will discuss that at the end of this section.

One of our concerns about using public signature descriptions was to avoid vendors that would only include PoC information and count it as "exploit detection". To verify that we were getting new information, for each vendor we compared the lists of exploited vulnerabilities to our list of PoCs. We found that a significant number of exploited vulnerabilities were not mentioned by either Symantec or the EDB, so it is reasonable to consider them real-world exploits. Figure 4 shows the intersection of our new list of exploited vulnerabilities with Symantec's and the EDB's (from 2015 to 2018).
To check how the new ground truth affects our classifier, we conducted a series of tests where the model was trained using labels from a single vendor but tested with the combination of all vendors (the combined ground truth). Table 7 contains the precision, recall, and F-score for each test with our best performing algorithm, the LightGBM. The results suggest that in 2014, [Sabottke et al. 2015] could have achieved equal or better results using Avast's database instead of Symantec's (a p-value larger than 0.05 means the improvement is not significant enough for us to declare it better, despite the F-score). Most importantly, we conclude that using information from a single vendor can bias the model toward some vulnerabilities and lead to non-optimal performance in real-world scenarios. As we will explain in Section 5.3, the gains of a combined ground truth can be emphasized even further when using a class balancing algorithm. Unfortunately, as shown in Table 5, since 2015 Avast has gradually decreased the volume of information about its signatures. From this experiment, we conclude that using multiple ground truth sources is crucial for any machine learning application dealing with exploit detectors. Despite that, we see less of this information available in recent years, which may indicate less effective classifiers in future works.

Class-balancing

In another experiment, we compared methods for dealing with class imbalance. The severe imbalance in our application, shown in Table 3, motivated this test. In this test, we ran class balancing methods with all four classification algorithms, and once again the LightGBM outperformed the others. We used a Python library called "imbalanced-learn" and tested several algorithms of both under-sampling and over-sampling.
In our tests, we used the combined ground truth from Section 5.2 on real-world exploits and executed these algorithms on the training set of each fold of the 10-fold cross-validation. Table 6 shows the results for the best-performing ones. As we can see, the SVM and the LightGBM behave differently with the balancing algorithms. While the baseline could only reach similar F-score values with no statistically significant improvement, the LightGBM showed a considerable increase in performance, with a p-value of 0.001, when used with AllKNN, an under-sampling technique. The other algorithms were not able to obtain statistically different results, although some achieved a superior F-score. In general, AllKNN outperformed the original experiments for all classification algorithms, both for PoC and real-world exploits. It is worth mentioning that when using only Symantec's ground truth, improvements were more modest, highlighting the importance of using multiple sources for ground truth.

Updated Data and Time Window Sizes

With our last test, we wanted two things: first, to understand whether the results obtained using data from 2014 would hold with an updated dataset; second, to answer whether training with data from a more extended period would influence the results. To do that, we first trained and tested the model separating our data by year, from 2015 to 2018. We then ran a series of experiments where the model was trained with different time windows from years before 2018 but tested with data from 2018. More specifically, the model was trained with the following groups of years: 2017, then 2016 to 2017, and finally 2015 to 2017. Here, we would like to evaluate whether adding more training data would positively influence the performance of the model. In this experiment, we used the LightGBM with the combined ground truth. The results are shown in Table 8 and Table 9.
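As a dependency-free sketch of the balancing step (imbalanced-learn's RandomUnderSampler behaves similarly by default), the following drops majority-class rows from a training split until the classes are even; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_undersample(X, y, rng):
    """Drop majority-class rows at random until both classes have as
    many rows as the minority class (what a random under-sampler does)."""
    y = np.asarray(y)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])
    rng.shuffle(idx)
    return X[idx], y[idx]

# 100 training instances, 10 of them "exploited"; balance the training
# split only, never the test split.
X = rng.normal(size=(100, 5))
y = np.array([1] * 10 + [0] * 90)
X_bal, y_bal = random_undersample(X, y, rng)
print(int(y_bal.sum()), "positives out of", len(y_bal))  # 10 positives out of 20
```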
We can see from Table 9 that results remain reasonably constant throughout the years, except for 2016, probably due to the small number of exploited vulnerabilities mentioned on Twitter (see Table 5). Moreover, as mentioned in Section 5.2, the contribution of Avast and other antiviruses to our ground truth diminished in recent years, making the model more dependent on Symantec as its only source of information.

When training with different time windows, our results suggest that there may be some relation between CVEs from the same year that allows better learning by the model. When training with data from past years, the F-score remained around 0.2. From our perspective, this difference was not expected, since all data used for the model's features were equally available for all of the years. To check whether the information learned by the model was getting old over time or whether it was missing new information, we ran tests with multiple combinations of time windows (e.g., we would train just with 2015, or with 2015 and 2016, then with 2016 and 2017; we also tested with multiple years, and even trained with years ahead of the testing one). In all cases, the performance was significantly lower than with the single-year approach. We believe this behavior can be related to the fact that some vulnerabilities are disclosed together and are part of the same malware issue. The WannaCry ransomware outbreak, for example, was related to six different CVEs (CVE-2017-0143, CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0147, and CVE-2017), all with a fairly similar description and CVSS, and affecting the same products. If that is the case, maybe it is necessary to detect these groups of CVEs and treat them as a single vulnerability. To conclude, we found no empirical evidence that using more than one year of data can benefit a model that uses a temporal batch split.
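The year-window experiments boil down to a temporal split of the instances; a sketch with hypothetical (year, features, label) records:

```python
# Hypothetical (year, features, label) records, one per CVE.
records = [
    (2015, "f1", 0), (2015, "f2", 1),
    (2016, "f3", 0),
    (2017, "f4", 1), (2017, "f5", 0),
    (2018, "f6", 1), (2018, "f7", 0),
]

def window_split(records, train_years, test_year):
    """Train on CVEs disclosed in train_years, test on test_year only."""
    train = [r for r in records if r[0] in train_years]
    test = [r for r in records if r[0] == test_year]
    return train, test

# The windows of Table 9: {2017}, {2016, 2017}, {2015, 2016, 2017},
# always testing on 2018.
for train_years in [{2017}, {2016, 2017}, {2015, 2016, 2017}]:
    train, test = window_split(records, train_years, 2018)
    print(sorted(train_years), "->", len(train), "train /", len(test), "test")
```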
Threats to validity

Our experiments were conducted with tweets acquired using a tool called GetOldTweets3 [Mottl 2018], which can search for Twitter messages regardless of their posting time but cannot obtain deleted tweets or tweets from deleted users. In contrast, if we were using the Twitter Stream API, messages would be obtained and stored in our database when posted. This limitation leads to a diminished tweet volume. We believe our contributions apply to batch training with or without a temporal split, but it is necessary to mention that time intermixing can be a limitation in our tests using single-year data. In our method, we also discarded tweets referencing older vulnerabilities (for example, we only consider CVEs disclosed in 2018 when collecting Twitter mentions in 2018), which may prevent the system from detecting new exploits of old vulnerabilities.

Conclusion and Future Work

In this paper, we explored several aspects of Twitter-based exploit detectors. We compared the performance of four commonly used classification algorithms, conducted tests with different data sources for ground truth on real-world exploits, applied balancing algorithms, and trained and tested our classifier with different time windows. By selecting a suitable algorithm, managing class imbalance issues, and adding new ground truth, we were able to outperform the baseline and evaluate the performance of exploit detectors using updated data. Some of our results raised questions that we would like to tackle in future work, including: monitoring the NVD data to check which of our features are available when a CVE is published; understanding how related CVEs interfere with our classifier and whether that is the real cause of the inferior performance using time windows; and enhancing the tweet searching by using keywords other than "CVE". We also want to experiment with different feature selection and text representation methods.

Figure 1. System Architecture
Figure 2. Precision-recall curves for SVM and Logistic Regression: (a) SVM, real-world exploits; (b) SVM, PoC exploits; (c) LG, real-world exploits.

Figure 3. Precision-recall curves for XGBoost and LightGBM.

Figure 4. The intersection of the lists of exploited vulnerabilities.

Table 1. Vulnerabilities mentioned on Twitter

Year  # of Vulnerabilities  Mentioned on Twitter  %
2015  6484                  822                   13%
2016  6447                  776                   12%
2017  14714                 1292                  9%
2018  16556                 3753                  23%

Table 2. Summarized List of Features

Position  Category                          Data type
0-35      Twitter text                      BoW
36-47     Twitter metadata                  All numeric
48-56     CVSS Score 2.0                    3 numeric, 6 categorical ordinal
57-67     CVSS Score 3.0                    3 numeric, 8 categorical ordinal
68-78     Public Vulnerabilities Databases  6 numeric, 5 binary

Table 3. Number of CVEs exploited compared to total

Table 4. Overall results

Table 5. Number of exploited vulnerabilities mentioned by vendors

Table 6. Results for class balancing for RW exploits

              SVM                          LR                           LightGBM
              Precision  Recall  F-score   Precision  Recall  F-score   Precision  Recall  F-score
Baseline      0.2244     0.8278  0.3531    0.5850     0.1094  0.1844    0.5458     0.2844  0.3740
ADASYN (I)    0.2271     0.8285  0.3565    0.1936     0.8403  0.3147    0.4640     0.3774  0.4162
SMOTE (I)     0.2391     0.8044  0.3686    0.2200     0.8181  0.3467    0.4867     0.3785  0.4258
AllKNN (II)   0.2271     0.8285  0.3565    0.4955     0.3007  0.3743    0.4956     0.5267  0.5107
RUS (II)      0.2330     0.8303  0.3639    0.1322     0.8921  0.2302    0.2212     0.8642  0.3523
(I) over-sampling technique; (II) under-sampling technique

Table 8. Results for updated data (training and testing using a one-year window)

Figure 5. Precision-recall curves for each year on RW exploits (2015-2018)
Table 9. Results for training with different time windows

Train             Test  Precision  Recall  F-score
2018 (baseline)   2018  0.6999     0.5529  0.6178
2017              2018  0.3333     0.1654  0.2211
2016, 2017        2018  0.3051     0.1353  0.1875
2015, 2016, 2017  2018  0.3208     0.1278  0.1828

Footnotes:
1 https://www.exploit-db.com
2 https://www.avast.com/exploit-protection.php
3 https://www.virusradar.com/en/threat encyclopaedia
4 https://www.trendmicro.com/vinfo/us/threat-encyclopedia

References

Bilge, L. and Dumitras, T. (2012). Before we knew it: An empirical study of zero-day attacks in the real world. In Proceedings of the ACM Conference on Computer and Communications Security, pages 833-844.

Bozorgi, M., Saul, L., Savage, S., and Voelker, G. (2010). Beyond heuristics: learning to classify vulnerabilities and predict exploits. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 105-114.

Bullough, B. L., Yanchenko, A. K., Smith, C. L., and Zipkin, J. R. (2017). Predicting exploitation of disclosed software vulnerabilities using open-source data. In IWSPA 2017 - Proceedings of the 3rd ACM International Workshop on Security and Privacy Analytics, co-located with CODASPY 2017, pages 45-53.
Chen, H., Liu, R., Park, N., and Subrahmanian, V. (2019). Using Twitter to predict when vulnerabilities will be exploited. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3143-3152.

imbalanced-learn API documentation (2019). imbalanced-learn API. https://imbalanced-learn.readthedocs.io/en/stable/api.html.

Joachims, T. (1998). Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning, pages 137-142. Springer.

Mottl, D. (2018). GetOldTweets3. https://pypi.org/project/

Nayak, K., Marino, D., Efstathopoulos, P., and Dumitraş, T. (2014). Some vulnerabilities are different than others. In International Workshop on Recent Advances in Intrusion Detection, pages 426-446. Springer.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Queiroz, A., Keegan, B., and Mtenzi, F. (2017). Predicting software vulnerability using security discussion in social media. In European Conference on Information Warfare and Security, ECCWS, pages 628-634.

Sabottke, C., Suciu, O., and Dumitras, T. (2015). Vulnerability disclosure in the age of social media: Exploiting Twitter for predicting real-world exploits. In USENIX Security Symposium, pages 1041-1056.

Shrestha, P., Sathanur, A., Maharjan, S., Saldanha, E., Arendt, D., and Volkova, S. (2020). Multiple social platforms reveal actionable signals for software vulnerability awareness: A study of GitHub, Twitter and Reddit. PLOS ONE, 15(3):1-28.

Younis, A. A. and Malaiya, Y. K. (2015). Comparing and evaluating CVSS base metrics and Microsoft rating system. In 2015 IEEE International Conference on Software Quality, Reliability and Security, pages 252-261.
Sparse Representation Classification Beyond ℓ1 Minimization and the Subspace Assumption

Cencheng Shen, Li Chen, Yuexiao Dong, Carey E. Priebe

The sparse representation classifier (SRC) has been utilized in various classification problems, which makes use of ℓ1 minimization and works well for image recognition satisfying a subspace assumption. In this paper we propose a new implementation of SRC via screening, establish its equivalence to the original SRC under regularity conditions, and prove its classification consistency under a latent subspace model and contamination. The results are demonstrated via simulations and real data experiments, where the new algorithm achieves comparable numerical performance and is significantly faster.

Index Terms-feature screening, marginal regression, angle condition, stochastic block model

DOI: 10.1109/tit.2020.2981309
arXiv: 1502.01368 (https://arxiv.org/pdf/1502.01368v4.pdf)
1 INTRODUCTION

Sparse coding is widely recognized as a useful tool in machine learning, thanks to the theoretical advancement in regularized regression and ℓ1 minimization [1], [2], [3], [4], [5], [6], [7], [8], as well as numerous classification and clustering applications in computer vision and pattern recognition [9], [10], [11], [12], [13], [14]. In this paper, we concentrate on the sparse representation classification (SRC), which was proposed in [9] and exhibits state-of-the-art performance for robust face recognition. It is easy to implement, works well for data satisfying the subspace assumption (e.g., face recognition, motion segmentation, and activity recognition), is robust against data contamination, and can be extended to block-wise algorithms and structured data sets [15], [16], [17].

Cencheng Shen is with the Department of Applied Economics and Statistics at the University of Delaware, Li Chen is with Intel, Yuexiao Dong is with the Department of Statistical Science at Temple University, and Carey E. Priebe is with the Department of Applied Mathematics and Statistics at Johns Hopkins University (email: [email protected]; [email protected]; [email protected]; [email protected]). This work was partially supported by the Johns Hopkins University Human Language Technology Center of Excellence, the XDATA program of the Defense Advanced Research Projects Agency administered through Air Force Research Laboratory contract FA8750-12-2-0303 and the SIMPLEX program through SPAWAR contract N66001-15-C-4041, and the National Science Foundation Division of Mathematical Sciences award DMS-1712947. This paper was presented in part at the Joint Statistical Meeting and the ICML Learning and Reasoning with Graphs workshop. The authors thank the editor and reviewer for their constructive and valuable comments that led to significant improvements of the manuscript.

Given a set of training data X = [x_1, . . . , x_n] ∈ R^{m×n} with the corresponding known class labels Y = [y_1, . . . , y_n], the task here is to classify a new testing observation x of unknown label. SRC identifies a small subset X̃ ∈ R^{m×s} from the training data to best represent the testing observation, calculates the least squares regression coefficients, and computes the regression residual for classification. Compared to nearest-neighbor and nearest-subspace classifiers, SRC exhibits better finite-sample performance on face recognition and is argued to be robust against image occlusion and contamination. Other steps being standard, the most crucial and time-consuming part of SRC is to extract the appropriate sparse representation for the testing observation. Among all possible representations, the sparse representation X̃ that minimizes the residual and the sparsity level s often yields a better inference performance by the statistical principle of parsimony and bias-variance trade-off.
By adding the ℓ0 constraint to the linear regression problem, one can minimize the residual and the sparsity level s at the same time. As ℓ0 minimization is NP-hard and infeasible for large samples, ℓ1 minimization is the best substitute due to its computational advantage, and it has a rich theoretical literature on exact sparsity recovery under various conditions [3], [18], [5], [6], [7], [8]. Towards this direction, it is argued in [9] that SRC is able to find the most appropriate representation and ensures successful face recognition under the subspace assumption: if data of the same class lie in the same subspace while data of different classes lie in different subspaces, then the sparse representation X̃ identified by ℓ1 minimization shall only consist of observations from the correct class. Moreover, using ℓ1 minimization and assuming existence of a perfect representation, [13] derives a theoretical condition for perfect variable selection. However, to achieve correct classification, the sparse representation X̃ does not need to perfectly represent the testing observation, nor to select only training data of the correct class. Indeed, a perfect representation is generally not possible when the feature (or dimension) size m exceeds the sample size n, while an approximate representation is often non-unique. A number of works have also pointed out that neither ℓ1 minimization nor the subspace assumption is indispensable for SRC to perform well [19], [20], [21], [22]. Intuitively, SRC can succeed whenever the sparse representation X̃ contains some training data of the correct class, and the correct class can dominate the regression coefficients. It is not really required to recover the most sparse representation by ℓ1 minimization or to achieve perfect variable selection under the subspace assumption. The above insights motivate us to propose a faster SRC algorithm and investigate its classification consistency.
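For a concrete sense of the ℓ1 substitute, the sketch below recovers a 2-sparse coefficient vector over a random unit-norm dictionary with scikit-learn's Lasso (one of many interchangeable ℓ1 solvers); the dictionary and signal are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# Overcomplete random dictionary: 30 dimensions, 40 unit-norm columns.
X = rng.normal(size=(30, 40))
X /= np.linalg.norm(X, axis=0)

beta_true = np.zeros(40)
beta_true[[3, 17]] = [1.0, -0.8]        # 2-sparse ground truth
x = X @ beta_true

# ell_1-penalized least squares stands in for the infeasible ell_0 problem.
lasso = Lasso(alpha=0.001, fit_intercept=False, max_iter=50000)
lasso.fit(X, x)
support = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print(support, np.round(lasso.coef_[support], 3))
```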
In Section 2 we introduce basic notations and review the original SRC framework. Section 3 is the main section: in Section 3.1 we propose a new SRC algorithm via screening and a slightly different classification rule; in Section 3.2 we compare and establish the equivalence between the two classification rules under regularity conditions; we then prove the consistency of SRC under a latent subspace mixture model in Section 3.3, which is further extended to contamination models and network models. Our results better explain the success and applicability of SRC, making it more appealing in terms of theoretical foundation, computational complexity, and general applicability. The new SRC algorithm performs much faster than before and achieves comparable numerical performance, as supported by a wide variety of simulations and real data experiments on images and network graphs in Section 4. All proofs are in Section 5.

PRELIMINARIES

Notations

Let X = [x_1, x_2, . . . , x_n] ∈ R^{m×n} be the training data matrix and Y = [y_1, y_2, . . . , y_n] ∈ [K]^n be the known class label vector, where m is the number of dimensions (or feature size), n is the number of observations (or sample size), and K is the number of classes with [K] = {1, . . . , K}. Denote (x, y) ∈ R^m × [K] as the testing pair, where y is the true but unobserved label. As a common statistical assumption, we assume (x, y), (x_1, y_1), · · · , (x_n, y_n) are all independent realizations from the same distribution F_XY. A classifier g_n(x, D_n) is a function that estimates the unknown label y ∈ [K] based on the training pairs D_n = {(x_1, y_1), · · · , (x_n, y_n)} and the testing observation x. For brevity, we always denote the classifier as g_n(x); the classifier is correct if and only if g_n(x) = y. Throughout the paper, we assume all observations are of unit norm (‖x_i‖_2 = 1), because SRC scales all observations to unit norm by default.
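The unit-norm convention above amounts to a simple column-wise rescaling of the data matrix; a minimal sketch in numpy (function name is ours, not from the paper):

```python
import numpy as np

def normalize_columns(X):
    """Scale each column (observation) of X to unit Euclidean norm,
    matching the convention ||x_i||_2 = 1 assumed throughout."""
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0  # guard against degenerate all-zero observations
    return X / norms
```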
The sparse representation is a subset of the training data, which we denote as X̃ = [x̃_1, x̃_2, . . . , x̃_s] ∈ R^{m×s}, where each x̃_i is selected from the training data X, and s is the number of observations in the representation, i.e., the sparsity level. Once X̃ is determined, β̂ denotes the s × 1 least squares regression coefficients between X̃ and x, and the regression residual equals ‖x − X̃β̂‖_2. For each class k ∈ [K] and a given X̃, we define

X̃_k = { x̃_i ∈ X̃, i = 1, . . . , s | y_{x̃_i} = k },
X̃_{−k} = { x̃_i ∈ X̃, i = 1, . . . , s | y_{x̃_i} ≠ k }.

Namely, X̃_k is the subset of X̃ that contains all observations from class k, and X̃_{−k} = X̃ − X̃_k. Moreover, denote β̂_k as the regression coefficients of β̂ corresponding to X̃_k, and β̂_{−k} as the regression coefficients corresponding to X̃_{−k}, i.e., X̃_k β̂_k + X̃_{−k} β̂_{−k} = X̃β̂. Note that the original SRC in Algorithm 1 uses the class-wise regression residual ‖x − X̃_k β̂_k‖_2 to classify.

Sparse Representation Classification by ℓ1 Minimization

SRC consists of three steps: subset selection, least squares regression, and classification via the regression residual. Algorithm 1 describes the original algorithm: Equation 1 identifies the sparse representation X̃ and computes the regression coefficients β̂; then Equation 2 assigns the class by minimizing the class-wise regression residual. In terms of computational time complexity, the ℓ1 minimization step requires at least O(mns), while the classification step is much cheaper and takes O(msK).

The ℓ1 minimization step is the only computationally expensive part of SRC. Computation-wise, there exist various greedy and iterative implementations of similar complexity, such as the ℓ1 homotopy method [1], [2], [4], orthogonal matching pursuit (OMP) [23], [24], and the augmented Lagrangian method [12], among many others. We use the homotopy algorithm for subsequent analysis and numerical comparison without delving into the algorithmic details, as most ℓ1 minimization algorithms share similar performance, as shown in [12].
Note that model selection is inherent to ℓ1 minimization, as it is to almost all variable selection methods: one needs to specify either a tolerance noise level or a maximum sparsity level in order for the iterative algorithm to stop. The choice does not affect the theorems, but can impact the actual numerical performance, and is thus a separate topic for investigation [25], [26]. In this paper we simply set the maximal sparsity level s = min{n/log(n), m} for both the ℓ1 minimization here and the later screening method in Section 3.1, which achieves good empirical performance for both algorithms.

Algorithm 1 Sparse Representation Classification by ℓ1 Minimization and Magnitude Rule
Input: The training data matrix X, the known label vector Y, the testing observation x, and an error level ε.
ℓ1 Minimization: For each testing observation x, find X̃ and β̂ that solve the ℓ1 minimization problem:
  β̂ = arg min ‖β‖_1 subject to ‖x − Xβ‖_2 ≤ ε.  (1)
Classification: Assign the testing observation by minimizing the class-wise residual, i.e.,
  g¹_n(x) = arg min_{k∈[K]} ‖x − X̃_k β̂_k‖_2,  (2)
breaking ties deterministically. We name this classification rule the magnitude rule.
Output: The estimated class label g¹_n(x).

MAIN RESULTS

In this section, we present the new SRC algorithm, investigate its equivalence to the original SRC algorithm, and prove classification consistency under a latent subspace mixture model, followed by further generalizations. Note that one advantage of SRC is that it is applicable to both high-dimensional problems (m ≥ n) and low-dimensional problems (m < n). Its finite-sample numerical success mostly lies in high-dimensional domains where traditional classifiers often fail. For example, the feature size m is much larger than the sample size n in all the image data we consider, and m = n for the network adjacency matrices. The new SRC algorithm inherits the same advantage, and all our theoretical results hold regardless of m.
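The three-step structure of Algorithm 1 can be sketched with any sparse solver in the first step; below we substitute orthogonal matching pursuit (mentioned in the text as an alternative of similar complexity) for the ℓ1 homotopy solver. This is an illustrative sketch, not the authors' implementation, and all function names are ours:

```python
import numpy as np

def omp_select(X, x, s):
    """Greedy OMP: repeatedly pick the column of X most correlated with
    the current residual, refitting least squares after each pick.
    Assumes s >= 1."""
    residual, support = x.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j in support:  # no new informative column remains
            break
        support.append(j)
        beta, *_ = np.linalg.lstsq(X[:, support], x, rcond=None)
        residual = x - X[:, support] @ beta
    return support, beta

def src_magnitude(X, Y, x, s):
    """Classify x by the class-wise residual (magnitude) rule of Eq. 2."""
    support, beta = omp_select(X, x, s)
    labels = np.asarray(Y)[support]
    best, best_res = None, np.inf
    for k in np.unique(labels):
        mask = labels == k
        res = np.linalg.norm(x - X[:, support][:, mask] @ beta[mask])
        if res < best_res:
            best, best_res = k, res
    return best
```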
SRC via Screening and Angle Rule

The new SRC algorithm is presented in Algorithm 2, which replaces ℓ1 minimization by screening, then assigns the class by minimizing the class-wise residual in angle. To distinguish it from the magnitude rule of Algorithm 1, we name Equation 3 the angle rule. Algorithm 2 has a better computational complexity due to the screening procedure, which simply chooses the s observations of X that are most correlated with the testing observation x, requiring only O(mn + n log(n)) instead of O(mns) for ℓ1 minimization. The screening procedure has recently gained popularity as a fast alternative to regularized regression for high-dimensional data analysis. Its speed advantage makes it a suitable candidate for efficient data extraction for extremely large m, and it can be equivalent to ℓ1 and ℓ0 minimization under various regularity conditions [27], [28], [29], [30], [31], [32]. In particular, setting the maximal sparsity level to s = min{n/log(n), m} is shown to work well for screening [27], and is thus the default choice in this paper.

Equivalence Between Angle Rule and Magnitude Rule

The angle rule in Algorithm 2 appears different from the magnitude rule in Algorithm 1. For a given sparse representation X̃ and regression vector β̂, we analyze these two rules and establish their equivalence under certain conditions.

Theorem 1. Given X̃ and x, we have g¹_n(x) = g^scr_n(x) when either of the following conditions holds.
• K = 2 and X̃ is of full rank;
• Data of different classes are orthogonal to each other, i.e., cos θ(X̃_y β̂_y, X̃_k β̂_k) = 0 for all k ≠ y.

These conditions are quite common in classification: binary classification problems are prevalent in many supervised learning tasks, and random vectors in high-dimensional space are orthogonal to each other with probability increasing to 1 as the number of dimensions increases [33].
Indeed, in all the multiclass high-dimensional simulations and experiments we run in Section 4.3, the two classification rules yield very similar classification errors. We chose to use the angle rule in the new SRC algorithm because it provides a direct path to classification consistency while the magnitude rule does not.

Algorithm 2 Sparse Representation Classification by Screening and Angle Rule
Input: The training data matrix X, the known label vector Y, and the testing observation x.
Screening: Calculate Ω = {|x_1^T x|, |x_2^T x|, · · · , |x_n^T x|} (T is the transpose), and sort the elements in decreasing order. Take X̃ = {x_(1), x_(2), . . . , x_(s)} with s = min{n/log(n), m}, where |x_(i)^T x| is the ith largest element in Ω.
Regression: Solve the ordinary least squares problem between X̃ and x. Namely, compute β̂ = X̃^{−1} x, where X̃^{−1} is the Moore-Penrose pseudo-inverse.
Classification: Assign the testing observation by
  g^scr_n(x) = arg min_{k∈[K]} θ(x, X̃_k β̂_k),  (3)
where θ denotes the angle between vectors, breaking ties deterministically. We name this classification rule the angle rule.
Output: The estimated class label g^scr_n(x).

Consistency under Latent Subspace Mixture Model

To investigate the consistency of SRC, we first formalize the probabilistic setting of classification based on [34]. Let (X, Y), (X_1, Y_1), . . . , (X_n, Y_n) i.i.d. ∼ F_XY denote the random variables underlying the sample realizations (x, y), (x_1, y_1), . . . , (x_n, y_n). The prior probability of each class k is denoted by ρ_k ∈ [0, 1] with Σ_{k=1}^K ρ_k = 1, and the probability of error is defined by L(g_n) = Prob(g_n(X) ≠ Y). The classifier that minimizes the probability of error is called the Bayes classifier, whose error rate is optimal and denoted by L*. The sequence of classifiers g_n is consistent for a certain distribution F_XY if and only if L(g_n) → L* as n → ∞.
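Algorithm 2 (screening, pseudo-inverse regression, angle rule) can be sketched in a few lines of numpy. This is a sketch under the paper's conventions, not the authors' reference implementation; function names are ours:

```python
import numpy as np

def src_screening(X, Y, x):
    """SRC via screening and the angle rule (Algorithm 2)."""
    m, n = X.shape
    s = min(max(int(n / np.log(n)), 1), m)
    # Screening: keep the s training columns most correlated with x.
    order = np.argsort(-np.abs(X.T @ x))[:s]
    Xs, labels = X[:, order], np.asarray(Y)[order]
    # Least squares via the Moore-Penrose pseudo-inverse.
    beta = np.linalg.pinv(Xs) @ x
    # Angle rule: pick the class whose fitted part is closest in angle,
    # i.e., has the largest cosine with x.
    best, best_cos = None, -np.inf
    for k in np.unique(labels):
        v = Xs[:, labels == k] @ beta[labels == k]
        nv = np.linalg.norm(v)
        cos_k = (x @ v) / (np.linalg.norm(x) * nv) if nv > 0 else -np.inf
        if cos_k > best_cos:
            best, best_cos = k, cos_k
    return best
```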
SRC cannot be universally consistent, i.e., there exists some distribution F_XY for which SRC is not consistent. A simple example is a two-dimensional data space where all the data lie on the same line passing through the origin: SRC cannot distinguish between the classes (as the normalized data are essentially a single point), whereas a simple linear discriminant without normalization is consistent. To that end, we propose the following model:

Definition (Latent Subspace Mixture Model). We say (X, Y) ∼ F_XY ∈ (R^m × [K]) follows a latent subspace mixture model if and only if there exist a lower-dimensional continuously supported latent variable U ∈ R^d (d ≤ m) and m × d matrices W_k ∈ R^{m×d} for each k ∈ [K] such that X | Y = W_Y U.

Namely, we observe a high-dimensional object X, and there exist a hidden low-dimensional latent variable U and an unobserved class-dependent transformation W_Y. The latent subspace mixture model well reflects the original subspace assumption: data of the same class lie in the same subspace, while data of different classes lie in different subspaces. The subspace location is determined by W_k, and the model does not require a perfect linear recovery. Similar models have been used in a number of probabilistic high-dimensional analyses, e.g., probabilistic principal component analysis in [35]. Note that the subspaces represented by the W_k need not be of equal dimension. As long as m denotes the maximum dimension over all W_k, the matrix representation can capture all lower-dimensional transformations. For example, let K = 2, m = 3, d = 2, U = (u_1, u_2)^T, and

W_1 = [1 0; 0 1; 1 1],  W_2 = [1 −1; 0 0; 0 0]

(rows separated by semicolons). Then the subspace associated with class 1 is two-dimensional with X | (Y = 1) = (u_1, u_2, u_1 + u_2)^T, and the subspace associated with class 2 is one-dimensional with X | (Y = 2) = (u_1 − u_2, 0, 0)^T.

Definition (The Angle Condition).
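The worked example above (K = 2, m = 3, d = 2) can be simulated directly. A sketch assuming U uniform on [0, 1]^2 (the latent distribution is our choice for illustration; the model only requires continuous support):

```python
import numpy as np

rng = np.random.default_rng(0)
# The two class transformations from the worked example.
W = {1: np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
     2: np.array([[1.0, -1.0], [0.0, 0.0], [0.0, 0.0]])}

def sample_latent_mixture(n, rho1=0.5):
    """Draw n pairs (x, y) with X | Y = W_Y U and U ~ Uniform[0,1]^2."""
    xs, ys = [], []
    for _ in range(n):
        y = 1 if rng.random() < rho1 else 2
        u = rng.random(2)
        xs.append(W[y] @ u)
        ys.append(y)
    return np.column_stack(xs), np.array(ys)
```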
Under the latent subspace mixture model, denote W = [W_1 | W_2 | · · · | W_K] ∈ R^{m×Kd} as the concatenation of all the W_k, and let W/W_k denote the same concatenation excluding W_k. We say W satisfies the angle condition if and only if

span(W_k) ∩ span(W/W_k) = {0}  (4)

for each k ∈ [K]. Essentially, the condition states that the subspace of each class does not overlap with the subspaces of the other classes; therefore testing data from one class cannot be perfectly represented by any linear combination of training data from other classes. The angle condition and the latent subspace mixture model allow data of the same class to be arbitrarily close in angle, while data of different classes always differ in angle, which leads to the classification consistency of SRC.

Theorem 2. Under the latent subspace mixture model with W satisfying the angle condition, Algorithm 2 is consistent with L* being zero, i.e., L(g^scr_n) → L* = 0 as n → ∞.

Robustness against Contamination

In feature contamination, certain features or dimensions of the data are contaminated or unobserved, and thus treated as zero. Under the latent subspace mixture model, this can be equivalently characterized by imposing the contamination on the transformation matrix W_k, i.e., some entries of W_k are 0. By default, we assume there is no degenerate observation where all features are contaminated to 0.

Definition (Latent Subspace Mixture Model with Fixed Contamination). Under the latent subspace mixture model, define V_k as the 1 × m contamination vector for each class k: V_k(j) = 1 when the jth dimension is not contaminated, and V_k(j) = 0 when the jth dimension is contaminated. Then the contaminated random variable X is

X | Y = diag(V_Y) W_Y U,

where diag(V_k) is the m × m diagonal matrix satisfying diag(V_k)(j, j) = V_k(j).

A more interesting contamination model is the following:

Definition (Latent Subspace Mixture Model with Random Contamination).
Under the latent subspace mixture model, for each class k define V_k ∈ [0, 1]^{1×m} as the contamination probability vector, and Bernoulli(V_Y) as the corresponding 0-1 contamination vector where 0 represents the entry being contaminated. Then the contaminated random variable X is

X | Y = diag(Bernoulli(V_Y)) W_Y U.

The two contamination models are very similar, except that one is governed by a fixed vector while the other is governed by a random process.

Theorem 3. Under the fixed contamination model, Algorithm 2 is consistent when W_V = [diag(V_1)W_1 | · · · | diag(V_K)W_K] satisfies the angle condition. Under the random contamination model, Algorithm 2 is consistent when W_V = [diag(I(V_1 = 1))W_1 | · · · | diag(I(V_K = 1))W_K] satisfies the angle condition, where I is the indicator function.

Note that the notation I(V_k = 1) represents a 0-1 vector that applies the indicator function element-wise to class k, with an entry of 1 if and only if the respective dimension of class k is un-contaminated. For example, suppose that for class 1 data, the last three dimensions are randomly contaminated but not the first three dimensions; for class 2 data, the first two and the fifth dimension are not contaminated; and for class 3 data, the first two and the last dimension are not contaminated. The theorem holds for this random contamination model when the common un-contaminated dimensions satisfy the angle condition, e.g., the first two dimensions in the above example. Since W_V can be regarded as a projected version of W where the projection is enforced by the contamination, Theorem 3 essentially states that if the angle condition still holds for this projected version of W, then SRC is still consistent. If all dimensions can be randomly contaminated with non-zero probability, W_V becomes the empty matrix and the theorem no longer holds. This is because different subspaces may now overlap with each other simply by chance, so SRC cannot be consistent as before.
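Both contamination mechanisms amount to multiplying an observation by a diagonal 0-1 mask, fixed in one case and Bernoulli-drawn in the other. A sketch (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def contaminate_fixed(x, V):
    """Fixed contamination: V is a 0-1 vector, 0 = contaminated dim."""
    return np.asarray(V) * x

def contaminate_random(x, V):
    """Random contamination: V[j] is the probability that dimension j
    stays uncontaminated; each entry is kept with probability V[j]."""
    keep = rng.random(len(x)) < np.asarray(V)
    return keep * x
```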
In practice, one generally has no prior knowledge about how exactly the data are contaminated. Theorem 3 thus suggests that SRC can still perform well when the number of contaminated features is relatively small, or sparse among all features. The simulation in Section 4.1.1 shows that under the two contamination models of Theorem 3, SRC performs almost as well as in the no-contamination case; whereas if all dimensions are allowed to be contaminated, SRC exhibits a much worse classification error.

Consistency under Stochastic Block Model

SRC is shown to be a robust vertex classifier in [14], exhibiting superior performance to other classifiers on both simulated and real networks. Here we prove SRC consistency for the stochastic block model [36], [37], [38], a popular network model commonly used for classification and clustering. Although the results are extendable to undirected, weighted, and other similar graph models, for ease of presentation we concentrate on the directed and unweighted SBM.

Definition (Directed and Unweighted Stochastic Block Model (SBM)). Given the class membership Y, a directed stochastic block model generates an n × n binary adjacency matrix X via a class connectivity matrix V ∈ [0, 1]^{K×K} and the Bernoulli distribution B(·):

X(i, j) = B(V(y_i, y_j)).

From the definition, the adjacency matrix produced by an SBM is a high-dimensional object characterized by a low-dimensional class connectivity matrix. It is thus similar to the latent subspace mixture model.

Theorem 4. Denote ρ ∈ [0, 1]^K as the 1 × K vector of prior probabilities, I_{Y=i} as the Bernoulli random variable of probability ρ_i, and define a set of new random variables Q_k and their un-centered correlations q_kl as

Q_k = Σ_{i=1}^K I_{Y=i} V(k, Y),  q_kl = E(Q_k Q_l) / √( E(Q_k²) E(Q_l²) ) ∈ [0, 1]

for all k, l ∈ [K].
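The directed, unweighted SBM above can be sampled with a single vectorized Bernoulli draw; a sketch (function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_sbm(Y, V):
    """Adjacency X with X[i, j] ~ Bernoulli(V[y_i, y_j]) for a
    directed, unweighted stochastic block model."""
    Y = np.asarray(Y) - 1          # classes 1..K -> 0-based indices
    P = V[np.ix_(Y, Y)]            # n x n matrix of edge probabilities
    return (rng.random(P.shape) < P).astype(int)
```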
Then Algorithm 2 is consistent for SBM vertex classification when the regression coefficients are constrained to be non-negative and

q_kl² · E(Q_k²) / E(Q_l²) < E(Q_k) / E(Q_l)  (5)

for all 1 ≤ k < l ≤ K.

Equation 5 essentially requires data of the same class to be more similar in angle than data of different classes, and is thus inherently the same as the angle condition for the latent subspace mixture model. Note that the theorem requires the regression coefficients to be non-negative. This is actually relaxed in the proof, which presents a more technical condition that maintains consistency while allowing some negative coefficients. Since network data are non-negative and the SBM is a binary model, in practice the regression coefficients are mostly non-negative under either ℓ1 minimization or screening. Alternatively, it is also easy to enforce the non-negativity constraint in the algorithm if needed [39], which yields similar numerical performance.

NUMERICAL EXPERIMENTS

In this section we compare the new SRC algorithm by screening to the original SRC algorithm in various simulations and experiments. The evaluation criterion is the leave-one-out error: within each data set, one observation is held out for testing and the remaining are used for training, the classification is carried out, and this is repeated until each observation in the given data has been held out once. The simulations show that SRC is consistent under both the latent subspace mixture model and the stochastic block model, and is robust against contamination. The phenomenon is the same for real image and network data. Overall, we observe that Algorithm 2 performs very similarly to Algorithm 1 in accuracy, and achieves this with significantly better running time. An additional algorithm comparison is provided in Section 4.3 covering various choices for the variable selection and classification steps.
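Condition (5) can be checked numerically for any given prior ρ and connectivity matrix V, using E(Q_k Q_l) = Σ_i ρ_i V(k, i) V(l, i) from the definitions above. A sketch (function name is ours; we check all ordered pairs k ≠ l, which is slightly stronger than the stated k < l):

```python
import numpy as np

def check_condition5(rho, V):
    """Verify Equation 5 for all pairs of distinct classes."""
    rho, V = np.asarray(rho, float), np.asarray(V, float)
    M = (V * rho) @ V.T            # M[k, l] = E(Q_k Q_l)
    EQ = V @ rho                   # EQ[k]   = E(Q_k)
    EQ2 = np.diag(M)               # EQ2[k]  = E(Q_k^2)
    K = len(rho)
    for k in range(K):
        for l in range(K):
            if k == l:
                continue
            q2 = M[k, l] ** 2 / (EQ2[k] * EQ2[l])   # q_kl squared
            if not q2 * EQ2[k] / EQ2[l] < EQ[k] / EQ[l]:
                return False
    return True
```

Intuitively, a V that is densely connected within-class and sparsely connected between-class passes the check, while a V with identical rows (classes indistinguishable) fails it.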
Note that we also compared two common benchmarks, the k-nearest-neighbor classifier and linear discriminant analysis, on a number of network simulations and real graphs in [14], which shows that SRC is significantly better than those common benchmarks; this comparison is thus not repeated here.

Image-Related Experiments

Latent Subspace Mixture Simulation

The model parameters are set as: m = 5, d = 2, K = 3 with ρ_1 = ρ_3 = 0.3, ρ_2 = 0.4. The W_k matrices are

W_1 = [3 1; 1 1; 1 1; 1 1; . . .],  W_2 = [1 1; 3 1; 1 1; 1 1; . . .],  W_3 = [1 1; 1 1; 3 1; 1 1; . . .]

(rows separated by semicolons), which satisfy the angle condition in Theorem 2. We generate sample data (X, Y) for n = 30, 60, . . . , 300, compute the leave-one-out error, then repeat for 100 Monte-Carlo replicates and plot the average errors in Figure 1. The left panel has no contamination, while the center panel has 20% of the features contaminated to 0 in each observation. In both panels, Algorithm 2 is almost as good as Algorithm 1.

In the right panel of Figure 1, we show how different contamination models may affect consistency. We consider three contamination models: the same random contamination as in the center panel, which sets each dimension to 0 randomly for each observation; a fixed contamination that sets 20% of the dimensions to 0 for the same features throughout all data; and a second random contamination that never contaminates the first three dimensions (thus keeping the angle condition) and randomly sets each of the remaining dimensions to 0 with 20% probability for each observation. The first contamination model is not guaranteed to be consistent, while the remaining two are guaranteed consistent by Theorem 3. We use the same model setting as before, except letting m = 10 and growing the sample size up to 1000 to better compare the convergence of the errors.
Indeed, the results support the consistency theorem: for the fixed contamination, or the random contamination satisfying the angle condition, SRC achieves almost perfect classification; for the purely random contamination, the SRC error remains very high and is far from optimal.

Face and Object Images

Next we experiment on two image data sets at which the original SRC excels. The Extended Yale B database has 2414 face images of 38 individuals under various poses and lighting conditions [40], [41], which are re-sized to 32 × 32. Thus m = 1024, n = 2414, and K = 38. The Columbia Object Image Library (Coil20) [42] consists of 400 object images of 20 objects under various angles, and each image is also of size 32 × 32. In this case m = 1024, n = 400, and K = 20. The leave-one-out errors are reported in the first two rows of Table 2, and the running times are reported in the first two columns of Table 1. The new SRC algorithm is similar to the original SRC in error rate, with a far better running time.

Next we verify the robustness of Algorithm 2 against contamination. Figure 3 shows some examples of the image data, pre and post contamination. As the contamination rate increases from 0 to 50% of the pixels, the error rate increases significantly. Algorithm 2 still enjoys the same classification performance as the original SRC in Algorithm 1, as shown in the top two panels of Figure 4.

Network-Related Experiments

Stochastic Block Model Simulation

Next we generate the adjacency matrix by the stochastic block model, with K = 3.

Article Hyperlinks and Neural Connectome

In this section we apply SRC to vertex classification of network graphs. The first graph is collected from Wikipedia article hyperlinks [43]. A total of 1382 English documents based on the 2-neighborhood of the English article "algebraic geometry" are collected, and the adjacency matrix is formed via the documents' hyperlinks.
This is a directed, unweighted, and sparse graph without self-loops, with graph density 1.98% (number of edges divided by the maximal number of possible edges). There are five classes based on article contents (119 articles in the category class, 372 articles about people, 270 articles about locations, 191 articles on dates, and 430 articles on actual math). Thus we have m = n = 1382 and K = 5. The second graph we consider is the electric neural connectome of Caenorhabditis elegans [44], [45], [46]. The hermaphrodite C. elegans somatic nervous system has over two hundred neurons, classified into 3 classes: motor neurons, interneurons, and sensory neurons. The adjacency matrix is also undirected, unweighted, and sparse, with density 1.32%. This is a relatively small data set, where m = n = 253 and K = 3. The leave-one-out errors are reported in the first two rows of Table 3, the running times are reported in the last two columns of Table 1, and the classification performance under contamination is shown in the bottom panels of Figure 4. The interpretation and performance curves are very similar to those for the image data: Algorithm 2 is much faster without losing performance.

Experiments on Various Algorithmic Choices

The SRC algorithm can be implemented with other algorithmic choices. For example, one may adopt a different variable selection technique for the first step of SRC: the OMP from [24] is an ideal greedy algorithm to use, and screening followed by ℓ1 minimization may achieve better variable selection, as argued in [27]. The results are summarized in Table 2 for the image experiments and in Table 3 for the network experiments. Overall, we do not observe much difference in accuracy regardless of these choices:

1. for the variable selection step, OMP is quite similar to the ℓ1 homotopy method, and screening followed by ℓ1 does not improve on screening either, which implies that it suffices to simply use the fastest screening method for SRC.

2.
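The leave-one-out protocol used throughout these experiments is the standard hold-one-out loop; a sketch where `classify` can be any of the SRC variants discussed (function names are ours):

```python
import numpy as np

def leave_one_out_error(X, Y, classify):
    """Hold out each observation once, train on the rest, and return
    the fraction of misclassified hold-outs."""
    n, errors = X.shape[1], 0
    for i in range(n):
        keep = np.arange(n) != i
        if classify(X[:, keep], np.asarray(Y)[keep], X[:, i]) != Y[i]:
            errors += 1
    return errors / n
```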
for the classification rule, the angle and magnitude rules also behave either exactly the same or very similarly, and either can be slightly better than the other depending on the data in use.

PROOFS

Theorem 1 Proof

To prove the theorem, we first state the following lemma:

Lemma 1. Given the sparse representation X̃ and the testing observation x, g¹_n(x) = y if and only if ‖X̃_{−y} β̂_{−y}‖_2 < ‖X̃_{−k} β̂_{−k}‖_2 for all classes k ≠ y. And g^scr_n(x) = y if and only if θ(x, X̃_y β̂_y) < θ(x, X̃_k β̂_k) for all classes k ≠ y.

Proof. This lemma can be proved as follows: the testing observation can be decomposed via X̃ as

x = X̃β̂ + ε = X̃_k β̂_k + X̃_{−k} β̂_{−k} + ε

for any class k, where ε is the regression residual, orthogonal to both X̃_k β̂_k and X̃_{−k} β̂_{−k}. For the magnitude rule, g¹_n(x) = y if and only if

‖x − X̃_y β̂_y‖ < ‖x − X̃_k β̂_k‖ for all k ≠ y
⇔ ‖X̃_{−y} β̂_{−y} + ε‖ < ‖X̃_{−k} β̂_{−k} + ε‖ for all k ≠ y
⇔ ‖X̃_{−y} β̂_{−y}‖ < ‖X̃_{−k} β̂_{−k}‖ for all k ≠ y,

where the last line follows from the orthogonality of the regression residual ε. For the angle rule, it is immediate that g^scr_n(x) = y if and only if θ(x, X̃_y β̂_y) < θ(x, X̃_k β̂_k) for all k ≠ y. This completes the proof of Lemma 1.

Now we prove Theorem 1:

Proof. As x = X̃_y β̂_y + X̃_{−y} β̂_{−y} + ε, it follows that

cos θ(x, X̃_y β̂_y) = x^T X̃_y β̂_y / ( ‖x‖_2 ‖X̃_y β̂_y‖_2 )
= ( ‖X̃_y β̂_y‖_2² + (X̃_{−y} β̂_{−y})^T X̃_y β̂_y ) / ‖X̃_y β̂_y‖_2
= ‖X̃_y β̂_y‖_2 + (X̃_{−y} β̂_{−y})^T X̃_y β̂_y / ‖X̃_y β̂_y‖_2
= ‖X̃_y β̂_y‖_2 + ‖X̃_{−y} β̂_{−y}‖_2 · cos θ(X̃_y β̂_y, X̃_{−y} β̂_{−y}).  (6)

The first line expresses the angle via normalized inner products (recall ‖x‖_2 = 1); the second line decomposes x, and ε is eliminated because it is orthogonal to both X̃_y β̂_y and X̃_{−y} β̂_{−y}; the third line divides the square of ‖X̃_y β̂_y‖_2 by itself; and the last line re-expresses the remainder term via the angle. Using Equation 6 and Lemma 1, we show that the magnitude rule is the same as the angle rule under either of the following conditions:

• When K = 2 and X̃ is of full rank: first note that the angle between vectors satisfies −1 ≤ cos θ(X̃_y β̂_y, X̃_{−y} β̂_{−y}) ≤ 1.
Moreover, when the representation is of full rank, X̃_y β̂_y and X̃_{−y} β̂_{−y} cannot be in the same direction, so the ≤ 1 inequality becomes strict. Next, as there are only two classes, X̃_{−y} β̂_{−y} is the representation of the other class, and cos θ(X̃_y β̂_y, X̃_{−y} β̂_{−y}) is the same for both classes. Assume Y = 1 and Y = 2 are the two classes. Then Equation 6 simplifies to

cos θ(x, X̃_1 β̂_1) = a + b · c,
cos θ(x, X̃_2 β̂_2) = b + a · c,

where a = ‖X̃_1 β̂_1‖, b = ‖X̃_2 β̂_2‖, and c = cos θ(X̃_y β̂_y, X̃_{−y} β̂_{−y}) < 1. Without loss of generality, assume g¹_n(x) = 1, which is equivalent to a > b by Lemma 1. This inequality holds if and only if cos θ(x, X̃_1 β̂_1) > cos θ(x, X̃_2 β̂_2), or equivalently θ(x, X̃_1 β̂_1) < θ(x, X̃_2 β̂_2). Thus g^scr_n(x) = 1 by Lemma 1 applied to the angle rule. Therefore, the magnitude rule and the angle rule are the same.

• When data of one class are always orthogonal to data of any other class: under this condition, cos θ(X̃_k β̂_k, X̃_{−k} β̂_{−k}) = 0 for any k, so Equation 6 simplifies to

cos θ(x, X̃_k β̂_k) = ‖X̃_k β̂_k‖ = ‖x − X̃_{−k} β̂_{−k} − ε‖ = ‖x − ε − X̃_{−k} β̂_{−k}‖,

where we used the fact that X̃_k β̂_k, X̃_{−k} β̂_{−k} and ε are all pairwise orthogonal to each other. Since x and ε are both fixed in the classification step, the class y with the smallest ‖X̃_{−y} β̂_{−y}‖ has the largest cos θ(x, X̃_y β̂_y), and thus the smallest angle. Therefore, g¹_n(x) = g^scr_n(x) by Lemma 1.

Theorem 2 Proof

Recall that (x, y) denotes the testing observation pair generated by (X, Y). To prove the theorem, we state two more lemmas:

Lemma 2. Let (X, Y) be a testing pair generated under a latent subspace mixture model satisfying the angle condition. Denote X_{−Y} = [X_1, X_2, . . . , X_s] as a collection of random variables X_i with Y_i ≠ Y, and C = [c_1, c_2, . . . , c_s] as a nonzero coefficient vector of size s. Then it holds that

min{θ(X, X_{−Y} · C)} > 0.  (7)

Proof.
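The decomposition in Equation 6 is an exact algebraic identity for unit-norm x, since the least squares residual is orthogonal to the fitted column space. It can be checked numerically (an illustrative sketch, not part of the paper; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 5
X = rng.standard_normal((m, n))          # stand-in sparse representation
labels = np.array([1, 1, 1, 2, 2])
x = rng.standard_normal(m)
x /= np.linalg.norm(x)                   # unit-norm test point
beta = np.linalg.pinv(X) @ x             # least squares coefficients
v = X[:, labels == 1] @ beta[labels == 1]  # class-1 fitted part
w = X[:, labels == 2] @ beta[labels == 2]  # remaining fitted part
# Left side: cos theta(x, v); right side: ||v|| + ||w|| cos theta(v, w),
# where ||w|| cos theta(v, w) simplifies to (v . w) / ||v||.
lhs = (x @ v) / (np.linalg.norm(x) * np.linalg.norm(v))
rhs = np.linalg.norm(v) + (v @ w) / np.linalg.norm(v)
```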
This lemma essentially states that the testing data X cannot be perfectly explained by any linear combination of the training data from the incorrect classes, i.e., X ≠ X_{−Y} · C for any C. Note that C is not necessarily the regression coefficient vector, but an arbitrary coefficient vector. Under the latent subspace mixture model, X = W_Y U for the testing data and X_i = W_{Y_i} U_i for the training data, where each Y_i ≠ Y. If there existed a vector C such that X = X_{−Y} C, then

X = W_Y · U = X_{−Y} C = Σ_{i=1}^s W_{Y_i} · U_i · c_i ⇒ span(W_Y) ∩ span(W/W_Y) ≠ {0},

which contradicts the angle condition.

Lemma 3. Let (X, Y) be a testing pair generated under a latent subspace mixture model satisfying the angle condition. Denote {X_1, X_2, . . . , X_n} as a group of random variables that are independently and identically distributed as X, satisfying Y_i = Y for all i. As n → ∞ it holds that θ(X, X_(1)) → 0, where X_(1) denotes the order statistic with the smallest angle difference to X.

Proof. This lemma is guaranteed by the property of order statistics: for any ε > 0,

Prob(θ(X, X_i) < ε) > 0 ⇒ Prob(θ(X, X_(1)) < ε) → 1 as n → ∞.

Therefore, as long as there are sufficiently many training data of the correct class, with probability converging to 1 there exists one training observation of the same class that is sufficiently close to the testing observation.

Now we prove Theorem 2:

Proof. To prove consistency, it suffices to prove that g^scr_n(X) = Y asymptotically. By Lemma 1 applied to the angle rule, it is equivalent to prove that for sufficiently large n, for every k ≠ y it holds that θ(X, X̃_Y β̂_Y) < θ(X, X̃_k β̂_k). By Lemma 2, there exists a constant ε > 0 such that θ(X, X̃_k β̂_k) > ε regardless of n. By Lemma 3, as the sample size increases, with probability converging to 1 it holds that θ(X, X_(1)) < ε, where Y_(1) = Y. Moreover, X_(1) is guaranteed to enter the sparse representation, as by definition it enters the sparse representation X̃ first during screening.
Since Y_(1) = Y, X_(1) is part of X̃_Y, and it follows that with probability converging to 1,

θ(X, X̃_Y β̂_Y) ≤ θ(X, X_(1)) < ε < θ(X, X̃_k β̂_k)

for all k ≠ Y. Thus Algorithm 2 is consistent under the latent subspace mixture model and the angle condition.

Theorem 3 Proof

Proof. In both contamination models, it suffices to prove that W_V satisfying the angle condition implies W satisfying the angle condition; applying Theorem 2 then yields consistency in both models. In the fixed contamination case, the W matrix actually reduces to W_V, so the conclusion follows directly. For the random contamination case, observe that each W_i can be decomposed into two disjoint components:

W_i = diag(I(V_i = 1)) W_i + diag(I(V_i < 1)) W_i,
span{diag(I(V_i = 1)) W_i} ∩ span{diag(I(V_i < 1)) W_i} = {0}.

Therefore, as long as the uncontaminated part W_V = [diag(I(V_1 = 1))W_1 | · · · | diag(I(V_K = 1))W_K] satisfies the angle condition, the random matrix W always satisfies the angle condition. For example, say the first three dimensions of W are not contaminated while the remaining dimensions can be contaminated with nonzero probability. Then the first three dimensions satisfying Equation 4 guarantees that W also satisfies Equation 4, regardless of how the remaining dimensions of W are contaminated.

Theorem 4 Proof

Proof. By Lemma 1 applied to the angle rule, it suffices to prove that for sufficiently large sample size, it holds that θ(x, X̃_y β̂_y) < θ(x, X̃_k β̂_k) for every k ≠ y. Without loss of generality, assume x is the testing adjacency vector of size 1 × n from class 1, x′ is a training adjacency vector of size 1 × n also from class 1, and {x_1, x_2, . . . , x_s} is any group of adjacency vectors not from class 1.
Similar to the proof of Theorem 2, it further suffices to prove that, at sufficiently large n,

cos θ(x, x′) > cos θ(x, Σ_{i=1}^{s} c_i x_i)

for any non-negative and non-zero vector C = [c_1, …, c_s]. First, for the within-class angle:

cos θ(x, x′) = Σ_{j=1}^{n} B(V(1,Y_j) V(1,Y_j)) / [ √(Σ_{j=1}^{n} B(V(1,Y_j))) · √(Σ_{j=1}^{n} B(V(1,Y_j))) ]
            = [Σ_{j} B(V(1,Y_j) V(1,Y_j)) / n] / [ √(Σ_{j} B(V(1,Y_j)) / n) · √(Σ_{j} B(V(1,Y_j)) / n) ]
            → Σ_{i=1}^{K} ρ_i V(1,i) V(1,i) / Σ_{i=1}^{K} ρ_i V(1,i)  =  E(Q_1²) / E(Q_1)   (n → ∞).

For the between-class angle:

cos θ(x, Σ_{i=1}^{s} c_i x_i) = Σ_{j=1}^{n} Σ_{i=1}^{s} c_i B(V(1,Y_j) V(y_i,Y_j)) / [ √(Σ_{j} B(V(1,Y_j))) · √(Σ_{j} (Σ_{i} c_i B(V(y_i,Y_j)))²) ]
   → Σ_{i} c_i E(Q_1 Q_{y_i}) / √( E(Q_1) [ Σ_{i} c_i² E(Q_{y_i}) + 2 Σ_{i<j} c_i c_j E(Q_{y_i} Q_{y_j}) ] )
   ≤ Σ_{i} c_i E(Q_1 Q_{y_i}) / √( Σ_{i} c_i² E(Q_1) E(Q_{y_i}) ).

The last inequality holds because Σ_{i<j} c_i c_j E(Q_{y_i} Q_{y_j}) > 0 for any non-negative coefficient vector C, so all these cross terms can be removed from the between-class angle at the cost of a ≤ sign. Since the network data are always non-negative and the SBM is a binary model, the bulk of the regression vector is expected to be positive. In practice the above inequality almost always holds for non-negative data, even if a few regression coefficients turn out to be negative. Therefore, consistency holds when

cos θ(x, x′) > cos θ(x, Σ_{i} c_i x_i) a.s.
⇐ E(Q_1²)/E(Q_1) ≥ Σ_{i} c_i E(Q_1 Q_{y_i}) / √(Σ_{i} c_i² E(Q_1) E(Q_{y_i}))
⇔ E(Q_1²)/E(Q_1) ≥ Σ_{i} c_i q_{1y_i} √(E(Q_1²) E(Q_{y_i}²)) / √(Σ_{i} c_i² E(Q_1) E(Q_{y_i}))
⇔ √(Σ_{i} c_i² E(Q_{y_i})/E(Q_1)) > Σ_{i} c_i q_{1y_i} √(E(Q_{y_i}²)/E(Q_1²))
⇐ q_{1k}² · E(Q_k²)/E(Q_1²) < E(Q_k)/E(Q_1) for all k ≠ 1,

which is exactly Equation 5 when generalized to an arbitrary class l instead of just class 1. Note that this result is also used in Lemma 1 and Theorem 1 in [14]. The condition can be readily verified on any given SBM model, and usually holds for models that are densely connected within-class and sparsely connected between-class.

Fig. 1: SRC errors under the Latent Subspace Mixture Model.
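The "readily verified" claim can be illustrated in a few lines. The sketch below (with an illustrative 3-class block probability matrix V and class priors ρ, not taken from the paper's experiments) computes E(Q_l), E(Q_l²) and the correlation coefficients q_{lk}, then checks the per-class condition q_{lk}² · E(Q_k²)/E(Q_l²) < E(Q_k)/E(Q_l) for a model that is densely connected within-class and sparsely connected between-class.

```python
# Numerical check of the SBM consistency condition.
# V[l][i] = connection probability between classes l and i; rho = class priors.
# Q_l = V(l, Y) with Y drawn from rho.  Illustrative values, not from the paper.

V = [[0.5, 0.1, 0.1],
     [0.1, 0.5, 0.1],
     [0.1, 0.1, 0.5]]
rho = [0.3, 0.4, 0.3]
K = len(rho)

def E_Q(l):   # E(Q_l)
    return sum(rho[i] * V[l][i] for i in range(K))

def E_Q2(l):  # E(Q_l^2)
    return sum(rho[i] * V[l][i] ** 2 for i in range(K))

def q(l, k):  # q_{lk} = E(Q_l Q_k) / sqrt(E(Q_l^2) E(Q_k^2))
    num = sum(rho[i] * V[l][i] * V[k][i] for i in range(K))
    return num / (E_Q2(l) * E_Q2(k)) ** 0.5

def condition_holds(l):
    # q_{lk}^2 * E(Q_k^2)/E(Q_l^2) < E(Q_k)/E(Q_l) for all k != l
    return all(q(l, k) ** 2 * E_Q2(k) / E_Q2(l) < E_Q(k) / E_Q(l)
               for k in range(K) if k != l)

print([condition_holds(l) for l in range(K)])  # → [True, True, True]
```

Decreasing the off-diagonal entries of V (sparser between-class connectivity) only strengthens the inequality, in line with the remark above.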
The left and center panels compare Algorithm 1 and Algorithm 2 on no-contamination and 20% random contamination data. The right panel compares SRC performance under three different contamination schemes, showing that SRC can perform very well when the contamination satisfies Theorem 3. Note that the right panel only shows Algorithm 2, as the behavior is the same for Algorithm 1.

Setting ρ_1 = ρ_3 = 0.3 and ρ_2 = 0.4, we generate sample data (X, Y) for n = 30, 60, …, 300, compute the leave-one-out error, then repeat for 100 Monte-Carlo replicates and plot the average errors in Figure 2. The class connectivity matrix V is set to satisfy the condition in Theorem 4. The new SRC algorithm is similar to the original SRC algorithm in error rate, and both of them have very low errors, supporting the consistency result of Theorem 4.

Fig. 3: Images with Contamination.

Fig. 4: SRC for Contaminated Real Data.

As for the classifier rule, one naturally wonders what the practical difference between the angle and magnitude rules is. To that end, we run the same simulations and experiments as above without contamination, under several different algorithmic choices. The results are summarized in Tables 2 and 3.

TABLE 1: Running Time Comparison on Real Data (in seconds)
Data             | Yale Images | Coil Images | Wikipedia Graph | C-elegans Network
SRC by Screening | 72.7        | 25.1        | 32.3            | 0.3
SRC by ℓ1        | 1101.5      | 345.9       | 573.7           | 9.1

(Plot: classification error versus contamination rate on the Yale Face Images, comparing SRC by Screening and SRC by ℓ1.)

TABLE 2: Leave-one-out error comparison for image-related simulations and real data. The first two rows correspond to the original and new SRC algorithms, and the remaining rows consider other algorithmic choices. For each data column, the best error rate is highlighted.
Algorithm / Data                   | Latent Subspace | Yale Faces | Coil Objects
ℓ1 with Magnitude (Algorithm 1)    | 1.33%           | 0.62%      | 0.56%
Screening with Angle (Algorithm 2) | 1.00%           | 1.66%      | 0.01%
ℓ1 with Angle                      | 1.33%           | 2.11%      | 0.42%
Screening with Magnitude           | 1.00%           | 0.79%      | 0.01%
OMP with Angle                     | 2.00%           | 1.62%      | 1.25%
OMP with Magnitude                 | 2.00%           | 0.75%      | 1.18%
Screening ∘ ℓ1 with Angle          | 1.33%           | 3.07%      | 0.21%
Screening ∘ ℓ1 with Magnitude      | 1.33%           | 1.08%      | 0.21%

TABLE 3: Leave-one-out error comparison for network-related simulations and real data.
Algorithm / Data                   | SBM Simulation | Wikipedia | C-elegans
ℓ1 with Magnitude (Algorithm 1)    | 0.67%          | 29.45%    | 48.22%
Screening with Angle (Algorithm 2) | 0.33%          | 32.27%    | 42.69%
ℓ1 with Angle                      | 0.67%          | 29.38%    | 41.50%
Screening with Magnitude           | 0.33%          | 32.27%    | 45.06%
OMP with Angle                     | 2.33%          | 30.97%    | 40.32%
OMP with Magnitude                 | 2.33%          | 32.13%    | 46.25%
Screening ∘ ℓ1 with Angle          | 0.67%          | 30.68%    | 39.53%
Screening ∘ ℓ1 with Magnitude      | 0.67%          | 30.82%    | 44.66%

References

M. Osborne, B. Presnell, and B. Turlach, "A new approach to variable selection in least squares problems," IMA Journal of Numerical Analysis, vol. 20, pp. 389-404, 2000.
M. Osborne, B. Presnell, and B. Turlach, "On the lasso and its dual," Journal of Computational and Graphical Statistics, vol. 9, pp. 319-337, 2000.
D. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Transactions on Information Theory, vol. 47, pp. 2845-2862, 2001.
B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, no. 2, pp. 407-499, 2004.
E. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
D. Donoho, "For most large underdetermined systems of linear equations the minimal l1-norm near solution approximates the sparsest solution," Communications on Pure and Applied Mathematics, vol. 59, no. 10, pp. 907-934, 2006.
E. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
E. Candes, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207-1233, 2006.
J. Wright, A. Y. Yang, A. Ganesh, S. Shankar, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.
J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan, "Sparse representation for computer vision and pattern recognition," Proceedings of the IEEE, vol. 98, no. 6, pp. 1031-1044, 2010.
J. Yin, Z. Liu, Z. Jin, and W. Yang, "Kernel sparse representation based classification," Neurocomputing, vol. 77, no. 1, pp. 120-128, 2012.
A. Yang, Z. Zhou, A. Ganesh, S. Sastry, and Y. Ma, "Fast l1-minimization algorithms for robust face recognition," IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3234-3246, 2013.
E. Elhamifar and R. Vidal, "Sparse subspace clustering: Algorithm, theory, and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2765-2781, 2013.
L. Chen, C. Shen, J. T. Vogelstein, and C. E. Priebe, "Robust vertex classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 578-590, 2016.
Y. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302-5316, 2009.
Y. Eldar, P. Kuppinger, and H. Bolcskei, "Compressed sensing of block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042-3054, 2010.
E. Elhamifar and R. Vidal, "Block-sparse recovery via convex optimization," IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4094-4107, 2012.
D. Donoho and M. Elad, "Optimal sparse representation in general (nonorthogonal) dictionaries via l1 minimization," Proceedings of the National Academy of Sciences, pp. 2197-2202, 2003.
R. Rigamonti, M. Brown, and V. Lepetit, "Are sparse representations really relevant for image classification?," in Computer Vision and Pattern Recognition (CVPR), 2011.
L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?," in International Conference on Computer Vision (ICCV), 2011.
Q. Shi, A. Eriksson, A. Hengel, and C. Shen, "Is face recognition really a compressive sensing problem?," in Computer Vision and Pattern Recognition (CVPR), 2011.
Y. Chi and F. Porikli, "Classification and boosting with multiple collaborative representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1519-1531, 2013.
J. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, 2004.
J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
T. Zhang, "On the consistency of feature selection using greedy least squares regression," Journal of Machine Learning Research, vol. 10, pp. 555-568, 2009.
T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680-4688, 2011.
J. Fan and J. Lv, "Sure independence screening for ultrahigh dimensional feature space," Journal of the Royal Statistical Society: Series B, vol. 70, no. 5, pp. 849-911, 2008.
J. Fan, R. Samworth, and Y. Wu, "Ultrahigh dimensional feature selection: beyond the linear model," Journal of Machine Learning Research, vol. 10, pp. 2013-2038, 2009.
L. Wasserman and K. Roeder, "High dimensional variable selection," Annals of Statistics, vol. 37, no. 5A, pp. 2178-2201, 2009.
J. Fan, Y. Feng, and R. Song, "Nonparametric independence screening in sparse ultra-high-dimensional additive models," Journal of the American Statistical Association, vol. 106, no. 494, pp. 544-557, 2011.
C. Genovese, J. Lin, L. Wasserman, and Z. Yao, "A comparison of the lasso and marginal regression," Journal of Machine Learning Research, vol. 13, pp. 2107-2143, 2012.
M. Kolar and H. Liu, "Marginal regression for multitask learning," Journal of Machine Learning Research W&CP, vol. 22, pp. 647-655, 2012.
R. Vershynin, High Dimensional Probability: An Introduction with Applications in Data Science, 2017.
L. Devroye, L. Gyorfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, 1996.
M. E. Tipping and C. M. Bishop, "Probabilistic principal component analysis," Journal of the Royal Statistical Society, Series B, vol. 61, pp. 611-622, 1999.
P. Holland, K. Laskey, and S. Leinhardt, "Stochastic blockmodels: First steps," Social Networks, vol. 5, no. 2, pp. 109-137, 1983.
D. Sussman, M. Tang, D. Fishkind, and C. Priebe, "A consistent adjacency spectral embedding for stochastic blockmodel graphs," Journal of the American Statistical Association, vol. 107, no. 499, pp. 1119-1128, 2012.
J. Lei and A. Rinaldo, "Consistency of spectral clustering in stochastic block models," The Annals of Statistics, vol. 43, no. 1, pp. 215-237, 2015.
N. Meinshausen, "Sign-constrained least squares estimation for high-dimensional regression," Electronic Journal of Statistics, vol. 7, pp. 1607-1631, 2013.
A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, 2001.
K. Lee, J. Ho, and D. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684-698, 2005.
S. Nene, S. Nayar, and H. Murase, "Columbia object image library (COIL-20)," Technical Report CUCS-005-96, 1996.
C. E. Priebe, D. J. Marchette, Z. Ma, and S. Adali, "Manifold matching: Joint optimization of fidelity and commensurability," Brazilian Journal of Probability and Statistics, vol. 27, no. 3, pp. 377-400, 2013.
D. H. Hall and R. Russell, "The posterior nervous system of the nematode Caenorhabditis elegans: serial reconstruction of identified neurons and complete pattern of synaptic interactions," The Journal of Neuroscience, vol. 11, no. 1, pp. 1-22, 1991.
L. R. Varshney, B. L. Chen, E. Paniagua, D. H. Hall, and D. B. Chklovskii, "Structural properties of the Caenorhabditis elegans neuronal network," PLoS Computational Biology, vol. 7, no. 2, p. e1001066, 2011.
L. Chen, J. T. Vogelstein, V. Lyzinski, and C. E. Priebe, "A joint graph inference case study: the C. elegans chemical and electrical connectomes," Worm, vol. 5, no. 2, p. 1, 2016.
[ "Restrictions on the lifetime of sterile neutrinos from primordial nucleosynthesis" ]
[ "Oleg Ruchayskiy ", "Artem Ivashko " ]
We analyze the influence of sterile neutrinos with masses in the MeV range on the primordial abundances of Helium-4 and Deuterium. We solve explicitly the Boltzmann equations for all particle species, taking into account neutrino flavour oscillations, and demonstrate that the abundances are sensitive mostly to the sterile neutrino lifetime and only weakly to the way the active-sterile mixing is distributed between flavours. The decay of these particles also perturbs the spectra of (decoupled) neutrinos and heats photons, changing the ratio of neutrino to photon energy density, which can be interpreted as extra neutrino species at the recombination epoch. We derive upper bounds on the lifetime of sterile neutrinos based on both astrophysical and cosmological measurements of Helium-4 and Deuterium. We also demonstrate that the recent results of Izotov & Thuan [1], who find a Helium-4 abundance 2σ higher than predicted by standard primordial nucleosynthesis, are consistent with the presence in the plasma of sterile neutrinos with a lifetime of 0.01-2 seconds.
10.1088/1475-7516/2012/10/014
[ "https://arxiv.org/pdf/1202.2841v2.pdf" ]
118,455,986
1202.2841
805d73d5c815a96f80aa0da2d6c551b339acda83
Restrictions on the lifetime of sterile neutrinos from primordial nucleosynthesis

19 Sep 2012

Oleg Ruchayskiy, Artem Ivashko

Introduction

The characteristic feature of the physical processes in the early Universe is a peculiar interplay of gravity and microscopic physics. Gravity introduces the Hubble time parameter τ_H, which indicates the timescale on which the global properties of the Universe (geometry, temperature, etc.) change significantly. The Hubble time is determined solely by the energy density of the matter filling the space. The microscopic constituents of matter, particles, are involved in interaction processes that are believed to be described fundamentally by three known forces: electromagnetic, weak and strong.
As long as the timescale τ of any given microscopic physical process is much smaller than τ_H, the expansion can be neglected on that timescale. If the time τ is enough to establish thermal equilibrium between the particles, then the equilibrium is maintained in the course of the Universe's expansion as long as τ ≪ τ_H holds. When this inequality ceases to hold, the state of equilibrium is lost. The main reason is that interparticle distances become larger while the corresponding densities become lower, hence interactions are less likely to occur. In this paper we consider the formation of light nuclei in the primordial environment: Big Bang nucleosynthesis (BBN). All three fundamental interactions are important for this phenomenon, each playing a different role. Charged particles together with photons are subject to electromagnetic forces, and the equilibration timescale of the corresponding processes is tiny with respect to the expansion time. The particles are therefore kept in thermal equilibrium at a common temperature T. Due to the expansion, the temperature decreases with time. The equilibration time of the weak interactions changes abruptly, so that at T ≳ a few MeV weakly interacting neutral particles (neutrinos and neutrons) stay in equilibrium, while at lower temperatures they fall out of it (freeze out). At high temperatures processes like n + ν_e → p + e⁻ maintain chemical equilibrium, that is, the neutron-to-proton conversion proceeds with the same finite intensity as the inverse processes. Chemical and thermal equilibria are interconnected, so they are lost simultaneously, when the neutron-to-proton ratio freezes out. Finally, the strong interactions are responsible for the production of nuclei comprising more than one nucleon. The most important fusion reaction for the formation of the first nucleus, the deuteron, n + p → D, releases an energy of at least the binding energy of the deuteron, E_D ≈ 2.2 MeV, and proceeds effectively in the dense primordial medium.
At temperatures of the order of E_D, however, energetic photons collide with deuterons and destroy them. Since the baryon density is much lower than the photon density [2], there are many photons with energies much higher than E_D that collide with deuterons, and hence the production of a significant deuteron density is postponed until the temperature at which photodissociation ceases to be effective, T ≃ 80 keV, much lower than the binding energy. The net abundance of deuterium is, however, non-zero at all times until this moment and is given by the equilibrium Boltzmann distribution. Deuterium created at lower temperatures serves as fuel for the formation of 3He, 4He and other nuclides. Although the times of element production and the moment of departure from the chemical p-n equilibrium are well separated, the former process is very sensitive to the latter. Firstly, the details of the freeze-out set the ratio of the neutron to proton densities; secondly, the time elapsed between the two moments determines the fraction of neutrons that have decayed since then (recalling that the neutron is an unstable particle). The seminal ideas of the primordial synthesis of light elements were first outlined in the so-called αβγ paper [3], published in the late 1940s. Since then the theory of Big Bang nucleosynthesis has evolved and its main predictions have been confirmed, making it a well-developed model from both theoretical and observational points of view. A number of reviews of the standard BBN scenario and its implications for particle physics models exist (see e.g. [4,5,6]). The predictions of primordial nucleosynthesis can change once one replaces the Standard Model of particle physics underlying the processes considered so far by one of its "beyond the Standard Model" (BSM) extensions. Therefore BBN plays the role of a benchmark for testing physical models.
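The delay of deuteron production mentioned above can be estimated in one line: photodissociation stops being effective roughly when the number of photons per baryon with energy above E_D, of order η_B⁻¹ exp(−E_D/T), drops below unity. The sketch below solves this condition for T; it is an order-of-magnitude estimate that ignores phase-space prefactors, which is why it lands near, but not exactly at, the quoted 80 keV.

```python
import math

# Order-of-magnitude estimate of the "deuterium bottleneck" temperature:
# photodissociation ceases when exp(-E_D/T) / eta_B ~ 1,
# i.e. T ~ E_D / ln(1/eta_B).  Prefactors are ignored in this sketch.
E_D = 2.2          # deuteron binding energy, MeV
eta_B = 6.2e-10    # baryon-to-photon ratio

T_D = E_D / math.log(1.0 / eta_B)   # MeV
print(f"T_D ≈ {T_D * 1000:.0f} keV")  # ~100 keV, close to the quoted 80 keV
```

The smallness of η_B is what pushes T_D far below E_D: even a tiny Boltzmann tail of the enormous photon bath suffices to dissociate every deuteron.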
In this paper we investigate the influence of sterile neutrinos on primordial nucleosynthesis. Sterile neutrinos are hypothetical massive super-weakly-interacting particles (see e.g. [7,8] for reviews), as opposed to their weakly interacting counterparts, the ordinary Standard Model neutrinos ν_e, ν_μ, ν_τ, which are called "active" in this context. Sterile neutrinos carry no charges with respect to the Standard Model gauge groups (hence the name), but via their quadratic mixing with active neutrinos they effectively participate in weak reactions, and at energies much below the mass of the W boson their interaction can be described by an analog of the Fermi theory with the Fermi coupling constant G_F replaced by G_F × ϑ_α, where the active-sterile mixing angle ϑ_α ≪ 1 (see Fig. 1). Here α is a flavour index, α = e, μ, τ, indicating that the sterile neutrino can mix differently with neutrinos of different flavours. Massive sterile neutrinos can decay, but due to their feeble interaction strength their lifetime can be of order seconds (even for masses as large as an MeV). The decay products of the sterile neutrinos are injected into the primordial environment, increasing its temperature and shifting the chemical equilibrium. In this work we concentrate on sterile neutrinos with masses in the MeV range, motivated by the recent observations [9,10,11,12,13] that particles with such masses can simultaneously be responsible for neutrino oscillations and the generation of the baryon and lepton asymmetry of the Universe, and can influence the subsequent generation of dark matter [14]. The corresponding model has been dubbed the νMSM (Neutrino Minimal Standard Model; see [7] for a review). Several works have previously considered the influence of MeV-scale particles on primordial nucleosynthesis. Compared to Refs.
[15,16], this paper accounts for neutrino flavour oscillations in the plasma and employs a more accurate strategy for solving the Boltzmann equations, which results in a revision of the bounds of [15,16] (see Section 4 for a detailed comparison). The authors of [17] developed a new code that can treat active and sterile neutrinos with arbitrary distribution functions, non-zero lepton asymmetry, etc. However, as of the time of writing, this code has not been made publicly available, and Ref. [17] did not derive bounds on sterile neutrino parameters. The work [18] concentrated on the bounds that cosmic microwave background measurements could provide on decaying sterile neutrinos with masses 100-500 MeV, leaving the BBN analysis for future work. A number of works [19,20,21,22,23] analyzed the influence of decaying MeV particles on BBN. We compare with them in the corresponding parts of the paper. The paper is organized as follows. We explain the modifications of the standard BBN computations due to the presence of sterile neutrinos and describe our numerical procedure in Sec. 2. The results are summarized in Sec. 3. We conclude in Sec. 4. Appendixes A-C provide the details of our numerical procedure.

Big Bang Nucleosynthesis with sterile neutrinos

The section below summarizes our setup for the BBN analysis with decaying particles. The notations and conventions closely follow the series of works [16,24,25]. We will be interested only in the tree-level Fermi interactions of sterile neutrinos with the primordial plasma. In this case the interaction is fully determined by the squares of their mixing angles. We consider one Majorana particle with 4 degrees of freedom and three active-sterile mixing angles ϑ²_α. The matrix elements of the interactions of sterile neutrinos with the Standard Model particles are summarized in Appendix B (Tables 3-4 on page 27).
We consider in this work only sterile neutrinos with masses in the range 1 MeV < M_s < M_π ≈ 140 MeV. For heavier particles, two-particle decay channels appear (e.g. ν_S → π⁰ν_α, π±e∓) and our procedure of solving the Boltzmann equations (described below) would have to be significantly modified. The lower bound was chosen to be around 1 MeV by the following considerations. The sterile neutrino lifetime τ_s is [26]

τ_s⁻¹ = Γ_s = (G_F² M_s⁵ / 96π³) [ (1 + g̃_L² + g_R²)(ϑ_μ² + ϑ_τ²) + (1 + g_L² + g_R²) ϑ_e² ]
      ≈ 6.9 s⁻¹ × (M_s / 10 MeV)⁵ × [ 1.6 ϑ_e² + 1.13 (ϑ_μ² + ϑ_τ²) ],    (1)

where θ_W is the Weinberg angle and

g_R = sin²θ_W ≈ 0.23,  g_L = 1/2 + sin²θ_W,  g̃_L = −1/2 + sin²θ_W.

From this expression one sees that sterile neutrinos lighter than about 2 MeV have a lifetime of at least several hundred seconds even for very large mixing angles ϑ ∼ 1. Therefore, such particles survive till the onset of BBN and freeze out at temperatures T ∼ 2-3 MeV. They would be relativistic at that time, i.e. their average momentum would be of the order of the temperature, p ∼ T, and their contribution to the number of relativistic neutrino species would be significant, ΔN_eff ≃ 2. In the course of the Universe's expansion, p scales as the temperature due to the gravitational redshift, and at some point becomes smaller than the mass of the sterile neutrino. At that moment the energy density of sterile neutrinos starts to change with expansion as a⁻³ rather than a⁻⁴ (where a is the scale factor), so that the contribution of these massive particles to the energy density would quickly become dominant, making N_eff ≫ 1 (or could even overclose the Universe) before the production of light elements starts. This contradicts the current bound N_eff = 3.74 (+0.8/−0.7) ± 0.06 (syst) at 2σ [1]. Additionally, in the νMSM successful baryogenesis is possible only for sterile neutrino masses above a few MeV [11,13].
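The normalization of Eq. (1) can be checked numerically. The sketch below evaluates the width from the analytic prefactor G_F² M_s⁵/(96π³), builds the coupling combinations from sin²θ_W = 0.23, and reproduces both the ≈ 6.9 s⁻¹ prefactor at M_s = 10 MeV and the coefficients 1.6 and 1.13; the mixing-angle values passed in are illustrative, not fits from the paper.

```python
import math

# Numerical evaluation of the sterile-neutrino decay width, Eq. (1).
G_F = 1.166e-5          # Fermi constant, GeV^-2
hbar = 6.582e-25        # GeV * s, converts a width in GeV to s^-1
s2w = 0.23              # sin^2(theta_W)
gR, gL, gLt = s2w, 0.5 + s2w, -0.5 + s2w   # g_R, g_L, g~_L

def gamma_s(M_MeV, th_e2, th_mu2, th_tau2):
    """Decay width in s^-1, for mass in MeV and squared mixing angles."""
    M = M_MeV * 1e-3    # GeV
    pref = G_F**2 * M**5 / (96 * math.pi**3)            # GeV
    brack = ((1 + gLt**2 + gR**2) * (th_mu2 + th_tau2)
             + (1 + gL**2 + gR**2) * th_e2)
    return pref * brack / hbar

# Normalization check at M_s = 10 MeV, pure electron mixing with theta_e^2 = 1:
base = gamma_s(10.0, 1.0, 0.0, 0.0) / (1 + gL**2 + gR**2)
print(base)                          # ~6.9 s^-1, matching Eq. (1)
print(1 + gL**2 + gR**2, 1 + gLt**2 + gR**2)   # ~1.6 and ~1.13
```

The M_s⁵ scaling is what makes the lifetime drop so steeply with mass: doubling M_s shortens τ_s by a factor of 32.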
Therefore we restrict the analysis to the region of masses higher than 1 MeV.

Expanding Universe and distributions of particles

We consider the expansion of the homogeneous and isotropic Universe with the flat Friedmann-Robertson-Walker metric in the form ds² = dt² − a² dx², where a = a(t) is the time-dependent scale factor, whose evolution is described by the Friedmann equation

H ≡ ȧ/a = √(8πG_N ρ / 3),    (2)

with the quantity on the left-hand side being the Hubble expansion rate, reciprocal to the expansion timescale τ_H discussed above. The total energy density ρ is the sum of all the energy densities present in the medium, and G_N is Newton's constant. The energy density together with the total pressure density p satisfies the "energy conservation" law

a dρ/da + 3(p + ρ) = 0.    (3)

At the temperatures of interest the dominant components of the plasma are photons γ, electrons and positrons e±, three flavours of active neutrinos (ν_e, ν_μ, ν_τ) and sterile neutrinos. Working with particle kinematics in the expanding Universe it is convenient to use the conformal momentum y instead of the usual physical momentum p; the two are related through y = pa. The quantitative description of the plasma population is provided by the distribution functions f_α, the numbers of particles α per "unit cell" of the phase space, d³p d³x = (2π)³. At keV-MeV temperatures the medium is homogeneous and the distribution functions are independent of the spatial coordinates of particles; due to isotropy they also do not depend on the direction of the particle momentum. This simplifies the description of their evolution, and therefore

df/dt ≡ ∂f/∂t − H p ∂f/∂p = ∂f(t, y)/∂t    (4)

holds. The goal is to find the distribution functions of all relevant particles and to use them to compute the energy density and pressure as functions of time and scale factor, closing the system of Eqs.
(2)-(3) via

ρ = Σ_i (g_i / 2π²) ∫ f_i E_i p² dp ,   p = Σ_i (g_i / 6π²) ∫ f_i (p⁴ / E_i) dp .   (5)

Here the summation goes over all plasma species, g_i and m_i are the number of internal degrees of freedom and the mass of the i-th particle, respectively, and E_i = sqrt(p² + m_i²). If the interaction rate of the particles is much faster than the Hubble expansion rate, their distribution functions are given by either the Bose-Einstein or the Fermi-Dirac distribution. This is the case for photons, electrons and positrons, which are kept in equilibrium by intensive electromagnetic interactions:

f_γ = 1 / (e^{E/T} − 1) ,  f_e = 1 / (e^{E/T} + 1) .   (6)

Their contribution to the energy and pressure in Eqs. (2), (3) is therefore a function of the temperature only.

Baryonic matter

The contribution of baryonic matter to the evolution of the hot plasma of relativistic species is proportional to the so-called baryon-to-photon ratio η_B = n_B/n_γ. The measurements of the relic radiation [2] yield η_B = (6.19 ± 0.15) × 10⁻¹⁰. One sees that baryons are present in a negligible amount and do not influence the dynamics of the remaining medium. This allows us to analyze the problem in two steps. At step i we omit the baryonic species and study how the temperature of the plasma, the expansion factor and the neutrino distributions evolve in time, from temperatures of the order of 100 MeV, when sterile neutrinos typically start to go out of equilibrium, down to T_Fin ≃ 10 keV, when the nuclear fusion reactions have ended. At step ii we use these results to determine the outcome of the nuclear reaction network against the background of the evolving electromagnetic plasma (Sec. 2.5).

Active neutrinos at MeV temperatures

Weak interactions are not able to maintain the thermal equilibrium of active neutrinos with the plasma during the whole expansion period we consider. A simple comparison of the weak collision rate G_F² T⁵ with H(T) shows that neutrinos maintain their equilibrium with the rest of the plasma down to temperatures T_dec ∼ few MeV.
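The integrals of Eq. (5) with the equilibrium distributions of Eq. (6) can be checked numerically. The sketch below integrates the Bose-Einstein distribution for massless photons (g_γ = 2) and reproduces the textbook result ρ_γ = (π²/15) T⁴; the function name and the quadrature choice are ours:

```python
import math

# Energy density of a massless boson gas from Eq. (5) with the
# Bose-Einstein distribution of Eq. (6).  Simple Riemann sum; T = 1 units.
def rho_boson(T, g, n_steps=20000, p_max=30.0):
    dp = p_max / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        p = i * dp
        E = p                                  # massless particle: E = p
        f = 1.0 / (math.exp(E / T) - 1.0)      # Bose-Einstein, Eq. (6)
        total += f * E * p * p * dp
    return g / (2.0 * math.pi ** 2) * total

rho_gamma = rho_boson(T=1.0, g=2)
rho_exact = math.pi ** 2 / 15.0                # Stefan-Boltzmann, g = 2
```

The same integral with a Fermi-Dirac distribution and E_i = sqrt(p² + m_i²) gives the electron/positron and neutrino contributions entering Eqs. (2)-(3).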
The process of neutrinos going out of equilibrium is usually referred to as neutrino decoupling. Throughout the paper we assume that no large lepton asymmetry is present, so that the number of neutrinos is equal to the number of antineutrinos. At temperatures higher than T_dec the distribution is therefore given by the Fermi-Dirac form, while at lower temperatures we have to solve a set of three Boltzmann equations

df_να/dt = I_α ,  α = e, µ, τ .   (7)

The details of the interactions, such as particle collisions, are encoded in the so-called collision terms I_α, explicitly [33]

I_α = (1/2E_α) Σ_{in,out} ∫ S |M|² F[f] (2π)⁴ δ⁴(p_in − p_out) Π_{i=2}^{Q} d³p_i / ((2π)³ 2E_i) .   (8)

The sum runs over all possible initial states "in" involving ν_α (represented by a particle set ν_α, 2, 3, …, K) and final states "out" (K+1, …, Q). The matrix element M corresponds to the probability of the transition "in" → "out", and the delta function ensures the conservation of 4-momentum, p_in = p_out. The symmetrization factor S is equal to 1, except for transitions involving identical particles in either the initial or the final state. The relevant matrix elements together with the symmetrization factors are listed in Appendix B. The interaction rates depend on the population of the medium, and the functional F[f] describes this. In the case when all incoming and outgoing particles are fermions,

F[f] = (1 − f_να) … (1 − f_K) f_{K+1} … f_Q − f_να … f_K (1 − f_{K+1}) … (1 − f_Q) .   (9)

When some of the particles are bosons, one replaces (1 − f_R) by (1 + f_R) for every bosonic particle R. A simple estimate (see Appendix C) demonstrates that the rates of transitions between neutrinos of different flavours are much faster than the weak reactions.
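The structure of the statistical functional F[f] in Eq. (9) has one important property worth making explicit: in full thermal equilibrium and with 4-momentum conserved, the gain and loss terms cancel exactly (detailed balance). A minimal sketch, for a purely fermionic 2 → 2 process:

```python
import math

def f_eq(E, T=1.0):
    """Equilibrium Fermi-Dirac distribution, Eq. (6)."""
    return 1.0 / (math.exp(E / T) + 1.0)

# The functional F[f] of Eq. (9): gain term with Pauli-blocking of the
# "in" states minus the loss term with Pauli-blocking of the "out" states.
def F_fermionic(f_in, f_out):
    gain = 1.0
    for f in f_in:
        gain *= (1.0 - f)
    for f in f_out:
        gain *= f
    loss = 1.0
    for f in f_in:
        loss *= f
    for f in f_out:
        loss *= (1.0 - f)
    return gain - loss

# Detailed balance: with equilibrium distributions and E1 + E2 = E3 + E4
# the functional vanishes, so equilibrium is a fixed point of Eq. (7).
E1, E2, E3 = 0.7, 1.3, 0.4
E4 = E1 + E2 - E3
F_val = F_fermionic([f_eq(E1), f_eq(E2)], [f_eq(E3), f_eq(E4)])
```

The cancellation follows from (1 − f_eq(E)) = e^{E/T} f_eq(E), so both terms equal e^{(E1+E2)/T} times the product of the four distributions.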
We argue that this phenomenon can be approximately described by the following modification of the Boltzmann equations:

df_να/dt = Σ_β I_β P_βα .   (10)

The summation is carried out over the three active flavours, and the expressions for P_βα are listed in Appendix C (Eqs. 26).

The impact of sterile neutrinos

As already mentioned, sterile neutrinos interact much more feebly than active neutrinos do. Nevertheless, at some high temperature sterile neutrinos may enter thermal equilibrium; whether this happens or not depends on the thermal history of the Universe before the onset of the synthesis. Even if they were in thermal equilibrium at early times, sterile neutrinos then necessarily decouple at temperatures higher than the active neutrino decoupling temperature. If sterile neutrinos were light and stable (or very long-lived), they would be relativistic and propagate freely in the medium, yielding N_eff ≈ 3 + g_S together with the active neutrinos (g_S is the number of sterile neutrino species). However, sterile neutrinos decay into active neutrinos and other particles, and the energies of the decay products may be very different from the typical energies of the plasma particles. For particles that equilibrate quickly (such as electrons or photons), this "injection" results in a fast redistribution of the energy among all particles in equilibrium, and effectively the process looks like a temperature increase (more precisely, it just slows down the cooling of the Universe). But for particles that either are not in equilibrium or are about to fall out of it, such as active neutrinos at a few MeV, the "injection" modifies the form of their spectra. The other mass-induced effect is that sterile neutrinos may switch from the relativistic regime (when their average momentum is larger than the mass), established at large temperatures, to the non-relativistic one, due to the gravitational redshift.
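The flavour-averaging entering Eq. (10) can be illustrated with the standard time-averaged vacuum oscillation probability, P_αβ = Σ_i |V_αi|² |V_βi|², built from the PMNS matrix. Whether this is exactly the form of Eqs. (26) in Appendix C is an assumption here, and the angle values below are representative inputs, not the paper's:

```python
import math

# Real PMNS matrix (CP phase neglected) from three mixing angles.
def pmns(theta12, theta23, theta13):
    s12, c12 = math.sin(theta12), math.cos(theta12)
    s23, c23 = math.sin(theta23), math.cos(theta23)
    s13, c13 = math.sin(theta13), math.cos(theta13)
    return [
        [c12 * c13,                     s12 * c13,                    s13],
        [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13,  -c12 * s23 - s12 * c23 * s13,  c23 * c13],
    ]

# Time-averaged transition probabilities P_ab = sum_i |V_ai|^2 |V_bi|^2.
def averaged_probabilities(V):
    return [[sum(V[a][i] ** 2 * V[b][i] ** 2 for i in range(3))
             for b in range(3)] for a in range(3)]

V = pmns(0.59, 0.79, 0.15)      # representative angles, in radians
P = averaged_probabilities(V)
```

By unitarity of V each row of P sums to one (probability conservation), and P is symmetric, so the averaged redistribution of Eq. (10) conserves the total neutrino number.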
For the quantitative description of the sterile neutrino dynamics we utilize a Boltzmann equation similar to (7), with the active neutrino replaced everywhere by the sterile neutrino ν_S:

df_S/dt = I_S .   (11)

The reactions contributing to the right-hand side, together with their probabilities, are listed in Tables 3-4 on page 27 of Appendix B. Note that we neglect processes involving baryonic particles. They become important at temperatures near the QCD crossover temperature T_QCD ≃ 200 MeV, when the baryon density is no longer negligible: more scattering channels of the sterile neutrino would appear, and a proper account of them is involved. It seems reasonable, however, to assert that the only modification such an account would bring is a lowering of the decoupling temperature of the sterile neutrinos. The oscillation phenomenon does not significantly affect sterile neutrinos, so the Boltzmann equation in its original form (11) remains valid, contrary to what we found for active neutrinos; an argument in favor of this statement is given in Appendix C. When the sterile neutrino is heavier than the muon, a muon can appear in the decay ν_S → µ⁻ + e⁺ + ν̄_e. However, the branching fraction of this decay mode does not even reach a percent for the sterile neutrino masses we consider (see e.g. [26]). Therefore we can neglect the influence of muons as well as of the other particles appearing in this decay. As a result we have six equations, (2), (3), (10) and (11), describing the primordial plasma at the temperatures of interest. These equations contain six unknowns: the scale factor a(t), the temperature T(t) and the four neutrino distribution functions f_να and f_S. The system of equations is therefore closed, and we solve it numerically at step i.

Course of nuclear reactions

The outcome of the nuclear reaction chains is found numerically. For the Standard BBN model one of the earlier attempts was the code written by L. Kawano [35,36].
However, the program in its original form is not suited to accounting for BSM physics, and we modified it for this work. Two technical remarks are in order here. First, we used the 1992 version of the program [36] as a starting point, not the 1988 one [35], and the integration time steps were taken small enough that the integration procedure did not introduce an error that would have to be compensated by a shift in the resulting value of Y_p, the so-called "Kernan correction" [37]. Second, the code did not take into account the Coulomb and nucleon finite-mass corrections to the weak interaction rates, nor radiative and finite-temperature effects. We do not calculate these effects directly, but assume their net result to take the form of an additive correction, which we took to be ΔY_p = −0.0003 [41]. The tests described in Appendix A.1 demonstrate the agreement of the thus-modified "Kawano code" with the results of another code, PArthENoPE [42], which properly accounts for these effects. The presence of sterile neutrinos alters the standard dynamics of the temperature and the expansion rate, as well as the rates of the weak interactions involving neutrons and protons. These quantities are known from step i, so we have implemented the import of these data. Together with the correction ΔY_p indicated above, this has led to the code that became the essential tool of step ii in our approach. The computations of the nuclide evolution start from temperatures of several MeV, when the chemical equilibrium ceases to hold, down to the temperature T_Fin.

Adopted values of abundances of the light nuclei

The observables of the BBN are the concentrations, or abundances, of the light nuclides dispersed in the cosmos. The most relevant abundance for our problem is that of ⁴He, as it is sensitive to the expansion rate of the Universe at MeV temperatures and to the neutrino distribution functions.
The presence of sterile neutrinos in the plasma typically increases the concentration of ⁴He, described by Y_p. Accurate calculations carried out in the Standard Model [42] predict the values

Y_p^sbbn = 0.2480 (τ_n = 885.7 sec) ,   (12)
Y_p^sbbn = 0.2465 (τ_n = 878.5 sec) ,   (13)

depending on the lifetime of the neutron, τ_n (see below). There are two main methods of experimental determination of the primordial Helium abundance. The first is based on studies of low-metallicity astrophysical environments, extrapolated to the zero-metallicity case. These Y_p measurements are known to be dominated by systematic uncertainties. We therefore adopt the Y_p values from the two most recent studies, Refs. [1,43], which have slightly different implications (for a recent discussion of various systematic uncertainties in the ⁴He determination, see [44]). In Ref. [1] the value Y_p = 0.2565 ± 0.0010(stat.) ± 0.0050(syst.) was obtained. The 2σ interval that we adopt in our studies is therefore

Y_p = 0.2495 − 0.2635 (Ref. [1], 2σ interval) .   (14)

One notices that this result is more than 2σ away from the Standard Model BBN prediction for Y_p, Eq. (12). Using a subsample of the same data of [1], a different group independently determined Y_p [43]. From their studies we adopt Y_p = 0.2574 ± 0.0036(stat.) ± 0.0050(syst.), so that

Y_p = 0.2452 − 0.2696 (Ref. [43], 2σ interval)   (15)

(these values of Y_p coincide with the Standard BBN one, Eq. (12), at about the 1σ level). The second method of determining the Helium abundance is based on the CMB measurements. This method is believed to determine the truly pristine value of Y_p, not prone to the systematics of the astrophysical methods; however, its uncertainties are currently still much larger than those of the first method. The present measurements put it at

Y_p = 0.22 − 0.40 ,  N_eff = 3 (Refs. [46,2], 2σ interval) ,   (16)

again consistent with the Standard Model BBN at the 1.5σ level. Here N_eff is the so-called effective number of neutrino species,

N_eff = (120 / 7π²) (ρ_νe + ρ_νµ + ρ_ντ) / T⁴ ,   (17)

proportional to the ratio of the total energy deposited into the active neutrino species to that of photons. Notice that the bound (16) is based on the assumption that before the onset of the recombination epoch the effective number of neutrino species is close to its SM value, N_eff ≈ 3. As we will see later, sterile neutrinos can significantly distort N_eff, and for values of N_eff strongly deviating from 3 the CMB bounds on Y_p get modified. For example, the analysis carried out in [46] reveals that

Y_p = 0.10 − 0.33 ,  N_eff = 6 (Ref. [46], 2σ interval) .   (18)

A similar conclusion is reached if one employs the data of [47]. The other element produced during the BBN is Deuterium, and recent observations determine its abundance to be

D/H = (2.2 − 3.5) × 10⁻⁵ (Ref. [4], 3σ interval) .   (19)

This value is sensitive both to the baryon-to-photon ratio and to N_eff. In this work we adjust the value of the baryon-to-photon ratio η at the beginning of the computation so that by T_Fin ∼ 10 keV it is equal to the value given by the cosmic microwave background measurements [2]. Finally, we mention another important uncertainty, originating from particle physics: there are two measurements of the neutron lifetime τ_n that are in tension with each other. The Particle Data Group [27] provides τ_n = 885.7 ± 0.8 sec, while the measurements performed by Serebrov et al. [28] result in τ_n = 878.5 ± 0.8 sec. We employ both results and explore the differences they lead to in what follows.

Results

In this Section we present our main results: the bounds on the sterile neutrino lifetime as a function of the mass and mixing pattern, as well as the bounds on the mixing angles.
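The normalization of Eq. (17) can be verified numerically: three thermally distributed neutrino species at the photon temperature must give N_eff = 3. The sketch below (function names ours) integrates the Fermi-Dirac distribution for each species with g = 2 degrees of freedom (ν and ν̄):

```python
import math

# Energy density of a massless fermion gas (Fermi-Dirac), Eq. (5); T = 1 units.
def rho_fermion(T, g, n_steps=20000, p_max=30.0):
    dp = p_max / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        p = i * dp
        total += p ** 3 / (math.exp(p / T) + 1.0) * dp   # massless: E = p
    return g / (2.0 * math.pi ** 2) * total

T = 1.0
rho_nu_total = 3 * rho_fermion(T, g=2)                   # nu_e + nu_mu + nu_tau
N_eff = 120.0 / (7.0 * math.pi ** 2) * rho_nu_total / T ** 4   # Eq. (17)
```

Each equilibrium species contributes 7π²/120 T⁴, so the prefactor 120/7π² counts species in units of one fully thermalized neutrino; energy injected by sterile neutrino decays shifts N_eff away from 3.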
As discussed in the previous Section, there are several systematic uncertainties in the determination of the ⁴He abundance, and the results therefore depend on the adopted values of Y_p (together with the neutron lifetime, τ_n). We summarize these systematic effects below. We start by comparing the upper bounds on the sterile neutrino lifetime for different values of Y_p (see Section 2.6). Fig. 2a shows that the bounds from the two recent works [1,43] are quite similar (the difference is of the order of 30%); a bound based on [45] would give a result similar to that of [43], as discussed above. For the CMB bound in Fig. 2a we present only the results for masses M_s > 40 MeV, where N_eff does not deviate significantly from 3. Fig. 3 indicates that for smaller masses the number of effective neutrino species increases significantly, which in turn affects the CMB Helium bounds (cf. Eqs. (16) and (17)). An accurate account of this effect goes beyond the scope of this work, and we choose instead to plot the stronger deuterium-based bounds (those of Fig. 3) in Fig. 2a for M_s ≲ 40 MeV. The lower bound on Y_p from the recent work [1] is above the Standard BBN value (12) at the ∼2σ level (see however [44]). The presence of sterile neutrinos in the plasma of course relaxes this tension, and therefore at 2σ the adopted values of Y_p (Eq. (14)) provide both upper and lower bounds on the sterile neutrino lifetime; this is shown in Fig. 2, right panel. (The curves of Fig. 2 correspond to the interval of Eq. (14), except the "CMB" line, which corresponds to the upper bound from [46,2], Eq. (16).) At the 3σ level the measurements of [1] are consistent with Standard BBN and the lower bound disappears. Fig. 3 shows the changes in the Deuterium abundance and in the effective number of neutrino species caused by sterile neutrinos (with parameters corresponding to the upper bound based on [1]). For these values of the parameters the abundance lies within the 3σ boundaries (19).
For the highest effective number of neutrinos reached, N_eff = 6, D/H is close to the 3σ upper bound. Notice that the same relation between N_eff and D/H is observed in a model without new particles but with an effective number of neutrinos different from 3. The effective number of neutrino species does not fix the Helium abundance, though: otherwise the same Y_p bound [1] would predict only one particular value of N_eff, which is not the case, as inspection of Fig. 3 shows. The influence of the other systematic uncertainty (the lifetime of the neutron, τ_n) is negligible: the relative difference between the sterile neutrino lifetime bounds was found to be of the order of 5% for the two choices of τ_n, from [28] and from [27] (taking the same Y_p bound from [43]). Next we investigate the dependence of the resulting bounds on the mixing pattern of the sterile neutrinos. Naively, one would expect that sterile neutrinos mixing "only with ν_e" and "only with ν_µ" should have different effects on Y_p. However, it is the energy "injection" rate (i.e. the overall decay rate of the sterile neutrinos) that is more important for the dynamics of the plasma before the onset of nucleosynthesis. This quantity depends on the lifetime τ_s and the mass M_s of the neutrino. The mixing patterns affect mostly the concentrations of particular decay products, but not the overall injection rate.

Discussion

In this work we considered the influence of decaying particles with masses of a few MeV to 140 MeV on the primordial abundances of the light elements (D and ⁴He). Such particles appear in many cosmological scenarios [12,13,18,19,20,21,22,23,48,49,50]. In particular, we concentrated on the properties of sterile neutrinos and derived constraints on their lifetime imposed by the present measurements of the primordial Helium abundance Y_p. Sterile neutrinos are super-weakly interacting particles, quadratically mixed with the active flavours.
We analyzed the case of one Majorana sterile neutrino with 4 degrees of freedom (if sterile neutrinos were kept in thermal equilibrium, this would be equivalent to g_s = 2 species of active neutrinos). Since the plasma evolution is mostly affected by the overall decay rate of the sterile neutrinos, the lifetime bounds that we obtained are essentially independent of the particular mixing pattern, as Figs. 4, 5 demonstrate. In the paper [16] a similar model was considered with one Dirac sterile neutrino. A Dirac sterile neutrino has the same 4 degrees of freedom and influences the primordial plasma in the same way (if it has the same spectrum, lifetime and mixing pattern). However, in [16] the effect of active-neutrino oscillations was not taken into account, and some simplifying approximations, such as Boltzmann statistics, were employed. To provide the corresponding analysis we wrote a code that solves more accurate Boltzmann equations for the neutrino kinetics than those used in [16]. We compare the results of this work with the previous bounds [15,16] in Fig. 6. Our results are broadly consistent with the previous works; the differences for a given mixing pattern of sterile neutrinos can be as large as a factor of 2.5 for some masses. The presence of sterile neutrinos in the plasma affects the effective number of neutrino degrees of freedom, N_eff. Fig. 3, right panel, shows that values of N_eff between 2.7 and 6 are possible for different mixing angles and masses, which could explain the larger-than-3 values of N_eff reported recently in several CMB observations (see e.g. [46,51,47], but also [52]). Decaying sterile neutrinos with masses 100−500 MeV and lifetimes from seconds to minutes, and their influence on N_eff and the entropy production, have recently been considered in [18] (see also [53]), where it was demonstrated that they can lead to N_eff ≠ 3 and can therefore be probed with the CMB measurements.
The results of the present work demonstrate that in the region 100−140 MeV, where we overlap with the parameter space studied in [18], primordial nucleosynthesis restricts the lifetime of sterile neutrinos to be well below 1 sec (see Fig. 2, left panel). Finally, it is interesting to compare the upper bound on the sterile neutrino lifetime derived in this paper with the lower bounds that come from direct experimental searches for sterile neutrinos (see [26,54,55]). These latter bounds are based on the assumption that sterile neutrinos with four degrees of freedom are solely responsible for the observed pattern of neutrino oscillations via the see-saw mechanism [55]. The appropriate comparison, based on [55], is presented in Fig. 7. No allowed values of the sterile neutrino lifetime exist for 1 MeV ≲ M_s < 140 MeV for either type of neutrino mass hierarchy (i.e. the upper bound is smaller than the lower bound; see the purple double-shaded region in Fig. 7). Notice that if the astrophysical bounds on Helium [1,43] were used for M_s ≲ 40 MeV in Fig. 7, instead of the CMB bound, the resulting lifetime bounds would become stronger (by as much as a factor of 4) in this mass range. We stress that for this conclusion it is essential that the MeV sterile neutrinos are responsible for neutrino oscillations. For example, a model in which sterile neutrinos couple to ν_τ only (and therefore do not contribute to the mixing between the active neutrino flavours) is allowed even if one confronts the strongest BBN bounds (based on the astrophysical Helium measurements) with the direct accelerator bounds; see Fig. 8 for details.

A Tests of the numerical approach

The Sections below summarize the comparison of the present work with previous ones that analyzed the influence of MeV-scale particles on primordial nucleosynthesis. Throughout this Appendix we normalize the scale factor by imposing the condition aT = 1 at the initial moment.
The conformal momentum is y = pa with the same normalization of the scale factor. In the figures that contain both solid and dashed curves, the former correspond to the results obtained with our code, and the latter to the results of the works we compare with.

Table 1: Resulting Y_p in the Standard Model BBN.

Code | Y_p for τ_n from PDG [27] | Y_p for τ_n from [28]
(Modified) Kawano code [36] | 0.2472 | 0.2457
PArthENoPE code [42] | 0.2480 | 0.2465
Difference | -0.0008 | -0.0008

A.1 Standard Model BBN

First we considered nucleosynthesis in a Universe filled with the Standard Model particles only. We compute the actual non-equilibrium form of the active neutrino spectra during their decoupling. The results of the present work are compared with those of [25,24,57]. In [25,24] neutrino oscillations were neglected, while in [57] the effect was taken into account. Fig. 9 shows the evolution of the quantity aT as a function of temperature; it is identical to Fig. 1 of Ref. [25]. Figures 10, 11 show how distorted the neutrino spectra f_να are compared to the thermal distribution f_eq = (e^y + 1)⁻¹. One can see good agreement between the results. We believe that the difference that is nevertheless present arises solely from our one-step time-integration method for the stiff kinetic equations, which is not as accurate as the method employed in Refs. [24,57]. We turned off the flavour oscillations and compared the asymptotic values of the ratio aT at low temperatures, together with the effective number of neutrino species, N_eff. For the former quantity, Refs. [24,57] give the values 1.3991 and 1.3990, respectively, while we derived 1.3996. For the number of neutrino species in the absence of neutrino oscillations, the same Refs. [24,57] give 3.034 and 3.035, respectively, while we get 3.028. The resulting Y_p is summarized in Table 1 for different values of the neutron lifetime τ_n. We also provide a comparison of the modified version of the Kawano code [36], which we adopted for computing the nuclear reactions, with a newer code, PArthENoPE [42].
By comparing the results of PArthENoPE and the modified Kawano code, we find the former to be larger by 0.0008 than the latter. We use the shift ΔY_p = −0.0008 as a correction in our subsequent results.

Figure 10: In both panels, the pair of upper curves shows the distortion of the electron neutrino spectrum, the lower pair that of ν_µ. In each pair, the solid curve is the result of this work, and the dashed one is from Fig. 2 of [57].

Figure 11: Left: relative distortion of the ν_e spectrum, δf_νe/f_eq, for conformal momenta y = 3, 5, 7 (from bottom to top). Right: the same, but for the muon neutrino. In each pair of curves the solid one corresponds to this work and the dashed one is from [25].

A.2 Test of energy conservation

If all weak reactions involving electrons and positrons are turned off, neutrinos decouple from the rest of the plasma. The energy conservation law (3) then holds separately for the neutrino component and for the remaining particles. In the approximation of zero electron mass we obtain

d(aT)/dt = 0 ,   (20)

similar to Eq. (22). As a corollary, the product aT is conserved. On the other hand, our code solves equation (3) for all medium components simultaneously, and the relation (20) is not a trivial consequence of the numerical computation; checking it therefore serves as a test of the code. We considered separately the scattering and decay processes involving neutrinos and observed conservation of aT with a precision of order 0.2%.

A.3 Heavy sterile Dirac neutrino

Next we tested the model with one sterile Dirac neutrino ν_S with mass M_s = 33.9 MeV, mixed with ν_τ [15]. This neutrino was assumed to be in thermal equilibrium with the plasma at T ≳ 50 MeV. To simplify the problem, the authors of [15] used equilibrium Boltzmann statistics for the active species in the collision integral for the sterile neutrino. While in equilibrium, the sterile neutrino spectrum becomes more and more non-relativistic with time due to the redshift.
Therefore the ratio ρ_s/(M_s n_s) of the energy density ρ_s to the mass times the number density n_s should approach 1 at lower temperatures. We recomputed the evolution of the system using our code, without the Boltzmann approximation. Fig. 12 shows the comparison of the results with those of [15] for the sterile neutrino lifetime τ_s = 0.3 sec. The two results coincide down to T ≈ 5 MeV; after that moment the ratio ρ_s/(M_s n_s) of [15] stops decreasing, while the numerical result we obtained shows the expected behaviour: the ratio continues to decrease, approaching 1.

A.4 Massive ν_τ

Next we considered a model with a massive tau neutrino [19,21]. Fig. 13, left panel, presents the relative deviation of the energy densities of the massless neutrinos, δρ_ν/ρ_eq, produced by our code and plotted in [19]. Here ρ_eq = 7π²T⁴/120 is the equilibrium energy density of one neutrino species, and δρ_ν = ρ_ν − ρ_eq. In Fig. 13, right panel, the distortion of the electron neutrino spectrum, y² δf_νe/f_eq, is depicted. Here one observes good agreement between the results.

Figure 13: Left: relative deviation of the ν_e energy density from its equilibrium value, δρ_νe/ρ_eq, in the model where the tau neutrino is massive. Right: spectrum distortion y² δf_νe/f_eq for the same model. In both panels M_ντ = 0, 3, 7, 20 MeV from bottom to top; the solid curves depict the numerical results of this work, and the dashed ones the results of [19].

A.5 Late reheating model

To test the treatment of MeV-scale decaying particles, we considered the low-reheating models with a reheating temperature of several MeV [22,23]. In [22] heavy non-relativistic particles were considered that once dominated the energy density of the Universe and then decayed into electrons, positrons or photons (so that the decay products are quickly thermalized). The most important output is the effective number of active neutrino species, N_eff (defined in Eq. (17)). The dependence of this quantity on the decay width of the heavy particle is presented in Fig. 14. We noticed some difference between the results of the cited papers and those of our code. We believe that this is due to the different approximations made. For example, in both works [22,23] the scattering processes involving only neutrinos were not taken into account, the approximation of Boltzmann statistics was used throughout, and the electron mass was neglected. We checked that the account of the finite electron mass gives a gain of 5% in N_eff for τ = 0.1 s, while the account of the scatterings involving only neutrinos gives a rise of 1%.

A.6 Instant thermalization of decay products

Next we considered a model with two heavy Majorana sterile neutrinos, similar to the νMSM. However, we assumed that for any mass of the sterile neutrino it can decay only via the channels listed in Table 4 of Appendix B. This is not a natural assumption, because sterile neutrinos heavier than the pion usually decay dominantly into states containing mesons [26]. We also approximated the sterile neutrino spectrum as non-relativistic, while all the other particles are relativistic and in equilibrium at all times. In this case the system may be adequately described by the kinetic equation

dρ_s/dt + 3 (ȧ/a) ρ_s = −Γ_s ρ_s ,   (21)

together with the Friedmann equations (2)-(3). The latter of these equations can be rewritten as

d(aT)/dt = 30 a Γ_s ρ_s / (43π² T³) ,   (22)

where Γ_s is the decay width of the sterile neutrino, ρ_s is the energy density of the sterile neutrinos, and we have used the expressions for the energy and pressure densities of the relativistic species, ρ_rel = 3p_rel = 43π²T⁴/120. In Fig. 15 the evolution of the quantities aT and ρ_s/ρ_rel is compared between the results of our code and the semi-analytic integration of Eqs. (21)-(22) for three different sets of masses and lifetimes. One can see very good agreement between these results; the maximum relative deviation is 1%.
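The semi-analytic system (21)-(22) can be sketched with a simple Euler integration. Units here are arbitrary (we set 8πG_N/3 = 1, so H = sqrt(ρ_tot)), and the initial values are illustrative rather than the paper's actual parameter sets:

```python
import math

# Euler integration of Eqs. (21)-(22): non-relativistic sterile neutrinos
# decaying into a relativistic thermal bath with rho_rel = 43*pi^2*T^4/120.
def evolve(rho_s, a, T, Gamma, t_end, dt=1e-4):
    t = 0.0
    while t < t_end:
        rho_rel = 43.0 * math.pi ** 2 * T ** 4 / 120.0
        H = math.sqrt(rho_rel + rho_s)                 # Friedmann, Eq. (2)
        aT = a * T + 30.0 * a * Gamma * rho_s / (43.0 * math.pi ** 2 * T ** 3) * dt  # Eq. (22)
        rho_s += (-3.0 * H * rho_s - Gamma * rho_s) * dt                             # Eq. (21)
        a += a * H * dt
        T = aT / a
        t += dt
    return rho_s, a, T

rho_s1, a1, T1 = evolve(rho_s=0.1, a=1.0, T=1.0, Gamma=5.0, t_end=2.0)
```

The qualitative behaviour matches the discussion in the text: the product aT grows monotonically while the decays proceed (entropy injection slows the cooling), and ρ_s is depleted both by the decays and by the a⁻³ dilution.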
B Tree-level matrix elements

In this Appendix we summarize the matrix elements used for computing the collision integrals in the Boltzmann equations. The squared matrix elements involving Standard Model particles only are listed in Table 2, while the squared matrix elements of the processes with sterile neutrinos are summarized in Tables 3, 4. In these expressions, averaging over the helicities of the incoming particles and summation over those of the outgoing products is assumed. The reactions are considered for two cases. In the first one the sterile neutrino is a right-chiral Majorana neutrino with 2 helicity degrees of freedom; that is actually the case in our problem, where we have two neutrinos of this kind. The other case corresponds to a sterile neutrino of Dirac nature. Dirac fermions have both right- and left-chiral components, hence 4 degrees of freedom in total. The expressions listed in Tables 3, 4 are applicable in both cases. Moreover, to complete the list of possible tree-level reactions, one has to consider the charge-conjugated channels and take into account that a Dirac particle is distinct from its antiparticle, while a Majorana neutrino is not. Throughout this Section we use the notations g_R = sin²θ_W, g_L = 1/2 + sin²θ_W, g̃_L = −1/2 + sin²θ_W, where θ_W is the Weinberg angle, so that sin²θ_W ≈ 0.23. The resulting expressions coincide with [25,15].

C Neutrino oscillations

The active neutrinos of different flavours ν_e, ν_µ, ν_τ are related to the mass eigenstate basis ν_1, ν_2, ν_3 via the non-diagonal Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix V, |ν_α⟩ = V_αi |ν_i⟩ (see e.g. [58] for reviews); here c_ij = cos θ_ij and s_ij = sin θ_ij are functions of the active-active neutrino mixing angles θ_ij. An exact treatment of active neutrino oscillations in the early Universe is a difficult task (see e.g.
[59,60,61]) Characteristic timescale of oscillation between i and j mass eigen-states for a neutrino with energy E is [58] τ ij = 4πE |m 2 i − m 2 j | ≈ 8.3 × 10 −6 s E MeV 10 −3 eV 2 |m 2 i − m 2 j |(23) Average energy of relativistic Fermi particles in equilibrium is E = 3.15T [33]. Applying this relation to active neutrinos and using their measured mass differences [62] m 2 2 − m 2 1 ≈ 7.6 × 10 −5 eV 2 , |m 2 3 − m 2 1 | ≈ 2.5 × 10 −3 eV 2 , we obtain τ 12 ≈ 1.0 × 10 −3 sec T 3 MeV , τ 13 ≈ 3.1 × 10 −5 sec T 3 MeV ,(24) provided that influence of the surrounding environment on neutrino propagation is neglected. One sees therefore that about the moment active neutrino decouples S is the symmetrization factor; α, β = e, µ, τ . In all processes we take α = β. The results coincide with those of Ref. [25]. Process (1 + 2 → 3 + 4) S SG −2 F |M| 2 ν α + ν β → ν α + ν β 1 32(p 1 · p 2 )(p 3 · p 4 ) ν α +ν β → ν α +ν β 1 32(p 1 · p 4 )(p 2 · p 3 ) ν α + ν α → ν α + ν α 1/2 64(p 1 · p 2 )(p 3 · p 4 ) ν α +ν α → ν α +ν α 1 128(p 1 · p 4 )(p 2 · p 3 ) ν α +ν α → ν β +ν β 1 32(p 1 · p 4 )(p 2 · p 3 ) ν e +ν e → e + + e − 1 128[g 2 L (p 1 · p 4 )(p 2 · p 3 )+ g 2 R (p 1 · p 3 )(p 2 · p 4 ) + g L g R m 2 e (p 1 · p 2 )] ν e + e − → ν e + e − 1 128[g 2 L (p 1 · p 2 )(p 3 · p 4 )+ g 2 R (p 1 · p 4 )(p 2 · p 3 ) − g L g R m 2 e (p 1 · p 3 )] ν e + e + → ν e + e + 1 128[g 2 L (p 1 · p 4 )(p 2 · p 3 )+ g 2 R (p 1 · p 2 )(p 3 · p 4 ) − g L g R m 2 e (p 1 · p 3 )] ν µ(τ ) +ν µ(τ ) → e + + e − 1 128[g 2 L (p 1 · p 4 )(p 2 · p 3 )+ g 2 R (p 1 · p 3 )(p 2 · p 4 ) +g L g R m 2 e (p 1 · p 2 )] ν µ(τ ) + e − → ν µ(τ ) + e − 1 128[g 2 L (p 1 · p 2 )(p 3 · p 4 )+ g 2 R (p 1 · p 4 )(p 2 · p 3 ) −g L g R m 2 e (p 1 · p 3 )] ν µ(τ ) + e + → ν µ(τ ) + e + 1 128[g 2 L (p 1 · p 4 )(p 2 · p 3 )+ g 2 R (p 1 · p 2 )(p 3 · p 4 ) −g L g R m 2 e (p 1 · p 3 )] T ≃ 3MeV typical oscillation timescales are much smaller than the Hubble expansion time given by Eq. 
(2),

τ_H = √(45 / (4π³ g* G_N T⁴)) ≃ 0.16 s (3 MeV / T)².   (25)

Here g* ≈ 11 (at T ∼ MeV) [33] is the so-called number of relativistic species that enters the energy-temperature relation ρ = π² g* T⁴ / 30. Therefore, active neutrinos oscillate many times between the subsequent reactions involving them. In quantitative terms this means that the probabilities P_αβ to transform from flavour α to flavour β are oscillating functions of time. In a realistic situation neutrinos do not have a definite momentum but are created as wave packets, i.e. superpositions of states which do. Since the oscillation periods are momentum-dependent according to Eq. (23), each state in the superposition has its own period. Therefore, after sufficiently many periods the initial phases characterizing the superposition change, and there is no reason

Table 3: Squared matrix elements for scatterings of sterile neutrinos ν_S. Here S is the symmetrization factor; α, β = e, µ, τ; α ≠ β. ϑ_α is the mixing angle of the sterile neutrino with ν_α. The results are applicable for one right-chiral Majorana neutrino as well as for one Dirac neutrino; for details see text.
Process (1 + 2 → 3 + 4) | S | S G_F⁻² |M|²
ν_s + ν_β → ν_α + ν_β | 1 | 32 ϑ_α² (p1·p2)(p3·p4)
ν_s + ν̄_β → ν_α + ν̄_β | 1 | 32 ϑ_α² (p1·p4)(p2·p3)
ν_s + ν_α → ν_α + ν_α | 1/2 | 64 ϑ_α² (p1·p2)(p3·p4)
ν_s + ν̄_α → ν_α + ν̄_α | 1 | 128 ϑ_α² (p1·p4)(p2·p3)
ν_s + ν̄_α → ν_β + ν̄_β | 1 | 32 ϑ_α² (p1·p4)(p2·p3)
ν_s + ν̄_e → e⁺ + e⁻ | 1 | 128 ϑ_e² [g_L² (p1·p4)(p2·p3) + g_R² (p1·p3)(p2·p4) + g_L g_R m_e² (p1·p2)]
ν_s + e⁻ → ν_e + e⁻ | 1 | 128 ϑ_e² [g_L² (p1·p2)(p3·p4) + g_R² (p1·p4)(p2·p3) − g_L g_R m_e² (p1·p3)]
ν_s + e⁺ → ν_e + e⁺ | 1 | 128 ϑ_e² [g_L² (p1·p4)(p2·p3) + g_R² (p1·p2)(p3·p4) − g_L g_R m_e² (p1·p3)]
ν_s + ν̄_µ(τ) → e⁺ + e⁻ | 1 | 128 ϑ_µ(τ)² [g̃_L² (p1·p4)(p2·p3) + g_R² (p1·p3)(p2·p4) + g̃_L g_R m_e² (p1·p2)]
ν_s + e⁻ → ν_µ(τ) + e⁻ | 1 | 128 ϑ_µ(τ)² [g̃_L² (p1·p2)(p3·p4) + g_R² (p1·p4)(p2·p3) − g̃_L g_R m_e² (p1·p3)]
ν_s + e⁺ → ν_µ(τ) + e⁺ | 1 | 128 ϑ_µ(τ)² [g̃_L² (p1·p4)(p2·p3) + g_R² (p1·p2)(p3·p4) − g̃_L g_R m_e² (p1·p3)]

Table 4: Squared matrix elements for decays of sterile neutrinos ν_S. Here S is the symmetrization factor; α, β = e, µ, τ; α ≠ β. ϑ_α is the mixing angle of the sterile neutrino with ν_α. The results are both for Majorana and Dirac neutrinos; for details see text.

Process (1 → 2 + 3 + 4) | S | S G_F⁻² |M|²
ν_S → ν_α + ν_β + ν̄_β | 1 | 32 ϑ_α² (p1·p4)(p2·p3)
ν_S → ν_α + ν_α + ν̄_α | 1/2 | 64 ϑ_α² (p1·p4)(p2·p3)
ν_S → ν_e + e⁺ + e⁻ | 1 | 128 ϑ_e² [g_L² (p1·p3)(p2·p4) + g_R² (p1·p4)(p2·p3) + g_L g_R m_e² (p1·p2)]
ν_S → ν_µ(τ) + e⁺ + e⁻ | 1 | 128 ϑ_µ(τ)² [g̃_L² (p1·p3)(p2·p4) + g_R² (p1·p4)(p2·p3) + g̃_L g_R m_e² (p1·p2)]

for the phase changes to be correlated with each other. So decoherence of the states is what happens. This phenomenon can be described effectively by averaging the transition probabilities P_αβ over time.
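The separation of timescales that justifies this averaging, oscillation periods (23), (24) much shorter than the Hubble time (25), can be checked numerically. The sketch below is illustrative only; the constants (Newton constant, ħ conversion factor, g* = 10.75) are standard values and not taken from the text.

```python
import math

# Illustrative numerical check in natural units (GeV); constants are standard values.
G_N = 6.708e-39          # Newton constant, GeV^-2
HBAR_GEV_S = 6.582e-25   # hbar in GeV*s: converts GeV^-1 to seconds
G_STAR = 10.75           # relativistic d.o.f. at T ~ MeV (the text quotes g* ~ 11)

def tau_osc(T, dm2_ev2):
    """Vacuum oscillation timescale of Eq. (23) with <E> = 3.15 T (T in GeV)."""
    E = 3.15 * T
    dm2 = dm2_ev2 * 1e-18                 # eV^2 -> GeV^2
    return 4.0 * math.pi * E / dm2 * HBAR_GEV_S

def tau_hubble(T):
    """Hubble time 1/H in the radiation-dominated era (cf. Eq. (25))."""
    H = math.sqrt(4.0 * math.pi**3 * G_STAR * G_N / 45.0) * T**2
    return HBAR_GEV_S / H

T = 3e-3                       # T = 3 MeV, around active-neutrino decoupling
tau12 = tau_osc(T, 7.6e-5)     # solar mass splitting, ~1e-3 s
tau13 = tau_osc(T, 2.5e-3)     # atmospheric mass splitting, ~3e-5 s
tauH = tau_hubble(T)           # ~0.16 s
print(tau12, tau13, tauH)      # oscillations are much faster than the expansion
```

Running this reproduces the hierarchy quoted in the text: both oscillation timescales are three to four orders of magnitude below the Hubble time at T ≃ 3 MeV, so many oscillations occur between collisions.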
The resulting expressions are [58]

P_ee = 1 − (1/2)(sin² 2θ₁₃ + cos⁴ θ₁₃ sin² 2θ₁₂),   (26a)
P_eµ = P_µe = (1/2) cos² θ₁₃ sin² 2θ₁₂,   (26b)
P_eτ = P_τe = sin² θ₁₃ cos² θ₁₃ (2 − (1/2) sin² 2θ₁₂),   (26c)
P_µµ = 1 − (1/2) sin² 2θ₁₂,   (26d)
P_µτ = P_τµ = (1/2) sin² θ₁₃ sin² 2θ₁₂,   (26e)
P_ττ = 1 − sin² θ₁₃ (2 cos² θ₁₃ + (1/2) sin² θ₁₃ sin² 2θ₁₂).   (26f)

To understand what happens with a neutrino, consider the example of an electron neutrino created in electron-positron annihilation. At the production time this particle is found as ν_e with probability 1 and as any other flavour with probability zero. After a time long enough for many oscillations to happen, but before a collision with another particle becomes probable, decoherence comes into play. So now we may find ν_e with probability P_ee, ν_µ with probability P_eµ and ν_τ with probability P_eτ. The production rate of the initial specimen per unit time is proportional to the collision integral I_e, according to the Boltzmann equation (7), but the actual number of produced electron neutrinos is reduced by the factor P_ee. And even if (consider this hypothetical situation) the muon neutrino did not interact with the plasma, it would anyway be produced, at the rate P_eµ I_e. Generalization to the other neutrino flavours leads us to the conclusion that the modified Boltzmann equation

df_α/dt = P_αβ I_β   (27)

describes the neutrino dynamics correctly (which is not the case for the initial equation (7)). For the actual computations we use the following experimental best-fit values: sin² θ₁₂ = 0.31, sin² θ₂₃ = 0.52 from [62], and sin² 2θ₁₃ = 0.09 from Daya Bay [63]. The latter number is close to the result sin² 2θ₁₃ = 0.11 indicated by another recent experiment, RENO [64].

However, in a dense medium oscillations proceed differently, due to considerable effects of the plasma on the properties of a single particle. Still, the phenomenon can be described by the formalism of the PMNS matrix.
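As a consistency check, the time-averaged probabilities of Eqs. (26a)-(26f) form a symmetric matrix whose rows must each sum to unity (probability conservation). The sketch below verifies this for the best-fit angles quoted above; the function name and the default parameter values are taken from the text.

```python
import math

def averaged_probabilities(sin2_th12=0.31, sin2_2th13=0.09):
    """Time-averaged flavour transition probabilities, Eqs. (26a)-(26f)."""
    s2_2t12 = 4.0 * sin2_th12 * (1.0 - sin2_th12)       # sin^2(2 theta_12)
    s2_t13 = 0.5 * (1.0 - math.sqrt(1.0 - sin2_2th13))  # sin^2(theta_13)
    c2_t13 = 1.0 - s2_t13
    P_ee = 1.0 - 0.5 * (sin2_2th13 + c2_t13**2 * s2_2t12)
    P_em = 0.5 * c2_t13 * s2_2t12
    P_et = s2_t13 * c2_t13 * (2.0 - 0.5 * s2_2t12)
    P_mm = 1.0 - 0.5 * s2_2t12
    P_mt = 0.5 * s2_t13 * s2_2t12
    P_tt = 1.0 - s2_t13 * (2.0 * c2_t13 + 0.5 * s2_t13 * s2_2t12)
    # Symmetric matrix in the flavour basis (e, mu, tau)
    return [[P_ee, P_em, P_et],
            [P_em, P_mm, P_mt],
            [P_et, P_mt, P_tt]]

P = averaged_probabilities()
for row in P:
    # Unitarity: every row of averaged probabilities sums to 1
    assert abs(sum(row) - 1.0) < 1e-12
```

With the best-fit angles, roughly half of the produced ν_e survive on average as ν_e, which is the suppression factor P_ee entering Eq. (27).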
The difference is that the mixing parameters, together with the masses, now depend on the properties of the environment. In the case of a plasma close to equilibrium with no non-trivial conserved charges present, the parameter describing it is the temperature, so the parameters of the PMNS matrix become temperature-dependent. In the language of the effective Hamiltonian approach, the system of three neutrinos is described by adding a medium potential ∆H_M to the Hamiltonian H_V of the system in vacuum [65]:

H_M = H_V + ∆H_M,  H_V = (1 / 2E) V* diag(m_1², m_2², m_3²) V†,   (28)

where E is the neutrino energy. Diagonalization of the total propagation Hamiltonian H_M gives the effective masses and mixings. The medium potential comprises the effects of neutrino interactions. Since neutrinos take part only in charged- and neutral-current interactions, the matter potential has two terms, ∆H_CC and ∆H_NC, respectively. All neutrinos couple to neutral currents identically, so ∆H_NC is proportional to the unit matrix; this term just renormalizes the energy and does not affect oscillations. In contrast, the charged-current term is non-diagonal and is present only for ν_e. The reason is that, due to the abundance of electrons in the plasma, ν_e couples effectively to charged currents, while at temperatures below the muon mass there is no significant contribution of muons and tau-leptons to realize the coupling of the other neutrinos to the W boson. Explicitly, the matter potential is [58]

∆H_CC = −(14√2 G_F / 45 M_W²) E T⁴ diag(1, 0, 0)   (29)

in the flavour neutrino basis (ν_e, ν_µ, ν_τ); M_W is the mass of the W boson.

So far we have dropped sterile neutrinos from consideration, but their mixing properties are also altered in the hot plasma. Using the effective Hamiltonian approach for them, one finds that their effective mixing angles in the medium, θ_M, differ from those in vacuum, θ_V, as [65]

(θ_M − θ_V) / θ_V ∼ G_F T⁶ / (M_W² M_S²) ∼ 10⁻¹¹ × (T / 100 MeV)⁶ (10 MeV / M_S)²   (30)

for small mixing angles θ_V.
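To see how tiny this medium correction is, one can evaluate the dimensionless combination directly. In the sketch below it is taken as G_F T⁶/(M_W² M_S²) (an assumption consistent with the quoted (T/100 MeV)⁶ (10 MeV/M_S)² scaling, with E ∼ T absorbed into the estimate); this is an order-of-magnitude sketch, not an exact coefficient.

```python
# Order-of-magnitude estimate of the in-medium mixing-angle shift, cf. Eq. (30).
# The combination G_F * T^6 / (M_W^2 * M_S^2) is an assumption consistent with
# the quoted scaling, not an exact coefficient.
G_F = 1.166e-5   # Fermi constant, GeV^-2
M_W = 80.4       # W-boson mass, GeV

def mixing_shift(T_gev, M_s_gev):
    """Relative in-medium shift (theta_M - theta_V)/theta_V for a sterile neutrino."""
    return G_F * T_gev**6 / (M_W**2 * M_s_gev**2)

shift = mixing_shift(0.1, 0.01)   # T = 100 MeV, M_S = 10 MeV
print(shift)                       # ~1e-11: matter effects are negligible
```

The estimate scales as T⁶, so even at substantially higher temperatures the shift stays many orders of magnitude below unity for the masses of interest.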
Therefore the mixing angle is not altered significantly for sterile neutrinos, and matter effects are negligible for their dynamics.

Figure 1: Fermi-like super-weak interactions of the sterile neutrino. (a) Decay of the sterile neutrino ν_S → ν_e ν_α ν̄_α through neutral-current interactions; (b) Fermi-like interaction with the "effective" Fermi constant ϑ_e × G_F for the process in panel (b).

Figure 2: Bounds (at 2σ level) on the sterile neutrino lifetime as a function of their mass for various measurements of Y_p (summarized in Section 2.6), compared with results of other works. Left panel: upper bounds on the sterile neutrino lifetime, based on different measurements of Y_p: Ref. [1] ("Izotov & Thuan"); Ref. [43] ("Aver et al."); Refs. [46, 2] ("CMB bound"). Right panel: upper and lower bounds on the sterile neutrino lifetime, based on the measurements of [1]; the upper curve is the same as the dashed curve in the left panel. All results are for mixing of the sterile neutrino with the electron flavour only (the dependence on the particular mixing pattern is very weak, see below). For the CMB bound, we present only the result for masses M_s > 40 MeV, where N_eff ≈ 3. For smaller masses we plot instead bounds based on the 3σ Deuterium upper bound (19). For details, see Sec. 3 and Fig. 3.

Figure 3: Left: Deuterium abundance, with the shaded region corresponding to the allowed 3σ range, based on [4]. Right: Effective number of neutrino species (the ratio of the effective neutrino temperature to the photon temperature at T ∼ few keV) as a result of the decay of the sterile neutrino. The horizontal "SM" lines indicate the N_eff that corresponds to the boundary of the 3σ range [4], in the SM with the number of relativistic species deviating from N_eff ≈ 3. In both panels, the parameters of the sterile neutrinos correspond to the upper bound on Y_p from [1] (see Eq.; based on astrophysical measurements of Y_p of [43]; from CMB (notice the different y-axis range!).
Figure 4: Upper bound on the sterile neutrino lifetime for different mixing patterns: mixing with ν_e only (red dashed line), ν_µ only (green dash-dotted line), and equal mixing with the ν_e and ν_µ flavours (black solid line). All bounds are derived for the lifetime of the neutron τ_n adopted from [27]. The effect of different mixing patterns is at the level of ∼ 10−50% and can only be seen in the right panel because of the different y-axis. In the right panel, only the masses M_s > 40 MeV are presented. For details, see Sec. 3 and Fig. 3; based on astrophysical measurements of Y_p of [43].

Figure 5: Lower bound on the mixing angles of sterile neutrinos for different mixing patterns: mixing with ν_e only (red dashed line), ν_µ only (green dash-dotted line), and equal mixing with the ν_e and ν_µ flavours (black solid line). Both types of bounds are derived assuming the lifetime of the neutron τ_n from [27]. In the right panel, only the masses M_s > 40 MeV are presented. For details, see Sec. 3 and Fig. 3.

... injection rate. In addition, the neutrino oscillations (fast at the BBN epoch) make the difference between flavours less pronounced (see Appendix C). As a result, the mixing patterns give essentially the same results, with differences at the level of tens of per cent (see Figs. 4, 5).

Figure 6: Comparison with the previous results of [15, 16].

Figure 7: Experimental 3σ lower bounds on the lifetime of sterile neutrinos [55] (solid line), combined with the upper bounds from this work (dashed line), corresponding to the weakest bound in Fig. 2a. The accelerator bounds are for two Majorana sterile neutrinos solely responsible for neutrino oscillations. Left: normal hierarchy; right: inverted hierarchy. The combination of BBN bounds with direct experimental searches demonstrates that sterile neutrinos with masses in the 1-140 MeV range, solely responsible for neutrino oscillations, are ruled out. See Secs. 3, 4 for details.
Figure 8: Comparison of direct accelerator constraints and BBN bounds, based on the Helium-4 measurements of [1], in the model where sterile neutrinos mix with ν_τ only. Unlike the case presented in Fig. 7, there is an allowed region of parameter space for most of the masses below 140 MeV.

Acknowledgments

We would like to thank A. Boyarsky, D. Gorbunov, S. Hansen, D. Semikoz, M. Shaposhnikov for valuable discussions and for help and encouragement during various stages of this project. We especially thank D. Semikoz for sharing with us the original version of the code used in [15, 16, 24, 19] and J. Racle for writing an initial version of the BBN code as a part of his Master's project [56] at EPFL. A.I. is also grateful to S. Vilchynskiy, to the Scientific and Educational Centre of the Bogolyubov Institute for Theoretical Physics in Kiev, Ukraine,¹³ and to the Ukrainian Virtual Roentgen and Gamma-Ray Observatory VIRGO.UA.¹⁴ The work of A.I. was supported in part by the Swiss-Eastern European cooperation project (SCOPES) No. IZ73Z0 128040 of the Swiss National Science Foundation. A.I. acknowledges support from the ERC Advanced Grant 2008109304.

Figure 9: T/T_ν as a function of the inverse temperature T⁻¹. The solid line is produced by the code of the present work; the dashed line is the result of [25].

Figure 10: Relative distortions of the neutrino spectra before the onset of BBN. Left: neutrino flavour oscillations are neglected; right: the oscillations are taken into account, with the parameter choice θ₁₃ = 0, sin² θ₂₃ = 0.5, sin² θ₁₂ = 0.3 used in [57].

Figure 12: Ratio ρ_s / (n_s M_s) as a function of the scale factor for a M_s = 33.9 MeV sterile neutrino. The upper curve is the result of Ref. [15]; the lower curve is the present work.

Figure 14: Effective number of neutrino species N_eff depending on the decay width of heavy non-relativistic particles. Comparison of the results of this work and Refs.

Figure 15: Left: Evolution of aT for the model of Sec. A.6. Right: ρ_s / ρ_SM.
We consider three parameter sets: sterile neutrino mass M_s = 580 MeV with lifetime τ = 1 s; M_s = 1030 MeV with τ = 0.1 s; M_s = 100 MeV with τ = 0.5 s. The solid line depicts the numerical result of this work, the dashed line the semi-analytical calculation.

Table 1: Values of the Helium abundance Y_p in the Standard Model BBN (SBBN) and their dependence on the neutron lifetime, τ_n. ...original results of the other papers.

Table 2: Squared matrix elements for weak processes involving active species only.

Footnotes:
This number corresponds to g_s = 2 additional chiral singlets (i.e. "neutrino-like" species). The actual number of degrees of freedom is of course twice larger, dof = 2 × g_s = 4, because every chiral fermion has 2 different helicity states. The expression (1) is for a Majorana particle; for a Dirac particle the lifetime would be twice larger.
3. Here the systematic error is due to the different values of the neutron lifetime between the average value from the Particle Data Group [27] and the recent measurement of [28].
Muons may appear in the plasma from the decays of sterile neutrinos with M_s > 106 MeV; see Sec. 2.4 for details.
The exact "freeze-out" temperature depends on the mixing angle.
6. For previous studies of the BBN outcomes with a large lepton asymmetry present, see e.g. [29, 30, 31, 17, 32].
For example, in the νMSM model at early times (T ≫ 100 GeV) the initial densities of sterile neutrinos are negligible [34]. Then the neutrinos come into equilibrium at a temperature T₊ (typically T₊ = 10÷100 GeV) and freeze out at temperatures T₋ ∼ 0.5−5 GeV [12].
We denote by Y_p the mass fraction of ⁴He, that is, the fraction of the total baryon mass stored in the form of Helium-4.
For an accurate account of these corrections, see e.g. [4, 38, 39, 40].
10. We add the systematic errors linearly.
We use the average value over metallicities, Y_p (Eq. (8.2) of [43]), and keep the systematic error from [1].
12. A study of [45], based on an independent dataset, provides the value Y_p = 0.2477 ± 0.0029.
Its upper bound becomes very close to that of (15) if one employs an additional systematic uncertainty at the level ∆Y_syst = 0.010 (twice the value of the systematic uncertainty of [1]).

¹³ http://sec.bitp.kiev.ua
¹⁴ http://virgo.org.ua

Ref. [57] takes into account both the effects of neutrino oscillations and QED corrections; the latter changes the result significantly. As a result we could not compare the effect of the neutrino oscillations only.

References

[1] Y. Izotov and T. Thuan, The primordial abundance of 4He: evidence for non-standard big bang nucleosynthesis, Astrophys.J. 710 (2010) L67-L71 [1001.4440].
[2] E. Komatsu et al., Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation, ApJS 192 (Feb., 2011) 18 [1001.4538].
[3] R. Alpher, H. Bethe and G. Gamow, The origin of chemical elements, Phys.Rev. 73 (1948) 803-804.
[4] F. Iocco, G. Mangano, G. Miele, O. Pisanti and P. D. Serpico, Primordial Nucleosynthesis: from precision cosmology to fundamental physics, Phys. Rept. 472 (2009) 1-76 [0809.0631].
[5] G. Steigman, Primordial Nucleosynthesis in the Precision Cosmology Era, Ann. Rev. Nucl. Part. Sci. 57 (2007) 463-491 [0712.1100].
[6] M. Pospelov and J. Pradler, Big Bang Nucleosynthesis as a Probe of New Physics, Ann. Rev. Nucl. Part. Sci. 60 (2010) 539-568 [1011.1054].
[7] A. Boyarsky, O. Ruchayskiy and M. Shaposhnikov, The role of sterile neutrinos in cosmology and astrophysics, Ann. Rev. Nucl. Part. Sci. 59 (2009) 191 [0901.0011].
[8] A. Kusenko, Sterile neutrinos: the dark side of the light fermions, Phys. Rept. 481 (2009) 1-28 [0906.2968].
[9] E. K. Akhmedov, V. A. Rubakov and A. Y. Smirnov, Baryogenesis via neutrino oscillations, Phys. Rev. Lett. 81 (1998) 1359-1362 [hep-ph/9803255].
[10] T. Asaka, S. Blanchet and M. Shaposhnikov, The nuMSM, dark matter and neutrino masses, Phys. Lett. B631 (2005) 151-156 [hep-ph/0503065].
[11] T. Asaka and M. Shaposhnikov, The nuMSM, dark matter and baryon asymmetry of the universe, Phys. Lett. B620 (July, 2005) 17-26 [arXiv:hep-ph/0505013].
[12] M. Shaposhnikov, The nuMSM, leptonic asymmetries, and properties of singlet fermions, JHEP 08 (2008) 008 [0804.4542].
[13] L. Canetti and M. Shaposhnikov, Baryon Asymmetry of the Universe in the NuMSM, JCAP 1009 (2010) 001 [1006.0133].
[14] M. Laine and M. Shaposhnikov, Sterile neutrino dark matter as a consequence of νMSM-induced lepton asymmetry, JCAP 6 (June, 2008) 31 [arXiv:0804.4543].
[15] A. D. Dolgov, S. H. Hansen, G. Raffelt and D. V. Semikoz, Cosmological and astrophysical bounds on a heavy sterile neutrino and the KARMEN anomaly, Nucl. Phys. B580 (2000) 331-351 [hep-ph/0002223].
[16] A. D. Dolgov, S. H. Hansen, G. Raffelt and D. V. Semikoz, Heavy sterile neutrinos: Bounds from big-bang nucleosynthesis and SN 1987A, Nucl. Phys. B590 (2000) 562-574 [hep-ph/0008138].
[17] C. J. Smith, G. M. Fuller and M. S. Smith, Big Bang Nucleosynthesis with Independent Neutrino Distribution Functions, Phys.Rev. D79 (2009) 105001 [0812.1253].
[18] G. M. Fuller, C. T. Kishimoto and A. Kusenko, Heavy sterile neutrinos, entropy and relativistic energy production, and the relic neutrino background, 1110.6479.
[19] A. Dolgov, S. Hansen and D.
Semikoz, Impact of massive tau neutrinos on primordial nucleosynthesis. Exact calculations, Nucl.Phys. B524 (1998) 621-638 [hep-ph/9712284].
[20] A. Dolgov and D. Kirilova, Nonequilibrium decays of light particles and the primordial nucleosynthesis, Int.J.Mod.Phys. A3 (1988) 267.
[21] M. Kawasaki, P. Kernan, H.-S. Kang, R. J. Scherrer, G. Steigman et al., Big bang nucleosynthesis constraints on the tau-neutrino mass, Nucl.Phys. B419 (1994) 105-128.
[22] M. Kawasaki, K. Kohri and N. Sugiyama, MeV scale reheating temperature and thermalization of neutrino background, Phys.Rev. D62 (2000) 023506 [astro-ph/0002127].
[23] S. Hannestad, What is the lowest possible reheating temperature?, Phys. Rev. D70 (2004) 043506 [astro-ph/0403291].
[24] A. D. Dolgov, S. H. Hansen and D. V. Semikoz, Nonequilibrium corrections to the spectra of massless neutrinos in the early universe. (Addendum), Nucl. Phys. B543 (1999) 269-274 [hep-ph/9805467].
[25] A. D. Dolgov, S. H. Hansen and D. V. Semikoz, Non-equilibrium corrections to the spectra of massless neutrinos in the early universe, Nucl. Phys. B503 (1997) 426-444 [hep-ph/9703315].
[26] D. Gorbunov and M. Shaposhnikov, How to find neutral leptons of the nuMSM?, JHEP 10 (2007) 015 [arXiv:0705.1729 [hep-ph]].
[27] Particle Data Group Collaboration, K. Nakamura et al., Review of particle physics, J.Phys.G G37 (2010) 075021.
[28] A. Serebrov and A. Fomin, Neutron lifetime from a new evaluation of ultracold neutron storage experiments, Phys.Rev. C82 (2010) 035501 [1005.4312].
[29] J. Lesgourgues and S. Pastor, Cosmological implications of a relic neutrino asymmetry, Phys. Rev. D 60 (Nov., 1999) 103521 [hep-ph/9904411].
[30] P. D. Serpico and G. G. Raffelt, Lepton asymmetry and primordial nucleosynthesis in the era of precision cosmology, Phys. Rev. D71 (2005) 127301 [astro-ph/0506162].
[31] C. J. Smith, G. M. Fuller, C. T. Kishimoto and K. N. Abazajian, Light Element Signatures of Sterile Neutrinos and Cosmological Lepton Numbers, Phys.Rev. D74 (2006) 085008 [astro-ph/0608377].
[32] G. Mangano, G. Miele, S. Pastor, O. Pisanti and S. Sarikas, Constraining the cosmic radiation density due to lepton number with Big Bang Nucleosynthesis, JCAP 1103 (2011) 035 [1011.0916].
[33] E. Kolb and M. Turner, The Early Universe.
Addison-Wesley, Reading, MA, USA, 1990.
[34] F. Bezrukov, D. Gorbunov and M. Shaposhnikov, On initial conditions for the Hot Big Bang, JCAP 0906 (2009) 029 [0812.3622].
[35] L. Kawano, Let's Go: Early Universe. Guide to Primordial Nucleosynthesis Programming.
[36] L. Kawano, Let's go: Early universe. 2. Primordial nucleosynthesis: The Computer way.
[37] P. J. Kernan and L. M. Krauss, Refined big bang nucleosynthesis constraints on Omega (baryon) and N (neutrino), Phys.Rev.Lett. 72 (1994) 3309-3312 [astro-ph/9402010].
[38] R. N. Boyd, C. R. Brune, G. M. Fuller and C. J. Smith, New Nuclear Physics for Big Bang Nucleosynthesis, Phys.Rev. D82 (2010) 105005 [1008.0848].
[39] G. M. Fuller and C. J. Smith, Nuclear weak interaction rates in primordial nucleosynthesis, Phys.Rev. D82 (2010) 125017 [1009.0277].
[40] A. Coc, S. Goriely, Y. Xu, M. Saimpert and E. Vangioni, Standard Big-Bang Nucleosynthesis up to CNO with an improved extended nuclear network, Astrophys.J. 744 (2012) 158 [1107.1117].
[41] S. Sarkar,
Big bang nucleosynthesis and physics beyond the standard model, Rept.Prog.Phys. 59 (1996) 1493-1610 [hep-ph/9602260]. Dedicated to Dennis Sciama on his 67th birthday.
[42] O. Pisanti, A. Cirillo, S. Esposito, F. Iocco, G. Mangano et al., PArthENoPE: Public Algorithm Evaluating the Nucleosynthesis of Primordial Elements, Comput.Phys.Commun. 178 (2008) 956-971 [0705.0290].
[43] E. Aver, K. A. Olive and E. D. Skillman, An MCMC determination of the primordial helium abundance, 1112.3713.
[44] G. Mangano and P. D. Serpico, A robust upper limit on N_eff from BBN, circa 2011, Phys.Lett. B701 (2011) 296-299 [1103.1261].
[45] M. Peimbert, V. Luridiana and A. Peimbert, Revised Primordial Helium Abundance Based on New Atomic Data, Astrophys.J. 666 (2007) 636-646 [astro-ph/0701580].
[46] J. Dunkley, R. Hlozek, J. Sievers, V. Acquaviva, P. Ade et al., The Atacama Cosmology Telescope: Cosmological Parameters from the 2008 Power Spectra, Astrophys.J. 739 (2011) 52 [1009.0866].
[47] R. Keisler, C. Reichardt, K. Aird, B. Benson, L. Bleem et
al., A Measurement of the Damping Tail of the Cosmic Microwave Background Power Spectrum with the South Pole Telescope, Astrophys.J. 743 (2011) 28 [1105.3182].
[48] G. Gelmini, S. Palomares-Ruiz and S. Pascoli, Low reheating temperature and the visible sterile neutrino, Phys. Rev. Lett. 93 (2004) 081302 [astro-ph/0403323].
[49] G. B. Gelmini, E. Osoba and S. Palomares-Ruiz, Inert-Sterile Neutrino: Cold or Warm Dark Matter Candidate, 0912.2478.
[50] G. M. Fuller, A. Kusenko and K. Petraki, Heavy sterile neutrinos and supernova explosions, Phys.Lett. B670 (2009) 281-284 [0806.4273].
[51] B. Benson, T. de Haan, J. Dudley, C. Reichardt, K. Aird et al., Cosmological Constraints from Sunyaev-Zel'dovich-Selected Clusters with X-ray Observations in the First 178 Square Degrees of the South Pole Telescope Survey, 1112.5435.
[52] M. Moresco, L. Verde, L. Pozzetti, R. Jimenez and A. Cimatti, New constraints on cosmological parameters and neutrino properties using the expansion rate of the Universe to z 1.75, 1201.6658.
[53] T. Asaka, M. Shaposhnikov and A. Kusenko, Opening a new window for warm dark matter, Phys. Lett. B638 (2006) 401-406 [hep-ph/0602150].
[54] T. Asaka, S. Eijima and H. Ishida, Mixing of Active and Sterile Neutrinos, JHEP 1104 (2011) 011 [1101.1382].
[55] O. Ruchayskiy and A. Ivashko, Experimental bounds on sterile neutrino mixing angles, 1112.3319.
[56] J. Racle, Deriving bounds on interactions of nuMSM sterile neutrinos using primordial nucleosynthesis, Master's thesis, EPFL, 2008.
[57] G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti et al., Relic neutrino decoupling including flavor oscillations, Nucl.Phys. B729 (2005) 221-234 [hep-ph/0506164].
[58] A. Strumia and F. Vissani, Neutrino masses and mixings and..., hep-ph/0606054.
[59] A. Dolgov, S. Hansen, S. Pastor, S. Petcov, G. Raffelt et al., Cosmological bounds on neutrino degeneracy improved by flavor oscillations, Nucl.Phys. B632 (2002) 363-382 [hep-ph/0201287].
[60] A. Dolgov and F. Villante, BBN bounds on active sterile neutrino mixing, Nucl.Phys. B679 (2004) 261-298 [hep-ph/0308083].
[61] D. Kirilova, Non-equilibrium neutrino in the early universe plasma, AIP Conf.Proc. 1121 (2009) 83-89.
T Schwetz, M Tortola, J Valle, New J.Phys. 131094011108.1376T. Schwetz, M. Tortola and J. Valle, Where we are on θ 13 : addendum to 'Global neutrino data and recent reactor fluxes: status of three-flavour oscillation parameters', New J.Phys. 13 (2011) 109401 [1108.1376]. Observation of electron-antineutrino disappearance at Daya Bay. F An, DAYA-BAY Collaboration Collaboration1203.1669DAYA-BAY Collaboration Collaboration, F. An et. al., Observation of electron-antineutrino disappearance at Daya Bay, 1203.1669. Observation of Reactor Electron Antineutrino Disappearance in the RENO Experiment. J Ahn, RENO collaboration Collaboration1204.0626RENO collaboration Collaboration, J. Ahn et. al., Observation of Reactor Electron Antineutrino Disappearance in the RENO Experiment, 1204.0626. Neutrino Dispersion at Finite Temperature and Density. D Notzold, G Raffelt, Nucl. Phys. 307924D. Notzold and G. Raffelt, Neutrino Dispersion at Finite Temperature and Density, Nucl. Phys. B307 (1988) 924.
[]
[ "Stellar substructures in the solar neighbourhood I. Kinematic group 3 in the Geneva-Copenhagen survey", "Stellar substructures in the solar neighbourhood I. Kinematic group 3 in the Geneva-Copenhagen survey" ]
[ "E Stonkutė [email protected] \nInstitute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania\n", "G Tautvaišienė [email protected] \nInstitute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania\n", "B Nordström \nNiels Bohr Institute\nCopenhagen University\nJuliane Maries Vej 30, DK-2100CopenhagenDenmark\n", "R Ženovienė \nInstitute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania\n" ]
[ "Institute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania", "Institute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania", "Niels Bohr Institute\nCopenhagen University\nJuliane Maries Vej 30, DK-2100CopenhagenDenmark", "Institute of Theoretical Physics and Astronomy (ITPA)\nVilnius University\nA. Gostauto 12LT-01108VilniusLithuania" ]
[]
Context. Galactic Archeology is a powerful tool for investigating the formation and evolution of the Milky Way. We use this technique to study kinematic groups of F-and G-stars in the solar neighbourhood. From correlations between orbital parameters, three new coherent groups of stars were recently identified and suggested to correspond to remnants of disrupted satellites. Aims. We determine detailed elemental abundances in stars belonging to one of these groups and compare their chemical composition with Galactic disc stars. The aim is to look for possible chemical signatures that might give information about the history of this kinematic group of stars. Methods. High-resolution spectra were obtained with the FIES spectrograph at the Nordic Optical Telescope, La Palma, and analysed with a differential model atmosphere method. Comparison stars were observed and analysed with the same method.Results. The average value of [Fe/H] for the 20 stars investigated in this study is −0.69±0.05 dex. Elemental abundances of oxygen and α-elements are overabundant in comparison with Galactic thin-disc dwarfs and thin-disc chemical evolution models. This abundance pattern has similar characteristics as the Galactic thick-disc. Conclusions. The homogeneous chemical composition together with the kinematic properties and ages of stars in the investigated Group 3 of the Geneva-Copenhagen survey provides evidence of their common origin and possible relation to an ancient merging event. The similar chemical composition of stars in the investigated group and the thick-disc stars might suggest that their formation histories are linked.
10.1051/0004-6361/201118760
[ "https://arxiv.org/pdf/1203.6199v1.pdf" ]
119,290,928
1203.6199
0ae262f73781412011c34d61e5cf95f0d9d17492
Stellar substructures in the solar neighbourhood I. Kinematic group 3 in the Geneva-Copenhagen survey

28 Mar 2012. Received December 29, 2011; accepted March 8, 2012. Astronomy & Astrophysics manuscript no. Stonkute, c ESO 2014.
Key words: stars: abundances - Galaxy: disc - Galaxy: formation - Galaxy: evolution

Introduction

The history of our home Galaxy is complex and not fully understood. Observations and theoretical simulations have made much progress and provided us with tools to search for past accretion events in the Milky Way and beyond. The well-known current events are the Sagittarius (Ibata et al. 1994), Canis Major (Martin et al. 2004), and Segue 2 (Belokurov et al. 2009) dwarf spheroidal galaxies, merging into the Galactic disc at various distances. The Monoceros stream (Yanny et al. 2003; Ibata et al. 2003) and the Orphan stream (Belokurov et al. 2006) are, according to some studies, interpreted as tidal debris from the Canis Major and Ursa Major II dwarf galaxies, respectively (see Peñarrubia et al. 2005; Fellhauer et al. 2007; and the review of Helmi 2008). Accreted substructures are found also in other galaxies, such as the Andromeda galaxy (Ibata et al. 2001; McConnachie et al. 2009), NGC 5907 (Martínez-Delgado et al. 2008), and NGC 4013 (Martínez-Delgado et al. 2009). Helmi et al. (2006) used a homogeneous data set of about 13,240 F- and G-type stars from the Nordström et al. (2004) catalogue, which has complete kinematic, metallicity, and age parameters, to search for signatures of past accretions in the Milky Way.
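The orbital eccentricities used throughout to characterise the groups follow directly from the peri- and apocentre distances, e = (Rmax − Rmin)/(Rmax + Rmin). A minimal Python sketch (not part of the original analysis), checked against the values listed for HD 967 in Table 1:

```python
def eccentricity(r_min, r_max):
    """Orbital eccentricity from pericentre and apocentre distances (kpc)."""
    return (r_max - r_min) / (r_max + r_min)

# HD 967 as listed in Table 1: R_min = 4.09 kpc, R_max = 8.29 kpc, e = 0.34.
e_hd967 = eccentricity(4.09, 8.29)
print(round(e_hd967, 2))  # 0.34
```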
From correlations between orbital parameters, such as apocentre (A), pericentre (P), and z-angular momentum (L z), the so-called APL space, Helmi et al. identified three new coherent groups of stars and suggested that those might correspond to remains of disrupted satellites. In the U-V plane, the investigated stars are distributed in a banana shape, whereas the disc stars define a centrally concentrated clump (Fig. 1). At the same time, in the U-W plane the investigated stars populate mostly the outskirts of the distributions. Both the U and W distributions are very symmetric. The investigated stars have a lower mean rotational velocity in comparison to the Milky Way disc stars, as we can see in the W-V plane. These characteristics are typical for stars associated with accreted satellite galaxies (Helmi 2008; Villalobos & Helmi 2009). Stars in the identified groups not only cluster around regions of roughly constant eccentricity (0.3 ≤ ǫ < 0.5) and have distinct kinematics, but also have distinct metallicities [Fe/H] and age distributions. One of the parameters according to which the stars were divided into three groups was metallicity. Group 3, which we investigate in this work, is the most metal-deficient and consists of 68 stars. According to the Nordström et al. (2004) catalogue, its mean photometric metallicity, [Fe/H], is about −0.8 dex and the age is about 14 Gyr. Group 3 also differs from the other two groups by slightly different kinematics, particularly in the vertical (z) direction. Holmberg et al. (2009) updated and improved the parameters for the stars in the Nordström et al. (2004) catalogue, and we use those values throughout. In Fig. 1 we show the Galactic disc stars from Holmberg et al. (2009). Stars belonging to Group 3 in Helmi et al. are marked with open and filled circles (the latter are used to mark stars investigated in our work). Evidently, stars belonging to Group 3 have a different distribution in velocity space in comparison to other stars of the Galactic disc. In Fig. 2, the stars are shown in the APL space. From high-resolution spectra we have measured abundances of iron-group and α-elements in 21 stars belonging to Group 3 to check the homogeneity of their chemical composition and compare them with Galactic disc stars. The α-element-to-iron ratios are very sensitive indicators of galactic evolution (Pagel & Tautvaišienė 1995; Fuhrmann 1998; Reddy et al. 2006; Tautvaišienė et al. 2007; Tolstoy et al. 2009 and references therein). If stars have been formed in different environments, they normally have different α-element-to-iron ratios for a given metallicity.

Observations and method of analysis

Spectra of high resolving power (R ≈ 68 000) in the wavelength range of 3680-7270 Å were obtained at the Nordic Optical Telescope with the FIES spectrograph during July 2008. Twenty-one programme and six comparison stars (thin-disc dwarfs) were observed. A list of the observed stars and some of their parameters (taken from the Holmberg et al. 2009 catalogue and Simbad) is presented in Table 1. All spectra were exposed to reach a signal-to-noise ratio higher than 100. Reductions of CCD images were made with the FIES pipeline FIEStool, which performs a complete reduction: calculation of reference frame, bias and scattering subtraction, flat-field dividing, wavelength calibration and other procedures (http://www.not.iac.es/instruments/fies/fiestool). Several examples of stellar spectra are presented in Fig. 3. The spectra were analysed using a differential model atmosphere technique. The Eqwidth and Spectrum program packages, developed at the Uppsala Astronomical Observatory, were used to carry out the calculation of abundances from measured equivalent widths and synthetic spectra, respectively. A set of plane-parallel, line-blanketed, constant-flux LTE model atmospheres (Gustafsson et al.
2008) were taken from the MARCS stellar model atmosphere and flux library (http://marcs.astro.uu.se/). The Vienna Atomic Line Data Base (VALD, Piskunov et al. 1995) was extensively used in preparing input data for the calculations. Atomic oscillator strengths for the main spectral lines analysed in this study were taken from an inverse solar spectrum analysis performed in Kiev (Gurtovenko & Kostyk 1989). All lines used for calculations were carefully selected to avoid blending. All line profiles in all spectra were hand-checked, requiring that the line profiles be sufficiently clean to provide reliable equivalent widths. The equivalent widths of the lines were measured by fitting of a Gaussian profile using the 4A software package (Ilyin 2000). Initial values of the effective temperatures for the programme stars were taken from Holmberg et al. (2009) and then carefully checked and corrected if needed by forcing Fe i lines to yield no dependency of iron abundance on excitation potential by changing the model effective temperature. For four stars our effective temperature is +100 to +200 K higher than in the catalogue. We used the ionization equilibrium method to find surface gravities of the programme stars by forcing neutral and ionized iron lines to yield the same iron abundances. Microturbulence velocity values corresponding to the minimal line-to-line Fe i abundance scattering were chosen as correct values. Using the g f values and solar equivalent widths of analysed lines from Gurtovenko & Kostyk (1989) we obtained the solar abundances, used later for the differential determination of abundances in the programme stars. We used the solar model atmosphere from the set calculated in Uppsala with a microturbulent velocity of 0.8 km s −1 , as derived from Fe i lines. In addition to thermal and microturbulent Doppler broadening of lines, atomic line broadening by radiation damping and van der Waals damping were considered in the calculation of abundances. 
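The equivalent width of a line fitted with a Gaussian profile has a simple closed form: for a line of central depth d and Gaussian width σ (in Å), W = d·σ·√(2π). A short numerical cross-check of that identity (the line parameters below are hypothetical, not measurements from this paper):

```python
import math

# Hypothetical Gaussian absorption line: central depth d, width sigma (Å),
# centred near the [O i] 6300.31 Å region for illustration only.
d, sigma, lam0 = 0.40, 0.05, 6300.31

def residual_intensity(lam):
    """1 - F/Fc for a single Gaussian absorption line."""
    return d * math.exp(-0.5 * ((lam - lam0) / sigma) ** 2)

# Equivalent width W = integral of (1 - F/Fc) dlambda, by the trapezoidal rule.
n, half = 4000, 1.0  # integrate over +/- 1 Å around the line centre
h = 2 * half / n
grid = [lam0 - half + i * h for i in range(n + 1)]
w_numeric = h * (sum(residual_intensity(x) for x in grid)
                 - 0.5 * (residual_intensity(grid[0]) + residual_intensity(grid[-1])))

w_analytic = d * sigma * math.sqrt(2 * math.pi)  # ~0.050 Å, i.e. ~50 mÅ
print(round(w_numeric * 1000, 1), round(w_analytic * 1000, 1))
```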
Radiation damping parameters of lines were taken from the VALD database. In most cases the hydrogen pressure damping of metal lines was treated using the modern quantum mechanical calculations by Anstee & O'Mara (1995), Barklem & O'Mara (1997), and Barklem et al. (1998). When using the Unsöld (1955) approximation, correction factors to the classical van der Waals damping approximation by widths (Γ6) were taken from Simmons & Blackwell (1982). For all other species a correction factor of 2.5 was applied to the classical Γ6 (∆logC6 = +1.0), following Mäckle et al. (1975). For lines stronger than W = 100 mÅ the correction factors were selected individually by inspection of the solar spectrum. The oxygen abundance was determined from the forbidden [O i] line at 6300.31 Å (Fig. 4). The oscillator strength values for 58Ni and 60Ni, which blend the oxygen line, were taken from Johansson et al. (2003). The [O i] log gf = −9.917 value was calibrated by fitting to the solar spectrum (Kurucz 2005) with log A⊙ = 8.83 taken from Grevesse & Sauval (2000). Stellar rotation was taken into account if needed, with v sin i values from Holmberg et al. (2007). The oxygen abundance was not determined for every star, due to blending by telluric lines or the weakness of the oxygen line profile. Abundances of other chemical elements were determined using equivalent widths of their lines. Abundances of Na and Mg were determined with non-local thermodynamical equilibrium (NLTE) effects taken into account, as described by Gratton et al. (1999). The calculated corrections did not exceed 0.04 dex for Na i and 0.06 dex for Mg i lines. Abundances of sodium were determined from equivalent widths of the Na i lines at 5148.8, 5682.6, 6154.2, and 6160.8 Å; magnesium from the Mg i lines at 4730.0, 5711.1, 6318.7, and 6319.2 Å; and aluminium from the Al i lines at 6696.0, 6698.6, 7084.6, and 7362.2 Å.
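Per-line abundances are averaged after any NLTE correction, and the line-to-line scatter σ is quoted alongside the mean (Table 3 lists σ and the number of lines n for each element). A toy example with invented per-line values — only the ≤ 0.04 dex bound on the Na i corrections comes from the paper:

```python
import statistics

# Invented per-line log abundances for a Na I line set (illustration only).
line_abundances = [6.31, 6.27, 6.34, 6.30]
nlte_corrections = [-0.03, -0.02, -0.04, -0.03]  # each within the 0.04 dex bound

# Apply the per-line NLTE shifts, then average and quote the scatter.
corrected = [a + c for a, c in zip(line_abundances, nlte_corrections)]
mean_abundance = statistics.mean(corrected)
scatter = statistics.stdev(corrected)  # line-to-line sigma, cf. mean sigma ~0.05 dex
print(f"{mean_abundance:.3f} +/- {scatter:.3f}")
```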
Estimation of uncertainties The uncertainties in abundances are due to several sources: uncertainties caused by analysis of individual lines, including random errors of atomic data and continuum placement and uncertainties in the stellar parameters. The sensitivity of the abundance estimates to changes in the atmospheric parameters by the assumed errors ∆[El/H] are illustrated for the star HD 224930 (Table 2). Clearly, possible parameter errors do not affect the abundances seriously; the element-to-iron ratios, which we use in our discussion, are even less sensitive. The scatter of the deduced abundances from different spectral lines σ gives an estimate of the uncertainty due to the random errors. The mean value of σ is 0.05 dex, thus the uncer- Results and discussion The atmospheric parameters T eff , log g, v t , [Fe/H] and abundances of 12 chemical elements relative to iron [El/Fe] of the programme and comparison stars are presented in Table 3. The number of lines and the line-to-line scatter (σ) are presented as well. Comparison with previous studies Some stars from our sample have been previously investigated by other authors. In Table 4 we present a comparison with the results by Nissen & Schuster (2010), Reddy et al. (2006), and with Ramírez et al. (2007). Ramírez et al. determined only the main atmospheric parameters. The thin-disc stars we have investigated in our work for a comparison have been analysed previously by Edvardsson et al. (1993) and by Thévenin & Idiart (1999). In Table 5 we present a comparison with the results obtained by these authors. Our [El/Fe] for the stars in common agree very well with other studies. Slight differences in the log g values lie within errors of uncertainties and are caused mainly by differ- Nissen & Schuster (2010), 7 stars in common with Reddy et al. (2006), and 10 stars in common with Ramírez et al. (2007). ences in determination methods applied. 
In our work we see that titanium abundances determined using Tii and Tiii lines agree well and confirm the log g values determined using iron lines. Notes. Mean differences and standard deviations of the main parameters and abundance ratios [El/Fe] for 4 stars of Group 3 that are in common with Effective temperatures for all stars investigated here are also available in Holmberg et al. (2009) and Casagrande et al. (2011). Casagrande et al. provide astrophysical parameters for the Geneva-Copenhagen survey by applying the infrared flux method for the effective temperature determination. In compar- Casagrande et al. (2011). A comparison between Holmberg et al. and Casagrande et al. shows that the latter gives [Fe/H] values that are on average by 0.1 dex more metal-rich. For our programme stars we obtain a difference of 0.1 ± 0.1 dex in comparison with Holmberg et al., and no systematic difference but a scatter of 0.1 dex in comparison with Casagrande et al. Comparison with the thin-and thick-disc dwarfs The metallicities and ages of all programme stars except one (BD +35 3659) are quite homogeneous: [Fe/H] = −0.69 ± 0.05 dex and the average age is about 12 ± 2 Gyrs. However, the ages, which we took from Holmberg et al. (2009), were not determined for every star. BD +35 3659 is much younger (0.9 Gyr), has [Fe/H] = −1.45, its eccentricity, velocities, distance, and other parameters differ as well (see Fig. 5 and 6). We doubt its membership of Group 3. The next step was to compare the determined abundances with those in the thin-disc dwarfs. In Fig. 7 we present these comparisons with data taken from Edvardsson et al. (1993), Bensby et al. (2005), Reddy et al. (2006), Zhang & Zhao (2006), and with the chemical evolution model by Pagel & Tautvaišienė (1995). The thin-disc stars from Edvardsson et al. and Zhang & Zhao were selected by using the membership probability evaluation method described by Trevisan et al. 
(2011), since their lists contained stars of other Galactic components as well. The same kinematical approach in assigning thin-disc membership was used in Bensby et al. (2005) and Reddy et al. (2006), so the thin-disc stars used for the comparison are uniform in that respect. In Fig. 7 we see that the abundances of α-elements in the investigated stars are overabundant compared with the Galactic thin-disc dwarfs. A similar overabundance of αelements is exhibited by the thick-disc stars (Fuhrmann 1998;Prochaska et al. 2000;Tautvaišienė et al. 2001;Bensby et al. 2005;Reddy & Lambert 2008; and references therein). Helmi et al. (2006), based on the isochrone fitting, have suggested that stars in the identified kinematic groups might be α-rich. Our spectroscopic results qualitatively agree with this. However, based on metallicities and vertical velocities, Group 3 cannot be uniquely associated to a single traditional Galactic component (Helmi et al. 2006). What does the similarity of α-element abundances in the thick-disc and the investigated kinematic group mean? It would be easier to answer this question if the origin of the thick disc of the Galaxy was known (see van der Kruit & Freeman 2011 for a review). There are several competing models that aim to explain the nature of a thick disc. Stars may have appeared at the thick disc through (1) orbital migration because of heating of a pre-existing thin disc by a varying gravitational potential in the thin disc (e.g. Roškar et al. 2008;Schönrich & Binney 2009); (2) heating of a pre-existing thin disc by minor mergers (e.g. Kazantzidis et al. 2008;Villalobos & Helmi 2008); (3) accretion of disrupted satellites (e.g. Abadi et al. 2003), or (4) gas-rich satellite mergers when thick-disc stars form before the gas completely settles into a thin-disc (see Brook et al. 2004Brook et al. , 2005. Dierickx et al. 
(2010) analysed the eccentricity distribution of thick-disc stars that has recently been proposed as a diagnostic to differentiate between these mechanisms (Sales et al. 2009). Using SDSS data release 7, they have assembled a sample of 31.535 G-dwarfs with six-dimensional phase-space information and metallicities and have derived their orbital eccentricities. They found that the observed eccentricity distribution is inconsistent with that predicted by orbital migration only. Also, the thick disc cannot be produced predominantly through heating of a pre-existing thin disc, since this model predicts more higheccentricity stars than observed. According to Dierickx et al., the observed eccentricity distribution fits well with a gas-rich merger scenario, where most thick-disc stars were born in situ. In the gas-rich satellite merger scenario, a distribution of stellar eccentricities peak around e = 0.25, with a tail towards higher values belonging mostly to stars originally formed in satellite galaxies. The group of stars investigated in our work fits this model with a mean eccentricity value of 0.4. This scenario is also supported by the RAVE survey data analysis made by Wilson et al. (2011) and the numerical simulations by Di Matteo et al. (2011). In this scenario, Group 3 can be explained as a remnant from stars originally formed in a merging satellite. Conclusions We measured abundances of iron group and α-elements from high-resolution spectra in 21 stars belonging to Group 3 of the Geneva-Copenhagen survey. This kinematically identified group of stars was suspected to be a remnant of a disrupted satellite galaxy. Our main goal was to investigate the chemical composition of the stars within the group and to compare them with Galactic disc stars. Our study shows that Fig. 7. Comparison of elemental abundance ratios of stars in the investigated stellar group (black points) and data for Milky Way thin-disc dwarfs from Edvardsson et al. (1993, plus signs), Bensby et al. 
(2005, stars), Reddy et al. (2006, squares), Zhang & Zhao (2006, triangles), and Galactic thin disc chemical evolution models by Pagel & Tautvaišienė (1995, solid lines). Results obtained for thin-disc dwarfs analysed in our work are shown by open circles. Average uncertainties are shown in the box for Na. of the Geneva-Copenhagen survey support the scenario of an ancient merging event. The similar chemical composition of stars in Group 3 and the thick-disc stars might suggest that their formation histories are linked. The kinematic properties of our stellar group fit well with a gas-rich satellite merger scenario (Brook et al. 2004(Brook et al. , 2005Dierickx et al. 2010;Wilson et al. 2011;Di Matteo et al. 2011; and references therein). We plan to increase the number of stars and chemical elements investigated in this group, and also to study the chemical composition of stars in other kinematic groups of the Geneva-Copenhagen survey. The identification of such kinematic groups and the exploration of their chemical composition will be a key in understanding the formation and evolution of the Galaxy. Fig. 3 . 3Samples of stellar spectra of several programme stars. An offset of 0.5 in relative flux is applied for clarity. Fig. 4 . 4Fit to the forbidden [O i] line at 6300.3 Å in the programme star HD 204848. The observed spectrum is shown as a solid line with black dots. The synthetic spectra with [O/Fe] = 0.52 ± 0.1 are shown as dashed lines. Fig. 6 . 6Toomre diagram of all stars of Group 3 (circles) and those investigated in this work (filled circles). Dotted lines indicate constant values of total space velocity in steps of 50 km s −1 . 1 . 1All stars in Group 3 except one have a similar metallicity. The average [Fe/H] value of the 20 stars is −0.69 ± 0.05 dex. 2. All programme stars are overabundant in oxygen and αelements compared with Galactic thin-disc dwarfs and the Galactic evolution model used. 
This abundance pattern has similar characteristics as the Galactic thick disc.The homogeneous chemical composition together with the kinematic properties and ages of stars in the investigated Group 3 .Ženovienė: Stellar substructures in the solar neighbourhoodFig. 2. Distribution for the stars in the APL space. Plus signs denote the Holmberg et al. (2009) sample, circles -Group 3, filled circles -investigated stars. Note that the investigated stars as well as all Group 3 stars are distributed in APL space with constant eccentricity.-200 -100 0 100 200 -200 -100 0 100 200 -200 -100 0 100 200 -200 -100 0 100 200 V (km s -1 ) U (km s -1 ) BD +35 3659 BD +35 3659 W (km s -1 ) U (km s -1 ) -200 -100 0 100 200 -200 -100 0 100 200 BD +35 3659 V (km s -1 ) W (km s -1 ) Fig. 1. Velocity distribution for all stars in the Holmberg et al. (2009) sample (plus signs), stars of Group 3 (circles) and the investi- gated stars (filled circles). 800 1000 1200 1400 1600 1800 2000 8 9 10 11 12 13 14 15 apo (kpc) L z (kpc km s -1 ) BD +35 3659 3 4 5 6 7 8 BD +35 3659 peri (kpc) Table 1 . 1Parameters of the programme and comparison stars.Star Sp. type Age M V d U LSR V LSR W LSR e z max R min R max Gyr mag pc km s −1 km s −1 km s −1 kpc kpc kpc HD 967 G5 9.9 5.23 43 -55 -80 0 0.34 0.12 4.09 8.29 HD 17820 G5 11.2 4.45 61 34 -98 -77 0.39 1.62 3.65 8.31 HD 107582 G2V 9.4 5.18 41 -1 -103 -46 0.41 0.76 3.35 8.02 BD +73 566 G0 ... 5.14 67 -52 -86 -18 0.36 0.18 3.89 8.27 BD +19 2646 G0 ... 5.53 74 103 -52 22 0.38 0.64 4.58 10.21 HD 114762 F9V 10.6 4.36 39 -79 -66 57 0.31 1.57 4.64 8.79 HD 117858 G0 11.7 4.02 61 71 -56 -20 0.32 0.21 4.76 9.17 BD +13 2698 F9V 14.2 4.52 93 102 -67 -66 0.40 1.60 4.21 9.92 BD +77 0521 G5 14.5 5.27 68 4 -103 -36 0.42 0.48 3.30 8.05 HD 126512 F9V 11.1 4.01 45 82 -81 -73 0.40 1.69 3.95 9.20 HD 131597 G0 ... 
3.06 119 -133 -98 -43 0.52 0.81 3.10 9.85 BD +67 925 F8 13 4.14 139 -128 -103 -29 0.53 0.43 2.97 9.69 HD 159482 G0V 10.9 4.82 52 -170 -60 89 0.51 3.67 4.04 12.55 HD 170737 G8III-IV ... 2.88 112 -64 -102 -92 0.40 2.61 3.47 8.07 BD +35 3659 F1 0.9 5.32 96 212 -86 -117 0.65 5.50 3.24 15.41 HD 201889 G1V 14.5 4.40 54 -126 -83 -35 0.46 0.56 3.58 9.80 HD 204521 G5 2.1 5.18 26 15 -73 -19 0.29 0.18 4.45 8.11 HD 204848 G0 ... 1.98 122 42 -91 66 0.36 1.77 3.88 8.34 HD 212029 G0 13.1 4.66 59 67 -95 31 0.44 0.77 3.44 8.76 HD 222794 G2V 12.1 3.83 46 -73 -104 83 0.42 3.02 3.43 8.39 HD 224930 G5V 14.7 5.32 12 -9 -75 -34 0.29 0.44 4.42 8.01 HD 17548 F8 6.9 4.46 55 -14 31 32 0.15 0.88 8.02 10.85 HD 150177 F3V 5.7 3.33 40 -7 -23 -24 0.07 0.25 6.95 7.97 HD 159307 F8 5.2 3.33 65 -14 -21 0 0.06 0.11 7.00 7.95 HD 165908 F7V 6.8 4.09 16 -6 1 10 0.03 0.26 7.97 8.45 HD 174912 F8 6.1 4.73 31 -22 8 -43 0.07 0.69 7.90 9.10 HD 207978 F6IV-V 3.3 3.33 27 13 16 -7 0.11 0.00 7.81 9.68 Table 2 . 2Effects on derived abundances resulting from model changes for the star HD 224930.Ion ∆T eff −100 K ∆ log g −0.3 ∆vt −0.3 km s −1 [O i] 0.00 -0.10 -0.01 Na i -0.06 0.01 0.00 Mg i -0.04 0.01 0.01 Al i -0.05 0.00 0.00 Si i -0.01 -0.03 0.01 Ca i -0.07 0.02 0.04 Sc ii -0.01 -0.13 0.02 Ti i -0.10 0.01 0.03 Ti ii -0.01 -0.13 0.03 V i -0.12 0.00 0.00 Cr i -0.09 0.01 0.05 Fe i -0.08 0.00 0.05 Fe ii 0.04 -0.13 0.04 Co i -0.07 -0.02 0.01 Ni i -0.05 -0.01 0.04 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 0.2 0.3 0.4 0.5 0.6 0.7 e [Fe/H] BD +35 3659 Fig. 5. Diagram of orbital eccentricity e vs. [Fe/H] for all stars of Group 3 (circles) and those investigated in this work (filled circles). tainties in the derived abundances that are the result of random errors amount to approximately this value. Table 3 . 
3Main atmospheric parameters and elemental abundances of the programme and comparison stars.Notes.(a) Probably not a member of Group 3.Star T eff log g v t [Fe/H] σ FeI n FeI σ FeII n FeII [O/Fe] [Na/Fe] σ n [Mg/Fe] σ n K km s −1 HD 967 5570 4.3 0.9 -0.62 0.05 38 0.04 7 ... 0.04 0.03 3 0.33 0.04 4 HD 17820 5900 4.2 1.0 -0.57 0.05 29 0.01 6 ... 0.06 0.04 3 0.25 0.06 3 HD 107582 5600 4.2 1.0 -0.62 0.05 32 0.07 5 0.39 0.06 0.04 4 0.30 0.04 3 BD +73 566 5580 3.9 0.9 -0.91 0.05 31 0.02 6 ... 0.14 0.03 2 0.43 0.05 3 BD +19 2646 5510 4.1 0.9 -0.68 0.04 31 0.04 5 0.55 0.10 0.08 3 0.38 0.06 4 HD 114762 5870 3.8 1.0 -0.67 0.05 32 0.03 7 ... 0.09 0.03 3 0.33 0.05 4 HD 117858 5740 3.8 1.2 -0.55 0.04 34 0.03 6 0.32 0.08 0.02 3 0.29 0.04 3 BD +13 2698 5700 4.0 1.0 -0.74 0.06 28 0.05 5 ... 0.02 0.02 2 0.34 0.05 4 BD +77 0521 5500 4.0 1.1 -0.50 0.07 24 0.05 5 ... -0.02 ... 1 0.25 0.04 4 HD 126512 5780 3.9 1.1 -0.55 0.05 27 0.03 6 0.41 0.10 0.02 3 0.30 0.07 3 HD 131597 5180 3.5 1.1 -0.64 0.04 32 0.03 6 ... 0.12 0.01 4 0.37 0.05 4 BD +67 925 5720 3.5 1.2 -0.55 0.05 24 0.03 6 0.37 0.04 0.02 2 0.35 0.06 3 HD 159482 5730 4.1 1.0 -0.71 0.05 26 0.01 5 0.42 0.13 0.05 4 0.31 0.03 4 HD 170737 5100 3.3 1.0 -0.68 0.04 29 0.05 6 ... 0.11 0.02 4 0.30 0.07 3 BD +35 3659 a 5850 3.9 0.9 -1.45 0.04 25 0.04 4 ... 0.04 0.05 3 0.30 0.06 3 HD 201889 5700 3.8 0.9 -0.73 0.05 30 0.03 4 0.58 0.08 0.04 3 0.32 0.03 4 HD 204521 5680 4.3 1.0 -0.72 0.05 30 0.05 5 ... 0.06 0.02 3 0.29 0.04 3 HD 204848 4900 2.3 1.2 -1.03 0.04 31 0.05 7 0.52 0.01 0.04 3 0.43 0.03 4 HD 212029 5830 4.2 0.9 -0.98 0.02 20 0.01 2 ... 0.10 0.04 3 0.37 0.07 4 HD 222794 5560 3.7 1.1 -0.61 0.04 30 0.05 6 ... 0.09 0.05 4 0.37 0.07 4 HD 224930 5470 4.2 0.9 -0.71 0.05 35 0.05 6 0.45 0.08 0.04 3 0.42 0.04 4 HD 17548 6030 4.1 1.0 -0.49 0.05 32 0.03 7 0.16 -0.02 0.04 3 0.07 0.06 4 HD 150177 6300 4.0 1.5 -0.50 0.04 23 0.05 4 ... 0.07 0.02 3 0.18 0.04 3 HD 159307 6400 4.0 1.6 -0.60 0.04 17 0.04 4 ... 
0.12 0.07 3 0.28 0.03 4 HD 165908 6050 3.9 1.1 -0.52 0.04 24 0.03 7 ... 0.02 0.02 3 0.20 0.07 4 HD 174912 5860 4.1 0.8 -0.42 0.04 33 0.04 6 0.10 0.02 0.01 3 0.08 0.05 4 HD 207978 6450 3.9 1.6 -0.50 0.04 22 0.04 7 ... 0.09 0.05 3 0.28 0.06 4 Table 4 . 4Group 3 comparison with previous studies.Ours-Nissen Ours-Reddy Ours-Ramírez Quantity Diff. σ Diff. σ Diff. σ T eff 34 54 86 33 47 45 log g -0.26 0.16 -0.28 0.15 -0.27 0.14 [Fe/H] 0.03 0.04 0.06 0.07 0.10 0.04 [Na/Fe] 0.02 0.11 0.00 0.04 ... ... [Mg/Fe] -0.02 0.05 0.02 0.01 ... ... [Al/Fe] ... ... -0.01 0.07 ... ... [Si/Fe] -0.05 0.01 0.03 0.05 ... ... [Ca/Fe] 0.00 0.01 0.08 0.05 ... ... [Sc/Fe] ... ... -0.01 0.05 ... ... [Ti/Fe] 0.06 0.10 0.07 0.06 ... ... [V/Fe] ... ... -0.01 0.03 ... ... [Cr/Fe] 0.02 0.06 0.08 0.04 ... ... [Co/Fe] ... ... -0.04 0.03 ... ... [Ni/Fe] 0.01 0.05 -0.01 0.04 ... ... Table 5 . 5Thin-disc stars comparison with previous studies.Ours-Edvardsson Ours-Thévenin Quantity Diff. σ Diff. σ T eff 86 66 87 68 log g -0.18 0.21 -0.06 0.14 [Fe/H] 0.10 0.04 0.03 0.03 [Na/Fe] -0.08 0.09 ... ... [Mg/Fe] -0.02 0.07 ... ... [Al/Fe] -0.10 0.07 ... ... [Si/Fe] -0.02 0.02 ... ... [Ca/Fe] 0.05 0.03 ... ... [Ti/Fe] -0.02 0.08 ... ... [Ni/Fe] -0.06 0.06 ... ... Notes. Mean differences and standard deviations of the main parame- ters and abundance ratios [El/Fe] for 6 thin-disc stars that are in com- mon with Edvardsson et al. (1993) and 5 stars with Thévenin & Idiart (1999). ison to Holmberg et al., stars in the Casagrande et al. catalogue are on average 100 K hotter. For the stars investigated here, our spectroscopic temperatures are on average 40 ± 70 K hotter than in Holmberg et al. and 60 ±80 K cooler than in Casagrande et al. (BD +35 3659, which has a difference of 340 K, was excluded from the average). [Fe/H] values for all investigated stars are available in Holmberg et al. (2009) as well as in Table 3 . 
Table 3. Continued.

Star          [Al/Fe]  σ     n   [Si/Fe]  σ     n   [Ca/Fe]  σ     n   [Sc/Fe]  σ     n   [TiI/Fe]  σ     n
HD 967        0.37     0.00  3   0.26     0.05  17  0.28     0.04  8   0.12     0.04  9   0.33      0.06  14
HD 17820      0.11     0.05  3   0.24     0.04  16  0.26     0.06  9   0.17     0.04  9   0.33      0.05  9
HD 107582     0.22     0.05  3   0.21     0.05  16  0.27     0.06  5   0.05     0.05  8   0.30      0.05  7
BD +73 566    0.27     0.04  2   0.33     0.06  18  0.39     0.06  6   0.05     0.05  7   0.33      0.05  9
BD +19 2646   0.28     0.08  2   0.22     0.06  16  0.32     0.06  8   0.09     0.04  8   0.28      0.04  9
HD 114762     0.15     0.02  2   0.20     0.05  17  0.21     0.05  7   0.05     0.05  10  0.21      0.03  8
HD 117858     0.31     0.06  3   0.24     0.05  17  0.23     0.04  8   0.12     0.02  9   0.24      0.03  9
BD +13 2698   0.12     0.05  2   0.33     0.06  14  0.31     0.05  8   0.10     0.04  7   0.36      0.05  8
BD +77 0521   0.23     0.01  2   0.18     0.07  10  0.20     0.06  6   0.04     0.03  4   0.22      0.05  6
HD 126512     0.17     0.06  2   0.25     0.05  16  0.23     0.04  6   0.11     0.02  8   0.21      0.04  7
HD 131597     0.36     0.04  2   0.29     0.05  16  0.25     0.04  8   0.18     0.02  10  0.25      0.06  16
BD +67 925    0.31     0.00  2   0.22     0.08  17  0.30     0.06  8   -0.05    0.05  4   0.39      0.03  3
HD 159482     0.29     0.04  2   0.27     0.06  15  0.31     0.06  7   0.16     0.03  8   0.27      0.02  4
HD 170737     0.39     ...   1   0.24     0.04  15  0.27     0.06  7   0.16     0.02  8   0.30      0.06  12
BD +35 3659   0.40     0.01  2   0.25     0.03  8   0.31     0.08  4   0.03     0.11  7   0.41      0.02  4
HD 201889     0.27     0.01  3   0.31     0.05  15  0.33     0.08  6   0.09     0.03  9   0.33      0.05  9
HD 204521     0.26     0.02  2   0.22     0.05  17  0.26     0.06  8   0.12     0.03  7   0.32      0.05  9
HD 204848     0.45     0.05  3   0.44     0.05  16  0.41     0.05  8   0.07     0.03  11  0.31      0.07  18
HD 212029     0.19     0.05  2   0.34     0.05  11  0.25     0.03  6   0.14     0.02  5   0.28      0.06  5
HD 222794     0.39     0.04  3   0.23     0.05  16  0.25     0.04  7   0.08     0.04  8   0.29      0.04  11
HD 224930     0.43     0.08  3   0.25     0.05  16  0.30     0.04  5   0.10     0.03  11  0.27      0.04  9
HD 17548      -0.02    0.01  2   0.08     0.05  17  0.10     0.04  7   0.05     0.04  10  0.11      0.06  7
HD 150177     0.06     0.04  2   0.05     0.06  12  0.10     0.03  5   0.08     0.03  7   0.15      0.05  3
HD 159307     ...      ...   ... 0.16     0.02  9   0.17     0.04  5   0.12     0.06  7   0.17      0.07  3
HD 165908     0.02     0.02  2   0.11     0.05  15  0.12     0.05  6   0.02     0.04  7   0.12      0.04  6
HD 174912     -0.03    0.04  2   0.04     0.04  17  0.09     0.05  6   0.00     0.04  12  0.02      0.05  9
HD 207978     -0.01    0.02  2   0.12     0.04  15  0.15     0.03  7   0.06     0.02  6   0.16      0.06  3

Star          [TiII/Fe] σ    n   [V/Fe]   σ     n   [Cr/Fe]  σ     n   [Co/Fe]  σ     n   [Ni/Fe]   σ     n
HD 967        0.30      0.09 2   0.09     0.05  11  0.05     0.08  17  0.08     0.04  5   0.02      0.07  26
HD 17820      0.30      0.01 3   0.07     0.04  5   -0.01    0.08  14  0.07     0.01  3   0.02      0.04  22
HD 107582     0.27      0.06 2   0.05     0.03  10  0.05     0.06  14  0.04     0.05  9   0.02      0.06  16
BD +73 566    0.35      0.06 3   -0.03    0.04  6   0.05     0.05  11  0.06     0.04  3   0.00      0.05  17
BD +19 2646   0.21      0.04 3   0.10     0.04  8   0.06     0.10  15  0.05     0.06  7   0.00      0.05  18
HD 114762     0.21      0.05 3   0.05     0.05  4   0.00     0.08  11  0.04     0.05  5   -0.04     0.04  16
HD 117858     0.20      0.03 3   0.11     0.03  8   0.01     0.05  16  0.09     0.04  6   0.02      0.05  24
BD +13 2698   0.33      0.05 3   0.08     0.03  7   0.04     0.06  14  0.09     0.03  4   0.03      0.06  19
BD +77 0521   0.17      0.03 2   0.06     0.02  3   0.04     0.10  11  0.04     0.02  2   0.06      0.06  13
HD 126512     0.29      0.02 2   0.08     0.03  7   0.02     0.06  14  0.09     0.04  5   0.01      0.05  17
HD 131597     0.30      0.03 3   0.06     0.06  11  0.04     0.07  16  0.05     0.06  10  0.01      0.05  25
BD +67 925    0.43      ...  1   -0.03    0.08  4   -0.02    0.12  13  -0.02    0.01  2   0.04      0.06  14
HD 159482     0.23      0.01 3   0.09     0.03  7   0.05     0.06  13  0.10     0.04  4   0.01      0.05  15
HD 170737     0.28      0.06 4   0.08     0.05  8   0.03     0.09  16  0.05     0.04  6   0.02      0.07  23
BD +35 3659   0.24      ...  1   ...      ...   ... 0.03     0.11  12  ...      ...   ... 0.01      0.08  7
HD 201889     0.33      0.05 3   0.02     0.08  4   0.05     0.06  12  0.07     0.03  6   0.00      0.05  17
HD 204521     0.29      0.04 3   0.06     0.05  7   0.02     0.06  13  0.07     0.05  6   0.00      0.05  18
HD 204848     0.26      0.07 3   0.07     0.03  13  0.06     0.09  17  0.05     0.03  10  0.05      0.06  25
HD 212029     0.36      0.03 2   0.09     0.05  4   -0.05    0.06  12  ...      ...   ... 0.01      0.06  11
HD 222794     0.27      0.04 3   0.07     0.05  10  0.03     0.07  16  0.06     0.03  8   0.02      0.06  18
HD 224930     0.22      0.03 2   0.12     0.03  6   0.05     0.08  14  0.10     0.04  7   -0.01     0.06  21
HD 17548      0.08      0.03 3   0.09     0.04  4   -0.01    0.05  13  0.05     0.03  3   -0.04     0.05  20
HD 150177     0.23      0.05 2   0.03     ...   1   -0.03    0.07  14  0.06     0.00  2   -0.02     0.06  12
HD 159307     0.18      0.05 2   ...      ...   ... 0.04     0.05  9   0.06     ...   1   0.04      0.02  10
HD 165908     0.07      0.03 3   0.06     0.03  4   0.00     0.04  10  0.00     0.08  5   -0.03     0.06  15
HD 174912     0.00      0.03 3   -0.01    0.05  5   0.00     0.08  14  0.04     0.00  4   -0.05     0.05  21
HD 207978     0.07      0.06 3   ...      ...   ... 0.01     0.07  11  0.05     0.06  5   -0.01     0.05  15

Acknowledgements. The data are based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number RG226604 (OPTICON). BN acknowledges support from the Danish Research council. This research has made use of the Simbad, VALD and NASA ADS databases. We thank the anonymous referee for insightful questions and comments.

References

Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003, ApJ, 597, 21
Anstee, S. D., & O'Mara, B. J. 1995, MNRAS, 276, 859
Barklem, P. S., & O'Mara, B. J. 1997, MNRAS, 290, 102
Barklem, P. S., O'Mara, B. J., & Ross, J. E. 1998, MNRAS, 296, 1057
Belokurov, V., Zucker, D. B., Evans, N. W., et al. 2006, ApJ, 642, L137
Belokurov, V., Walker, M. G., Evans, N. W., et al. 2009, MNRAS, 397, 1748
Bensby, T., Feltzing, S., Lundström, I., & Ilyin, I. 2005, A&A, 433, 185
Brook, C. B., Kawata, D., Gibson, B. K., & Freeman, K. C. 2004, ApJ, 612, 894
Brook, C. B., Gibson, B. K., Martel, H., & Kawata, D. 2005, ApJ, 630, 298
Casagrande, L., Schönrich, R., Asplund, M., et al. 2011, A&A, 530, A138
Di Matteo, P., Lehnert, M. D., Qu, Y., & van Driel, W. 2011, A&A, 525, L3
Dierickx, M., Klement, R., Rix, H.-W., & Liu, C. 2010, ApJ, 725, L186
Edvardsson, B., Andersen, J., Gustafsson, B., et al. 1993, A&A, 275, 101
Fellhauer, M., Evans, N. W., Belokurov, V., et al. 2007, MNRAS, 375, 1171
Fuhrmann, K. 1998, A&A, 338, 161
Gratton, R. G., Carretta, E., Eriksson, K., & Gustafsson, B. 1999, A&A, 350, 955
Grevesse, N., & Sauval, A. J. 2000, Origin of Elements in the Solar System, Implications of Post-1957 Observations, 261
Gurtovenko, E. A., & Kostyk, R. I. 1989, Kiev: Izdatel Naukova Dumka
Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, A&A, 486, 951
Helmi, A., Navarro, J. F., Nordström, B., et al. 2006, MNRAS, 365, 1309
Helmi, A. 2008, A&A Rev., 15, 145
Holmberg, J., Nordström, B., & Andersen, J. 2007, A&A, 475, 519
Holmberg, J., Nordström, B., & Andersen, J. 2009, A&A, 501, 941
Ibata, R. A., Gilmore, G., & Irwin, M. J. 1994, Nature, 370, 194
Ibata, R., Irwin, M., Lewis, G., Ferguson, A. M. N., & Tanvir, N. 2001, Nature, 412, 49
Ibata, R. A., Irwin, M. J., Lewis, G. F., Ferguson, A. M. N., & Tanvir, N. 2003, MNRAS, 340, L21
Ilyin, I. V. 2000, Ph.D. Thesis
Johansson, S., Litzén, U., Lundberg, H., & Zhang, Z. 2003, ApJ, 584, L107
Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravtsov, A. V., & Moustakas, L. A. 2008, ApJ, 688, 254
Kurucz, R. L. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 189
Mäckle, R., Griffin, R., Griffin, R., & Holweger, H. 1975, A&AS, 19, 303
Martínez-Delgado, D., Peñarrubia, J., Gabany, R. J., et al. 2008, ApJ, 689, 184
Martínez-Delgado, D., Pohlen, M., Gabany, R. J., et al. 2009, ApJ, 692, 955
Martin, N. F., Ibata, R. A., Conn, B. C., et al. 2004, MNRAS, 355, L33
McConnachie, A. W., Irwin, M. J., Ibata, R. A., et al. 2009, Nature, 461, 66
Nissen, P. E., & Schuster, W. J. 2010, A&A, 511, L10
Nordström, B., Mayor, M., Andersen, J., et al. 2004, A&A, 418, 989
Pagel, B. E. J., & Tautvaišienė, G. 1995, MNRAS, 276, 505
Peñarrubia, J., Martínez-Delgado, D., Rix, H. W., et al. 2005, ApJ, 626, 128
Piskunov, N. E., Kupka, F., Ryabchikova, T. A., Weiss, W. W., & Jeffery, C. S. 1995, A&AS, 112, 525
Prochaska, J. X., Naumov, S. O., Carney, B. W., McWilliam, A., & Wolfe, A. M. 2000, AJ, 120, 2513
Ramírez, I., Allende Prieto, C., & Lambert, D. L. 2007, A&A, 465, 271
Reddy, B. E., Lambert, D. L., & Allende Prieto, C. 2006, MNRAS, 367, 1329
Reddy, B. E., & Lambert, D. L. 2008, MNRAS, 391, 95
Roškar, R., Debattista, V. P., Stinson, G. S., et al. 2008, ApJ, 675, L65
Sales, L. V., Helmi, A., Abadi, M. G., et al. 2009, MNRAS, 400, L61
Schönrich, R., & Binney, J. 2009, MNRAS, 399, 1145
Simmons, G. J., & Blackwell, D. E. 1982, A&A, 112, 209
Tautvaišienė, G., Edvardsson, B., Tuominen, I., & Ilyin, I. 2001, A&A, 380, 578
Tautvaišienė, G., Edvardsson, B., Puzeras, E., & Ilyin, I. 2005, A&A, 431, 933
Tautvaišienė, G., Geisler, D., Wallerstein, G., et al. 2007, AJ, 134, 2318
Thévenin, F., & Idiart, T. P. 1999, ApJ, 521, 753
Tolstoy, E., Hill, V., & Tosi, M. 2009, ARA&A, 47, 371
Trevisan, M., Barbuy, B., Eriksson, K., et al. 2011, A&A, 535, A42
Unsöld, A. 1955, Berlin: Springer, 2. Aufl.
van der Kruit, P. C., & Freeman, K. C. 2011, ARA&A, 49, 301
Villalobos, Á., & Helmi, A. 2008, MNRAS, 391, 1806
Villalobos, Á., & Helmi, A. 2009, MNRAS, 399, 166
Wilson, M. L., Helmi, A., Morrison, H. L., et al. 2011, MNRAS, 413, 2235
Yanny, B., Newberg, H. J., Grebel, E. K., et al. 2003, ApJ, 588, 824
Zhang, H. W., & Zhao, G. 2006, A&A, 449, 127
[]
[ "PDF4LHC recommendations for Run II", "PDF4LHC recommendations for Run II" ]
[ "Juan Rojo [email protected] \nUniversity of Oxford\n\n" ]
[ "University of Oxford\n" ]
[]
The interpretation of LHC measurements requires a careful estimate of various sources of uncertainties that affect theoretical calculations. In this contribution, we present the PDF4LHC Working Group recommendations for the usage of sets of parton distribution functions (PDFs) at the LHC Run II. We review the construction and validation of the PDF4LHC15 combined sets, and study some of their phenomenological implications. We also address some recent criticism of these recommendations.
null
[ "https://arxiv.org/pdf/1606.08243v1.pdf" ]
119,306,559
1606.08243
7e893aecd96e518da3a02b56655559953a04b4ae
PDF4LHC recommendations for Run II
27 Jun 2016
Juan Rojo ([email protected]), University of Oxford
XXIV International Workshop on Deep-Inelastic Scattering and Related Subjects, 11-15 April 2016, DESY Hamburg, Germany
* Speaker. † Presented on behalf of the PDF4LHC Working Group.

The interpretation of LHC measurements requires a careful estimate of various sources of uncertainties that affect theoretical calculations. In this contribution, we present the PDF4LHC Working Group recommendations for the usage of sets of parton distribution functions (PDFs) at the LHC Run II. We review the construction and validation of the PDF4LHC15 combined sets, and study some of their phenomenological implications. We also address some recent criticism of these recommendations.

Figure 1: NLO inclusive cross-sections, evaluated with NNLO PDFs, for Higgs production at the LHC with √s = 13 TeV in the gluon-fusion (left) and the tt associated production (right) channels, for different PDFs, as a function of the native value of the strong coupling α_s(m_Z) in each case. In each plot, two arrows indicate schematically two possible definitions of the total PDF uncertainty in these observables.

Why a recommendation? Given the ever-increasing precision of LHC measurements, a careful assessment of theoretical uncertainties in LHC cross-sections is of utmost importance. One of the dominant sources arises from our imperfect knowledge of the structure of the proton, encoded by the parton distribution functions (PDFs) [1], as well as of related physical parameters such as the strong coupling α_s or the charm mass m_c. Quantifying the total PDF+α_s uncertainties is of high importance in a number of LHC applications: a prime example is the extraction of the Higgs boson properties, such as couplings and branching fractions, which can only be achieved by a comparison of theoretical predictions with the corresponding LHC measurements. Other examples include the determination of exclusion ranges for specific BSM scenarios, when searches return null results, and the determination of fundamental parameters, such as the mass of the W boson.

To illustrate the challenge, in Fig. 1 we show the NLO cross-sections (with NNLO PDFs) for Higgs production in gluon fusion and in tt associated production for different PDF sets, as a function of the native value of the strong coupling α_s(m_Z). These processes have been computed with MadGraph5_aMC@NLO [2, 3] using default scale settings. From Fig. 1 it is clear that results from different PDF sets are not always compatible within uncertainties. The issue is then how one can define a total PDF+α_s uncertainty: this is required to extract the Higgs couplings from the measurements of the cross-sections shown in Fig. 1. Should one maybe take an envelope of the three global fits, CT14, MMHT14, and NNPDF3.0? Or maybe one should account for the complete spread of PDF variations? In addition, an important motivation for having a uniform treatment of PDF uncertainties in LHC calculations is that it allows a consistent framework to be established for the evaluation of PDF uncertainties and their correlations in generic LHC processes.

Different points of view have been advocated to define a total PDF+α_s uncertainty on LHC cross-sections. In this contribution, we review the recommendations of the PDF4LHC Working Group [4] for the usage of PDFs and their uncertainties for applications at the LHC Run II. We also briefly comment on an alternative recommendation presented by the authors of Ref. [5].

The PDF4LHC 2015 recommendations. One of the main limitations of the 2011 PDF4LHC recommendations [6] was that it required a calculation of cross-sections for each individual PDF set, and then to combine them a posteriori by taking the envelope of the PDF+α_s uncertainties from each set. Moreover, the statistical interpretation of such an envelope was unclear, since it gave too much weight to outliers.
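The contrast between the 2011 envelope prescription and a statistical combination can be made concrete with a toy sketch. The three central values and uncertainties below are invented for illustration and do not correspond to any actual PDF determination:

```python
import numpy as np

# Hypothetical predictions (central, symmetric uncertainty) for one
# cross-section from three PDF sets -- illustrative numbers only.
preds = {"CT": (48.2, 1.0), "MMHT": (47.6, 0.9), "NNPDF": (48.9, 0.8)}

# 2011-style envelope: the span of all (central +/- uncertainty) bands,
# which gives full weight to the most outlying band.
lo = min(c - u for c, u in preds.values())
hi = max(c + u for c, u in preds.values())
env_central, env_unc = 0.5 * (hi + lo), 0.5 * (hi - lo)

# 2015-style statistical combination: pool equal-weight Monte Carlo
# replicas drawn from each set, then take mean and standard deviation.
rng = np.random.default_rng(0)
pool = np.concatenate([rng.normal(c, u, 300) for c, u in preds.values()])
comb_central, comb_unc = pool.mean(), pool.std()

print(f"envelope:    {env_central:.2f} +/- {env_unc:.2f}")
print(f"combination: {comb_central:.2f} +/- {comb_unc:.2f}")
```

The combined uncertainty is smaller than the envelope one whenever the individual sets are in reasonable mutual agreement, which is the situation that motivated the switch of prescription.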
To overcome these limitations, the PDF4LHC15 recommendations are now provided in terms of combined PDF sets. It is important to emphasize that switching from the envelope of the 2011 recommendation to the statistical combination of the 2015 one is at least in part motivated by the improved agreement between the three PDF sets that enter the combination [7, 4], namely CT14 [8], MMHT14 [9] and NNPDF3.0 [10], as compared to the previous-generation sets CT10, MSTW08 and NNPDF2.1.

The PDF4LHC15 combined sets, available from LHAPDF6 [11], are constructed as follows. First of all, N_rep = 300 Monte Carlo (MC) replicas of NNPDF3.0 are combined with the same number from CT14 and MMHT14, using the Watt-Thorne method [12] for the representation of Hessian sets in terms of MC replicas. The resulting set of N_rep = 900 replicas is then reduced into more compact representations: two Hessian ones and a MC one. In the latter case, the CMC-PDF algorithm is used [13], while in the former case the two Hessian reduced sets are constructed, one with N_eig = 30 eigenvectors using the META method [14], and the other with N_eig = 100 eigenvectors using the MC2H algorithm [15]. This procedure is summarized in Fig. 2.¹

In general, good agreement is obtained between the prior combination and the three reduced sets in most of the relevant phase space. In Fig. 3 we compare the NNLO gluon-gluon luminosity for the PDF4LHC15 prior combination with the Monte Carlo and the Hessian reduced sets. In Fig. 3 we also show a comparison of the predictions for differential distributions in Higgs production in gluon fusion at the LHC with √s = 13 TeV, in particular the rapidity and p_T distributions, obtained from the three PDF4LHC15 combined sets. A similar level of agreement is obtained at the level of correlations, both between PDFs and between different collider cross-sections.
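The combination step can be sketched in a few lines for a single observable F. The numbers below are made up for illustration (they are not taken from any actual PDF set); the conversion of a Hessian set into MC replicas follows the Watt-Thorne prescription of Ref. [12], F_k = F_0 + Σ_j (F_j − F_0) R_jk with R_jk standard normal random numbers, written here for symmetric eigenvector predictions:

```python
import numpy as np

rng = np.random.default_rng(42)

def hessian_to_replicas(f0, f_eig, n_rep=300):
    """Watt-Thorne Monte Carlo replicas of an observable F, from its
    central value f0 and symmetric Hessian eigenvector predictions."""
    shifts = np.asarray(f_eig) - f0                  # (n_eig,)
    r = rng.standard_normal((n_rep, len(shifts)))    # (n_rep, n_eig)
    return f0 + r @ shifts                           # (n_rep,)

# Hypothetical predictions from two Hessian sets and one native MC set.
reps_a = hessian_to_replicas(48.2, [48.6, 47.9, 48.4])
reps_b = hessian_to_replicas(47.6, [48.0, 47.3, 47.8])
reps_c = rng.normal(48.9, 0.8, 300)                  # already Monte Carlo

# Equal-weight combination of the three ensembles, as for the 900
# replicas that define the PDF4LHC15 prior.
combined = np.concatenate([reps_a, reps_b, reps_c])
print(f"combined: {combined.mean():.2f} +/- {combined.std():.2f}")
```

The subsequent compression and Hessian-reduction steps (CMC-PDF, META, MC2H) then look for a smaller set of replicas or eigenvectors that reproduces the statistics of this pooled ensemble.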
A complete set of comparisons of the PDF4LHC15 combined sets at the level of PDFs, luminosities, and LHC cross-sections can be found at the following two websites:

https://www.hep.ucl.ac.uk/pdf4lhc/mc2h-gallery/website/
http://metapdf.hepforge.org/2016_pdf4lhc/

In addition to the benchmark exercise performed in the context of Ref.
[4], subsequent studies in the framework of the Les Houches workshop [18] further explore both the validity and the phenomenological implications of the PDF4LHC15 recommendations, for instance addressing in more detail the issues that arise when the prior PDF combination exhibits non-Gaussian features.

An alternative recommendation. Recently, an alternative proposal for PDF usage at the LHC has been advocated by Accardi et al. in Ref. [5]. There, a rather more conservative approach is advocated: for precision theory predictions, the recommendation would be to use the individual PDF sets from as many groups as possible, together with the respective uncertainties and the values of α_s(m_Z), m_c and m_b. In other words, the suggestion is to take the widest possible envelope of theoretical inputs that enter an LHC calculation. As illustrated by Fig. 1, adopting this recommendation would lead to much larger theoretical uncertainties in Higgs characterization studies and New Physics searches, affecting the physics output of the LHC, and thus it is important to understand the reasoning that motivates this recommendation. In this respect, there are a number of questionable assumptions in Ref. [5]:

• It does not seem justified to disregard the wealth of PDF-sensitive measurements available, including from the LHC [19], for precision physics, and to treat on an equal footing all PDF sets irrespective of their level of agreement with existing data: an envelope of results based on fits to different-sized datasets degrades the accuracy to that of the fit obtained from the smallest dataset.

• In the same way as new and more precise higher-order calculations, or Monte Carlo simulations, replace older and less accurate ones, also from the PDF point of view mixing state-of-the-art PDFs with rather older sets does not seem justified.
• The envelope procedure leads to uncertainties that are bigger than those of a statistical combination, but it is justified only when there are large and poorly understood discrepancies. This was arguably the case at the time of the previous recommendation, but it does not seem to be the case now.

• We do not think it is justified to ignore the PDG average (and the associated uncertainty) for the strong coupling α_s(m_Z) [20], implicitly declaring that both the quoted central value and the uncertainty are off by a substantial amount.

Another point raised by the authors of Ref. [5] is that, for the PDF fits that enter the PDF4LHC 2015 recommendations, the numerical value of the charm mass is effectively tuned to reach an artificial agreement for the Higgs cross-section in gluon fusion. A first reply to this objection is that, in the three global fits, the value of σ(gg → h) depends only mildly on the specific value of m_c used. To illustrate this point, in Fig. 4 we show the Higgs cross-section in gluon fusion computed with NLO PDFs for a range of values of the charm pole mass m_c, for NNPDF3 [21] and MMHT14 [22]. Even in this wide range, the cross-section varies by no more than 1%. In Fig. 4 we also show a similar stability study, this time in the CT10 framework [23], using NNLO PDFs. Secondly, the general-mass variable-flavour number (GM-VFN) schemes used by the three groups have been extensively benchmarked [24] up to NNLO, and are known to differ only by small, formally subleading, terms. Therefore, there is little room to modify the GM-VFN matching, which by construction is more accurate than a fixed-flavour number (FFN) calculation. Finally, it should be emphasized that while the conversion of the charm mass from the pole to the MS scheme is perturbatively unstable, the same conversion is better behaved for the bottom quark. Together with the fact that the difference of heavy-quark pole masses is free of renormalon ambiguities, this implies that a measurement of m_b in the MS scheme leads to a reasonably accurate prediction of m_c^pole = 1.51 ± 0.13 [25], consistent with the values used in the global fits.

Outlook. The PDF4LHC 2015 recommendations are the result of a joint effort by theorists and experimentalists aiming to provide a robust estimate of the combined PDF+α_s uncertainties for precision LHC calculations. The main forum of the PDF4LHC Working Group is its periodic meetings, which provide a unique opportunity for cross-talk between theory and experiment, as well as between PDF fitters. Topics that will be explored in the coming months include the impact of LHC data from the 13 TeV runs, the inclusion of new NNLO calculations in PDF fits, and the role of electroweak corrections and the photon PDF. Eventually, PDF4LHC will present updated recommendations to take into account developments from the theory, data, and methodology points of view. For instance, future updates might include additional PDF sets. In any case, the general combination strategy developed in the context of the present recommendation is flexible and robust enough to accommodate these and other foreseeable updates.

Figure 2: Schematic representation of the different algorithms leading to the PDF4LHC15 combined sets.

Figure 3: Upper plots: the NNLO gluon-gluon luminosity for the PDF4LHC15 prior combination, compared with the subsequent Monte Carlo (left) and Hessian (right) reduced sets. Lower plots: comparison of the predictions for differential distributions in Higgs production in gluon fusion obtained from the three PDF4LHC15 combined sets.

Figure 4: Left: the Higgs cross-section in gluon fusion computed with NLO PDFs for a range of values of the charm pole mass m_c, for NNPDF3 and MMHT14. Results are normalized to the respective values for m_c = 1.45 GeV. Right: the gluon-fusion Higgs cross-section as a function of σ(Z) for a range of values of the charm mass, from the NNLO PDF fits of Ref. [23].

¹ Although not part of the recommendation, it is possible to further compact the two Hessian reduced sets by specifying a preferred set of input cross-sections to be reproduced, using either the SM-PDF [16] or the META-H [17, 14] approaches.

Acknowledgments. I am grateful to my colleagues of the PDF4LHC Working Group and in particular to the authors of [4] for innumerable illuminating discussions.
This work has been supported by an STFC Rutherford Fellowship and Grants ST/K005227/1 and ST/M003787/1, and by a European Research Council Starting Grant "PDF4BSM".

References

[1] S. Forte and G. Watt, Progress in the Determination of the Partonic Structure of the Proton, Ann. Rev. Nucl. Part. Sci. 63 (2013) 291, [arXiv:1301.6754].
[2] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, et al., The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 1407 (2014) 079, [arXiv:1405.0301].
[3] V. Bertone, R. Frederix, S. Frixione, J. Rojo, and M. Sutton, aMCfast: automation of fast NLO computations for PDF fits, JHEP 1408 (2014) 166, [arXiv:1406.7693].
[4] J. Butterworth et al., PDF4LHC recommendations for LHC Run II, J. Phys. G43 (2016) 023001, [arXiv:1510.03865].
[5] A. Accardi et al., Recommendations for PDF usage in LHC predictions, arXiv:1603.08906.
[6] M. Botje et al., The PDF4LHC Working Group Interim Recommendations, arXiv:1101.0538.
[7] R. D. Ball, S. Carrazza, L. Del Debbio, S. Forte, J. Gao, et al., Parton Distribution Benchmarking with LHC Data, JHEP 1304 (2013) 125, [arXiv:1211.5142].
[8] S. Dulat, T.-J. Hou, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, J. Pumplin, C. Schmidt, D. Stump, and C. P. Yuan, New parton distribution functions from a global analysis of quantum chromodynamics, Phys. Rev. D93 (2016), no. 3, 033006, [arXiv:1506.07443].
[9] L. A. Harland-Lang, A. D. Martin, P. Motylinski, and R. S. Thorne, Parton distributions in the LHC era: MMHT 2014 PDFs, Eur. Phys. J. C75 (2015) 204, [arXiv:1412.3989].
[10] NNPDF Collaboration, R. D. Ball et al., Parton distributions for the LHC Run II, JHEP 04 (2015) 040, [arXiv:1410.8849].
[11] A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, et al., LHAPDF6: parton density access in the LHC precision era, Eur. Phys. J. C75 (2015) 132, [arXiv:1412.7420].
[12] G. Watt and R. S. Thorne, Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs, JHEP 1208 (2012) 052, [arXiv:1205.4024].
[13] S. Carrazza, J. I. Latorre, J. Rojo, and G. Watt, A compression algorithm for the combination of PDF sets, Eur. Phys. J. C75 (2015) 474, [arXiv:1504.06469].
[14] J. Gao and P. Nadolsky, A meta-analysis of parton distribution functions, JHEP 1407 (2014) 035, [arXiv:1401.0013].
[15] S. Carrazza, S. Forte, Z. Kassabov, J. I. Latorre, and J. Rojo, An Unbiased Hessian Representation for Monte Carlo PDFs, Eur. Phys. J. C75 (2015), no. 8, 369, [arXiv:1505.06736].
[16] S. Carrazza, S. Forte, Z. Kassabov, and J. Rojo, Specialized minimal PDFs for optimized LHC calculations, Eur. Phys. J. C76 (2016), no. 4, 205, [arXiv:1602.00005].
[17] J. Pumplin, Data set diagonalization in a global fit, Phys. Rev. D80 (2009) 034002, [arXiv:0904.2425].
[18] J. R. Andersen et al., Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report, in 9th Les Houches Workshop on Physics at TeV Colliders (PhysTeV 2015), Les Houches, France, June 1-19, 2015, arXiv:1605.04692.
[19] J. Rojo et al., The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II, J. Phys. G42 (2015) 103103, [arXiv:1507.00556].
[20] Particle Data Group Collaboration, K. Olive et al., Review of Particle Physics, Chin. Phys. C38 (2014) 090001.
[21] NNPDF Collaboration, R. D. Ball, V. Bertone, M. Bonvini, S. Carrazza, S. Forte, A. Guffanti, N. P. Hartland, J. Rojo, and L. Rottoli, A Determination of the Charm Content of the Proton, arXiv:1605.06515.
[22] L. A. Harland-Lang, A. D. Martin, P. Motylinski, and R. S. Thorne, Charm and beauty quark masses in the MMHT2014 global PDF analysis, Eur. Phys. J. C76 (2016), no. 1, 10, [arXiv:1510.02332].
[23] J. Gao, M. Guzzi, and P. M. Nadolsky, Charm quark mass dependence in a global QCD analysis, Eur. Phys. J. C73 (2013) 2541, [arXiv:1304.3494].
[24] J. Rojo et al., Chapter 22 in: J. R. Andersen et al., "The SM and NLO multileg working group: Summary report", arXiv:1003.1241, 2010.
[25] C. W. Bauer, Z. Ligeti, M. Luke, A. V. Manohar, and M. Trott, Global analysis of inclusive B decays, Phys. Rev. D70 (2004) 094017, [hep-ph/0408002].
Isotachophoresis applied to chemical reactions

C. Eid and J. G. Santiago

arXiv:1708.08298

REVIEW

This review discusses research developments and applications of isotachophoresis (ITP) to the initiation, control, and acceleration of chemical reactions, emphasizing reactions involving biomolecular reactants such as nucleic acids, proteins, and live cells. ITP is a versatile technique which requires no specific geometric design or material, and is compatible with a wide range of microfluidic and automated platforms. Though ITP has traditionally been used as a purification and separation technique, recent years have seen its emergence as a method to automate and speed up chemical reactions. ITP has been used to demonstrate up to 14,000-fold acceleration of nucleic acid assays, and has been used to enhance lateral flow and other immunoassays, and even whole bacterial cell detection assays. We here classify these studies into two categories: homogeneous (all reactants in solution) and heterogeneous (at least one reactant immobilized on a solid surface) assay configurations. For each category, we review and describe physical modeling and scaling of ITP-aided reaction assays, and elucidate key principles in ITP assay design. We summarize experimental advances, and identify common threads and approaches which researchers have used to optimize assay performance. Lastly, we propose unaddressed challenges and opportunities that could further improve these applications of ITP.

TE ions diffusing forward into the lower electric field LE zone migrate more slowly than neighboring LE ions and fall back into their original TE zone. Conversely, higher mobility LE ions diffusing into the higher electric field TE zone are restored since they migrate faster than the TE. Importantly, TE and LE mobilities are chosen such that sample ions in the TE (LE) migrate faster (slower) than neighboring TE (LE) ions and are driven toward the TE-to-LE interface. See for example Khurana et al. 35 and Garcia et al.
36 for more detailed and quantitative descriptions (including models and experimental studies) of the diffusion-and dispersion-limited focusing dynamics of ITP sample ions. ITP processes can conveniently be categorized as either peak-mode or plateau-mode. Peak-mode ITP is associated with sample ions present in trace concentrations. Such samples focus into the TE-to-LE interface region but there is insufficient time (and equivalently distance along the channel) for the sample ions to appreciably influence local ionic conductivity in the channel. 37 In peak-mode ITP, the sample species respond solely to the electric field established by the dynamics of the TE and LE. Importantly, multiple sample ions can co-focus within and significantly overlap within the same sharp ITP interface. In an approximate sense, well-focused sample ions accumulate into Gaussian peaks with continuously increasing area and significant spatial overlap. The focusing and relative positions of these peaks is determined solely by the electric field established by the TE and LE and the relative mobilities of the TE, the LE, and each sample species. The second useful category for ITP is plateau-mode ITP. Above a certain threshold concentration (and duration of the ITP process), sample ions accumulate and reach a local maximum concentration. Here, multiple sample ions will reach respective maximum concentration and segregate into respective, multiple plateau-like zones of locally uniform (and constant) concentration. If there is a continuous influx of sample ions (e.g., from a reservoir), these plateaus increase in length in proportion to the amount of electrical charge run through the system. 38 For the rare case of ITP of fully-ionized species, this threshold is determined by the Kohlrausch regulating function (KRF). 39 For the common case of weak electrolytes (e.g., LE and TE solutions which are pH buffers), the threshold is governed by the Alberty 40 and Jovin 41 functions instead. 
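The focusing condition described above (a sample ion focuses at the TE-to-LE interface when its effective mobility lies between the TE and LE co-ion mobilities) reduces to a one-line bracketing check. A minimal Python sketch; the function name and the mobility values are our illustrative assumptions, not values from this review:

```python
def focuses_in_itp(mu_sample, mu_te, mu_le):
    """Return True if a sample ion focuses at the TE-to-LE interface.

    All arguments are magnitudes of effective electrophoretic mobilities
    (e.g., in units of 1e-9 m^2/V/s) for anionic ITP. A sample focuses when
    its effective mobility lies strictly between the TE and LE co-ion
    mobilities.
    """
    return mu_te < mu_sample < mu_le

# Illustrative magnitudes: HEPES-like TE (~21), chloride-like LE (~79), ssDNA (~37).
print(focuses_in_itp(37.0, 21.0, 79.0))  # focuses between TE and LE
print(focuses_in_itp(15.0, 21.0, 79.0))  # too slow: remains behind in the TE
```

This is only a bookkeeping helper; in practice effective mobilities depend on pH and ionic strength, which is why buffer chemistry design tools are used for real assays.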
Plateau-mode ITP has been leveraged for many applications, including separation and indirect detection of toxins, amino acids, and others. [42][43][44] Briefly, peak-mode ITP is well-suited for mixing and driving reaction kinetics due to co-focusing of trace sample ions into high-concentration, overlapping peaks; while plateau-mode ITP is better-suited for separation of species into distinct zones for the purpose of purification or identification.

In this review, we outline and discuss an emerging use and field of application of ITP: the initiation (via mixing), control, and acceleration of chemical reactions involving at least one ionic species. Accordingly, we will specifically consider applications of ITP wherein at least one reagent in a chemical reaction is focused at an ITP interface, and this focusing is used to control a chemical reaction involving that reagent and at least one other reagent. We first briefly summarize simple concepts of second-order reactions. We then review a series of papers in which ITP was used to preconcentrate and mix reagents, and describe mixing time scales for two adjoining zones. We then review papers using ITP to accelerate chemical reactions, and separately discuss homogeneous and heterogeneous assay configurations. For each class of application, we review and describe physical modeling and scaling of ITP-aided reaction assays and summarize relevant literature. In Table 1, we summarize the studies discussed in this review, classify these in a manner consistent with our discussions, and briefly mention their major contributions. In Table 2, we characterize the reactant species, kinetics, and performance of the reactions described in these studies. Lastly, we discuss unaddressed challenges and make recommendations for promising areas for future work.

II. Earliest work involving ITP to control chemical reactions

A common theme in the first set of papers on ITP-aided reaction acceleration is the use of ITP to mix and control reactants in an ITP zone. We first provide a brief and simple scaling analysis to provide some physical context for this process. Unlike other types of microfluidic mixing using stirring or chaotic flows, mixing in ITP is typically accomplished via a deterministic electrophoretic process wherein one species is electromigrated into a region occupied by a second species. As mentioned above, migration velocity is the product of the electrophoretic mobility and the local electric field,

U_i = \mu_i E   (1)

Here, U_i represents the velocity of a migrating species, \mu_i is the local electrophoretic mobility (the sign of which indicates direction), and E the local electric field. Consider the case of two analyte species, A and B, occupying two adjoining zones in a channel. For now, consider that both of these are present as trace species in a background of buffer ions. In such a case, their differential electrophoretic velocity can be quantified in terms of their effective mobilities. The two species mix when their different electrophoretic velocities cause relative motion toward each other. The time over which the two species would mix (i.e., overlap fully) scales as

t_{mix} \sim \frac{\delta_1}{|U_A - U_B|} = \frac{\delta_1}{|\mu_A - \mu_B| E}   (2)

where \delta_1 denotes the width of the smaller of the two zones, E denotes the local electric field in zone 1, and \mu_A and \mu_B represent the electrophoretic mobilities of species A and B. As eq 2 shows, the mixing rate is influenced by the relative mobilities of the two species, the width of the zone, and the local electric field. The closer the two mobilities are to each other, the longer they will take to mix.
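To make the scaling of eq 2 concrete, here is a minimal numerical sketch in Python. The zone width, mobilities, and field strength are illustrative assumptions, not values from this review:

```python
def mixing_time(delta1, mu_a, mu_b, e_field):
    """Estimate the electromigration mixing time of eq 2,
    t_mix ~ delta1 / |U_A - U_B|, with U_i = mu_i * E (eq 1).
    SI units: delta1 in m, mobilities in m^2/(V s), e_field in V/m."""
    du = abs(mu_a - mu_b) * e_field  # differential electrophoretic velocity, m/s
    return delta1 / du

# Illustrative: 100 um zone, mobilities 30e-9 and 20e-9 m^2/(V s), 10 kV/m field.
t = mixing_time(100e-6, 30e-9, 20e-9, 1e4)
print(f"t_mix ~ {t:.2f} s")  # order 1 s for these values
```

Halving the zone width or doubling the field halves the mixing time, while closely matched mobilities (small |mu_a - mu_b|) make mixing slow, consistent with the discussion above.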
We note that the latter scaling is also useful when one species is in ITP (plateau or peak mode) and the second has a mobility and initial position configured so that it will pass through the space occupied by the first. In such a case, the characteristic difference in velocity can be characterized as the difference between the ITP velocity (which is in turn the velocity of the LE co-ion) and the local electrophoretic velocity of the second species.

To our knowledge, the first demonstrated use of ITP to mix and initiate chemical reactions came in 2008 from scientists working at Wako Pure Chemical Industries, in a series of papers describing the development of the assay concept, its optimization, and its development into a commercial instrument. The result of this work, the µTASWako i30, 45 was the first commercially-available instrument that uses ITP. In their first paper, Kawabata et al. 46 described an assay that they called the Electrokinetic Analyte Transport Assay. The assay leveraged ITP to focus a DNA-coupled antibody and increase its concentration while reacting with a target protein, α-fetoprotein (AFP). Conjugating the antibody with a DNA molecule increased its electrophoretic mobility and enabled the DNA-antibody conjugate to focus in ITP. The differential velocity between the ITP-focused DNA-antibody complex and the AFP (which was not focused in ITP) was used to initiate the reaction. The DNA-antibody complex reacted with AFP, and Kawabata hypothesized that the ITP preconcentration of the DNA-antibody complex accelerated these reaction kinetics (neither quantitative data nor analysis supporting accelerated kinetics was provided). Further, the reaction resulted in recruitment of unfocused AFP into ITP mode, increasing product concentration by up to 140-fold. Applied voltages were then reconfigured on their chip to initiate CE and separate the immune complex of interest from background fluorescent signal, as shown in Figure 2.
The plastic microfluidic chip was designed as a straight channel with several branches to allow the introduction of the various buffers and reagents. Their LE and TE buffers contained Tris-HCl and Tris-HEPES, respectively, and additional components like polymers, albumin, salts, and surfactants to improve assay performance. They achieved a limit of detection of 5 pM with this assay, impressively nearly 2 orders of magnitude below clinically-relevant limits.

Simultaneously (the papers were published within the same week), Park et al. 47 published a paper on improving the reproducibility of the assay. They studied peak intensity and separation, and their dependence on "handoff time", the moment at which voltage switching causes the assay to transition from ITP stacking to CE separation. Interestingly, they found that changes in buffer concentration or small manufacturing defects in the devices caused noticeable variation in arrival times, which in turn affected handoff and adversely affected data quality. To combat this, Park introduced automated handoff and timing mechanisms which relied on computer monitoring of voltage, in order to adjust for external factors and to achieve highly precise control of signal intensity and peak separation.

The final paper in this series was published in 2009, by Kagebayashi et al. 48 In it, they described the automated AFP-L3 assay, and the µTASWako i30 immunoanalyzer which evolved from the previous two papers. Kagebayashi et al. described its mechanism and characterized its performance. In addition to quantifying total AFP levels, they also incorporated an affinity-based separation step to simultaneously quantify the L3 isoform of AFP, AFP-L3. AFP-L3% is a clinically-relevant biomarker that is specific to malignant tumors and other pathologies.
49,50 By specifically binding to the L3 isoform through ITP preconcentration, the DNA-AFP-L3 immunocomplex separated from the AFP-L1 isoform, allowing the quantitation of both isoforms using laser-induced fluorescence. They validated their assay in spiked serum samples, and achieved a limit of detection of 1 pM, with 2% coefficient of variation. Their test demonstrated good correlation with a commercially-available reference assay. The µTASWako i30 immunoanalyzer received FDA 510(k) clearance in 2011. 51

III. ITP to preconcentrate, mix, and accelerate homogeneous reactions

III.a. Homogeneous reactions: theory and models

Bercovici et al. 52 developed the first analytical model examining ITP-aided chemical reactions wherein both reacting species are focused in peak-mode ITP. This initial model assumed perfectly overlapped Gaussian reactant peaks. They developed a volume-averaged set of first-order differential equations to describe conservation of species:

\frac{dc_A}{dt} = \frac{1}{A\delta}\frac{dQ_A^{TE}}{dt} - \frac{1}{\sqrt{3}}\left(k_{on} c_A c_B - k_{off} c_{AB}\right)

\frac{dc_B}{dt} = \frac{1}{A\delta}\frac{dQ_B^{LE}}{dt} - \frac{1}{\sqrt{3}}\left(k_{on} c_A c_B - k_{off} c_{AB}\right)   (5)

\frac{dc_{AB}}{dt} = \frac{1}{\sqrt{3}}\left(k_{on} c_A c_B - k_{off} c_{AB}\right)

The influx terms are

\frac{dQ_A^{TE}}{dt} = \beta\, p_{A,TE}\, A\, U_{ITP}\, c_A^0, \qquad \frac{dQ_B^{LE}}{dt} = p_{B,LE}\, A\, U_{ITP}\, c_B^0   (6)

Here, A is the channel cross-sectional area, U_{ITP} is the velocity of the ITP zone, c_A^0 and c_B^0 are the respective reservoir or initial concentrations of species A and B, and β is the ratio of TE ion concentrations in the adjusted TE and TE zones. For more on adjusted TE zones, we refer interested readers to Khurana et al. 35 and to Eid and Santiago. 53 δ is the width of the ITP zone, and is determined by the balance of dispersion effects (e.g., diffusion, which acts to mix species and broaden the peak) and electromigration (which acts to sharpen the interface). In ideal, diffusion-limited conditions this width can be estimated as 54

\delta_{theory} = \frac{RT}{F U_{ITP}} \frac{\mu_{LE}\,\mu_{TE}}{\mu_{LE} - \mu_{TE}}   (7)

where R is the universal gas constant, T is the temperature, and F is Faraday's constant. We note that in practice, the width of the ITP zone is not constant, but grows slightly over time.
35,52 Eq 6 also contains the so-called separabilities p_{A,TE} and p_{B,LE}, which were first introduced by Bocek. 55 Separabilities quantify the relative mobilities of a species and the buffer it is in, and are useful in estimating the focusing rate of species. 53

By assuming that one of the two reactants is in relative excess at the ITP interface, as well as an equilibrium constant which is low compared to the local concentration of the species in excess, an analytical solution to the system in eq 5 can be obtained. Under those circumstances, the above system of equations simplifies to a single ordinary differential equation, which admits exact and approximate closed-form solutions. They supported their model with experimental validation using a molecular beacon probe and oligo target. We summarize some of these results in Figure 3. ITP enhancement was more pronounced at lower reactant concentration (14,000-fold reduced reaction time at 500 pM target concentration), a regime in which reactions are governed by the off-rate. Though their work nominally focused on DNA hybridization, it is theoretically applicable to any ITP-aided reaction assay in which both reactants are preconcentrated in ITP.

Eid et al. 57 presented a modified version of Bercovici's model for cases in which only one species is focused in ITP. Namely, Eid considered loading of two reactants into the LE buffer, but where only one of them focused in ITP. This resulted in a reduction of the ITP-enhanced reaction rate. Under these conditions, the net reaction rate of the not-yet-focused species within the LE zone may be comparable (on a moles per second basis) to that in the ITP zone, due to the significantly larger volume of the LE zone. They modeled the latter effect as a shrinking reactor, and introduced a dimensionless parameter λ which incorporates several of the key variables which influence product formation:

\lambda = \frac{k_{on}\, c_B^0\, L_0}{U_{ITP}}   (12)

Here, L_0 is the length of the separation region in the channel.
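The qualitative behavior of the volume-averaged system of eq 5 can be seen by direct numerical integration. Below is a minimal explicit-Euler sketch in Python; the constant influx terms q_a and q_b stand in for the (1/(Aδ)) dQ/dt terms, the Gaussian-overlap prefactor is absorbed into k_on, and all parameter values are illustrative assumptions rather than values from this review:

```python
def itp_reaction(q_a, q_b, k_on, k_off, t_end, dt=1e-3):
    """Explicit-Euler integration of a simplified form of eq 5:
        dcA/dt  = q_a - (k_on*cA*cB - k_off*cAB)
        dcB/dt  = q_b - (k_on*cA*cB - k_off*cAB)
        dcAB/dt =        k_on*cA*cB - k_off*cAB
    q_a, q_b are constant volumetric influx rates into the ITP zone.
    Returns (cA, cB, cAB) at t_end; concentrations in M, time in s."""
    c_a = c_b = c_ab = 0.0
    for _ in range(int(round(t_end / dt))):
        r = k_on * c_a * c_b - k_off * c_ab  # net forward reaction rate
        c_a += (q_a - r) * dt
        c_b += (q_b - r) * dt
        c_ab += r * dt
    return c_a, c_b, c_ab

# Equal, constant influxes of probe and target into the ITP peak (illustrative).
c_a, c_b, c_ab = itp_reaction(q_a=1e-9, q_b=1e-9, k_on=1e5, k_off=1e-4, t_end=60.0)
print(c_a + c_ab)  # species conservation: approx q_a * t_end = 6e-8 M

# With B influx in large excess and fast kinetics, production becomes
# limited by A's influx (c_AB -> q_a * t after a short transient).
_, _, c_ab_lim = itp_reaction(q_a=1e-9, q_b=1e-7, k_on=1e7, k_off=0.0, t_end=100.0)
print(c_ab_lim / (1e-9 * 100.0))  # dimensionless production rate, close to 1
```

The second run illustrates the influx-limited regime discussed later in this section: once kinetics are fast, the product accumulation rate tracks the influx of the limiting reactant rather than the rate constants.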
All the models discussed above make the simplifying assumption that ITP zones have a Gaussian profile and are perfectly overlapped, which allows the use of volume-averaged concentrations. Garcia et al. 36 first showed that samples focused in ITP peak mode may exhibit species-specific and non-Gaussian/asymmetric profiles. In their work, they found that sample ion properties contributed greatly to dispersion and ITP peak shapes. In particular, ITP peaks wherein sample ions had mobilities near those of the TE or LE exhibited significant tailing into these respective zones and an associated asymmetry.

Rubin et al. 58 presented a study focused on sample distribution within ITP zones, and the effect of these species-specific distributions on reaction rates. They presented closed-form solutions for peak shapes and production rates for the case of ITP dynamics dominated by pure diffusion (e.g., no advective dispersion) and electromigration. To account for sample zone shape asymmetry, they defined an effective association rate constant of the form

k_{on}^{eff} = k_{on}\, k_{form}   (13)

Here, k_form depends on the relative sample, TE, and LE mobilities, and is given by

k_{form} = \frac{\cot^{-1} y_A - \cot^{-1} y_B}{y_A - y_B}   (14)

y_A = \frac{\mu_{TE}^{-1} - \mu_A^{-1}}{\mu_{TE}^{-1} - \mu_{LE}^{-1}}, \qquad y_B = \frac{\mu_{TE}^{-1} - \mu_B^{-1}}{\mu_{TE}^{-1} - \mu_{LE}^{-1}}   (15)

Interestingly, they found that reaction rate is not necessarily maximized when the concentration profiles of the two reacting species perfectly overlap, and instead varies depending on the relative mobilities of the species. As a result, production rates calculated while accounting for sample distribution are typically lower than those computed with volume-averaged concentration models.

Recently, Eid and Santiago 53 considered the design parameters that govern the performance of peak-mode ITP assays. They incorporated the results of Rubin's more comprehensive species overlap model into their analysis.
This analysis showed that for reaction times longer than the characteristic ITP reaction time scale τ, the number of molecules of product AB formed depends solely on the relative influx rates of reactants A and B. They defined a dimensionless production rate such that

\tilde{N}_{AB}^{j} = \frac{N_{AB}^{j}(t)}{\left( Q_A^j Q_B^j / (Q_A^j + Q_B^j) \right) t}   (16)

Here, j represents the initial loading buffer (LE or TE buffers), and N_AB represents the number of AB molecules formed. For the case in which B is in relative abundance to A, the above equation simplifies to unity, implying that after a short transient associated with kinetic rates, production rate is limited by and equal to the net influx rate of the rate-limiting species (the species present in locally higher concentration). This finding is consistent with what Shintaku et al. 59 reported in their examination of ITP-aided acceleration of bead-based reactions.

Furthermore, they considered the effect of initial sample placement on production rates. They defined ε, a ratio of product formation when both reactants are initially loaded into the TE versus when both are loaded in the LE buffer,

\varepsilon = \frac{N_{AB}^{TE}}{N_{AB}^{LE}}   (17)

For long reaction times, they found that ε depends solely on ϕ_A, the influx ratio of the limiting species,

\varepsilon = \phi_A = \frac{p_{A,TE}}{p_{A,LE}}   (18)

Eid and Santiago concluded that for sufficient reaction times, the optimal production rate of species AB in an ITP-aided reaction assay is obtained by simply maximizing the influx rate of the rate-limiting species.

III.b. Homogeneous reactions: experimental studies and assays

III.b.1. Homogeneous nucleic acid hybridization assays

We here consider nucleic acid hybridization reactions. We term these "homogeneous" when they involve single-stranded nucleic acid species which are in solution (i.e., not attached to a substrate). Such assays are attractive due to their simple design and implementation.
However, excess reactant removal and clean-up steps can be more difficult to incorporate into the workflow (e.g., relative to heterogeneous reactions). Importantly, multiplexing is much more difficult to achieve in this format.

The first discussion of using ITP to mix reagents in the context of ssDNA hybridization came from Goet et al. 60 in 2009. This discussion came in the greater context of using ITP to bring sample zones into well-controlled contact. However, Goet only discussed this possibility and did not experimentally demonstrate the concept. To our knowledge, Persat and Santiago 61 were the first to experimentally demonstrate ITP-aided hybridization of nucleic acids. They used ITP and molecular beacons to selectively profile among seven microRNA (miRNA) species in total RNA samples from human liver and kidney. This was also the first published quantitative demonstration of a high-specificity reaction using ITP, with signal selectivity for the target miR-26 (22 nt) compared with its relative, miR-126 (22 nt), and its precursor, pre-mir-26a (77 nt). Molecular beacons are single-stranded DNA (ssDNA) probes with a unique hairpin structure that places a fluorophore and quencher in close proximity. 62 Upon binding to a specific target, the structure of the probe is disrupted, separating the fluorophore from its quencher, and resulting in increased fluorescent signal. Persat and Santiago developed a multistage assay wherein the channel contained three discrete regions with varying amounts of sieving matrix (polyvinylpyrrolidone, PVP), magnesium chloride, and denaturant. In the first region, all RNA molecules in the total RNA sample were preconcentrated in ITP. The second region contained a high concentration of sieving matrix in order to defocus large RNA and selectively retain miRNA. The third region applied stringent conditions to promote specific hybridization between the molecular beacons and miRNA targets. Figure 4 shows a schematic of their multi-stage assay.
Persat verified both sequence- and size-selectivity of this assay; the former by titrating with mismatched miRNA, and the latter by titrating with larger precursor miRNA. They demonstrated initial biological relevance of this technique by achieving a 10 pM limit of detection of miR-122 in kidney and liver total RNA samples. Bercovici et al. 63 later demonstrated a related approach based on bidirectional ITP. Another drawback of this approach is the complexity of assay chemistry needed for effective bidirectional ITP.

A few months later, Eid et al. 65 introduced a new method for homogeneous post-hybridization cleanup. Their multistage approach used ITP to accelerate hybridization between a 26 nt linear ssDNA probe and a 149 nt ssDNA target, and an ionic spacer to subsequently separate reaction products. In a similar vein to Persat et al., 61 the first stage focused all probe and target, promoting rapid mixing and hybridization. The second stage contained a high-concentration sieving matrix, which allowed the ionic spacer to overtake and separate the unbound probes from the slower probe-target complex. Figure 5 shows the different stages of the reaction-separation assay. Eid et al. 65 used 20 mM HEPES as TE, 1 mM MOPS as spacer, and 1.8% hydroxyethyl cellulose (HEC) as sieving matrix. This resulted in two focused ITP peaks, one at the LE-spacer interface, and one at the spacer-TE interface. They demonstrated the advantage of this approach by achieving a 220 fM limit of detection in 10 min, with a 3.5 decade dynamic range. This technique is fairly simple and flexible, and produces two ITP-focused peaks; this improves signal and is compatible with downstream manipulation. However, the method lacks high selectivity, and requires sieving matrix and spacer optimization for targets of different sizes.

III.b.2. Bead-based homogeneous DNA assays

Beads can be focused into ITP zones and used to achieve pseudo-homogeneous reactions between species in solution and randomly dispersed beads.
Beads offer the advantage of inherent multiplexing (e.g., by coding beads). Shintaku et al. 59 leveraged ITP to co-focus target DNA with DNA-conjugated beads in order to strongly accelerate multiplexed DNA hybridization reactions. They conjugated 6.5 µm polystyrene beads with ssDNA probe sequences corresponding to ten different target oligos. Since fluorescent quantitation was performed using a Luminex 200 instrument, the need for additional signal removal methods was obviated. They also developed a model to describe the reaction kinetics of bead-based hybridization with and without ITP. Interestingly, Shintaku described a quasi-equilibrium at high target concentrations between target influx and consumption within an ITP zone, and Eid and Santiago 53 would later expand on this finding (see Section III.a). Their 20 min assay achieved comparable sensitivity to a standard 20 h hybridization assay, and 5-fold higher sensitivity when compared with 30 min of standard hybridization. Furthermore, their multiplexed assay (ten target species) showed similar specificity to the standard bead assay, as measured by a so-called specificity index, defined as the ratio of specific signal to the highest nonspecific signal.

III.b.3. Homogeneous immunoassays aided by ITP

Unlike nucleic acids, which have relatively high and largely size-independent mobilities in free solution, 66 proteins span a wide range of generally lower mobilities, which complicates their focusing in ITP. However, when an ITP-aided homogeneous immunoassay was extended to 20-fold diluted serum, assay sensitivity suffered, decreasing by an order of magnitude. Eid hypothesized this was due to the complexity of serum, with its high protein content and associated nonspecific binding. The presence of many ionic species which act as native spacer molecules may have also contributed.

III.b.4. Whole cell ITP reaction assays

In recent years, ITP has been used to accelerate reactions involving whole dispersed cells.
Schwartz and coworkers demonstrated ITP-accelerated fluorescence in situ hybridization (ITP-FISH) for the detection of whole bacterial cells, over a limited range of cell concentrations. While this concentration range excludes several interesting applications, ITP-FISH is a faster, more convenient, and automation-friendly alternative to conventional FISH assays.

III.b.5. Coupling ITP with enzymatic reactions

Although not yet demonstrated, there have been two interesting accomplishments toward integration of ITP with enzymatic processes. The first was a paper published by Borysiak et al., 20 who coupled ITP-based sample preparation with loop-mediated isothermal amplification (LAMP).

IV. ITP to preconcentrate, mix, and accelerate heterogeneous reactions

IV.a. Heterogeneous reactions: theory and models

In heterogeneous assays, one or more of the reactants is immobilized on a solid (stationary) surface. Karsenty et al. 73 developed the first analytical model for surface-based hybridization assays using ITP. They considered the case of a single target species migrating over a finite segment of functionalized magnetic beads immobilized using an external magnet. Through scaling analysis, they showed that the problem can be assumed to be one-dimensional, meaning that the analyte concentration at each point x along the channel can be effectively represented by its area-averaged concentration. Karsenty also showed that the spatial distribution of target ions remains unchanged as they react with the immobilized probes. They developed the following one-dimensional ordinary differential equation to describe the concentration of surface probes bound by target molecules:

\frac{db(t)}{dt} = k_{on}\, c_R \left( b_m - b(t) \right)   (19)

where b_m is the maximum (total) concentration of surface probes and c_R the local target concentration. The resulting enhancement ratio R between assays with and without ITP, at low concentrations, simplifies to

R = p\, \frac{\tau_{ITP}}{\tau_{tot}}   (22)

where p is the ITP preconcentration factor, τ_ITP the time during which the focused sample overlaps the probes, and τ_tot the total assay time. The above results indicate that at low concentrations, ITP-based enhancement depends on the preconcentration factor as well as the fraction of the total assay time in which the two reacting species overlap through ITP. At higher concentrations, the reaction rate, even without ITP, is sufficiently high to saturate the probes, and the enhancement ratio decays until it reaches unity.
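The low-concentration enhancement scaling of eq 22 can be checked directly: eq 19 at constant target concentration c integrates to b/b_m = 1 - exp(-k_on c t), and comparing a standard incubation against a p-fold preconcentrated sample that overlaps the probes only for a short time τ_ITP recovers the scaling above. A minimal Python sketch; all parameter values are illustrative assumptions, not values from this review:

```python
import math

def bound_fraction(c_target, k_on, t_react):
    """Fraction of surface probes bound after t_react at constant target
    concentration: b/b_m = 1 - exp(-k_on * c * t), the solution of a
    first-order Langmuir-type binding ODE (cf. eq 19)."""
    return 1.0 - math.exp(-k_on * c_target * t_react)

c0 = 1e-12       # 1 pM target (illustrative)
k_on = 1e6       # 1/(M s) on-rate (illustrative)
p = 1000.0       # ITP preconcentration factor
tau_tot = 180.0  # total assay time, s
tau_itp = 4.0    # time the ITP-focused zone overlaps the probe spot, s

standard = bound_fraction(c0, k_on, tau_tot)
with_itp = bound_fraction(p * c0, k_on, tau_itp)
print(with_itp / standard)    # observed enhancement ratio
print(p * tau_itp / tau_tot)  # low-concentration prediction (cf. eq 22)
```

At these low concentrations the exponentials are nearly linear, so the two printed values agree closely; at higher concentrations the standard assay saturates the probes and the enhancement decays toward unity, as stated above.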
Their scaling analysis yielded low Damköhler numbers, indicating that reaction rates are much slower than diffusive transport, and thus these assays are reaction-limited. Moghadam et al. also used the analysis of Karsenty 73 in their work on ITP-aided lateral flow assays. They too found that their assay was reaction-limited, and that their limit of detection was governed by the preconcentration and the off-rate constant, similarly to Bercovici et al. 52

Recently, Paratore et al. 75 presented a model of ITP-aided surface reactions which resolves the target distribution across the channel height, H. By assuming that the sample is well-mixed along the channel height, that diffusion along the y-axis is faster than the reaction rate, that the width of the reaction site is wider than the ITP zone width, and that the number of target molecules is significantly lower than available reaction sites, an analytical expression for the fraction of bound surface molecules, equivalent to eq 23, was obtained.

In a related two-part study, Shkolnikov and Santiago 76 modeled the coupling of ITP with affinity chromatography (AC). Their analysis is governed by three dimensionless groups: a scaled target concentration ĉ_T, a scaled dissociation constant K̂_D, and a Damköhler number Da,

\hat{c}_T = \frac{a}{c_{P,0}}, \qquad \hat{K}_D = \frac{k_{off}}{k_{on}\, c_{P,0}}, \qquad Da = \frac{\sqrt{2\pi}\, \sigma\, k_{on}\, c_{P,0}}{U_{ITP}}   (28)

Here, a is the maximum concentration of the Gaussian distribution, σ its standard deviation, and c_{P,0} the initial probe concentration. Shkolnikov found that increasing concentration via ITP can proportionally reduce capture times and required capture lengths across different Da regimes, thus increasing column utilization compared with traditional AC assays. They also studied capture dynamics in a variety of interesting limiting regimes. For example, they found that scaled capture time reaches an asymptotic value in the low Da regime, but increases linearly with Da for high Da values. The former limit is where capture time is limited by the advective time associated with the electromigration of the species entering the capture region; while the latter limit is governed by the time-to-completion of the reaction (as determined by the species with the higher local concentration).
They also found that the spatial resolution of ITP-AC purification scales proportionally with time for the case where the AC capture molecule is present at higher concentration.

IV.b. Heterogeneous reactions: Experimental studies and assays

IV.b.1. Functionalized gel or porous regions and affinity chromatography type ITP assays

Garcia-Schwarz and Santiago 77,78 developed a two-stage assay that used ITP to enhance hybridization and a photopatterned functionalized gel to remove excess reactant. The first of their two publications on this subject 77 is notable for its integration of serial ITP reactions with gel capture. ITP was first used to enhance the reaction kinetics between miRNA targets and fluorescently-labeled complementary reporters. In the second stage of the assay, the ITP zone migrated toward a photopatterned gel region which was functionalized with probes complementary to the reporters. Unused reporters bound to the probes and stayed behind as the reporter-target complex migrated through the gel. The resulting purified ITP zone was then fluorescently quantified to determine target concentration. Garcia-Schwarz demonstrated the selectivity of this technique for mature miRNA over precursor miRNA, which are larger but contain the mature miRNA sequence. They achieved a 1 pM limit of detection with a 4 order of magnitude dynamic range using a linear DNA probe.

In the second paper, 78 Garcia-Schwarz and Santiago built on their previous work, focusing specifically on improving assay specificity. They demonstrated single-nucleotide specificity with an assay which preferentially detected let-7a over other members of the let-7 microRNA family. Furthermore, they explored improved probe designs to enhance thermodynamic and kinetic specificity, and used locked nucleic acid (LNA) probes as well as a hairpin-structured reporter.
The tradeoff for this enhanced specificity was reduced sensitivity and dynamic range, as they sacrificed an order of magnitude in the former and two orders of magnitude in the latter. Finally, they successfully validated this approach using total RNA samples, and demonstrated comparable results to PCR quantification. One limitation of this method of signal removal is the laborious experimental preparation required to pattern and functionalize the gel, and another is the challenge of performing multiple experiments on a single device. However, the demonstration of single-nucleotide specificity, a key requirement in many nucleic acid applications, was an important achievement showing the significant potential of ITP assays. In the second of their two-part work, Shkolnikov and Santiago 76,79 presented a technique that coupled ITP preconcentration with affinity chromatography for sequence-specific capture and purification of target DNA molecules. They found that the three key parameters which they identified in their first paper (Da, ĉ_T, and K̂_D) (and section IV.a above) showed remarkable consistency when compared with experimental results, accurately predicting the spatiotemporal behavior of the DNA. In addition to their analytical contributions (described in the previous section), they synthesized and functionalized a custom porous polymer monolith (PPM), made of poly(glycidyl methacrylate-co-ethylene dimethacrylate). They then functionalized this monolith with probe oligos, and characterized its performance. The PPM had little non-specific binding, was non-sieving, and compatible with ITP. Shkolnikov demonstrated the capability of this approach, purifying 25 nt Cy5-labeled target DNA from 10,000-fold more abundant background DNA in less than a minute, and their technique had a 4 order of magnitude dynamic range.

IV.b.2. Surface-based DNA hybridization assays

Karsenty et al.
73 were the first to experimentally demonstrate ITP-aided speed-up of a simple surface-based hybridization reaction. They designed a 3 min assay in which they used an external magnet to attract and immobilize molecular beacon-conjugated paramagnetic beads at a designated location on the microfluidic channel. To facilitate the capture of beads, the microfluidic channel contained a 100 x 30 µm trench. They then initiated ITP to transport sample containing target molecules over the prefunctionalized surface, allowing rapid hybridization over only 4 s of incubation. Contaminants and unbound target molecules continued migrating downstream, resulting in a single-step reaction and purification assay. They achieved a limit of detection of 1 nM, a 100-fold improvement in sensitivity over comparable continuous-flow surface hybridization assays. Interestingly, the improvement in sensitivity underperformed theoretical predictions, which they attributed in part to shorter reaction times for ITP. This initial feasibility type study used a single probe and a single target species, and did not explore specificity and cross-reactivity with interferents. Han et al. 74 were the first to demonstrate multiplexed ITP-accelerated heterogeneous reactions. They applied ITP to focus nucleic acids and accelerate a DNA hybridization microarray assay. A key feature of DNA array technologies is their massive multiplexing capacity, capable of detecting thousands of targets simultaneously. 80 Furthermore, this improvement in sensitivity was achieved without any loss in specificity, as measured using the specificity index introduced earlier by Shintaku et al. 59 This work demonstrated the potential for ITP to integrate into DNA microarrays and other multiplexed surface reactions.

IV.b.3. Heterogeneous immunoassays aided by ITP

To the best of our knowledge, there have been two research groups (three studies) which used ITP to enhance heterogeneous immunoassays. In the first, Khnouf et al.
83 extended ITP-aided surface reaction assays to immunoassays using two different approaches for antibody immobilization. The first approach consisted of immobilizing antibody-coated magnetic beads at a specified region in the microchannel. In the second, they directly immobilized antibodies on a functionalized region of the channel. They designed a PMMA device that enabled electrokinetic injection in order to control the amount of sample injected. Following electrokinetic injection, the ITP-focused sample, containing the protein target, here bovine serum albumin (BSA), entered the reaction regions and bound to the immobilized antibodies. They empirically estimated a 100-fold preconcentration factor for the target protein. For both assay formats, using ITP to preconcentrate sample led to an increase in sensitivity. Khnouf circumvented the issue of low protein mobility by selecting BSA, a high mobility protein, as their target, and labeling it with the even higher-mobility Alexa Fluor 488. 84 Despite this limitation, this study showed the feasibility of incorporating ITP in heterogeneous immunoassays.

V. Conclusions and recommendations

We reviewed a number of theoretical and experimental studies wherein ITP focusing and mixing are used to initiate and accelerate chemical reactions. ITP preconcentration has been shown to increase reaction rates by up to 14,000-fold. By selectively focusing species within a designed electrophoretic mobility range, ITP can be used to remove contaminants and interfering species prior to a reaction, increase concentration during the reaction, and remove background signal or excess species following a reaction. Assays using ITP have demonstrated the ability to seamlessly integrate with various detection methods and other downstream assay steps.
These assays leverage a variety of analytical chemistry tools including microfabrication, electric field control, a variety of cDNA-type probes, aptamer-type probes, antibodies, sieving matrices, spacer ions, denaturing conditions, microarrays and associated scanners, enzymatic reactions, and multispectral emissions and color detection. Downstream assays shown to be compatible with ITP include PCR, Luminex, and Bioanalyzer quantitation. Together, these studies demonstrate ITP's versatility and potential to control, enhance and accelerate chemical reactions. Furthermore, the continued near-decade existence of a commercial ITP-based immunoanalyzer suggests ITP's potential beyond the academic realm. There remain several areas of unaddressed challenges and opportunities. There have been only a handful of immunoassays using ITP, and those have been limited to a few, particularly well-characterized proteins. Though the heterogeneity of protein mobilities, solubilities, and sensitivity to pH and ionic strength presents a significant challenge, the opportunities to incorporate ITP into proteomic analysis are significant and worth exploring. Integrating ITP with enzymatic processes, particularly amplification techniques like PCR and RPA, is also a significant opportunity for future ITP assays. Another area of opportunity is increasing the level of multiplexing in ITP-aided reactions from 20 species to thousands, and increasing throughput by analyzing and controlling samples in parallel. The studies up to this point have been fairly limited in the input amount of sample that is processed. Increasing channel dimensions would potentially increase sensitivity and allow the processing and detection of rare species like viral DNA and circulating tumor cells. It may also encourage more widespread adoption of the technique. In particular, further incorporating the ITP reaction process with complex sample extraction and purification (e.g. from blood, urine, tumor tissue, etc.)
and with next-generation sequencing library preparation and target enrichment protocols would address key needs of those techniques.

Table 1. Summary of the twenty-three studies that applied ITP to chemical reactions.

Authors | Year | Primary contribution

ITP to mix and control chemical reactions:
Kawabata et al. 46 | 2008 | First work to use ITP to speed-up chemical reactions
Park et al. 47; Kagebayashi et al. 48 | 2008 | First commercial system to use ITP
Moghadam et al. 85 | 2015 | First demonstration of ITP accelerating lateral flow assays
Paratore et al. 75 | 2017 | Lowest limit of detection achieved by an ITP immunoassay to date

where their velocity will match that of the LE and TE zones (the so-called ITP velocity). Species that have a mobility lower than that of the TE electromigrate but fall behind the ITP interface, whereas species with higher mobility than the LE overtake the LE. Bottom: Schematic of peak and plateau ITP modes. In the former, dilute sample ions focus in a Gaussian-like peak. Multiple sample ions can co-focus in partially or entirely overlapping peaks, depending on their relative mobilities. The typical ITP peak magnitude is exaggerated considerably for visualization purposes. In plateau mode, sample ions at sufficiently high concentration form plateaus of constant (in time) and locally uniform concentration, and contribute significantly to local conductivity.

Figure 2. Electropherogram from Kawabata et al. 46 using ITP preconcentration and DNA-labeled antibodies to control an ITP assay for AFP, a protein target. Peaks 1 and 3 correspond to the DNA-antibody-AFP complex, whereas peaks 2 and 4 correspond to unreacted antibodies, trailing the immune complex. ITP resulted in a 140-fold increase in fluorescent signal, enabling lower detection limits.

Figure 4. Schematic representation of the first experimental ITP hybridization assay from Persat and Santiago.
61 The assay used molecular beacons, which are fluorescently (imperfectly) quenched in their native state but unravel and emit higher fluorescence upon binding to their target. Sieving matrix was used to divide the channel into three regions to respectively promote ITP preconcentration, size selectivity, and highly stringent hybridization. This assay was able to selectively detect mature miRNA target in tissue total RNA samples, and is the first of many ITP hybridization studies to use molecular beacons as probes.

Figure 5. Experimental visualization from a DNA reaction and separation assay using ITP and ionic spacers. 65 The probe is a 26 nt fluorescently-labeled ssDNA oligo and the target a 149 nt ssDNA oligo. In the first stage (t1), all reactants and products were co-focused in an ITP zone, which promoted rapid hybridization. In the second stage (t2 and t3), the ITP zone entered a region with sieving matrix, causing the ionic spacer to overspeed the larger probe-target complex but not the unreacted probe molecules. By t4, all species had reached equilibrium, and the two ITP peaks migrated in parallel.

Figure 6. Schematic and experimental visualization of the first assay to use ITP to accelerate reactions with whole dispersed cells. Positively-charged antimicrobial peptides are focused in ITP and then held stationary using counterflow, while whole bacterial cells migrate through the channel by pressure-driven flow. Peptides label the flowing cells, which are then detected downstream using a fluorescent detector. The experimental set-up was stable over the 1 h detection window.

Figure 7. Schematic of the ITP-based multiplexed microarray assay designed by Han et al. 74 First, labeled ssDNA targets were focused in ITP. Upon reaching the constriction, the electric field was turned off to minimize Joule heating and allow redistribution of target DNA through diffusion.
The ssDNA targets then migrated over 60 immobilized probe sites representing 20 unique sequences, and fluorescence was measured after the ITP zone had swept by. The 30 min assay achieved a 100 fM limit of detection, the lowest for any heterogeneous ITP reaction assay. The array signal was quantified using a standard DNA array scanner. 85 By using ITP to preconcentrate target molecules, ITP-LFA addresses the poor detection limits typical of conventional LFAs. Bercovici et al. 63 combined ITP-aided reactions with molecular beacons, but used them to detect a significantly larger target, 16S rRNA from bacteria in cultures and patient urine samples. To our knowledge, this is the first quantitative demonstration of reaction acceleration using ITP, in addition to the first quantitative theory and validation thereof. They successfully demonstrated the applicability of this approach in real patient samples at clinically-relevant levels. Another significant contribution of the latter work is the design of a photomultiplier tube (PMT) system for high sensitivity fluorescent signal quantification. The assay achieved a limit of detection of 10^6 cfu/mL, or 30 pM, of E. coli in human urine samples. This work and that of Persat and Santiago 61 demonstrated that molecular beacons can be used for selective assays, but also that ITP assays with molecular beacon based detection sacrifice sensitivity and dynamic range (due to the large background signal of unreacted MBs). These limitations highlighted the need for removal (e.g., physical separation) of background signal following hybridization, and subsequent work in the area has devised and demonstrated various solutions to this issue. Bahga et al. 64 introduced a homogeneous ITP reaction assay for removing excess background signal of unreacted molecular beacons. They designed an assay in which ITP-aided DNA hybridization was coupled to the high resolving power of CE using bidirectional ITP (see review by Bahga and Santiago 12).
Following ITP-aided hybridization, CE was triggered, and unbound molecular beacons separated from the larger (lower mobility) beacon-target complex. They successfully demonstrated sequence-specific detection of a 39 nt ssDNA target, with a 3 pM limit of detection. Though Bahga et al. improved the sensitivity of ITP and molecular beacon assays with this approach, the use of CE resulted in more dispersed signal peaks, limiting sensitivity and downstream analysis of reaction products. Schwartz and Bercovici 69 used positively-charged antimicrobial peptides (AMPs) to detect (but not identify) the presence of whole bacterial cells from a water sample spiked with E. coli. The cell-AMP reaction is not specific to the E. coli strain but demonstrated a new whole cell/reactant application. While previous studies used ITP to focus and detect pre-labeled (i.e., pre-reacted) bacterial cells in river water samples, 70,71 this was the first to perform in-line labeling reaction, detection, and separation. Here, Schwartz used cationic ITP to focus the AMPs and used pressure-driven counterflow to hold the high-concentration ITP zone stationary for over 1 h. Figure 6 shows the schematic of this assay and an image of the stationary ITP peak. Meanwhile, ITP and pressure-driven flow acted unidirectionally on E. coli bacterial cells, resulting in continuous flow of bacterial cells reacting with and labeled by the stationary (in the lab frame) AMPs. AMP concentration was limited by the tendency of positively charged AMPs to bind to negatively-charged glass channel walls.
The assay was capable of detecting 10^4 cfu/mL over the 1 h detection window, which is at the upper limit of relevant concentrations, while using 100-fold fewer reagents compared to conventional AMP techniques. Schwartz and Bercovici 15 hypothesized that an additional one or two orders of magnitude improvement in sensitivity may be possible using parallel channels for increased throughput. Phung et al. 72 leveraged the use of counterflow pressure as well as an ionic spacer and sieving matrix to perform an ITP-aided fluorescence in situ hybridization (FISH) assay on bacterial cells. In the first step of their assay, bacterial cells and fluorescently labeled oligonucleotide probes are co-focused in peak-mode ITP. The nature of their synthetic cDNA hybridization probes makes this, to our knowledge, the first example of a high specificity reaction between cells and a reactant. Phung used counterflow to immobilize the ITP zone and prolong the sequence-specific hybridization process. In a second stage of the assay, a sieving matrix was used to separate stained cells from excess probe. Probes specific to target species E. coli and P. aeruginosa were respectively labeled with FAM (excitation wavelength of 488 nm) and Cy5 (excitation wavelength of 635 nm). Similar to Eid et al., 65 Phung used 1.8% HEC as sieving matrix, but used MES as an ionic spacer to separate stained cells from excess probe. They tested the selectivity of their technique by testing probes designed for different bacterial species, and found that selectivity heavily depends on probe design and the number of matched nucleotides with the bacterial target. The in-line ITP-FISH assay required 30 min to complete, a significant improvement over the 2.5 h typical of conventional FISH assays. However, the performance suffered due to reduced staining efficiency (approximately 50% of the 2.5 h conventional FISH offline hybridization reaction), and the assay achieved a limit of detection of 6 x 10^4 cells/mL.
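Detection limits in these whole-cell studies are quoted both as cell concentrations (cfu/mL) and, earlier, as molar concentrations (e.g., the 10^6 cfu/mL ≈ 30 pM equivalence cited for 16S rRNA). The conversion depends on an assumed number of target copies per cell; the copies-per-cell value below is an illustrative assumption, not one reported by these studies:

```python
AVOGADRO = 6.022e23  # molecules per mole

def cells_to_molar(cells_per_ml, copies_per_cell):
    """Convert a cell concentration (cfu/mL) to the molar concentration
    of a target present at copies_per_cell copies per cell."""
    molecules_per_liter = cells_per_ml * 1e3 * copies_per_cell
    return molecules_per_liter / AVOGADRO

# E.g., 1e6 cfu/mL with an assumed ~2e4 16S rRNA copies per cell lands
# near the tens-of-pM scale quoted in the text for that assay.
c_molar = cells_to_molar(1e6, 2e4)
print(f"{c_molar * 1e12:.0f} pM")
```

The same bookkeeping explains why cell-based limits of detection look numerically large in cfu/mL yet correspond to quite dilute molecular targets.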
Borysiak et al. 20 developed an assay in which loop-mediated isothermal amplification (LAMP) was used in conjunction with ITP. ITP was first used to extract and purify DNA from E. coli after proteinase K-based lysis of diluted whole milk samples. ITP was then deactivated, and increased temperature was used to manipulate air pressure and fluid flow and divert the ITP-focused DNA into a reaction chamber with dried LAMP reagents for on-chip amplification. Borysiak achieved a 100-fold lower limit of detection compared with standard tube-based LAMP without ITP purification. More recently, Eid and Santiago 23 demonstrated the compatibility of ITP with another isothermal amplification method, recombinase polymerase amplification (RPA), as well as with alkaline and proteinase-K lysis. ITP was used to extract and preconcentrate genomic DNA from inactivated Gram-positive L. monocytogenes cells spiked into whole blood. The extracted DNA was then transferred directly to a tube containing RPA master mix. In the supplementary information of this reference, Eid further demonstrated preliminary results of a more integrated assay in which an electric field was used to migrate an ITP DNA sample zone into an LE reservoir containing RPA master mix and primers. ITP was then deactivated and on-chip isothermal amplification and detection was initiated. In both assays, ITP enabled isothermal amplification with minimal (10-fold or less) dilution of the original complex sample. the timescales that govern reaction, association, and dissociation rates, respectively. b_m is the initial (total) number of available probe molecules, and α is the ITP preconcentration factor. The duration over which the ITP-focused target overlaps with the stationary probe molecules is given by δ/U_ITP, where δ is the width of the ITP peak. Karsenty defines an enhancement ratio of ITP-aided surface reactions compared with standard flow reactions. Paratore et al. 75 expanded on the works of Karsenty and Han, and further investigated surface-based ITP reaction kinetics.
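The overlap time δ/U_ITP above, combined with Langmuir-type surface kinetics of the form used in these models (cf. eq 23), gives a quick feel for why α-fold preconcentration matters during a brief pass of the ITP peak. Every numerical value below (peak width, velocity, rate constants, concentrations) is an illustrative assumption:

```python
import math

def bound_fraction(c0, k_on, k_off, tau):
    """Langmuir-type fraction of surface probes bound after exposure for
    time tau to a well-mixed target at concentration c0 (same functional
    form as eq 23 in the text)."""
    rate = c0 * k_on + k_off
    return (c0 * k_on / rate) * (1.0 - math.exp(-rate * tau))

# Residence time of the ITP peak over the reaction site: tau = delta / U_ITP.
# delta = 50 um peak width and U_ITP = 50 um/s are assumed values, giving 1 s.
tau_itp = 50e-6 / 50e-6

# Assumed kinetics: k_on = 1e5 1/(M s), k_off = 1e-4 1/s, 1 nM target,
# and an assumed 10,000-fold ITP preconcentration factor alpha.
alpha = 1e4
h_no_itp = bound_fraction(1e-9, 1e5, 1e-4, tau_itp)
h_itp = bound_fraction(alpha * 1e-9, 1e5, 1e-4, tau_itp)
print(f"bound fraction: {h_no_itp:.1e} without ITP, {h_itp:.2f} with ITP")
```

With these assumed numbers, the unconcentrated sample barely reacts during the brief overlap, while the preconcentrated zone drives the surface a large fraction of the way to saturation in the same time.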
They defined three operation modes for ITP: pass-over ITP (PO-ITP), stop and diffuse ITP (SD-ITP), and counterflow ITP (CF-ITP). The first of the three is the mode used in Karsenty's and Han's previous works, and consists of focused target passing over a reaction site containing immobilized probes, and reacting during a brief period of spatial overlap. In SD-ITP, once the focused target reaches the reaction site, the electric field was turned off, and the target diffused while reacting over the reaction site. To characterize reaction kinetics in this mode, the authors defined two characteristic time scales for depletion and axial diffusion. At long times, longer than either characteristic time scale, the solution to eq 25 approaches a steady state of c = 0. The third and final mode of ITP they discussed is counterflow ITP (CF-ITP), in which the ITP peak was held steady over the reaction site by applying counterflow pressure. There, the fraction of surface molecules bound is given by eq 28. Paratore's model revealed that SD-ITP and CF-ITP result in significantly more reaction acceleration compared with PO-ITP. SD-ITP is highly dependent on the diffusion behavior of the focused species, which can be improved by increasing viscosity. On the other hand, CF-ITP is dependent on the ratio of ITP accumulation and species depletion time scales, which can be controlled by optimizing ITP focusing conditions. Shkolnikov and Santiago 76 developed a model for ITP coupled with affinity chromatography (AC), wherein target molecules are focused in ITP and probes are immobilized in an affinity capture region. The analytical model they developed was broad and comprehensive. It contained fully-reversible second-order reaction kinetics, allowed for spatially variable bound substrate distribution, and accounted for zone dynamics of ITP-focused analytes.
They found that key parameters like probe and target concentrations, forward and reverse reaction constants, and others collapsed into three dimensionless parameters that governed the different limiting regimes in the ITP-AC problem: the Damkohler number (Da), the peak target concentration scaled by the initial probe concentration (ĉ_T), and a dimensionless equilibrium constant (K̂_D). 81 Up until the work of Han, ITP studies were limited to the detection of one or two targets. Han detected 20 target sequences using 60 microarray spots. To achieve this, Han et al. designed a PDMS microfluidic superstructure containing a single channel that they bonded to a commercially available microarray on a glass substrate, exposing 60 spots to the path of the ITP zone. The resulting device was 40 µm deep, 8 cm long, and 500 µm wide, except for a short constriction that had a width of 200 µm. The electric field was varied to optimize both ITP preconcentration and DNA hybridization (Figure 7). In the first part of the assay, they applied a high electric field to accumulate all the ssDNA targets. Once the ITP zone reached the constriction, the electric field was turned off to allow target to redistribute through diffusion and minimize Joule heating effects. Finally, in the hybridization step, they applied low electric field values to avoid electrokinetic instabilities 37,82 as the sample migrated over the immobilized array spots. The 30 min assay achieved a 100 fM limit of detection with a dynamic range of 4 orders of magnitude. Compared with a conventional 15 h microarray hybridization assay, the 30 min ITP assay also demonstrated an 8-fold increase in sensitivity. Moghadam et al. 85 further demonstrated the applicability of ITP to one of the most widely-used immunoassay formats, lateral flow assays (LFA). Despite their convenience and popularity for qualitative applications, LFAs are often constrained by poor detection limits.
Using ITP preconcentration, Moghadam demonstrated 2 orders of magnitude improvement in sensitivity in detecting IgG from a clean buffer. In conventional LFAs, samples migrate solely under capillary action, whereas they migrate electrophoretically in ITP-LFA (Figure 8). In addition to the reaction enhancement achieved by ITP, Moghadam estimated an extraction efficiency of nearly 30%, more than an order of magnitude improvement over conventional LFA. Additionally, Moghadam offered significant insight into the general design of ITP-LFA. Capture efficiency in conventional LFA depends on the normalized initial target concentration. In ITP-LFA, however, capture efficiency also depends on the preconcentration factor, time, and dissociation rate constant. As a result, optimal assay performance can be achieved by carefully designing buffer chemistry and device geometry. Most recently, Paratore et al. 75 developed an assay in which they used different modes of ITP to accelerate surface-based immunoassays. They guided paramagnetic beads coated with anti-GFP antibodies to a desired location in the microfluidic channel, similar to Karsenty et al. They used enhanced GFP (EGFP) as a model protein, and demonstrated three separate operation modes of ITP: pass-over ITP (PO-ITP), stop and diffuse ITP (SD-ITP), and counterflow ITP (CF-ITP). For all assays, the total assay time was nearly 30 min, only 6 of which were for antibody-antigen binding. They found that SD-ITP, in which the electric field was turned off when the focused protein reached the reaction site, and CF-ITP, in which counterflow was applied to keep the focused proteins over the reaction site, resulted in a significant decrease in LoD as compared with the standard PO-ITP mode. Paratore reduced the LoD from 300 pM (for a standard flow immunoassay) to 220 fM using CF-ITP.
They noted the influence of pH and salt concentration on the performance of the ITP immunoassay, and highlighted the heterogeneity of target proteins, which in turn requires careful optimization for each target. Despite the typical issues associated with ITP immunoassays (e.g. impact of extreme pH values and salt concentrations on focusing dynamics and reaction kinetics), Paratore's results are promising for the integration of ITP with protein detection assays.

Figure 1. Top: Schematic representation of selective focusing in ITP. Sample species with intermediate mobilities migrating in the TE zone overspeed neighboring TE ions, while those migrating in the LE zone are overtaken by neighboring LE ions. Sample species therefore focus at the TE-LE interface,

Figure 3. Experimental demonstration of ITP-aided hybridization acceleration from Bercovici et al. 52 Fraction of reactants hybridized is shown for both standard and ITP-aided hybridizations at two different reactant concentrations. 960- and 14,000-fold hybridization acceleration is demonstrated for limiting species concentrations of 10 nM and 100 pM, respectively. The theoretical model (discussed in Section III.a) agreed with experimental data, and captured the significant trends resulting from ITP-aided hybridization. This analytical model of ITP was the first of its kind and served as the foundation for several subsequent analytical studies.

Figure 4. Schematic representation of the first experimental ITP hybridization assay from Persat and Santiago. 61
Figure 8. Illustration of the various stages of the ITP-LFA designed by Moghadam et al. The commercial system remains available as of this publication.

III. ITP to preconcentrate, mix, and accelerate homogeneous reactions

III.a. Homogeneous reactions: theory and models

In this section, we will briefly review analytical modeling and scaling of homogeneous chemical reactions (i.e. all reactants suspended in solution) using ITP. Standard second-order chemical reactions can be expressed as

A + B ⇌ AB  (3)

where k_on and k_off are the reaction on- and off-rate constants (for the forward and reverse directions), respectively. The characteristic hybridization time scale, at which half the limiting species (here, reactant B) has reacted, can be expressed as

τ_std = ln 2 / (k_on c_A^0)  (4)

where c_A^0 represents the initial concentration of reactant A. Here, Q_A^TE and Q_B^LE represent the influx of A and B from the zones wherein they were initially loaded (the TE for species A, and the LE for species B). These rates are given by

Q_A^TE = (U_S^TE − U_ITP) A c_A^TE,0 = (p_S^TE − 1) U_ITP A c_A^TE,0 and Q_B^LE = (U_ITP − U_S^LE) A c_B^LE,0 = (1 − p_S^LE) U_ITP A c_B^LE,0

where A is the channel cross-sectional area, U_S^TE and U_S^LE are the sample velocities in the TE and LE zones, and p_S = U_S/U_ITP. Proteins cover a wide range of mobilities and solubilities, and are difficult to model and predict a priori. Additionally, proteins are highly sensitive to pH levels and other environmental factors. As a result, researchers seeking to design ITP reaction assays with protein targets have either had to use well-characterized and high-mobility proteins, or devise clever workarounds to focus and detect proteins (e.g., such as the aforementioned conjugation of proteins with a DNA molecule by Wako 46). Eid et al. 57 used modified aptamers to recruit the typically non-focusing C-reactive protein (CRP) into ITP.
The modified aptamers are called SOMAmers (Slow Off-Rate Modified Aptamers), and unlike traditional aptamers that are DNA or RNA oligonucleotides, SOMAmers have modified bases to increase their hydrophobicity. 67,68 Eid here loaded both the protein and fluorescently-labeled SOMAmers into LE buffer. The ITP chemistry was chosen so as to preconcentrate the relatively high mobility SOMAmers and the SOMAmer/protein complex but not the unbound protein target. Hence ITP preconcentrated SOMAmers (increasing reaction rate) and simultaneously recruited proteins into the ITP zone upon binding. They extended the approach introduced by Eid and colleagues 65 combining ITP and ionic spacers to separate unbound SOMAmers from the SOMAmer-protein complex to remove excess background signal. In clean buffer, this technique achieved a limit of detection of 2 nM, well within the clinically-relevant range of CRP. Han et al. also identified the Damkohler number as an important dimensionless parameter governing these reactions. The Damkohler number, Da, is the ratio of reaction rate and diffusion rate, and can be used to determine whether a system is reaction- or diffusion-limited. Typical ITP assays have Da << 1. The above model has been adapted by several subsequent works since the original publication. Han et al., 74 in their work on ITP-aided acceleration of microarray assays, provided a similar analysis of ITP-aided surface hybridization. They define the fraction of surface probes hybridized following ITP reaction by

h = [c_0 k_on / (c_0 k_on + k_off)] {1 − exp[−(c_0 k_on + k_off) τ_ITP]}  (23)

Analytical model focusing on sample distribution within ITP zones, and its effect on reaction rates
Shintaku et al. 59 | 2014 | First demonstration of ITP hybridization in bead-based assays
Schwartz and Bercovici 69 | 2014 | First work to use ITP to speed up reactions with whole cell reactants
Eid et al. 57 | 2015 | Recruiting non-focusing protein reactant into ITP using modified aptamer probes
Eid et al.
53 | 2016 | Analytical modeling of influx and reaction rates in ITP as a function of initial sample placement
Phung et al. 72 | 2017 | Coupling ITP with FISH for detection of whole bacterial cells

ITP to preconcentrate, mix, and accelerate heterogeneous reactions:
Garcia-Schwarz and Santiago 77,78 | 2012, 2013 | ITP reaction coupled with gel-based excess reactant removal
Karsenty et al. 73 | 2014 | First experimentally-validated model for ITP-aided surface hybridization
Han et al. 74 | 2014 | Demonstration of ITP-aided DNA microarray hybridization with sub-picomolar limit of detection; first demonstration of truly multiplexed ITP detection

ITP to preconcentrate, mix, and accelerate homogeneous reactions:
Persat and Santiago 61 | 2011 | First experimental demonstration of ITP-aided enhancement of nucleic acid hybridization
Bercovici et al. 63 | 2011 | First quantitative demonstration of reaction acceleration using ITP
Bercovici et al. 52 | 2012 | First analytical model of ITP reaction assays, framework for several future models
Bahga et al. 64 | 2013 | Coupling ITP-based reaction with capillary electrophoresis for clean-up of excess reactants
Eid et al. 65 | 2013 | Inline reaction-separation assay using ionic spacer with sub-picomolar detection limit
Rubin et al. 58 | 2014 |
Shkolnikov and Santiago 76,79 | 2014 | Coupling ITP preconcentration to affinity chromatography purification; comprehensive and highly predictive analytical modeling of ITP-aided capture kinetics
Khnouf et al. 83 | 2014 | Demonstration of ITP enhancement for surface-based immunoassays
Moghadam et al.

Table 2. Summary of reacting species, reaction characteristics, and assay performance for the experimental studies discussed in this review. L = labeled (fluorescent, colorimetric, etc.) and i = immobilized (surface, gel, etc.)

Authors | Species | Reaction type (kon, koff, KD if known) | Species controlling completion time | Reaction time | Sensitivity

Kawabata et al.
Table 2. Summary of reacting species, reaction characteristics, and assay performance for the experimental studies discussed in this review. L = labeled (fluorescent, colorimetric, etc.); i = immobilized (surface, gel, etc.).

Authors | Species | Reaction type (kon, koff, KD if known) | Species controlling completion time | Reaction time | Sensitivity
Kawabata et al. 46, Park et al. 47, Kagebayashi et al. 48 | Probe 1: DNA anti-AFP WA1 antibody; Probe 2: anti-AFP WA2 antibody (L); Target: alpha fetoprotein | Homogeneous | Probes 1 and 2 | 2 min | 1 pM
Persat and Santiago 61 | Probe: molecular beacon (L); Target: miRNA | Homogeneous | Probe | 2 min | 10 pM
Bercovici et al. 63 | Probe: molecular beacon (L); Target: 16S rRNA | Homogeneous (kon = 4.75 x 10^3 M^-1 s^-1) | Probe | - | 30 pM
Bahga et al. 64 | Probe: molecular beacon (L); Target: ssDNA oligo | Homogeneous | Probe | 3 min | 3 pM
Eid et al. 65 | Probe: ssDNA oligo (L); Target: ssDNA oligo | Homogeneous | Probe | 5 min | 230 fM
Shintaku et al. 59 | Probe: beads conjugated with ssDNA oligo; Target: ssDNA oligo | Homogeneous (kon = 4.4 x 10^-6 M^-1 s^-1, KD = 7.3 x 10^-12 M) | Probe | 20 min | 100 fM
Schwartz and Bercovici 69 | Probe: antimicrobial peptides (L); Target: E. coli cells | Homogeneous | Probe | Up to 1 hr | 2 x 10^4 cfu/mL
Eid et al. 57 | Probe: SOMAmer (L); Target: C-reactive protein | Homogeneous (KD = 10^-9 M) | Probe | 5 min | 2 nM
Phung et al. 72 | Probe: ssDNA (L); Target: E. coli and P. aeruginosa cells | Homogeneous | Probe | 30 min | 6 x 10^4 cfu/mL
Garcia-Schwarz and Santiago 77,78 | Reporter: hairpin oligo; Probe: ssDNA oligo (i); Target: let-7a miRNA | Gel | Reporter and probe | 15 min | 2 pM / 10 pM
Karsenty et al. 73 | Probe: molecular beacon (i) | Surface | Target | 3 min | 1 nM

those techniques (long incubation times, need for several washes). Such integration has immense potential.

References

1. J. Kendall and E. D. Crittenden, Proc Natl Acad Sci U S A, 1923, 9, 75-78.
2. L. G. Longsworth, Ann. N. Y. Acad. Sci., 1939, 39, 187-202.
3. A. J. P. Martin, unpublished results, 1942.
4. H. Haglund, Sci Tools, 1970, 17, 2-13.
5. P. Smejkal, D. Bottenus, M. C. Breadmore, R. M. Guijt, C. F. Ivory, F. Foret and M. Macka, Electrophoresis, 2013, 34, 1493-1509.
6. V. Dolnik and P. Bocek, J Chromatogr A, 1982, 246, 343-345.
7. V. Dolnik and P. Bocek, J Chromatogr A, 1982, 246, 340-342.
8. P. Stehle and P. Furst, J Chromatogr A, 1985, 346, 271-279.
9. P. Stehle, H. S. Bahsitta and P. Furst, J Chromatogr A, 1986, 370, 131-138.
10. D. Kaniansky and J. Marak, J Chromatogr A, 1990, 498, 191-204.
11. F. Foret and E. Szoko, J Chromatogr A, 1992, 608, 3-12.
12. S. S. Bahga and J. G. Santiago, Analyst, 2013, 138, 735-754.
13. P. Gebauer and P. Bocek, Electrophoresis, 2002, 23, 3858-3864.
14. P. Gebauer, Z. Mala and P. Bocek, Electrophoresis, 2009, 30, 29-35.
15. Z. Mala, P. Gebauer and P. Bocek, Electrophoresis, 2013, 34, 19-28.
16. P. G. Righetti, J Chromatogr A, 2005, 1079, 24-40.
17. L. Krivankova, P. Gebauer and P. Bocek, Methods Enzymol, 1996, 270, 375-401.
18. P. Mikus, K. Marakova, L. Veizerova, J. Piestansky, J. Galba and E. Havranek, J Chromatogr Sci, 2012, 50, 849-854.
19. D. W. Wegman, F. Ghasemi, A. S. Stasheuski, A. Khorshidi, B. B. Yang, S. K. Liu, G. M. Yousef and S. N. Krylov, Anal. Chem., 2016, 88, 2472-2477.
20. M. D. Borysiak, K. W. Kimura and J. D. Posner, Lab Chip, 2015, 15, 1697-1707.
21. A. Persat, L. A. Marshall and J. G. Santiago, Anal Chem, 2009, 81, 9507-9511.
22. A. Rogacs, L. A. Marshall and J. G. Santiago, J Chromatogr A, 2014, 1335, 105-120.
23. C. Eid and J. G. Santiago, Analyst, 2016, 142, 48-54.
24. J. Sadecka and J. Polonsky, J Chromatogr A, 1996, 735, 403-408.
25. R. B. Schoch, M. Ronaghi and J. G. Santiago, Lab Chip, 2009, 9, 2145-2152.
26. M. Bercovici, S. K. Lele and J. G. Santiago, J Chromatogr A, 2009, 1216, 1008-1018.
27. O. Dagan and M. Bercovici, Anal. Chem., 2014, 86, 7835-7842.
28. D. Bottenus, T. Z. Jubery, P. Dutta and C. F. Ivory, Electrophoresis, 2011, 32, 550-562.
29. T. Jacroux, D. Bottenus, B. Rieck, C. F. Ivory and W. J. Dong, Electrophoresis, 2014, 35, 2029-2038.
30. K. Kuriyama, H. Shintaku and J. G. Santiago, Electrophoresis, 2015, 36, 1658-1662.
31. C. Eid, S. S. Branda and R. J. Meagher, Analyst, 2017, 142, 2094-2099.
32. F. M. Everaerts, J. L. Beckers and P. E. M. Verheggen, Isotachophoresis: Theory, Instrumentation, and Applications, 1976.
33. G. Garcia-Schwarz, A. Rogacs, S. S. Bahga and J. G. Santiago, J Vis Exp, 2012, e3890, DOI: 10.3791/3890.
34. B. Jung, R. Bharadwaj and J. G. Santiago, Anal Chem, 2006, 78, 2319-2327.
35. T. K. Khurana and J. G. Santiago, Anal Chem, 2008, 80, 6300-6307.
36. G. Garcia-Schwarz, M. Bercovici, L. A. Marshall and J. G. Santiago, Journal of Fluid Mechanics, 2011, 679, 455-475.
37. C. H. Chen, H. Lin, S. K. Lele and J. G. Santiago, Journal of Fluid Mechanics, 2005, 524, 263-303.
38. A. J. P. Martin and F. M. Everaerts, Proceedings of the Royal Society of London Series A: Mathematical and Physical Sciences, 1970, 316, 493.
39. W. Kohlrausch, Verhandlungen Der Deutschen Gesellschaft Fur Kreislaufforschung, 1951, 17, 303-305.
40. J. Alberty, Klin Wochenschr, 1950, 28, 786-787.
41. T. M. Jovin, Biochemistry, 1973, 12, 871-879.
42. M. Bercovici, G. V. Kaigala and J. G. Santiago, Anal Chem, 2010, 82, 2134-2138.
43. M. Bercovici, G. V. Kaigala, C. J. Backhouse and J. G. Santiago, Anal Chem, 2010, 82, 1858-1866.
44. R. D. Chambers and J. G. Santiago, Anal Chem, 2009, 81, 3022-3028.
45. Wako, μTASWako i30 Automated Platform microfluidic-based clinical immunoanalyzer, http://www.wakodiagnostics.com/utaswako_i30.html.
46. T. Kawabata, H. G. Wada, M. Watanabe and S. Satomura, Electrophoresis, 2008, 29, 1399-1406.
47. C. C. Park, I. Kazakova, T. Kawabata, M. Spaid, R. L. Chien, H. G. Wada and S. Satomura, Anal Chem, 2008, 80, 808-814.
48. C. Kagebayashi, I. Yamaguchi, A. Akinaga, H. Kitano, K. Yokoyama, M. Satomura, T. Kurosawa, M. Watanabe, T. Kawabata, W. Chang, C. Li, L. Bousse, H. G. Wada and S. Satomura, Anal Biochem, 2009, 388, 306-311.
49. D. Li, T. Mallory and S. Satomura, Clin. Chim. Acta., 2001, 313, 15-19.
50. Y. J. Zhao, Q. Ju and G. C. Li, Mol Clin Oncol, 2013, 1, 593-598.
51. FDA, 2011.
52. M. Bercovici, C. M. Han, J. C. Liao and J. G. Santiago, Proc Natl Acad Sci U S A, 2012, 109, 11127-11132.
53. C. Eid and J. G. Santiago, Anal Chem, 2016, 88, 11352-11357.
54. D. A. MacInnes and L. G. Longsworth, Chem. Rev., 1932, 11, 171-230.
55. P. Bocek, P. Gebauer, V. Dolnik and F. Foret, J Chromatogr, 1985, 334, 157-195.
56. L. Marshall, PhD thesis, Stanford University, 2013.
57. C. Eid, J. W. Palko, E. Katilius and J. G. Santiago, Anal. Chem., 2015, 87, 6736-6743.
58. S. Rubin, O. Schwartz and M. Bercovici, Physics of Fluids, 2014, 26.
59. H. Shintaku, J. W. Palko, G. M. Sanders and J. G. Santiago, Angew Chem Int Ed Engl, 2014, 53, 13813-13816.
60. G. Goet, T. Baier and S. Hardt, Lab Chip, 2009, 9, 3586-3593.
61. A. Persat and J. G. Santiago, Anal. Chem., 2011, 83, 2310-2316.
62. S. Tyagi and F. R. Kramer, Nat Biotechnol, 1996, 14, 303-308.
63. M. Bercovici, G. V. Kaigala, K. E. Mach, C. M. Han, J. C. Liao and J. G. Santiago, Anal Chem, 2011, 83, 4110-4117.
64. S. S. Bahga, C. M. Han and J. G. Santiago, Analyst, 2013, 138, 87-90.
65. C. Eid, G. Garcia-Schwarz and J. G. Santiago, Analyst, 2013, 138, 3117-3120.
66. N. C. Stellwagen, C. Gelfi and P. G. Righetti, Biopolymers, 1997, 42, 687-703.
67. L. Gold et al., PLoS One, 2010, 5, e15004.
68. J. C. Rohloff, A. D. Gelinas, T. C. Jarvis, U. A. Ochsner, D. J. Schneider, L. Gold and N. Janjic, Mol. Ther. Nucleic Acids, 2014, 3, e201.
69. O. Schwartz and M. Bercovici, Anal Chem, 2014, 86, 10106-10113.
70. F. Oukacine, L. Garrelly, B. Romestand, D. M. Goodall, T. Zou and H. Cottet, Anal. Chem., 2011, 83, 1571-1578.
71. S. C. Phung, Y. H. Nai, S. M. Powell, M. Macka and M. C. Breadmore, Electrophoresis, 2013, 34, 1657-1662.
72. S. C. Phung, J. M. Cabot, M. Macka, S. M. Powell, R. M. Guijt and M. Breadmore, Anal. Chem., 2017, 89, 6513-6520.
73. M. Karsenty, S. Rubin and M. Bercovici, Anal Chem, 2014, 86, 3028-3036.
74. C. M. Han, E. Katilius and J. G. Santiago, Lab Chip, 2014, 14, 2958-2967.
75. F. Paratore, T. Zeidman Kalman, T. Rosenfeld, G. V. Kaigala and M. Bercovici, Anal. Chem., 2017, DOI: 10.1021/acs.analchem.7b00725.
76. V. Shkolnikov and J. G. Santiago, Anal Chem, 2014, 86, 6220-6228.
77. G. Garcia-Schwarz and J. G. Santiago, Anal Chem, 2012, 84, 6366-6369.
78. G. Garcia-Schwarz and J. G. Santiago, Angew Chem Int Ed Engl, 2013, 52, 11534-11537.
79. V. Shkolnikov and J. G. Santiago, Anal Chem, 2014, 86, 6229-6236.
80. D. T. Okou, K. M. Steinberg, C. Middle, D. J. Cutler, T. J. Albert and M. E. Zwick, Nat Methods, 2007, 4, 907-909.
81. D. T. Ross, U. Scherf, M. B. Eisen, C. M. Perou, C. Rees, P. Spellman, V. Iyer, S. S. Jeffrey, M. Van de Rijn, M. Waltham, A. Pergamenschikov, J. C. Lee, D. Lashkari, D. Shalon, T. G. Myers, J. N. Weinstein, D. Botstein and P. O. Brown, Nat Genet, 2000, 24, 227-235.
82. J. D. Posner, C. L. Perez and J. G. Santiago, Proc Natl Acad Sci U S A, 2012, 109, 14353-14356.
83. R. Khnouf, G. Goet, T. Baier and S. Hardt, Analyst, 2014, 139, 4564-4571.
84. D. Milanova, R. D. Chambers, S. S. Bahga and J. G. Santiago, Electrophoresis, 2011, 32, 3286-3294.
85. B. Y. Moghadam, K. T. Connelly and J. D. Posner, Anal Chem, 2015, 87, 1009-1017.

conventional LFA. The assay demonstrated a two order of magnitude improvement in detecting IgG, and maintained the short assay times (5 min) typical of LFA.
Influence of Thickness and Contact Area on the Performance of PDMS-Based Triboelectric Nanogenerators

A. Gomes, C. Rodrigues, A. M. Pereira and J. Ventura
IFIMUP and IN-Institute of Nanoscience and Nanotechnology, Departamento de Física e Astronomia da Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal

arXiv:1803.10070 (https://arxiv.org/pdf/1803.10070v1.pdf)

Keywords: triboelectricity, triboelectric materials, thickness layer, contact surface area

Abstract. Triboelectric nanogenerators (TENGs) are an emerging mechanical energy harvesting technology that was recently demonstrated. Due to their flexibility, they can be fabricated in various configurations and consequently have a large number of applications. Here, we present a study on the influence of the thickness of the triboelectric layer and of the contact surface area between two triboelectric materials on the electric signals generated by a TENG. Using the PDMS-Nylon tribo-pair and varying the thickness of the PDMS layer, we demonstrate that the generated voltage decreases with increasing thickness. However, the maximum generated current presents the inverse behaviour, increasing with increasing PDMS thickness. The maximum output power initially increases with increasing PDMS thickness up to 32 µm, followed by a sharp decrease. Using the same tribo-pair (but now with a constant PDMS thickness), we verified that increasing the contact surface area between the two tribo-materials increases the electrical signals generated from the triboelectric effect.

Introduction

The most recent energy harvesting technology comes from the triboelectric effect [1,2,3,4]. Triboelectric nanogenerators (TENGs) are based on the conjunction of triboelectrification and electrostatic induction, in which a material becomes electrically charged after it comes into contact with another material through friction [5,6,7,8]. The discovery of TENGs opened a new field for materials scientists to fabricate nanogenerators that convert mechanical energy at high efficiencies [1,2,9,10,11] and that are easy to integrate [12]. This type of nanogenerator has numerous advantages, such as flexibility, environmental friendliness, versatility and extremely high output voltages. TENGs can have different configurations, depending on the way the two triboelectric materials come into contact, leading to four operation modes: vertical contact-separation, lateral-sliding, single-electrode and freestanding triboelectric-layer [2,6,11,13,14,15,16]. Theoretical TENG studies [12,17,18,19] found that, to first approximation, the output voltage (V) of a dielectric-to-dielectric TENG in the contact mode is given by [6,20]:

V = -(Q / (A ε_0)) [d_0 + x(t)] + σ x(t) / ε_0    (1)

where A is the triboelectric surface area, Q is the transferred charge, x(t) is the time-dependent distance between the two triboelectric layers, ε_0 is the permittivity of free space and d_0 is the effective dielectric thickness, given by d_1/ε_r1 + d_2/ε_r2 (where d_1 and d_2 are the thicknesses of the two dielectric materials and ε_r1 and ε_r2 are the relative dielectric constants of dielectrics 1 and 2).
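Equation (1) can be checked numerically. The sketch below implements it directly; the layer thicknesses match the PDMS-Nylon pair used in this work, but the relative permittivities, surface charge density, gap and transferred charge are illustrative assumptions, not measured values from the paper.

```python
# Sketch of the contact-mode TENG output voltage of Eq. (1):
#   V = -Q/(A*eps0) * [d0 + x(t)] + sigma*x(t)/eps0,  with d0 = d1/er1 + d2/er2.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def effective_thickness(d1, er1, d2, er2):
    """Effective dielectric thickness d0 of the two tribo-layers (m)."""
    return d1 / er1 + d2 / er2

def teng_voltage(Q, A, sigma, x, d0):
    """Terminal voltage of a dielectric-to-dielectric contact-mode TENG."""
    return -Q / (A * EPS0) * (d0 + x) + sigma * x / EPS0

# Assumed example: 32 um PDMS (er ~ 2.7) against 50 um Nylon (er ~ 3.5),
# 20 cm^2 plates, 10 uC/m^2 charge density, 100 um gap.
d0 = effective_thickness(32e-6, 2.7, 50e-6, 3.5)
# With Q = 0 the expression reduces to the open-circuit value sigma*x/eps0.
v_example = teng_voltage(Q=0.0, A=20e-4, sigma=1e-5, x=1e-4, d0=d0)
```

Note that the first term always opposes the second, so any transferred charge Q > 0 lowers the terminal voltage below the open-circuit value.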
In the open-circuit case, there is no charge transfer (Q = 0), so the open-circuit voltage (V_OC) is given by:

V_OC = σ x(t) / ε_0    (2)

The short-circuit current (I_SC) generated by a TENG is proportional to the generated triboelectric charge density (σ), the surface area of the electrode (A) and the speed of the relative mechanical movement v(t), and inversely proportional to the square of the distance between the electrodes:

I_SC = σ A d_0 v(t) / [d_0 + x(t)]^2    (3)

In the construction of triboelectric nanogenerators it is necessary to consider the main factors behind high performance, particularly the choice of triboelectric materials. For this aim, one uses the triboelectric series, in which triboelectric materials are ordered according to their polarity (the ability of a material to gain or lose electrons) [18,21,22]. Materials such as glass or Nylon are positive tribo-materials and tend to lose electrons when coming into contact with materials of negative charge tendency [e.g. poly(tetrafluoroethylene) (PTFE) or poly(dimethylsiloxane) (PDMS)], which tend to gain electrons [18,21,23]. For this study, we chose PDMS and Nylon as the two triboelectric materials, due to their opposite positions in the triboelectric series [21,23]. PDMS is a widely used polymer, due to its flexibility, ease of manufacture, transparency, biocompatibility and super-hydrophobicity. It is also widely used as a triboelectric material in the construction of TENGs for a broad range of applications [24,25,26]. On the other hand, Nylon has good mechanical properties (strength and stiffness), has high impact resistance, is easy to fabricate and maintains its properties over a large temperature range [27]. Herein, we study the effect of the variation of the thickness of the PDMS triboelectric layer and of the surface area on the generated electrical outputs when PDMS and Nylon come into contact. We aim to better understand and enhance the triboelectric effect and consequently to optimize future prototypes.
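Equations (2) and (3) translate directly into code. The sketch below evaluates both; all numerical inputs (charge density, area, effective thickness, separation speed and gap) are illustrative assumptions, used only to show the trends the equations predict.

```python
# Open-circuit voltage and short-circuit current of a contact-mode TENG,
# following Eqs. (2) and (3). Inputs are illustrative assumptions only.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def v_oc(sigma, x):
    """Eq. (2): V_OC = sigma*x(t)/eps0."""
    return sigma * x / EPS0

def i_sc(sigma, A, d0, v, x):
    """Eq. (3): I_SC = sigma*A*d0*v(t) / (d0 + x(t))^2."""
    return sigma * A * d0 * v / (d0 + x) ** 2

# At fixed separation speed v, I_SC decays roughly as 1/x^2 as the gap grows,
# while V_OC grows linearly with the gap x(t).
i_near = i_sc(sigma=1e-5, A=20e-4, d0=2.6e-5, v=0.1, x=1e-5)
i_far = i_sc(sigma=1e-5, A=20e-4, d0=2.6e-5, v=0.1, x=1e-3)
```

This opposite dependence of V_OC and I_SC on the geometry is one reason voltage and current trends reported later in the paper need not track each other.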
Experimental details

In this study we used a SYLGARD 182 Silicone Elastomer kit, supplied in two parts consisting of the PDMS base and the curing agent component (dimethyl, methylhydrogen siloxane). The base and the curing agent were mixed using a weight ratio of 10:1. We used the spin-coating technique at different rotation velocities to deposit the fabricated PDMS on an aluminium substrate, which serves as one of the electrodes. The rotation velocity of the spinner was varied from 500 to 5000 rpm, leading to PDMS thicknesses between 13 and 220 µm [28,29,30,31]. Curing of the PDMS was performed at 80 °C for two hours in an oven. The other triboelectric material (Nylon) was used in thin-film form with a thickness of 50 µm. To measure the generated electrical signals, aluminium tape was attached to both triboelectric materials to serve as electrodes, and these were placed onto acrylic plates (of 20 cm²). We then used a home-made systematic testing system that brings the two triboelectric materials into contact. Measurements of the generated current, voltage and power as a function of the load resistance (R_L) were then performed using a circuit board with resistors from 100 Ω to 1 GΩ.

Results and discussion

Thickness of the Triboelectric Layer

Aiming at the improvement of the triboelectric effect, we studied the influence of the PDMS layer thickness on the generated electrical outputs. PDMS was the triboelectric material whose thickness was varied (between 13 and 220 µm), while Nylon was used in film form [Fig. 2(a)]. When the different samples of PDMS come into contact with the Nylon plate, we systematically measured the voltage, current and corresponding power.

Figures 1(a) and (b) show the measured open-circuit voltage and short-circuit current, respectively, for different PDMS layer thicknesses. From these graphs it is possible to observe that the voltage (current) decreases (increases) with increasing thickness of the PDMS triboelectric layer. Taking the average values of the open-circuit voltage and short-circuit current peaks for all PDMS thicknesses, we obtained the graphs shown in Figs. 1(c) and (d), respectively. A maximum voltage of 1.4 V was obtained for 13 µm of PDMS. On the other hand, the minimum voltage (0.2 V) occurred for a PDMS thickness of 220 µm. The decrease of <V_OC> with increasing thickness is apparently not in agreement with present TENG theories [17,18]. It must then be related to intrinsic properties of the PDMS films, such as stiffness, hardness or roughness, and their thickness dependence. The current has the opposite behaviour since, as the thickness increases, the current also increases, saturating at a value of ~1.3 µA [Fig. 2(b)]. These results are in accordance with the theory of triboelectric nanogenerators [Eq. (3)].

We then measured the voltage and current for load resistances from 100 Ω to 1 GΩ, as seen in Figs. 2(a) and (b). Figure 2(a) illustrates the generated voltage: for all samples, the voltage increased above a load resistance of 10 kΩ, tending to a constant value (the open-circuit voltage). We also measured the current flowing in the circuit for the same range of resistances and noticed the opposite behaviour since, with increasing resistance, the current values decrease [Fig. 2(b)]. These results confirm that increasing the PDMS thickness leads to a decrease (increase) of the generated voltage (current). On the other hand, the maximum output power occurred for a PDMS layer thickness of 32 µm and has a value of 0.56 µW [which corresponds to a power density of 0.55 mW/m²; Fig. 2(c)]. Figure 2(d) shows the maximum generated power as a function of the PDMS layer thickness. It is possible to verify two different behaviours: up to 32 µm of PDMS, the generated power increases with increasing thickness (reaching a maximum value of 0.56 µW); from 32 to 220 µm, the generated power decreases sharply, reaching a minimum value of 0.23 µW (which corresponds to a power density of 0.23 mW/m²).

Area of the Triboelectric Surfaces

To investigate the relationship between the electric outputs and the area of the triboelectric surfaces, a set of systematic measurements was also performed. In this study, we changed the areas of the triboelectric materials between 2.5 and 15 cm². As in the previous study, we used a PDMS-Nylon tribo-pair, although here the thickness of the PDMS remained constant (32 µm). In Fig. 3(a), it is possible to observe the open-circuit voltage for the four different values of triboelectric surface area, clearly demonstrating an increase of the voltage peaks with increasing contact area. In Fig. 3(b) are represented the short-circuit currents obtained for the same triboelectric surface areas. The output current also increases with the increase of the contact area, due to the increased amount of transferred charges. Similarly, we calculated the average of the voltage and current peak values for the different areas of the triboelectric surfaces [Figs. 3(c) and (d)]. Figure 3(c) shows the values of the mean open-circuit voltage generated for the Nylon-PDMS pair in the contact-separation mode. The maximum value of the generated voltage was 1.2 V and occurred for a surface area of 15 cm². The maximum generated current was 0.43 µA, for a contact area of 15 cm² [Fig. 3(d)]. Thus, we observed that the increase of the macroscopic contact area leads to an increase of the output voltage and current.

We again measured the electric outputs for the four different areas when connected to variable load resistances (from 100 Ω to 1 GΩ; not shown). As expected, the voltage has maximum values (1.2 V) at high resistances, whereas the current reaches maximum values (0.43 µA) for small resistances. The area dependence of the maximum power is shown in the inset of Fig. 3(d), displaying a linearly increasing tendency with the increase of sample area and reaching an output power of 0.5 µW (corresponding to a power density of 0.33 mW/m²).

Conclusion

In summary, we studied the influence of the thickness of the triboelectric layer and of the contact area of the triboelectric surfaces on the triboelectric effect. By changing the PDMS layer thickness, we obtained the relationship between the PDMS layer thickness and the electrical output when PDMS comes into contact with Nylon. With increasing PDMS thickness, the voltage values decreased but the current increased. The maximum output power occurred for a PDMS layer thickness of 32 µm and had a value of 0.56 µW. We also observed that increasing the contact surface of the triboelectric materials increases the electrical output generated by the contact-separation of the tribo-pair PDMS-Nylon. For a surface area of 15 cm², we achieved an output power density of 0.33 mW/m².

Figure 1: (a) Open-circuit voltage and (b) short-circuit current peaks, and (c) mean open-circuit voltage and (d) mean short-circuit current, for the samples with different PDMS layer thickness.

Figure 2: Influence of the PDMS triboelectric layer thickness on the output (a) voltage, (b) current and (c) power generated as a function of the load resistance. (d) Maximum output power generated for the contact-separation mode of different PDMS and Nylon plates.

Figure 3: (a) Open-circuit voltage, (b) short-circuit current, (c) mean open-circuit voltage and (d) mean short-circuit current obtained for different areas of the triboelectric surfaces. The inset shows the area dependence of the maximum power.

Acknowledgments

The authors acknowledge funding from FEDER and ON2 through project UTAP-EXPL/NTec/0021/2017 and from FCT through the Associated Laboratory Institute of Nanoscience and Nanotechnology (POCI-01-0145-FEDER-016623). J. V. acknowledges financial support through FSE/POPH and project UTAP-EXPL/NTec/0021/2017. C. R. thanks inanoEnergy for the research grant.
Title: Complex wave fields in the interacting 1d Bose gas

Authors: J. Pietraszewicz, P. Deuar (Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland)

Abstract: We study the temperature regimes of the 1d interacting gas to determine when the matter wave (c-field) theory is, in fact, correct and usable. The judgment is made by investigating the level of discrepancy in many observables at once in comparison to the exact Yang-Yang theory. We also determine what cutoff maximizes the accuracy of such an approach. Results are given in terms of a bound on accuracy, as well as an optimal cutoff prescription. For a wide range of temperatures the optimal cutoff is independent of density or interaction strength and so its temperature dependent form is suitable for many cloud shapes and, possibly, basis choices. However, this best global choice is higher in energy than most prior determinations. The high value is needed to obtain the correct kinetic energy, but does not detrimentally affect other observables.

doi: 10.1103/physreva.97.053607
pdf: https://arxiv.org/pdf/1808.10251v1.pdf
corpusid: 119423927
arXiv: 1808.10251
pdfsha: bd291b04c380a93a1acc882b27ff7a8cb92236d6
Complex wave fields in the interacting 1d Bose gas

J. Pietraszewicz and P. Deuar
Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland
(Dated: August 31, 2018)

We study the temperature regimes of the 1d interacting gas to determine when the matter wave (c-field) theory is, in fact, correct and usable. The judgment is made by investigating the level of discrepancy in many observables at once in comparison to the exact Yang-Yang theory. We also determine what cutoff maximizes the accuracy of such an approach. Results are given in terms of a bound on accuracy, as well as an optimal cutoff prescription. For a wide range of temperatures the optimal cutoff is independent of density or interaction strength and so its temperature dependent form is suitable for many cloud shapes and, possibly, basis choices. However, this best global choice is higher in energy than most prior determinations. The high value is needed to obtain the correct kinetic energy, but does not detrimentally affect other observables.

I. INTRODUCTION

The classical field, or "matter wave", description has become an irreplaceable workhorse in the quantum gases community. It allows one to deal with a zoo of dynamical and thermal phenomena, particularly those that are nonperturbative or random between experimental runs. The idea of the c-field approach is to treat the low energy part of the system as an ensemble of complex-valued wave fields when mode occupations are large. The general aim here will be to quantitatively judge the accuracy of the c-field approach and circumscribe the physical parameters of the matter wave region. This will allow future calculations to be made with more confidence. To say that a system's state is well described by such matter waves implies a host of important physical consequences.
Among them: the system behaves like a superfluid at least on short scales; all relevant features come from a collective contribution of many particles; the capability for quantum wave turbulence and nonlinear self-organization [1]; and single realizations of the ensemble correspond to measurements of many particle positions in an experimental run [2][3][4]. A rapid growth of interest in these kinds of phenomena has occurred once they became accessible experimentally. Their theoretical study and comparison to experiment has relied on the various flavors of classical field descriptions [5][6][7]. This is because c-fields are typically the only non-perturbative treatment of quantum mechanics that remains tractable for large systems. Examples include defect seeding and formation [8][9][10][11][12], quantum turbulence [1,13,14], the Kibble-Zurek mechanism [15][16][17], nonthermal fixed points [18,19], vortex dynamics [20][21][22][23], the BKT transition [20], and evaporative cooling [11,24,25]. Related effective field theories using complex-valued fields have been developed for polaritons [26,27], superfluid Fermi gases [28,29] or for Yang-Mills theory [30].

* [email protected], [email protected]

The general understanding of c-fields for the last 15 years or so has rested on two widely applicable qualitative arguments. Firstly, that the bulk of the physics must take place in single particle modes that are highly occupied. This allows one to neglect particle discretization and the non-commutativity of the Bose fields Ψ(x) and Ψ†(x). Secondly, that the subspace of modes treated with c-fields should be limited to below some energy cutoff E_c, at which mode occupation is O(1-10). At this level particle discretization (which is impossible to emulate using a complex amplitude) becomes important. Such a cutoff also prevents the ultraviolet catastrophe that occurs because of the equipartition of k_B T of energy per mode in classical wave equilibrium.
A variety of prescriptions for choosing the cutoff have been obtained in the past [22,[31][32][33][34][35][36][37][38], but they were usually tailored to correctly predict one particular observable. For a general treatment of a system with classical fields, however, one desires something broader: that at least all the usual measured observables are close to being described correctly at the same time. This is particularly important for the study of nonlinear dynamical and nonequilibrium phenomena such as quantum turbulence, or defect formation times, for which significant errors in one quantity will rapidly feed through into errors in all others. As we will show, the key to a better understanding of the situation is that some quantities are much more sensitive to cutoff than others. Moreover, their dependence can come in two flavors depending on whether they are dominated by low energy (IR) or high energy (UV) modes. Past cutoff studies concentrated primarily on static observables dominated by low energy modes (condensate fraction [31,33,36], phase [37] and density correlation functions [38]). For damping and long-time dynamics, though [34,[39][40][41], the influence of higher energy modes comes out more strongly. A cutoff determination that would cover such nonequilibrium processes has been lacking. Here we will take the approach that to describe them well, at least the energy and its components (kinetic, interacting) should be correct. Accordingly, we will determine the cutoffs needed for both static and energetic observables to be accurate simultaneously, and explicitly bound this accuracy. In this paper we consider the one-dimensional Bose gas with repulsive contact interactions. This system underpins a very large part of ultracold gas experiment and theory. Moreover, there exist beautiful exact results for the uniform gas to compare to [42,43]. 
To judge when matter wave physics is an accurate description, we generate c-field ensembles, construct a robust figure of merit and study its dependence on the cutoff. This follows the same tested route that was used in a preliminary study of the ideal Bose gas in [44]. An overview of the classical field method and system properties is given in Sec. II. Next, Sec. III reports the dependence of observables on the cutoff, while in Sec. IV the procedure used to judge the overall goodness of a c-field description is presented. The practical limits of the matter wave region and the globally optimal cutoff are found in Sec. V. Subsequently, analysis of the optimal cutoff for nonuniform and/or dynamical systems can be found in Sec. VI, along with a comparison to earlier results. We conclude in Sec. VII. Additional detail on a number of technical matters is given in the appendices.

II. DESCRIPTION OF THE SYSTEM

A. Classical wave fields

The classical field description of a system can be succinctly summarized as the replacement of the quantum annihilation (creation) operators â_k (â†_k) of single particle modes k in the second-quantized field operator by complex amplitudes α_k (α*_k). With mode wave functions ψ_k(x), it can be written:

    Ψ̂(x) = Σ_k â_k ψ_k(x)  →  Ψ(x) = Σ_{k∈C} α_k ψ_k(x).    (1)

This is warranted when occupations are sufficiently macroscopic. Evidently, occupations cease to be sufficiently macroscopic for modes with high enough energy. For this reason it is necessary to restrict the set of modes to a low energy subspace (often called the "coherent" or "c-field" region [6]), which is denoted by C. This set is usually parametrized by a single cutoff parameter, at a given mode energy E_c. Note that, in general, the system's state is represented using an ensemble {...} of complex field realizations, each with its own set of amplitudes α_k.
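As a concrete illustration of the mode expansion in Eq. (1), the sketch below (not the authors' code) builds a single c-field realization Ψ(x) on a ring from plane-wave modes below a momentum cutoff. The Gaussian amplitude statistics with ⟨|α_k|²⟩ = k_B T/(ε_k − µ) are an ideal-gas-like grand-canonical assumption adopted purely for illustration, and all numerical values are arbitrary:

```python
import numpy as np

# One c-field realization on a ring of length L, as in Eq. (1):
# Psi(x) = sum_{k in C} alpha_k * exp(i k x) / sqrt(L), with |k| < kc.
rng = np.random.default_rng(1)
hbar = m = kB = 1.0                       # units with hbar = m = kB = 1
L, T, mu, kc = 100.0, 1.0, -0.1, 5.0      # illustrative values only
k = 2 * np.pi * np.arange(-128, 129) / L  # plane-wave grid on the ring
k = k[np.abs(k) < kc]                     # keep the low-energy subspace C
eps = hbar**2 * k**2 / (2 * m)
nk = kB * T / (eps - mu)                  # assumed classical occupation per mode
# complex Gaussian amplitudes with <|alpha_k|^2> = nk
alpha = np.sqrt(nk / 2) * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
x = np.linspace(0.0, L, 512, endpoint=False)
psi = (alpha[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0) / np.sqrt(L)
print(psi.shape, np.mean(np.abs(psi)**2))  # single-realization density estimate
```

Each run of this snippet produces one member of the ensemble {...}; the density |psi(x)|² fluctuates within a realization, while averages are recovered over many realizations.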
Each member breaks the gauge symmetry of a typical full quantum ensemble in a manner similar to single experimental realizations, but the ensemble preserves it [2,45]. This naturally allows e.g. for the presence of spontaneous nonlinear many-body defects, and many interesting non-mean-field phenomena that are very difficult to access using other approaches. For in-depth discussion of the subject, we refer the reader to [5][6][7]45] and the earlier reviews [46][47][48].

B. Interacting 1d gas

Here, we consider the one-dimensional Bose gas with repulsive contact interactions. The exact Yang-Yang solution for the uniform gas at a given interaction strength and temperature [43] will provide the benchmark to which the c-field method will be compared. To obtain results independent of the trapping geometry, density profile, etc., we will focus on systems that are amenable to a local density approach (LDA). In this, it is assumed that the ensemble averaged density n = ⟨|Ψ(x)|²⟩ varies more slowly than other relevant quantities, and the system can be treated using sections of the gas of a given density n. The local density |Ψ(x)|² in a single realization can still vary strongly within the section, which is sufficient to include defects and turbulence phenomena. Working thus in the LDA with a uniform section of a larger gas, the grand canonical ensemble is the natural choice, as the rest of the system acts as a particle and thermal reservoir. Results that are independent of finite-size effects are the most useful. Therefore we impose the thermodynamic limit by using a section with uniform density n and periodic boundary conditions of a length L sufficiently large to contain the longest length scale in the system. Usually this is the phase correlation length, and the condition is equivalent to requiring the first-order ("phase") correlation

    g^(1)(z) = (1/n) ⟨Ψ†(x) Ψ(x + z)⟩    (2)

to drop to zero when z ≪ L/2. For uniform systems, a basis of plane wave modes labeled by wavenumber k is natural.
The energy cutoff for the low energy subspace C is equivalent to a momentum cutoff k_c, so that only modes |k| < k_c are included. Its dimensionless form is

    f_c = k_c Λ_T / (2π) = ℏ k_c / √(2π m k_B T),    (3)

in terms of the particle mass m and the thermal de Broglie wavelength Λ_T at temperature T. The corresponding energy cutoff is

    ε_c = ℏ² k_c² / (2m) = π f_c² k_B T.    (4)

The properties of a uniform 1d gas for a given density n with contact interaction strength g are encapsulated by two dimensionless parameters

    γ = m g / (ℏ² n);    τ_d = T / T_d = (1/2π) m k_B T / (ℏ² n²)    (5)

(provided there are no finite-size effects). The first parameter quantifies the interaction strength, moving from dilute Bose gases for γ ≪ 1 to a strongly interacting fermionized regime when γ ≫ 1. The second one is a dimensionless temperature, in units of the quantum degeneracy temperature T_d. The parameter space of the system has been classified by the behavior of density fluctuations [49,50].

A series of c-field ensembles was generated for each of many chosen parameter pairs (γ, τ_d) in parameter space. Each series contained separate uniform ensembles for a range of values of f_c. We primarily used a Metropolis algorithm for a grand canonical ensemble as described in [44,52]. For a few parameter pairs, in the large γ, low τ_d region where the Metropolis method was very inefficient, the ensemble was generated by evolving a projected stochastic Gross-Pitaevskii equation (SPGPE) to its stationary ensemble. Details of our implementations are given in Appendix A.

III. OBSERVABLES

For a correct treatment of the physics of the system, all the low order observables that form the staple of experimental measurements should be calculated correctly. Further, for correct dynamics, the energy and energy balance (kinetic/interaction) must also be correct. Any discrepancies will rapidly feed through into the other quantities in a nonlinear system, whether in the form of energy mixing, or dephasing in integrable systems.
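A quick numerical check of the cutoff relations (3)-(4) and the dimensionless parameters of Eq. (5), in units ℏ = m = k_B = 1 and with purely illustrative values of n, g and T:

```python
import numpy as np

hbar = m = kB = 1.0
T = 0.7                                                 # illustrative temperature
LambdaT = np.sqrt(2 * np.pi * hbar**2 / (m * kB * T))   # thermal de Broglie wavelength
fc = 0.64                                               # dimensionless cutoff, Eq. (3)
kc = 2 * np.pi * fc / LambdaT                           # momentum cutoff
eps_c = hbar**2 * kc**2 / (2 * m)                       # energy cutoff
assert np.isclose(eps_c, np.pi * fc**2 * kB * T)        # consistency with Eq. (4)

n, g = 2.0, 0.01                                        # illustrative density, coupling
gamma = m * g / (hbar**2 * n)                           # interaction strength, Eq. (5)
tau_d = m * kB * T / (2 * np.pi * hbar**2 * n**2)       # reduced temperature, Eq. (5)
print(gamma, tau_d, eps_c / (kB * T))                   # eps_c ~ 1.29 kB*T at fc = 0.64
```

Note that the ratio ε_c/(k_B T) = π f_c² depends only on the dimensionless cutoff, which is why f_c is a convenient universal parameter.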
This argument becomes even more important in nonuniform systems, because discrepancies in the density dependence of energy will immediately lead to bogus expansion or contraction. Let us define a relative discrepancy between the classical field value of an observable Ω^(cf) obtained using a given cutoff, and the true quantum observable Ω:

    δ_Ω(γ, τ_d, f_c) := ΔΩ/Ω = Ω^(cf)(γ, τ_d, f_c) / Ω(γ, τ_d) − 1.    (6)

We will always compare a classical field ensemble with density n to exact results with the same density, so δ_n = 0 by construction. This ensures equivalence between the physical parameters γ and τ_d (5) for both c-field and exact results. Next, setting g and temperature T leaves one free technical parameter f_c for the c-field. Its value can often be chosen to make one other observable match exactly, but not all. In an earlier study [44], the dependence of the discrepancies on cutoff was analyzed, and one of the most important results was that observables fall into two broad categories described below.

The first kind are IR-dominated observables, which display falling values with growing cutoff. They are dominated by low energy modes and finally reach asymptotic values. This group includes such quantities as the half-width of the g^(1)(z) correlation function given by (2) (i.e. the phase coherence length), values of g^(1)(z) at large distances z, the condensate fraction n_0, the effective temperature in a microcanonical ensemble [53,54], or the coarse-grained density fluctuations. These last are given by

    u_G := var N / ⟨N⟩ = S_0 = n ∫ dz [g^(2)(z) − 1] + 1    (7)

and will play an important role in our analysis. (7) is also known as the k = 0 static structure factor S_0. It is the ratio of the measured number fluctuations to Poisson shot noise in regions that are wider than the density correlation length, and serves as a measure of the typical number of particles per randomly occurring density lump [55].
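The counting interpretation of Eq. (7) can be illustrated with a minimal sketch (not the paper's specially developed method [60]): count atoms in a bin much wider than the density correlation length, over many realizations, and take var N/⟨N⟩. Here Poissonian counts stand in for a coherent-state-like ensemble, for which u_G → 1; the bin population and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
Nbar = 200.0
# atom number counted in one wide bin, one entry per ensemble realization;
# Poisson statistics emulate case (i) of the text, u_G -> 1
counts = rng.poisson(Nbar, size=20000)
uG = counts.var() / counts.mean()
print(uG)   # close to 1 for Poissonian (shot-noise-limited) counting
```

Sub-Poissonian counts (fermionized or T = 0 gas) would give u_G < 1, and bunched thermal quasicondensate counts u_G > 1, matching cases (ii) and (iii) discussed next.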
It tends: (i) to one in a coherent state or at high temperature, (ii) to zero at T = 0 or in a fermionized gas, and (iii) to large positive values in the thermal-dominated quasicondensate. It is a density fluctuation quantity that is quite readily measured in experiments [56][57][58] because standard pixel resolution is sufficient, and it is an intensive thermodynamic quantity. These features are to be contrasted to the microscopic density-density correlation

    g^(2)(z) = ⟨Ψ†(x) Ψ†(x + z) Ψ(x + z) Ψ(x)⟩ / n²,

which is usually neither intensive nor experimentally resolvable in situ.

The second category contains UV-dominated observables, which follow the opposite trend. They are underestimated for low cutoff, because high energy modes have a large contribution, and grow with increasing f_c. The most prominent example are the energies per particle

    E_tot = E_int + ε = (1/N) [ (g/2) ∫ dx ⟨Ψ†²(x) Ψ²(x)⟩ + ∫ dx (ℏ²/2m) ⟨∇Ψ†(x)·∇Ψ(x)⟩ ].    (8)

Others are some collective mode frequencies in a trap [40], and g^(2)(0). The density n also behaves this way when changing only k_c, but is guaranteed matched in our approach.

We have inspected the discrepancies (6) for many locations in (γ, τ_d) space using the numerically generated ensembles. Fig. 1(a) shows a representative case for the quantum degenerate and thermally fluctuating condensate regions. A variant at larger values of γ, when total energy is dominated by interaction, can be found in Appendix A 4. The exact quantum observables E_tot, ε, N were obtained according to [43], while g^(2)(0) was obtained according to [59]. The u_G calculation uses a specially developed method [60]. Overall, we find that the same trends seen in the 1d ideal gas are repeated in the interacting case for the relevant observables such as the kinetic energy ε and u_G. For the interacting gas one should also separately consider the interaction energy per particle E_int = (1/2) g n g^(2)(0) and the total energy per particle E_tot.
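For a single c-field realization the two energy components of Eq. (8) are directly computable: the interaction term from |Ψ|⁴ and the kinetic term from a spectral derivative on the periodic section. The toy field below is invented for illustration; only the structure of the calculation mirrors Eq. (8), with the ensemble average replaced by one realization:

```python
import numpy as np

hbar = m = 1.0
g, L, M = 0.05, 50.0, 256                            # illustrative values
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
psi = 1.0 + 0.1 * np.cos(2 * np.pi * x / L)          # toy c-field realization
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)              # wavenumbers of the ring
dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))         # spectral derivative of psi
N = np.sum(np.abs(psi)**2) * dx                      # particle number in the section
E_kin = np.sum(hbar**2 / (2 * m) * np.abs(dpsi)**2) * dx / N   # kinetic, per particle
E_int = 0.5 * g * np.sum(np.abs(psi)**4) * dx / N              # interaction, per particle
print(E_kin, E_int, E_kin + E_int)
```

The same split is what enters the discrepancies δ_ε, δ_Eint and δ_Etot used below, once averaged over the ensemble.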
For the physical regimes studied, they are both found to belong to the second category: discrepancy rising with cutoff. In general, we can confirm that the basic two observable categories hold also in the interacting gas.

IV. GLOBAL ACCURACY INDICATOR

In the ideal gas, the low order observables with the most disparate behavior are the coarse-grained density fluctuations u_G and the kinetic energy ε [44]. Their mismatch is the strongest restriction on the range of f_c for which all δ_Ω errors are small. Based on this, a global figure of merit was defined:

    RMS_id = √(δ_ε² + δ_uG²).

This includes discrepancies of observables belonging to both classes. It had the convenient property that it was an upper bound on the error of all the observables that were treated.

The interacting gas brings with it several additional observables. Among them, the components of energy are particularly important: a correct E_tot is needed for an accurate description of dynamics, E_int for local density fluctuations, while the kinetic energy ε is closely related to the phase coherence length and the momentum distribution. A good figure of merit RMS(f_c) for the interacting case should be a bound both for δ_ε and δ_uG as well as for δ_Etot and δ_Eint. Inspecting the data, one finds that the discrepancies in the total energy and interaction energy (the same as for g^(2)(0)) grow more slowly than that of ε, so that ε and u_G always remain the most extreme representatives of their groups. However, at large interaction γ, it is observed that near the optimum the errors in ε and u_G can be exceeded by the error in E_tot (see Appendix A 4). This happens when the dominant energy contribution comes from interactions, and the discrepancies of E_int and E_tot are about equal. Hence, the overall conclusion is that δ_Eint can be omitted from the RMS without loss of generality, but δ_Etot should stay.
As a result, we define the measure of global error (at a given cutoff) as:

    RMS(γ, τ_d, f_c) = √( δ_uG² + max[δ_ε², δ_Etot²] ).    (9)

We use the maximum of the energy discrepancies rather than an rms of all three potentially extreme observables. In this form it has a more convenient interpretation, because (1) we do not wish to double count the importance of energy, to keep the interpretation of RMS as being close to the upper bound on the discrepancies (not √2 times the upper bound), and (2) it will remain consistent with the ideal gas results of [44]. Fig. 1(b) shows the dependence of (9) for a variety of cases.

The minimum of the global error quantity RMS(f_c) will provide the main results in this paper. Its value, minRMS, will be our figure of merit for the classical field description, and its location, optf_c, is the globally optimal cutoff. Namely:

    min_{f_c>0} [RMS] = minRMS = RMS(γ, τ_d, optf_c).    (10)

After generating ensembles with given values of γ, τ_d, and f_c, observable expectation values Ω^(cf) were calculated. They were compared to the corresponding exact quantum values, to get the discrepancies δ_Ω(γ, τ_d, f_c). At the end, the RMS(γ, τ_d, f_c) function was minimized numerically to get minRMS and optf_c. Details of these procedures are provided in Appendix B 1.

V. RESULTS: THE CLASSICAL WAVE REGIME

The dependence of the figure of merit minRMS on the physical parameters γ, τ_d is shown in Fig. 2. This presents the regime of applicability of classical fields, depending on how much inaccuracy is to be tolerated. The quantity minRMS bounds the inaccuracy for all observables studied. Open and closed points indicate for which parameters ensembles were calculated. Color contours specify given values of the estimated minRMS. The white region in Fig. 2 was studied, and includes: the thermal quasicondensate T, the quantum turbulent soliton region S, and the degenerate gas regimes D.
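The competition that produces the minimum in Eqs. (9)-(10) can be illustrated with a toy model: an IR-dominated discrepancy that falls with cutoff and UV-dominated energy discrepancies that rise. The discrepancy curves below are invented stand-ins, not the paper's data; only the shape of the trade-off is meaningful:

```python
import numpy as np

fc = np.linspace(0.2, 1.5, 400)
d_uG   = 0.5 * np.exp(-4 * fc)        # IR-dominated: falls toward its asymptote
d_eps  = 0.4 * (fc - 0.3)**2          # UV-dominated: grows with cutoff
d_Etot = 0.6 * d_eps                  # stays below d_eps in this toy case
rms = np.sqrt(d_uG**2 + np.maximum(d_eps, d_Etot)**2)   # Eq. (9)
i = int(np.argmin(rms))                                 # Eq. (10)
print("optf_c ~", fc[i], " minRMS ~", rms[i])
```

With any curves of these two opposite trends, RMS(f_c) necessarily has an interior minimum, which is what defines optf_c and the bound minRMS.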
The gray areas include most of the classical region C, the fermionized regime F, and the quasicondensate dominated by quantum fluctuations Q. These are listed and delineated in Sec. II B. We have no reliable information for the gray areas because of technical difficulties in obtaining ensembles in the thermodynamic limit. For the coldest temperatures, especially when τ_d ≪ γ, one sees slow convergence during calculations and/or increasing problems in removing finite-size effects.

Knowing that experimental uncertainties are typically of the order of 10%, a value of minRMS ≲ 0.1 tells us that the physics in this region is in practice the physics of classical matter wave fields. Overall, the region dominated by classical wave physics in Fig. 2 is larger than one could have conservatively supposed. Almost the entire thermal quasicondensate T is described well, all the way until the crossover to fermionization kicks in around γ = 0.018 to 0.075. The quoted values correspond to discrepancies minRMS = 0.1 and 0.2, respectively. The quantum degenerate gas D is correctly described for all temperatures up to at least τ_d ≈ 0.008 (minRMS = 0.1) and 0.03 (minRMS = 0.2). This is an important result, as it is not immediately obvious that classical fields apply so far. What this means is that not only do they cover the entire T regime but a number of strongly fluctuating higher temperature regions as well. One of them is the region with prominent thermal solitons S in the range τ_d ∼ (0.04–0.2)√γ [51]. At warmer temperatures, τ_d ≈ 0.27√γ, the crossover to an ideal-gas-like state that occurs when µ changes sign also lies well within the classical wave region. This means that the changeover from wave-like to particle-like physics occurs at much higher temperatures than the crossover between Bogoliubov and Hartree-Fock physics [61]. Finally, observing the left edge of Fig. 2, we can confirm that our numerical results for minRMS match well to the ideal gas results of [44].
Summarizing the criterion of 10% goodness of the c-fields description, the limit of the matter wave region reaches: (i) τ_d ≈ 0.008 until γ ≲ 0.001, (ii) γ ≈ 0.018 below τ_d ≲ 0.002. Between (i) and (ii), there is a bulge that extends to τ_d ≈ γ ≈ 0.03.

Together with minRMS, the optimal cutoff optf_c was also obtained. Its behavior is presented in Fig. 3. Black contours specify given values of optf_c estimated from the Metropolis/SPGPE data as described in Appendix B 2. The green points are a copy of the minRMS = 0.1 location. For higher temperatures the optf_c values change slowly between contours. This behavior specifies a flat region in which a constant optf_c ≈ 0.64 is a very good approximation. Around τ_d ≈ 0.1γ a change in behavior occurs and the cutoff begins to grow at very low temperatures, towards the F regime. This is evidenced by the large jump from values of 1 to 10 on a contour spacing similar to that between 0.65 and 0.64. This optf_c growth also corresponds to a crossover from the thermal to the quantum fluctuation dominated quasicondensate, and to behavior like that in Fig. 4 in Appendix A 4.

All in all, there appear two regions (on either side of τ_d ∼ 0.1γ) in which the dependence of optf_c on γ and τ_d is either flat or significant. In the flat region dominated by thermal fluctuations (up to minRMS ≲ 0.2) one can state that:

optf_c = 0.64 ± 0.01,   (11a)

i.e. k_c = (1.60 ± 0.03) √(m k_B T)/ℏ;   ε_c = (1.29 ± 0.04) k_B T.   (11b)

Confidence in our results is added by the fact that they are consistent with the ideal gas results of [44], which reported optf_c → ζ(3/2)/4 = 0.653 for τ_d ≲ 0.00159. The case of low γ (γ < 0.005 in Fig. 3) agrees with this.

A. Analytical approximations for optf_c and minRMS

Some useful analytical approximations regarding optf_c and minRMS are readily obtained for the flat optf_c region using the ideal gas.
From [44] we know that the discrepancy in u_G becomes very flat over a wide range of f_c, while the discrepancy in ε always remains highly sensitive. In fact, this kind of behavior is typical for many parameters, not just the ideal gas (see Fig. 1(a)). The quantum prediction for ε in the thermodynamic limit L → ∞ at γ = 0 is

ε^(q) = (1/2πn) ∫_{−∞}^{∞} dk (ℏ²k²/2m) n_k^(q),  with Bose-Einstein mode occupations  n_k^(q) = [e^{(ℏ²k²/2m − µ)/k_B T} − 1]^{−1},   (12)

while the corresponding c-field quantity ε^(cf) uses the cutoff −k_c ≤ k ≤ k_c and Rayleigh-Jeans occupations from equipartition:

n_k^(cf) = k_B T / (ℏ²k²/2m − µ).   (13)

Now, simple calculations lead to:

δ_ε = 4f_c/ζ(3/2) − 1 + O(τ_d^{1/2});   δ_uG = (√τ_d/6)(f_c π + 3ζ(1/2)) + O(τ_d).   (14)

Analysis of (14) shows that no positive f_c value can satisfy δ_uG = 0. In effect, the optimal cutoff value, i.e.

optf_c ≈ ζ(3/2)/4 ≈ 0.653,   (15)

is set only by the location of δ_ε = 0. In turn, from the δ_uG expression one has an estimate of

minRMS ≈ 1.46 √τ_d.   (16)

The estimated values of τ_d = 0.0047 and 0.019 for the locations of minRMS = 0.1 and 0.2, respectively, are relatively close to the actual numerical values.

VI. BEYOND THE UNIFORM GAS

A. Nonuniform density

In a nonuniform gas, neighboring sections described by the LDA have different densities, so the results assigned to them are placed differently on Figs. 2-3. In thermal equilibrium they follow the line τ_d(n) ∝ γ². The central core of the gas cloud is not usually in the quantum fluctuation region Q because cooling that far is difficult. So then, the central core is in the constant optf_c ≈ 0.64 region, and so are the remaining gas sections, apart from the dilute tails. This means that all the sections can be described together using the same common cutoff k_c on a common grid.
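The numerical value in Eq. (15) is easy to verify. The snippet below evaluates ζ(3/2) by direct summation with an Euler-Maclaurin tail correction (a stdlib-only sketch; the summation scheme is ours, not the paper's):

```python
import math

# Check of Eq. (15): the flat-region optimal cutoff is optf_c = zeta(3/2)/4.
def zeta_32(nterms=10000):
    s = sum(k ** -1.5 for k in range(1, nterms + 1))
    # tail: sum_{k>N} k^{-3/2} ~ integral 2/sqrt(N) minus half the last
    # included term (Euler-Maclaurin)
    return s + 2.0 / math.sqrt(nterms) - 0.5 * nterms ** -1.5

optfc = zeta_32() / 4
print(optfc)   # ~ 0.6531
```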
A further conjecture under these conditions is that a good description can be obtained regardless of whether a plane wave or a harmonic oscillator basis is chosen (at least if one uses optf_c). The agreement seen between numerical studies of trapped gases using plane wave bases and experiment [38, 51, 62] corroborates this statement.

B. Comparison to past results

In the history of the subject, the cutoff has been given either in terms of the highest single-particle energy E_c (equivalent to the wavevector k_c), or in terms of the c-field occupation N_c = |α_{|k|=k_c}|² of the cutoff mode. The relationship between N_c and E_c is

N_c ≈ k_B T / E_c.   (17)

Our finding of the optimal cutoff optf_c = 0.64 indicates N_c = 1/[π(optf_c)²] = 0.78 in 1d. How does this compare to other studies? The fundamental observation in the early work was that the cutoff should occur somewhere around N_c ∼ O(1−10) [53, 63]. Since the focus was mainly on qualitative results, cutoffs were simply postulated. This has also been done in later approaches, which used N_c = 1 [32] or N_c = 2 [22]. The authors of [33, 47] obtained N_c = 0.6−0.7 in 3d by matching the condensate fraction n_0 to the ideal gas value. Although a 40% discrepancy in ε arose, getting n_0 correct was considered much more important. In turn, others found that changing the cutoff value E_c by 20% introduced only a few percent difference in n_0 [9]. The authors of [31] optimized the cutoff to minimize the mismatch in the full distribution of the excited fraction in an ideal gas. They find N_c = 1 (hence optf_c = 0.56) in a 1d trap in the canonical ensemble, whose local LDA density segments are comparable to the treatment here. The cutoff obtained by [31] was used for weakly interacting gases by [64, 65], who found a good match for condensate fraction fluctuations, but 10% discrepancies in g⁽²⁾(0). Most previous cutoff determinations were based on optimizing single, IR-dominated observables.
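The conversion between the two cutoff conventions quoted above can be checked directly; the helper names below are illustrative:

```python
import math

# Relation between cutoff-mode occupation N_c and f_c used in Sec. VI B:
# N_c = 1 / (pi * f_c**2) in 1d, allowing different determinations to be compared.
def Nc_from_fc(fc):
    return 1.0 / (math.pi * fc ** 2)

def fc_from_Nc(Nc):
    return 1.0 / math.sqrt(math.pi * Nc)

print(Nc_from_fc(0.64))   # ~ 0.78  (this work)
print(fc_from_Nc(1.0))    # ~ 0.56  (the N_c = 1 choice of Ref. [31])
```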
In contrast, we also include observables from the UV-dominated group and show that the discrepancy δ_ε tends to be more sensitive to f_c than the other δ_Ω. What was not noted before is that the modes that contribute most to ε are not the same as those for the majority of the other observables. Hence, raising the cutoff in energy to optf_c does not significantly affect the observables of the first kind.

C. Relevance of a high cutoff for dynamics

Lower cutoffs (like those proposed in the past) generate a large error in the kinetic energy. This can be problematic for nonlinear dynamics, where correct energies become crucial. Any errors in energy in one part of the system rapidly infect the rest with errors through the nonlinearity and, for example, lead to spurious movement of mass.

An example of dynamics that is adversely affected are the collective mode frequencies, studied by experiment [66], c-fields, and other theory. The experiment determined a rapid increase in the frequency of the m = 0 quadrupole mode from about 1.85ω⊥ below T ≈ 0.6T_c to 2ω⊥ above T ≈ 0.7T_c. The ZNG theory [67] and the second-order Bogoliubov of Morgan et al. [68] matched this phenomenon fairly well. In contrast, c-field calculations [40] with a cutoff lower than optf_c did not predict a rise in frequency. The disagreement was attributed to an inadequate description of the dynamics of the above-cutoff modes. Interestingly, in [40] the relevant frequency rose up to 1.9ω⊥ at the highest 3d cutoff, when 65% of the atoms were in the c-field. The cutoff increase was not taken further due to concern over including poorly described modes with small occupation. Later, [41] used an even higher energy cutoff and recovered the frequency increase, but at an excessive temperature T ≈ 0.8T_c. Our present result showing a weak cutoff dependence of non-kinetic observables suggests that with an even higher cutoff, such as optf_c, further improvement in the description of the collective modes may be possible.
There would also be little detrimental effect on the condensate fraction.

VII. WRAP-UP

We have determined the region of parameter space of the 1d interacting Bose gas in which matter wave physics applies. It is shown in Fig. 2. For a 10% error in standard observables or less, the limits lie at γ = 0.018 and τ_d = 0.008 (i.e. τ = 0.1 in the notation of [49]), with an additional bulge extending somewhat beyond these. We claim that quantitatively accurate studies can be confidently carried out with the classical field approach provided the system stays in this region, i.e. in the thermal quasicondensate, the quantum turbulent regime, and the whole crossover into Hartree-Fock physics.

The appropriate cutoff choice to capture the behavior of many observables simultaneously and obtain the correct energy in the system is shown in Fig. 3. For k_B T ≫ µ, when the fluctuations are thermally dominated, optf_c is uniformly ≈ 0.64. This justifies the use of a single cutoff for plane-wave modes even for inhomogeneous gases and suggests that different bases are equivalent in this regime. The globally optimal cutoff that we obtain is noticeably higher in energy than most prior determinations. However, this result is still consistent with earlier determinations, because when minRMS remains small, the discrepancies in the various observables remain small as well. The goodness of the c-field description depends on how large minRMS is.

An important observation is that the energy per particle (especially kinetic) is the most sensitive criterion for a correct overall description in the interacting gas. This is especially relevant for strongly nonstationary dynamics, where errors in energy feed through into errors in the dynamics. As a result, the best looking option for a consistent system description is c-fields with a high cutoff.
Finally, the approach used here could also work in attractive or multicomponent gases and in higher dimensional systems, where the cutoff dependence could be markedly different [31, 44]. It should also shed light on the question of whether the inconsistency of c-fields in the 2d ideal gas identified in [44] abates when interactions are present. (There, simultaneously accurate values of u_G and the kinetic energy were impossible to obtain even in the limit T → 0.) The greatest difficulty for such a study is how to obtain accurate 2d/3d results to compare with the c-field description. In the high-temperature region, Hartree-Fock methods can be exploited for this purpose, because as temperature rises they eventually become more accurate than classical fields [61]. A topic for a future paper will concern the fermionized and quantum fluctuating quasicondensate at lower temperatures. We were not able to reach these regimes here, but they can be accessed with the extended Bogoliubov theory of Mora & Castin [69].

ACKNOWLEDGMENTS

We are grateful to Mariusz Gajda, Matthew Davis, Blair Blakie, and Nick Proukakis for helpful discussions, and to Isabelle Bouchoule for insight into her experiment which showed the importance of u_G [58]. This work was supported by the National Science Centre grant No. 2012/07/E/ST2/01389.

For each of the pairs of parameters γ, τ_d shown in Fig. 2, about 10-15 ensembles of Ψ(x) with different cutoff values f_c were generated and used to evaluate the c-field observables Ω^(cf)(γ, τ_d, f_c). The values of f_c were chosen individually for each γ, τ_d pair to cover the region of the minimum of RMS(f_c) with a resolution sufficient to determine minRMS and optf_c to a satisfactory degree. This was usually 10-20 values, as shown e.g. by the data points in Fig. 1. Each ensemble consisted of 10³ − 10⁴ members. The members were generated using either a Metropolis algorithm (see Sec.
A 2) or a projected stochastic Gross-Pitaevskii equation (SPGPE) simulation (see Sec. A 3). The former are shown as empty symbols in Fig. 2 and the latter as filled symbols.

The generation of the ensembles is parametrized by the values of T, µ, g, and the numerical lattice. This is not immediately convertible to γ and τ_d, since these quantities depend on n, and n = ⟨N⟩/L depends numerically on the actual ensemble generated. To deal with this inverse problem, we proceeded as follows: First a target pair γ_target, τ_d^target is chosen. This gives g and a target density n_target from (5). Next, the Yang-Yang exact solution [43] is obtained, giving the value of µ that generates the target density in the full quantum description². This is usually close but not exactly equal to the chemical potential µ^(cf)(f_c) that would give the same density in the c-field ensemble. In any case, µ^(cf) depends on the chosen f_c. Nevertheless, since µ and µ^(cf) are close, we simply use the target quantum µ and various values of f_c to generate each ensemble, knowing that it will be close to γ_target and τ_d^target. Each ensemble with a different f_c generates a slightly different mean particle density n^(cf)(f_c) = ⟨N⟩/L, and corresponding γ^(cf)(f_c), τ_d^(cf)(f_c) which lie close to, but not exactly at, γ_target, τ_d^target. However, it is not necessary to hit exact predetermined target values for our purpose of generating contour diagrams. Instead, the value of n at the optimal cutoff optf_c was the one used to determine the operational values

γ = mg / (ℏ² n^(cf)(optf_c)),   τ_d = m k_B T / (2πℏ² [n^(cf)(optf_c)]²)   (A1)

used for the analysis (and shown in Fig. 2).

The numerical lattice itself is chosen according to the usual criteria for obtaining a system in the thermodynamic limit. The box length L must be sufficient to capture the longest length scales.
The longest feature is the width of the g⁽¹⁾(z) phase correlation function, and L was chosen so that g⁽¹⁾ decays to zero before reaching a distance of z = L/2. Namely, the g⁽¹⁾(L/2) found from the ensemble falls closer to zero than its statistical uncertainty. On the other hand, the lattice spacing Δx must be sufficiently small to resolve the smallest allowable features. These are the density ripples of a standing wave composed of waves with the cutoff momentum k_c. That is, we need Δx ≲ π/(2k_c). In practice we took a several times finer spacing Δx to smooth the visible features. The numerical lattices contained M = 2¹⁰ − 2¹² points. Periodic boundary conditions were used, to have easy access to plane wave modes through Fourier transforms. The c-field was given support only within the low-energy subspace C by keeping only the M_C plane wave components of the Fourier transformed field with |k| ≤ k_c.

Metropolis algorithm

Our application of the Metropolis method to generate c-field ensembles follows the approach of Witkowska et al. [52], with minor modifications as used in [44]. The latter paper introduced amendments to generate grand canonical ensembles, primarily by removing the conservation of N used in [52]. This has the additional advantage of removing the need for a small but tricky compensation that is otherwise needed to preserve detailed balance in the number-conserving case. We aim to generate the grand canonical probability distribution

P(Ψ) ∝ exp{ −[E_kin(Ψ) + E_int(Ψ) − µN(Ψ)] / k_B T },   (A2)

where

N(Ψ) = Σ_x Δx |Ψ(x)|²,   (A3)

and the energies are

E_int(Ψ) = (gΔx/2) Σ_x |Ψ(x)|⁴,   (A4)

E_kin(Ψ) = (ℏ²/2m) Σ_k Δk k² |Ψ̃(k)|².   (A5)

The kinetic energy uses the Fourier transformed field normalized so that Σ_k Δk |Ψ̃(k)|² = N, i.e.

Ψ̃(k) = (1/√2π) Σ_x Δx e^{−ikx} Ψ(x),   (A6)

with wavevectors k = 2πj/L = jΔk, and j integers. The starting state was Ψ₀(x) = 0. A random walk is then initiated, which generates a Markov chain with members Ψ_s(x) after each step s.
A trial update Ψ_trial(x) is generated at each step. The ratio of probabilities

r = P(Ψ_trial) / P(Ψ_s)   (A7)

is evaluated. The update is always accepted if r > 1, or with probability r if 0 < r < 1. Then the next member of the random walk Ψ_{s+1} becomes Ψ_trial. Otherwise the update is rejected, and Ψ_{s+1} = Ψ_s. We used two kinds of updates, chosen randomly at each step:

1. 99%/M_C probability: A change of the amplitude of one of the plane wave modes k′, such that Ψ̃_trial(k′) = Ψ̃_s(k′) + δ, while the other modes are unchanged: Ψ̃_trial(k ≠ k′) = Ψ̃_s(k). The random shift δ is a Gaussian distributed complex random number with an amplitude chosen so that the acceptance ratio is about 50%. The value of k′ to change is chosen randomly from the M_C plane wave modes that lie below the energy cutoff.

2. 1% probability: We found it necessary to sometimes slightly shift the center of mass of the system, to break the system out of getting stuck on a nonzero mean velocity. For this, one generates Ψ̃_trial(k) = Ψ̃_s(k ± Δk), shifting all values by one lattice point. The sign is chosen randomly, and the wavefunction Ψ̃(k) at the marginal value of k = ±(k_c + Δk) that overflows k_c due to the shift is moved to the opposite end of the spectrum at ∓k_c.

Since all of these updates preserve detailed balance individually, no additional compensation to r is required to determine acceptance, unlike in the N-conserving algorithm for the canonical ensemble. The correlation of various quantities such as E, N, g⁽¹⁾(z), and the center-of-mass momentum k_COM over subsequent steps s is tracked. Ergodicity is then exploited to obtain independent ensemble members by placing only every Δs-th member of the Markov chain into the final ensemble. The spacing Δs is chosen sufficiently large to make all the observables uncorrelated. Also, the first t_s (≫ Δs) elements of the Markov chain are discarded to allow for the dissipation of transients caused by the starting state.
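A toy version of the type-1 mode updates of this grand-canonical walk (Eqs. (A2)-(A7)) might look as follows. All parameter values and the tiny lattice are illustrative choices in units ℏ = m = k_B = 1; the center-of-mass (type-2) update and the Δs subsampling are omitted for brevity:

```python
import math, cmath, random

random.seed(1)
L = 16.0            # box length
Mx = 64             # spatial grid points
dx = L / Mx
Jc = 8              # highest kept plane-wave index: |k| <= k_c = 2*pi*Jc/L
ks = [2 * math.pi * j / L for j in range(-Jc, Jc + 1)]
T, mu, g = 1.0, 1.0, 0.1

def field(c):
    """Psi(x_i) = sum_j c_j e^{i k_j x_i} / sqrt(L)."""
    return [sum(cj * cmath.exp(1j * k * (i * dx)) for cj, k in zip(c, ks))
            / math.sqrt(L) for i in range(Mx)]

def grand_energy(c):
    """E_kin + E_int - mu*N, cf. the exponent of Eq. (A2)."""
    N = sum(abs(cj) ** 2 for cj in c)
    Ekin = sum(0.5 * k * k * abs(cj) ** 2 for cj, k in zip(c, ks))
    Eint = 0.5 * g * dx * sum(abs(p) ** 4 for p in field(c))
    return Ekin + Eint - mu * N

c = [0j] * len(ks)                      # vacuum starting state
E = grand_energy(c)
accepted, steps = 0, 400
for s in range(steps):
    j = random.randrange(len(ks))       # pick one mode below the cutoff
    delta = complex(random.gauss(0, 0.5), random.gauss(0, 0.5))
    trial = list(c)
    trial[j] += delta
    Et = grand_energy(trial)
    # Metropolis acceptance with r = exp(-(Et - E)/T), Eq. (A7)
    if Et < E or random.random() < math.exp(-(Et - E) / T):
        c, E = trial, Et
        accepted += 1

N_final = sum(abs(cj) ** 2 for cj in c)
print(accepted / steps, N_final)
```

A production run would additionally tune the step size δ for ~50% acceptance and keep only every Δs-th chain member, as described above.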
The required Δs and t_s depend on the regime studied. Generally speaking, when γ < τ_d we had Δs = O(10⁴ − 10⁵), while in the opposite regime (γ > τ_d) Δs was even larger, causing us to switch to the SPGPE algorithm.

SPGPE algorithm

The projected stochastic Gross-Pitaevskii equation (SPGPE) [70, 71] is here

iℏ dΨ(x)/dt = (1 − iγ_C) P_C{ [ −(ℏ²/2m) d²/dx² − µ + g|Ψ(x)|² ] Ψ(x) } + √(2ℏγ_C k_B T) η(x, t).   (A8)

This approach has been extensively used to generate grand canonical ensembles of Ψ(x) and is described in detail in [3, 7, 48, 70]. The quantity η(x, t) is a complex time-dependent white noise field with the properties

⟨η(x, t)η(x′, t′)⟩ = ⟨η(x, t)⟩ = 0,   (A9a)
⟨η(x, t)* η(x′, t′)⟩ = δ(x − x′)δ(t − t′).   (A9b)

It is generated using Gaussian random numbers of variance 1/(ΔxΔt), where the time steps are Δt and the numerical spatial lattice spacing is Δx. P_C is a projector onto the low-energy subspace. A quantity P_C{A(x)} is implemented by Fourier transforming A(x) to k-space, zeroing out all components with |k| > k_c, and Fourier transforming back again to x-space.

The physical model in the SPGPE treats the above-cutoff atoms as a thermal and diffusive bath with temperature T, chemical potential µ, and a coupling strength γ_C between the low- and high-energy subspaces. We typically used γ_C = 0.02. The long-time limit of an ensemble of many such trajectories is the grand canonical ensemble with the same T and µ as the bath. To obtain an ensemble we started with the standard vacuum initial state Ψ(x) = 0 and ran the simulation multiple times, with new noises in each run to generate a new trajectory. Ensemble averaged quantities were tracked over time until all reached stationary values. Then, the fields at this stabilized time were taken as the members of the final ensemble, usually with 400-2000 members.
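The lattice normalization of the noise in Eqs. (A9) can be checked numerically: each complex lattice value must have mean square 1/(ΔxΔt), with half of the variance in each quadrature. The lattice spacings below are illustrative choices:

```python
import random

random.seed(0)
dx, dt = 0.1, 0.01
var = 1.0 / (dx * dt)          # target <|eta|^2> on the lattice, = 1000 here
nsamp = 20000
acc = 0.0
for _ in range(nsamp):
    re = random.gauss(0.0, (var / 2) ** 0.5)   # real and imaginary parts
    im = random.gauss(0.0, (var / 2) ** 0.5)   # each carry half the variance
    acc += re * re + im * im
mean_sq = acc / nsamp
print(mean_sq)   # ~ 1000
```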
Apart from faster convergence, this approach requires less numerical tweaking than the Metropolis algorithm, since it is unnecessary to search for the correlation time Δs in the Markov chain, while the size of the fluctuations is chosen automatically instead of optimizing δ to get reasonable acceptance rates. On the other hand, only the grand canonical ensemble can be generated using (A8). However, a more complicated "scattering term" SPGPE derived by Rooney [72] does conserve N, and a simple modified SPGPE for canonical ensembles has been derived recently [55].

Discrepancies at lower temperature

Fig. 4 shows the cutoff dependence for a representative case of large interaction and low temperature that was calculated with the SPGPE.

The corner points (labeled below as i = 1, 2, 3) are chosen from among those shown in Fig. 2. Within such a triangle, the interpolation of a function F that takes the values F_i at the corner points is given by

F(γ, τ_d) = Σ_i N_i(γ, τ_d) F_i.   (B2)

Here

N_i(γ, τ_d) = (1/2A)(α_i + β_i γ + ζ_i τ_d),   (B3)

A is the area of the triangle, and the coefficients are

α_i = γ_{i+1} τ_{d,i+2} − γ_{i+2} τ_{d,i+1},
β_i = τ_{d,i+1} − τ_{d,i+2},   (B4)
ζ_i = γ_{i+2} − γ_{i+1},

where the indices i+1 and i+2 are understood as being modulo 3. The locus of a contour is obtained by requiring a given value of F (e.g. F = 0.1) at chosen locations.

A precise determination of the behavior of optf_c in the light orange colored region of Fig. 3 proved elusive, however. Much larger numerical ensembles than those we generated would be necessary to get higher precision, as well as a finer spacing in the γ, τ_d parameter space. It seems that this light orange region is of little practical importance, since minRMS becomes large there and a c-field treatment is no longer recommended.

Figure 1. Example of accuracy assessment for a representative choice of parameters.
Panel (a): cutoff dependence of the discrepancies δ of single observables in the case γ = 0.001, τ_d = 0.01, for the coarse grained density fluctuations u_G (red), the total energy E_tot (purple), the kinetic energy ε (blue), and the density-density fluctuations g⁽²⁾(0) or the interaction energy E_int (green). Panel (b): cutoff dependence of the global discrepancy RMS(f_c) at τ_d = 0.0159, obtained numerically for several values of γ: 0.01 (square), 0.1 (circle), 0.2 (diamond), 1 (rectangle). Dashed curves show the parabolic fit to (RMS)² near the minimum.

Figure 2. The regime of applicability for classical wave fields. Contours of minRMS (the upper bound on the discrepancy of observables) are shown at values of 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6. The dashed lines are the ideal gas positions [44]. The diagram also shows the location of numerical ensembles generated by Metropolis and SPGPE as open and filled diamonds, respectively. The circled letters indicate the physical regime, as explained in the text.

Figure 3. Globally optimal values of the cutoff, optf_c (black contours with values shown on the plot). For reference, green symbols show the location of minRMS = 0.1 from Fig. 2. The salmon colored area indicates a region in which there was insufficient precision in the numerical ensembles to determine the position of the closely spaced contours. The dashed gray lines are the ideal gas predictions for optf_c.

Figure 4. Cutoff dependence of the discrepancies δ of single observables for the case γ = 0.112, τ_d = 0.00125. Notation as in Fig. 1(a). The symbols show results obtained numerically with the SPGPE (A8). The figure of merit RMS(f_c) is shown as the thick grey line. Error bars are shown for δ_uG, while the remaining error bars are below resolution. The dashed blue line shows −δ_ε as a reference. The arrow indicates the choice of the optimal cutoff.
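The Lagrangian (barycentric) interpolation of Eqs. (B2)-(B4) used to trace the contours can be sketched as follows; the triangle corners and F values are illustrative, not data from the paper:

```python
# Linear interpolation over a triangle in the (gamma, tau_d) plane,
# following the shape functions of Eqs. (B2)-(B4).
def interp(corners, F, g, t):
    """corners: [(gamma_i, tau_i)] for i = 0, 1, 2;  F: values at the corners."""
    (g0, t0), (g1, t1), (g2, t2) = corners
    twoA = (g1 - g0) * (t2 - t0) - (g2 - g0) * (t1 - t0)   # 2 * signed area
    out = 0.0
    for i in range(3):
        gi1, ti1 = corners[(i + 1) % 3]     # indices i+1, i+2 taken modulo 3
        gi2, ti2 = corners[(i + 2) % 3]
        alpha = gi1 * ti2 - gi2 * ti1
        beta = ti1 - ti2
        zeta = gi2 - gi1
        out += F[i] * (alpha + beta * g + zeta * t) / twoA
    return out

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
F = [0.1, 0.3, 0.5]
print(interp(tri, F, 0.0, 0.0))       # recovers F[0] = 0.1 at a corner
print(interp(tri, F, 1 / 3, 1 / 3))   # centroid -> mean of the three values
```

A contour locus can then be found by solving interp(...) = const along the triangle edges.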
The parameter space divides into a number of regions separated by crossovers:

S Soliton region: between regions D and T [51].
C Classical gas: τ_d ≳ 1/(4π) (physics described by classical particles, not waves).
D Quantum degenerate gas: √γ ≲ 4πτ_d ≲ 1 (low energy modes occupied by more than one particle, density fluctuations large).
T Thermally fluctuating quasicondensate: γ ≲ 4πτ_d ≲ √γ (phase coherence on appreciable scales, small density fluctuations dominated by thermal excitations, weak bunching of atoms).
Q Quantum fluctuation dominated quasicondensate: 4πτ_d ≲ γ ≲ 1 (phase coherence on long scales, small density fluctuations dominated by quantum depletion, weak antibunching).
F Fermionized gas: γ ≳ 1 (physics dominated by strong interparticle repulsion, overlap between single-particle wavefunctions becoming small).

² Obtained from exact Yang-Yang calculations [43]. There is one free scaling parameter in the description using γ and τ_d, which we set in the Yang-Yang calculations using the arbitrary choice k_B T = 1.

For numerically generated ensembles, the set of data we use has been described in Sec. A 1: about 10-15 c-field ensembles of Ψ(x) generated for different cutoff values f_c but the same µ, as shown in Fig. 1(a). The target was to obtain a resolution sufficient to determine minRMS and optf_c to a satisfactory degree. To match these ensembles, exact quantum results were obtained using the Yang-Yang theory, but now with a chemical potential chosen individually for each f_c so as to obtain the same density as n^(cf)(f_c). Observable discrepancies δ_Ω(f_c) were calculated according to (6). Then the values of RMS(f_c) were calculated using (9). All of these were inspected on plots like those shown in Figs. 1 and 4. The approach taken to identify optf_c depends on the behavior of the minimum of RMS(f_c).

A rounded minimum occurs when the interaction energy has only a minor contribution near optf_c. Behavior of this kind is seen in Fig. 1.
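For a rounded minimum, the parabolic fit to (RMS)² (as in the dashed curves of Fig. 1(b)) amounts to reading off the vertex of a parabola. The sketch below does this exactly through three synthetic sample points; the sample values are illustrative:

```python
import math

# Vertex extraction from a quadratic through three points (x_i, y_i),
# here with y = (RMS)^2, using Newton divided differences.
def vertex_of_parabola(pts):
    (x0, y0), (x1, y1), (x2, y2) = pts
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    xs = 0.5 * (x0 + x1) - d1 / (2 * d2)       # stationary point
    ys = y0 + d1 * (xs - x0) + d2 * (xs - x0) * (xs - x1)
    return xs, ys

# synthetic data: (RMS)^2 = 0.01 + 0.5*(f_c - 0.64)^2
samples = [(fc, 0.01 + 0.5 * (fc - 0.64) ** 2) for fc in (0.5, 0.65, 0.8)]
optfc, min_rms_sq = vertex_of_parabola(samples)
print(optfc, math.sqrt(min_rms_sq))   # recovers optf_c = 0.64 and minRMS = 0.1
```

In practice a least squares fit over many points near the minimum would be used rather than an exact three-point fit.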
Such rounded minima happen typically in the hotter part of parameter space, when thermal fluctuations dominate and γ is relatively small (this was already seen in the ideal gas [44]). Both δ_uG and δ_ε are approximately linear in the region near the minimum of RMS, whereas δ_Etot ≈ δ_ε. Then (RMS)² is given approximately by a sum of two parabolas, i.e. a parabola in f_c. Accordingly, we make a least squares fit of the numerical values of RMS to the parabola, with fitting parameters optf_c, minRMS, and a, in the vicinity of the minimum. Examples are seen in Fig. 1(b). The fitted optf_c and minRMS are our final estimates.

The other possibility is a flat-bottomed minimum like that shown in Fig. 4. This occurs when the error in g⁽²⁾(0) (and by implication in E_int and E_tot) is sufficiently large near the minimum to compete with or exceed the error in u_G. In the region in which |δ_ε| ≲ |δ_Etot|, the error in ε becomes negligible, and so RMS = √(δ_uG² + δ_Etot²). Both of these two remaining errors usually depend weakly on f_c. In this case, we choose the smaller of the two RMS values at the "corners" of the flat-bottomed minimum that occur at δ_ε = δ_Etot as minRMS, and the corresponding value of f_c as optf_c. Some rare intermediate cases, such as when the maximum error swaps between δ_uG and δ_Etot within the flat-bottomed minimum, are dealt with on a case-by-case basis after inspection of the plot of RMS(f_c).

Generation of contour lines

The contours in Figs. 2 and 3 were obtained using Lagrangian interpolation of a function of two variables. Triangular polygons are successively selected using three corner points.

M. C. Tsatsos, P. E. Tavares, A. Cidrim, A. R. Fritsch, M. A. Caracanhas, F. E. A. dos Santos, C. F. Barenghi, and V. S.
Bagnato, Physics Reports 622, 1 (2016), quantum turbulence in trapped atomic Bose-Einstein condensates.
Y. Kagan and B. V. Svistunov, Phys. Rev. Lett. 79, 3331 (1997).
R. A. Duine and H. T. C. Stoof, Phys. Rev. A 65, 013603 (2001).
R. J. Lewis-Swan, M. K. Olsen, and K. V. Kheruntsyan, Phys. Rev. A 94, 033814 (2016).
M. Brewczyk, M. Gajda, and K. Rzążewski, "A classical-field approach for bose gases," in Quantum Gases (Imperial College Press, 2013) Chap. 12, pp. 191-202.
M. J. Davis, T. M. Wright, P. B. Blakie, A. S. Bradley, R. J. Ballagh, and C. W. Gardiner, "C-field methods for non-equilibrium bose gases," in Quantum Gases (Imperial College Press, 2013) Chap. 10, pp. 163-175.
S. P. Cockburn and N. P. Proukakis, "The stochastic gross-pitaevskii methodology," in Quantum Gases (Imperial College Press, 2013) Chap. 11, pp. 177-189.
C. N. Weiler, T. W. Neely, D. R. Scherer, A. S. Bradley, M. J. Davis, and B. P. Anderson, Nature 455, 948 (2008).
A. S. Bradley, C. W. Gardiner, and M. J. Davis, Phys. Rev. A 77, 033616 (2008).
B. Damski and W. H. Zurek, Phys. Rev. Lett. 104, 160404 (2010).
E. Witkowska, P. Deuar, M. Gajda, and K. Rzążewski, Phys. Rev. Lett. 106, 135301 (2011).
I.-K. Liu, R. W. Pattinson, T. P. Billam, S. A. Gardiner, S. L. Cornish, T.-M. Huang, W.-W. Lin, S.-C. Gou, N. G. Parker, and N. P. Proukakis, Phys. Rev. A 93, 023628 (2016).
N. G. Berloff and B. V. Svistunov, Phys. Rev. A 66, 013603 (2002).
N. G. Parker and C. S. Adams, Phys. Rev. Lett. 95, 145301 (2005).
J. Sabbatini, W. H. Zurek, and M. J. Davis, New Journal of Physics 14, 095030 (2012).
T. Świsłocki, E. Witkowska, J. Dziarmaga, and M. Matuszewski, Phys. Rev. Lett. 110, 045303 (2013).
M. Anquez, B. A. Robbins, H. M. Bharath, M. Boguslawski, T. M. Hoang, and M. S. Chapman, Phys. Rev. Lett. 116, 155301 (2016).
B. Nowak, J. Schole, D. Sexty, and T. Gasenzer, Phys. Rev. A 85, 043627 (2012).
M. Karl, B. Nowak, and T. Gasenzer, Phys. Rev. A 88, 063615 (2013).
R. N. Bisset, M. J. Davis, T. P. Simula, and P. B.
Blakie, Phys. Rev. A 79, 033626 (2009). . T Karpiuk, M Brewczyk, M Gajda, K Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics. 4295301T. Karpiuk, M. Brewczyk, M. Gajda, and K. Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics 42, 095301 (2009). . S J Rooney, A S Bradley, P B Blakie, 10.1103/PhysRevA.81.023630Phys. Rev. A. 8123630S. J. Rooney, A. S. Bradley, and P. B. Blakie, Phys. Rev. A 81, 023630 (2010). . T Simula, M J Davis, K Helmerson, 10.1103/PhysRevLett.113.165302Phys. Rev. Lett. 113165302T. Simula, M. J. Davis, and K. Helmerson, Phys. Rev. Lett. 113, 165302 (2014). . R J Marshall, G H C New, K Burnett, S Choi, 10.1103/PhysRevA.59.2085Phys. Rev. A. 592085R. J. Marshall, G. H. C. New, K. Burnett, and S. Choi, Phys. Rev. A 59, 2085 (1999). . N P Proukakis, J Schmiedmayer, H T C Stoof, 10.1103/PhysRevA.73.053603Phys. Rev. A. 7353603N. P. Proukakis, J. Schmiedmayer, and H. T. C. Stoof, Phys. Rev. A 73, 053603 (2006). . M Wouters, V Savona, 10.1103/PhysRevB.79.165302Phys. Rev. B. 79165302M. Wouters and V. Savona, Phys. Rev. B 79, 165302 (2009). . A Chiocchetta, I Carusotto, 10.1103/PhysRevA.90.023633Phys. Rev. A. 9023633A. Chiocchetta and I. Carusotto, Phys. Rev. A 90, 023633 (2014). . D Lacroix, D Gambacurta, S Ayik, 10.1103/PhysRevC.87.061302Phys. Rev. C. 8761302D. Lacroix, D. Gambacurta, and S. Ayik, Phys. Rev. C 87, 061302 (2013). . S N Klimin, J Tempere, G Lombardi, J T Devreese, 10.1140/epjb/e2015-60213-4The European Physical Journal B. 88122S. N. Klimin, J. Tempere, G. Lom- bardi, and J. T. Devreese, The European Physical Journal B 88, 122 (2015). . G D Moore, N Turok, 10.1103/PhysRevD.55.6538Phys. Rev. D. 556538G. D. Moore and N. Turok, Phys. Rev. D 55, 6538 (1997). . E Witkowska, M Gajda, K Rzążewski, 10.1103/PhysRevA.79.033631Phys. Rev. A. 7933631E. Witkowska, M. Gajda, and K. Rzążewski, Phys. Rev. A 79, 033631 (2009). . 
M Brewczyk, P Borowski, M Gajda, K Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics. 37M. Brewczyk, P. Borowski, M. Gajda, and K. Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics 37 . L Zawitkowski, M Brewczyk, M Gajda, K Rzążewski, 10.1103/PhysRevA.70.033614Phys. Rev. A. 7033614L. Zawitkowski, M. Brewczyk, M. Gajda, and K. Rzążewski, Phys. Rev. A 70, 033614 (2004). . A Sinatra, C Lobo, Y Castin, Journal of Physics B: Atomic, Molecular and Optical Physics. 35A. Sinatra, C. Lobo, and Y. Castin, Journal of Physics B: Atomic, Molecular and Optical Physics 35 . A S Bradley, P B Blakie, C W Gardiner, Journal of Physics B: Atomic, Molecular and Optical Physics. 38A. S. Bradley, P. B. Blakie, and C. W. Gardiner, Journal of Physics B: Atomic, Molecular and Optical Physics 38 . P B Blakie, M J Davis, Journal of Physics B: Atomic, Molecular and Optical Physics. 40P. B. Blakie and M. J. Davis, Journal of Physics B: Atomic, Molecular and Optical Physics 40 . T Sato, Y Kato, T Suzuki, N Kawashima, 10.1103/PhysRevE.85.050105Phys. Rev. E. 8550105T. Sato, Y. Kato, T. Suzuki, and N. Kawashima, Phys. Rev. E 85, 050105 (2012). . S P Cockburn, N P Proukakis, 10.1103/PhysRevA.86.033610Phys. Rev. A. 8633610S. P. Cockburn and N. P. Proukakis, Phys. Rev. A 86, 033610 (2012). . A Sinatra, Y Castin, 10.1103/PhysRevA.78.053615Phys. Rev. A. 7853615A. Sinatra and Y. Castin, Phys. Rev. A 78, 053615 (2008). . A Bezett, P B Blakie, 10.1103/PhysRevA.79.023602Phys. Rev. A. 7923602A. Bezett and P. B. Blakie, Phys. Rev. A 79, 023602 (2009). . T Karpiuk, M Brewczyk, M Gajda, K Rzążewski, 10.1103/PhysRevA.81.013629Phys. Rev. A. 8113629T. Karpiuk, M. Brewczyk, M. Gajda, and K. Rzążewski, Phys. Rev. A 81, 013629 (2010). . E H Lieb, W Liniger, 10.1103/PhysRev.130.1605Phys. Rev. 1301605E. H. Lieb and W. Liniger, Phys. Rev. 130, 1605 (1963). . C N Yang, C P Yang, 10.1063/1.1664947Journal of Mathematical Physics. 101115C. N. Yang and C. P. 
Yang, Journal of Mathematical Physics 10, 1115 (1969). . J Pietraszewicz, P Deuar, Phys. Rev. A. 9263620J. Pietraszewicz and P. Deuar, Phys. Rev. A 92, 063620 (2015). Reconciling the classical-field method with the beliaev brokensymmetry approach. T M Wright, M J Davis, N P Proukakis, 10.1142/9781848168121_0019Quantum Gases. Imperial College Press19T. M. Wright, M. J. Davis, and N. P. Proukakis, "Recon- ciling the classical-field method with the beliaev broken- symmetry approach," in Quantum Gases (Imperial Col- lege Press, 2013) Chap. 19, pp. 299-312. . P Blakie, A Bradley, M Davis, R Ballagh, C Gardiner, 10.1080/00018730802564254Advances in Physics. 57363P. Blakie, A. Bradley, M. Davis, R. Ballagh, and C. Gar- diner, Advances in Physics 57, 363 (2008). . M Brewczyk, M Gajda, K Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics. 48] N. P. Proukakis and B. Jackson4041Journal of Physics B: Atomic, Molecular and Optical PhysicsM. Brewczyk, M. Gajda, and K. Rzążewski, Journal of Physics B: Atomic, Molecular and Optical Physics 40 [48] N. P. Proukakis and B. Jackson, Journal of Physics B: Atomic, Molecular and Optical Physics 41 . K V Kheruntsyan, D M Gangardt, P D Drummond, G V Shlyapnikov, 10.1103/PhysRevLett.91.040403Phys. Rev. Lett. 9140403K. V. Kheruntsyan, D. M. Gangardt, P. D. Drummond, and G. V. Shlyapnikov, Phys. Rev. Lett. 91, 040403 (2003). . P Deuar, A G Sykes, D M Gangardt, M J Davis, P D Drummond, K V Kheruntsyan, 10.1103/PhysRevA.79.043619Phys. Rev. A. 7943619P. Deuar, A. G. Sykes, D. M. Gangardt, M. J. Davis, P. D. Drummond, and K. V. Kheruntsyan, Phys. Rev. A 79, 043619 (2009). . T Karpiuk, P Deuar, P Bienias, E Witkowska, K Pawłowski, M Gajda, K Rzążewski, M Brewczyk, 10.1103/PhysRevLett.109.205302Phys. Rev. Lett. 109205302T. Karpiuk, P. Deuar, P. Bienias, E. Witkowska, K. Pawłowski, M. Gajda, K. Rzążewski, and M. Brewczyk, Phys. Rev. Lett. 109, 205302 (2012). . 
E Witkowska, M Gajda, K Rzążewski, 10.1016/j.optcom.2009.10.080Optics Communications. 283671E. Witkowska, M. Gajda, and K. Rzążewski, Optics Communications 283, 671 (2010). . H Schmidt, K Góral, F Floegel, M Gajda, K Rzążewski, Journal of Optics B: Quantum and Semiclassical Optics. 596H. Schmidt, K. Góral, F. Floegel, M. Gajda, and K. Rzążewski, Journal of Optics B: Quantum and Semiclassical Optics 5, S96 (2003). . M J Davis, S A Morgan, 10.1103/PhysRevA.68.053615Phys. Rev. A. 6853615M. J. Davis and S. A. Morgan, Phys. Rev. A 68, 053615 (2003). . J Pietraszewicz, E Witkowska, P Deuar, 10.1103/PhysRevA.96.033612Phys. Rev. A. 9633612J. Pietraszewicz, E. Witkowska, and P. Deuar, Phys. Rev. A 96, 033612 (2017). . C Sanner, E J Su, A Keshet, R Gommers, Y Shin, W Huang, W Ketterle, 10.1103/PhysRevLett.105.040402Phys. Rev. Lett. 10540402C. Sanner, E. J. Su, A. Keshet, R. Gom- mers, Y.-i. Shin, W. Huang, and W. Ketterle, Phys. Rev. Lett. 105, 040402 (2010). . T Müller, B Zimmermann, J Meineke, J.-P Brantut, T Esslinger, H Moritz, 10.1103/PhysRevLett.105.040401Phys. Rev. Lett. 10540401T. Müller, B. Zimmermann, J. Meineke, J.- P. Brantut, T. Esslinger, and H. Moritz, Phys. Rev. Lett. 105, 040401 (2010). . T Jacqmin, J Armijo, T Berrada, K V Kheruntsyan, I Bouchoule, 10.1103/PhysRevLett.106.230405Phys. Rev. Lett. 106230405T. Jacqmin, J. Armijo, T. Berrada, K. V. Kheruntsyan, and I. Bouchoule, Phys. Rev. Lett. 106, 230405 (2011). . K V Kheruntsyan, D M Gangardt, P D Drummond, G V Shlyapnikov, 10.1103/PhysRevA.71.053615Phys. Rev. A. 7153615K. V. Kheruntsyan, D. M. Gangardt, P. D. Drummond, and G. V. Shlyapnikov, Phys. Rev. A 71, 053615 (2005). . J Pietraszewicz, P Deuar, New Journal of Physics. 19123010J. Pietraszewicz and P. Deuar, New Journal of Physics 19, 123010 (2017). . C Henkel, T.-O Sauer, N P Proukakis, Journal of Physics B: Atomic, Molecular and Optical Physics. 50114002C. Henkel, T.-O. Sauer, and N. P. 
Proukakis, Journal of Physics B: Atomic, Molecular and Optical Physics 50, 114002 (2017). . S P Cockburn, D Gallucci, N P Proukakis, 10.1103/PhysRevA.84.023613Phys. Rev. A. 8423613S. P. Cockburn, D. Gallucci, and N. P. Proukakis, Phys. Rev. A 84, 023613 (2011). . M J Davis, S A Morgan, K Burnett, 10.1103/PhysRevLett.87.160402Phys. Rev. Lett. 87160402M. J. Davis, S. A. Morgan, and K. Burnett, Phys. Rev. Lett. 87, 160402 (2001). . P Bienias, K Pawłowski, M Gajda, K Rzążewski, 10.1103/PhysRevA.83.033610Phys. Rev. A. 8333610P. Bienias, K. Pawłowski, M. Gajda, and K. Rzążewski, Phys. Rev. A 83, 033610 (2011). . P Bienias, K Pawłowski, M Gajda, K Rzążewski, Europhysics Letters). 9610011EPLP. Bienias, K. Pawłowski, M. Gajda, and K. Rzążewski, EPL (Europhysics Letters) 96, 10011 (2011). . D S Jin, M R Matthews, J R Ensher, C E Wieman, E A Cornell, 10.1103/PhysRevLett.78.764Phys. Rev. Lett. 78764D. S. Jin, M. R. Matthews, J. R. Ensher, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. 78, 764 (1997). . B Jackson, E Zaremba, 10.1103/PhysRevLett.88.180402Phys. Rev. Lett. 88180402B. Jackson and E. Zaremba, Phys. Rev. Lett. 88, 180402 (2002). . S A Morgan, M Rusch, D A W Hutchinson, K Burnett, 10.1103/PhysRevLett.91.250403Phys. Rev. Lett. 91250403S. A. Morgan, M. Rusch, D. A. W. Hutchinson, and K. Burnett, Phys. Rev. Lett. 91, 250403 (2003). . C Mora, Y Castin, 10.1103/PhysRevA.67.053615Phys. Rev. A. 6753615C. Mora and Y. Castin, Phys. Rev. A 67, 053615 (2003). . C W Gardiner, M J Davis, Journal of Physics B: Atomic, Molecular and Optical Physics. 364731C. W. Gardiner and M. J. Davis, Journal of Physics B: Atomic, Molecular and Optical Physics 36, 4731 (2003). . A S Bradley, P B Blakie, 10.1103/PhysRevA.90.023631Phys. Rev. A. 9023631A. S. Bradley and P. B. Blakie, Phys. Rev. A 90, 023631 (2014). . S J Rooney, P B Blakie, A S Bradley, 10.1103/PhysRevA.86.053634Phys. Rev. A. 8653634S. J. Rooney, P. B. Blakie, and A. S. Bradley, Phys. Rev. A 86, 053634 (2012).
[]
[ "BOUNDEDNESS IN LANGUAGES OF INFINITE WORDS", "BOUNDEDNESS IN LANGUAGES OF INFINITE WORDS" ]
[ "Mikołaj Bojańczyk \nCnrs/Liafa\nWarsaw University\nUniversité Paris Diderot\nParis", "Thomas Colcombet [email protected] \nCnrs/Liafa\nWarsaw University\nUniversité Paris Diderot\nParis" ]
[ "Cnrs/Liafa\nWarsaw University\nUniversité Paris Diderot\nParis", "Cnrs/Liafa\nWarsaw University\nUniversité Paris Diderot\nParis" ]
[ "Logical Methods in Computer Science" ]
We define a new class of languages of ω-words, strictly extending ω-regular languages. One way to present this new class is by a type of regular expressions. The new expressions are an extension of ω-regular expressions where two new variants of the Kleene star L * are added: L B and L S . These new exponents are used to say that parts of the input word have bounded size, and that parts of the input can have arbitrarily large sizes, respectively. For instance, the expression (a B b) ω represents the language of infinite words over the letters a, b where there is a common bound on the number of consecutive letters a. The expression (a S b) ω represents a similar language, but this time the distance between consecutive b's is required to tend toward infinity. We develop a theory for these languages, with a focus on decidability and closure. We define an equivalent automaton model, extending Büchi automata. The main technical result is a complementation lemma that works for languages where only one type of exponent (either L B or L S ) is used. We use the closure and decidability results to obtain partial decidability results for the logic MSOLB, a logic obtained by extending monadic second-order logic with new quantifiers that speak about the size of sets.
10.23638/lmcs-13(4:3)2017
[ "https://arxiv.org/pdf/1708.09765v2.pdf" ]
13,564,568
1708.09765
181256670c1e75292d50551194c9c71430981424
BOUNDEDNESS IN LANGUAGES OF INFINITE WORDS

MIKOŁAJ BOJAŃCZYK AND THOMAS COLCOMBET
CNRS/LIAFA, Université Paris Diderot, Paris; Warsaw University
Logical Methods in Computer Science 13(4:3), 2017. DOI: 10.23638/LMCS-13(4:3)2017. Submitted Jun. 20, 2007.

Introduction

In this paper we introduce a new class of languages of infinite words. The new languages of this kind, called ωBS-regular languages, are defined using an extended form of ω-regular expressions. The extended expressions can define properties such as "words of the form (a * b) ω for which there is an upper bound on the number of consecutive letters a". Note that this bound depends upon the word, and for this reason the language is not ω-regular. This witnesses that ωBS-regular languages are a proper extension of ω-regular languages. The expressions for ωBS-regular languages are obtained by extending the usual ω-regular expressions with two new variants of the Kleene star L * . These are called the bounded exponent L B and the strongly unbounded exponent L S . The idea behind B is that the language L in the expression L B must be iterated a bounded number of times, the bound being fixed for the whole word. For instance, the language given in the first paragraph is described by the expression (a B b) ω . The idea behind S is that the number of iterated concatenations of the language L must tend toward infinity (i.e., have no bounded subsequence).

This paper is devoted to developing a theory for the new languages. There are different motivations for such a study. The first one is the interest of the model itself: it extends naturally the standard framework of ω-regular languages, while retaining some closure and decidability properties. We also show that, just as ω-regular expressions define the same languages of infinite words as the ones definable in monadic second-order logic, the class of ωBS-regular languages also admits a logical counterpart. The relevance of the model is also quite natural: the use of bounding arguments in proofs is very common, and the family of ωBS-regular languages provides a unified framework for developing such arguments. A notable example is the famous star-height problem, to which we will return later in the introduction. Another application of our results, which is presented in this paper, is an algorithm deciding if an ω-automatic graph has bounded out-degree. We believe that more problems are related to this notion of regularity with bounds. In this paper, we concentrate on a basic theoretical study of the model.

The first step in this study is the introduction of a new family of automata over infinite words, extending Büchi automata, which we call bounding automata. Bounding automata have the same expressive power as ωBS-regular expressions. (However, the translations between bounding automata and ωBS-regular expressions are more involved than in the case of ω-regular languages.) A bounding automaton is a finite automaton equipped with a finite number of counters. During a run, these counters can be incremented and reset, but not read. The counter values are used in the acceptance condition, which depends on their asymptotic values (whether counter values are bounded or tend toward infinity). Thanks to the equivalence between automata and expressions, and using simple automata constructions, we obtain the closure of ωBS-regular languages under union, intersection and projection.

Unfortunately, ωBS-regular automata cannot be determinized. Even more problematic, ωBS-regular languages are not closed under complement. However, we are able to identify two fragments of ωBS-regular languages that complement each other. We show that the complement of a language that only talks about bounded sequences is a language that only talks about sequences tending toward infinity, and vice versa. The difficult proof of this complementation result is the technical core of the paper.

Finally, we present a logic that captures exactly the ωBS-regular languages, i.e. is equivalent to both the ωBS-regular expressions and automata. As is well known, languages defined by ω-regular expressions are exactly the ones definable in monadic second-order logic. What extension of this logic corresponds to ωBS-regular expressions? Our approach is to add a new quantifier, called the existential unbounding quantifier U. A formula UX.φ(X) is true if it is possible to find sets satisfying φ(X) of arbitrarily large size. Every ωBS-regular language can be defined in monadic second-order logic extended with U. However, due to the failure of the closure under complementation, the converse does not hold. By restricting the quantification patterns, we identify fragments of the logic that correspond to the various types of ωBS-regular expressions introduced in this paper.

Related work. This work tries to continue the long lasting tradition of logic/automata correspondences initiated by Büchi [3, 4] and continued by Rabin [12], only to mention the most famous names (see [14] for a survey). We believe that bounding properties extend the standard notion of regularity in a meaningful way, and that languages defined by our extended expressions have every right to be called regular, even though they are not captured by Büchi automata. For instance, every ωBS-regular language L has a finite number of quotients w −1 L and Lw −1 . (Moreover, the right quotients Lw −1 are regular languages of finite words.) Unfortunately, we do not have a full complementation result.

The quantifier U in the logic that describes ωBS-regular languages was already introduced in [1]. More precisely, the quantifier studied in [1] is B: a formula BX.φ expresses that there is a bound on the size of sets X satisfying property φ. This formula is semantically equivalent to ¬(UX.φ). Although [1] went beyond words and considered infinite trees, the proposed satisfiability algorithm worked for a very restricted fragment of the logic with no (not even partial) complementation. Furthermore, no notion of automata or regular expression was proposed.

Boundedness properties have been considered in model-checking. For instance, [2] considered systems described by pushdown automata whose stack size is unbounded.

Our work on bounds can also be related to cardinality restrictions. In [11], Klaedtke and Ruess considered an extension of monadic second-order logic, which allowed cardinality constraints of the form |X 1 | + · · · + |X n | ≤ |Y 1 | + · · · + |Y m | . In general, such cardinality constraints (even |X| = |Y |) lead to undecidability of the satisfiability problem. Even though cardinality constraints can express all ωBS-regular languages, the decidable fragments considered in [11] are insufficient for our purposes: those fragments capture only a small fragment of ωBS-regular languages.

Finally, the objects we manipulate are related to the (restricted) star-height problem. This problem is, given a natural number k and a regular language of finite words L, to decide if L can be defined by a regular expression where the nesting depth of Kleene stars is at most k. This problem, first raised by Eggan in 1963 [7], has a central role in the theory of regular languages. It was first shown decidable by Hashiguchi [8] via a difficult proof. The correctness of this proof is still unclear. A new proof, due to Kirsten [10], reduces the star-height problem to the limitedness problem for nested distance desert automata. The latter are finite state automata, which assign a natural number to each word.
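As a concrete illustration of automata that assign a number to each word, the sketch below (our own simplified example, not a construction from the paper) runs a single counter over the alphabet {a, b}: the counter is incremented on each a and reset on each b, and the value of the run is the largest counter value observed. The limitedness question for such a machine asks whether these values are bounded over a given set of inputs.

```python
def max_run_value(word: str) -> int:
    """One-counter run over {a, b}: increment the counter on 'a',
    reset it on 'b'; return the largest value the counter reaches,
    i.e. the length of the longest block of consecutive a's."""
    counter = 0
    best = 0
    for letter in word:
        if letter == "a":
            counter += 1              # increment
            best = max(best, counter)
        else:
            counter = 0               # reset
    return best

# The word a a b a a a b has a longest a-block of length 3.
print(max_run_value("aabaaab"))  # 3
```

Bounding the outputs of such counters along longer and longer prefixes is exactly the kind of asymptotic condition imposed by the acceptance conditions of the bounding automata discussed above.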
The limitedness problem is the question whether there is a bound on the numbers produced by a given automaton. Nested distance desert automata (we were not aware of their existence when developing the theory of ωBS-regular languages) happen to be syntactically equivalent to the hierarchical B-automata that we use as an intermediate object in the present work. The semantics of the two models are also tightly connected, and it is possible to derive from the result of our paper the decidability of the limitedness problem for nested distance desert automata (though using our more general results, we obtain a non-elementary complexity, which is far beyond the optimal PSPACE algorithm of Kirsten).

Structure of the paper. In Section 2, we formally define the ωBS-regular expressions that are the subject of this paper. We introduce two restricted types of expressions (where the B and S exponents are prohibited, respectively) and give an overview of the closure properties of the respective expressions. In Section 3, we introduce our automata models and show that they are equivalent to the ωBS-regular expressions. In Section 4, we show how our results can be applied to obtain a decision procedure for satisfiability in an extension of monadic second-order logic. The main technical result, which concerns closure under complementation, is given at the end of the paper, in Section 5.

We would like to thank the anonymous referees, who contributed enormously to this paper, through their many thoughtful and helpful comments.

2. Regular Expressions with Bounds

In this section we define ωBS-regular expressions. The expressions are the first of three means of defining languages with bounds. The other two, automata and logic, are presented in the next sections.

2.1. Definition. In a nutshell, to the standard operations used in ω-regular expressions, we add two variants of the Kleene star * : the B and S exponents.
These are used to constrain the number of iterations, or more precisely the asymptotic behavior of the number of iterations (this makes sense since the new exponents are used in the context of an ω exponent). When the B exponent is used, the number of iterations has to be bounded by a bound which depends on the word. When the S exponent is used, it has to tend toward infinity. For instance, the expression (a B b) ω represents the words in (a * b) ω where the size of sequences of consecutive a's is bounded. Similarly, the expression (a S b) ω requires the size of (maximal) sequences of consecutive a's to tend toward infinity. These new expressions are called ωBS-regular expressions. A more detailed definition is presented below.

Just as an ω-regular expression uses regular expressions of finite words as building blocks, for ωBS-regular expressions one also needs first to define a finitary variant, called BS-regular expressions. Subsection 2.1.1 presents this finitary variant, while Subsection 2.1.2 introduces ωBS-regular expressions.

2.1.1. BS-regular expressions. In the following we will write that an infinite sequence of natural numbers g ∈ N ω is bounded if there exists a global bound on the g(i)'s, i.e., if lim sup i g(i) < +∞. This behavior is denoted by the letter B. The infinite sequence is strongly unbounded if it tends toward infinity, i.e., lim inf i g(i) = +∞. This behavior is denoted by the letter S. Let us remark that an infinite sequence is bounded iff it has no strongly unbounded infinite subsequence, and that an infinite sequence is strongly unbounded iff it has no bounded infinite subsequence.

A natural number sequence g is a finite or infinite sequence of natural numbers that we write in a functional way: g(0), g(1), . . . Its length is |g|, which may possibly be ∞. We denote by N ∞ the set of sequences of natural numbers, both finite and infinite. A sequence of natural numbers g is non-decreasing if g(i) ≤ g(j) holds for all i ≤ j < |g|.
We write g ≤ n to say that g(i) ≤ n holds for all i < |g|. For g a non-decreasing sequence of natural numbers, we define g ′ by g ′ (i) = g(i + 1) − g(i) for all i such that i + 1 < |g|. The sequence g is of bounded difference if g ′ is either finite or bounded; it is of strongly unbounded difference if the sequence g ′ is either finite or strongly unbounded.

A word sequence u over an alphabet Σ is a finite or infinite sequence of finite words over Σ, i.e., an element of (Σ * ) ∞ . The components of the word sequence u are the finite words u 0 , u 1 , . . . ; we also write u = u 0 , u 1 , . . . We denote by ε the finite word of length 0, which is different from the word sequence of length 0. We denote by | u| the length of the word sequence u. A language of word sequences is a set of word sequences. The finitary variant of ωBS-regular expressions will describe languages of word sequences. Note that the finiteness concerns only the individual words in the sequences, while the sequences themselves will sometimes be infinitely long. We define the following operations, which take as parameters languages of word sequences K, L.

The language of word sequences [[a B · (b · a B ) S ]] consists of word sequences where the number of consecutive a's is bounded, while the number of b's in each word of the word sequence is strongly unbounded.

Except for the two extra exponents B and S and the constant ε̄, these expressions coincide syntactically with the standard regular expressions. Therefore, one may ask, how do our expressions correspond to standard regular expressions on their common syntactic fragment, which includes the Kleene star * , concatenation · and union +? The new expressions are a conservative extension of standard regular expressions in the following sense. If one takes a standard regular expression e defining a language of finite words L and evaluates it as a BS-regular expression, the resulting language of word sequences is the set of word sequences where every component belongs to L, i.e. [[e]] = { u : ∀ i < | u|. u i ∈ L}.

In the fact below we present two important closure properties of languages defined by BS-regular expressions. We will often use this fact, sometimes without explicitly invoking it. Proof. A straightforward structural induction. Item 2 implies that BS-regular languages are closed under taking subsequences.
That is, every BS-regular language of word sequences is closed under removing, possibly infinitely many, component words from the sequence. In particular every BS-regular language is closed under taking the prefixes of word sequences.

2.1.2. ωBS-regular expressions. We are now ready to introduce the ωBS-regular expressions. These describe languages of ω-words. From a word sequence we can construct an ω-word by concatenating all the words in the word sequence:

(u_0, u_1, . . .)^ω = u_0 u_1 · · ·

This operation, called the ω-power, is only defined if the word sequence has nonempty words on infinitely many coordinates. The ω-power is naturally extended to languages of word sequences by taking the ω-power of every word sequence in the language (where it is defined).

Definition 2.4. An ωBS-regular expression is an expression of the form

e = e_1 · f_1^ω + · · · + e_n · f_n^ω

in which each e_i is a regular expression, and each f_i is a BS-regular expression. The expression is called ωB-regular (respectively, ωS-regular) if all the expressions f_i are B-regular (respectively, S-regular). The semantic interpretation [[e]] is the language of ω-words

⋃_{i=1..n} [[e_i]]_F · [[f_i]]^ω,

in which [[·]]_F denotes the standard semantics of regular expressions, and · denotes the concatenation of a language of finite words with a language of ω-words.

Following a similar tradition for regular expressions, we will often identify an expression with its semantic interpretation, writing, for instance, w ∈ (a^B b)^ω instead of w ∈ [[(a^B b)^ω]]. A language of ω-words is called ωBS-regular (respectively, ωB-regular, ωS-regular) if it is equal to [[e]] for some ωBS-regular expression e (respectively, ωB-regular expression e, ωS-regular expression e). In ωBS-regular expressions, the language of finite words can be {ε}; to avoid clutter we omit the language of finite words in such a situation, e.g., we write a^ω for {ε} · a^ω.
This definition differs from the definition of ω-regular expressions only in that the ω exponent is applied to BS-regular languages of word sequences instead of regular word languages. As one may expect, the standard class of ω-regular languages corresponds to the case of ωBS-regular languages where neither B nor S is used (the presence of ε̄ does not make any difference here).

Example 2.5. The expression (a^B b)^ω defines the language of ω-words containing an infinite number of b's where the possible number of consecutive a's is bounded. The language (a^S b)^ω corresponds to the case where the lengths of maximal consecutive sequences of a's tend toward infinity. The language (a + b)* a^ω + ((a* b)* a^S b)^ω is more involved. It corresponds to the language of words where either there are finitely many b's (left argument of the sum), or the number of consecutive a's is unbounded but not necessarily strongly unbounded (right argument of the sum). This is the complement of the language (a^B b)^ω.

Fact 2.6. Emptiness is decidable for ωBS-regular languages.

Proof. An ωBS-regular language is nonempty if and only if one of the languages M · L^ω in the union defining the language is such that the regular language M is nonempty and the BS-regular language L contains a word sequence with infinitely many nonempty words. Therefore the problem boils down to checking if a BS-regular language contains a word sequence with infinitely many nonempty words. Let N be the set of BS-regular languages with this property, and let I be the set of BS-regular languages that contain at least one infinite word sequence (possibly with finitely many nonempty words). The following rules determine which languages belong to I and N.
• K + L ∈ I iff K ∈ I or L ∈ I.
• K · L ∈ I iff K ∈ I and L ∈ I.
• L* and L^B always belong to I.
• L^S ∈ I iff L ∈ I.
• K + L ∈ N iff K ∈ N or L ∈ N.
• K · L ∈ N iff either K ∈ I and L ∈ N, or K ∈ N and L ∈ I.
• L*, L^B and L^S belong to N iff L ∈ N.
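The membership rules for I and N above are directly recursive, so they can be turned into a decision procedure over expression syntax trees. The sketch below uses a hypothetical tuple-based AST encoding of our own; the base cases for the constants are our reading of the semantics given earlier, while the composite rules are exactly those listed in the proof:

```python
# Hypothetical AST encoding of BS-regular expressions:
#   ('letter', c)  - the constant c (sequences whose every component is c)
#   ('eps',)       - the constant ε,   ('epsbar',) - the constant ε̄,
#   ('empty',)     - the constant ∅,
#   ('+', k, l), ('.', k, l), ('*', l), ('B', l), ('S', l).

def in_I(e):
    """Does [[e]] contain at least one infinite word sequence?"""
    t = e[0]
    if t in ('letter', 'eps'):
        return True
    if t in ('epsbar', 'empty'):
        return False
    if t == '+':
        return in_I(e[1]) or in_I(e[2])
    if t == '.':
        return in_I(e[1]) and in_I(e[2])
    if t in ('*', 'B'):        # L* and L^B always belong to I
        return True
    return in_I(e[1])          # t == 'S'

def in_N(e):
    """Does [[e]] contain a sequence with infinitely many nonempty words?"""
    t = e[0]
    if t == 'letter':
        return True
    if t in ('eps', 'epsbar', 'empty'):
        return False
    if t == '+':
        return in_N(e[1]) or in_N(e[2])
    if t == '.':
        return (in_I(e[1]) and in_N(e[2])) or (in_N(e[1]) and in_I(e[2]))
    return in_N(e[1])          # t in ('*', 'B', 'S')

# M · L^ω is nonempty iff M is nonempty and L ∈ N; e.g. for L = a^S · b:
a_S_b = ('.', ('S', ('letter', 'a')), ('letter', 'b'))
```

With this encoding, in_N(a_S_b) holds, matching the fact that (a^S b)^ω is nonempty.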
The constant ε̄ will turn out to be a convenient technical device. In practice, ε̄ · L restricts a language of word sequences L to its finite sequences. We denote by L̄ the language of word sequences ε̄ · L. This construction will be used for instance when dealing with intersections: as an example, the equality L^B ∩ L^S = L̄* holds when L does not contain ε.

It turns out that ε̄ is just syntactic sugar, as stated by:

Proposition 2.7. Every ωBS-regular expression (respectively, every ωB-regular and ωS-regular one) is equivalent to one without the ε̄ constant.

Note that this proposition does not mean that ε̄ can be eliminated from BS-regular expressions. It can only be eliminated under the scope of the ω-power in ωBS-regular expressions. For instance, ε̄ is necessary to capture the BS-regular language ā. However, once under the ω exponent, ε̄ becomes redundant; for instance (ā)^ω is equivalent to ∅, while (ā + b)^ω is equivalent to (a + b)* b^ω.

Proposition 2.7 will follow immediately once the following lemma is established:

Lemma 2.8. Let T ∈ {BS, B, S}. For every T-regular expression, there is an equivalent one of the form M̄ + L where M is a regular expression and L is T-regular and does not use ε̄.

Proof. By structural induction, we prove that every BS-regular expression L can be equivalently expressed as L = M̄ + K, where M is obtained from L by replacing the exponents B and S by Kleene stars and replacing ε̄ by ε, and K is obtained from L by replacing ε̄ by ∅. Note that Fact 2.3 is used implicitly above.

2.2. Summary: The Diamond. In this section we present Figure 1, which summarizes the technical contributions of this paper. We call this figure the diamond. Though not all the material necessary to understand this figure has been provided yet, we give it here as a reference and guide to what follows. The diamond illustrates the four variants of languages of ω-words that we consider: ω-regular, ωB-regular, ωS-regular and ωBS-regular languages.
The inclusions between the four classes give a diamond shape. We show in Section 2.3 below that the inclusions in the diamond are indeed strict. To each class of languages corresponds a family of automata. The automata come in two variants: "normal automata", and the equally expressive "hierarchical automata". The exact definition of these automata as well as the corresponding equivalences are the subject of Section 3.

All the classes are closed under union, since ωBS-regular expressions have finite union built into the syntax. It is also easy to show that the classes are closed under projection, i.e., images under a letter-to-letter morphism (the operation denoted by π in the figure), and more generally images under a homomorphism. From the automata characterizations we obtain closure under intersection for the four classes; see Corollary 3.4. For closure under complement, things are not so easy. Indeed, in Section 2.3 we show that ωBS-regular languages are not closed under complement. However, some complementation results are still possible. Namely Theorem 5.1 establishes that complementing an ωB-regular language gives an ωS-regular language, and vice versa. This is the most involved result of the paper, and Section 5 is dedicated to its proof.

2.3. Limits of the diamond. In this section we show that all the inclusions depicted in the diamond are strict. Moreover, we show that there exists an ωBS-regular language whose complement is not ωBS-regular.

Lemma 2.9. Every ωB-regular language over the alphabet {a, b} which contains a word with an infinite number of b's also contains a word in (a^B b)^ω.

Proof. Using a straightforward structural induction, one can show that a B-regular language of word sequences L satisfies the following properties.
• If L contains a sequence in a*, then it contains a sequence in a^B.
• If L contains a sequence in (a* b)^+ a*, then it contains a sequence in (a^B b)^+ a^B.
The statement of the lemma follows.

Corollary 2.10.
The language (a^S b)^ω is not ωB-regular. The language (a^B b)^ω is not ωS-regular.

Proof. Being ωB-regular for (a^S b)^ω would contradict Lemma 2.9, since it contains a word with an infinite number of b's but does not intersect (a^B b)^ω. For the second part, assume that the language (a^B b)^ω is ωS-regular. Consequently, so is the language (a^B b)^ω + (a + b)* a^ω. Using Theorem 5.1, its complement ((a* b)* a^S b)^ω would be ωB-regular. But this is not possible, by the same argument as above. A proof that does not use complementation, along the same lines as in the first part, can also be given.

We now proceed to show that ωBS-regular languages are not closed under complement. We start with a lemma similar to Lemma 2.9.

Lemma 2.11. Every ωBS-regular language over the alphabet {a, b} that contains a word with an infinite number of b's also contains a word in (a^B b + a^S b)^ω.

Proof. As in Lemma 2.9, one can show the following properties of a BS-regular language of word sequences L.
• If L contains a word sequence in a*, then it contains one in a^B + a^S.
• If L contains a word sequence in (a* b)^+ a*, then it contains one in (a^B b + a^S b)^+ (a^B + a^S).
The result directly follows.

Corollary 2.12. The complement of L = (a^B b + a^S b)^ω is not ωBS-regular.

Proof. The complement of L contains the word

a^1 b a^1 b a^2 b a^1 b a^2 b a^3 b a^1 b a^2 b a^3 b a^4 b . . . ,

and consequently, assuming it is ωBS-regular, one can apply Lemma 2.11 to it. It follows that the complement of L should intersect L, a contradiction.

3. Automata

In this section we introduce a new automaton model for infinite words, which we call ωBS-automata. We show that these automata have the same expressive power as ωBS-regular expressions. We will actually define two models: ωBS-automata, and hierarchical ωBS-automata. We present them, successively, in Sections 3.1 and 3.2.
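The witness word in Corollary 2.12 has a-block lengths forming the ramp sequence 1; 1, 2; 1, 2, 3; . . . A small sketch (the helper name is ours) computes a finite prefix of this sequence and checks the two properties that make the argument work: arbitrarily large values appear, yet the value 1 keeps recurring, so the sequence is neither bounded nor strongly unbounded.

```python
def block_lengths(num_ramps):
    """Lengths of the a-blocks in a^1 b a^1 b a^2 b a^1 b a^2 b a^3 b ...:
    the concatenation of the ramps 1; 1,2; 1,2,3; ..."""
    out = []
    for n in range(1, num_ramps + 1):
        out.extend(range(1, n + 1))
    return out

g = block_lengths(5)
# Arbitrarily large values occur, so the infinite sequence is not bounded:
assert max(g) == 5
# ... yet 1 recurs in every ramp, so it does not tend toward infinity:
assert g.count(1) == 5
```

Since every value recurs infinitely often, no split of the blocks into a bounded part and a strongly unbounded part exists, which is why the word avoids (a^B b + a^S b)^ω.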
Theorem 3.3, which shows that both models have the same expressive power as ωBS-regular expressions, is given in Section 3.3. Section 3.3 also contains Corollary 3.4, which states that ωBS-regular languages are closed under intersection. The proof of Theorem 3.3 is presented in Sections 3.5 and 3.6.

3.1. General form of ωBS-automata. An ωBS-automaton is a tuple (Q, Σ, q_I, Γ_B, Γ_S, δ), in which Q is a finite set of states, Σ is the input alphabet, q_I ∈ Q is the initial state, and Γ_B and Γ_S are two disjoint finite sets of counters. We set Γ = Γ_B ∪ Γ_S. The mapping δ associates with each letter a ∈ Σ its transition relation δ_a ⊆ Q × {i, r, ǫ}^Γ × Q.

The counters in Γ_B are called bounding counters, or B-counters, or counters of type B. The counters in Γ_S are called unbounding counters, or S-counters, or counters of type S. Given a counter α and a transition (p, v, q), the transition is called a reset of α if v(α) = r; it is an increment of α if v(α) = i. When the automaton only has counters of type B, i.e., if Γ_S = ∅ (respectively, of type S, i.e., if Γ_B = ∅), then the automaton is called an ωB-automaton (respectively, an ωS-automaton).

A run ρ of an ωBS-automaton over a finite or infinite word a_1 a_2 · · · is a sequence of transitions ρ = t_1 t_2 · · · of the same length such that for all i, t_i belongs to δ_{a_i}, and the target state of the transition t_i is the same as the source state of the transition t_{i+1}. Given a counter α, every run ρ can be uniquely decomposed as ρ = ρ_1 t_{n_1} ρ_2 t_{n_2} . . . in which, for every i, t_{n_i} is a transition that does a reset on α, and ρ_i is a subrun (sequence of transitions) that does no reset on α. We denote by α(ρ) the sequence of natural numbers which on its i-th position has the number of occurrences of increments of α in ρ_i. This sequence can be finite if the counter is reset only a finite number of times, otherwise it is infinite.
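The decomposition α(ρ) can be computed mechanically from the per-transition actions of a single counter. A sketch on a finite run follows (the acceptance condition itself, of course, refers to infinite runs; we write 'e' for ǫ):

```python
def counter_sequence(actions):
    """alpha(rho) for one counter along a finite run: actions is a list over
    {'i', 'r', 'e'} ('e' standing for ǫ); the result has, at position i, the
    number of increments in the i-th maximal reset-free segment preceding a
    reset. Increments after the last reset do not contribute."""
    seq, current = [], 0
    for act in actions:
        if act == 'i':
            current += 1
        elif act == 'r':
            seq.append(current)
            current = 0
    return seq

counter_sequence(list("iireiiir"))  # -> [2, 3]
```

On an accepting run, this sequence must be infinite, and bounded for a B-counter or strongly unbounded for an S-counter.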
A run ρ over an ω-word is accepting if the source of its first transition is the initial state q_I and, for every counter α, the sequence α(ρ) is infinite and furthermore, if α is of type B then α(ρ) is bounded and if α is of type S then α(ρ) is strongly unbounded.

Example 3.1. Consider the following automaton with a single counter (the counter action is in the parentheses):

[diagram: a two-state automaton with states q and p; the transition labels are a(i), b(r), a(ǫ), b(ǫ) on the self-loops and b(ǫ), b(r) on the transitions between q and p]

Assume now that the unique counter in this automaton is of type B. We claim that this automaton recognizes the language L = (a^B b (a* b)*)^ω, i.e., the set of ω-words of the form a^{n_0} b a^{n_1} b . . . such that lim inf_i n_i < +∞. We only show that the automaton accepts all words in L; the converse inclusion is shown by using similar arguments. Let w = a^{n_0} b a^{n_1} b . . . be a word in L. There exists an infinite set K ⊆ N and a natural number N such that n_k < N holds for all k ∈ K. Without loss of generality we assume that 0 ∈ K (by replacing N by max(n_0, N)). We now construct an accepting run of the automaton on the word w. This run uses state q in each position i ∈ N such that the distance between the last occurrence of b before position i and the first occurrence of b after position i is at most N. For other positions the state p is used and the value of the counter is left unchanged. In this way, the value of the counter will never exceed N. Since furthermore K is infinite, the counter will be reset infinitely many times. This proves that the run is accepting. If the counter is of type S, then the automaton recognizes the language (a^S b (a* b)*)^ω.

Although this is not directly included in the definition, an ωBS-automaton can simulate a Büchi automaton, and this in two ways: by using either unbounding or bounding counters (and therefore Büchi automata are captured by both ωS-automata and ωB-automata). Consider then a Büchi automaton with final states F.
One way to simulate this automaton is to have an ωB-automaton with the same state space, and one bounding counter, which is reset every time a state from F is visited, and never incremented. In this case, the accepting condition for ωBS-automata collapses to visiting F infinitely often, since a bounded value is assured by the assumption on not doing any increments. Another way to simulate the Büchi automaton is to use an unbounding counter. Whenever the simulating automaton visits a state in F, it nondeterministically decides to either increment the unbounding counter, or to reset it. It is easy to see that the accepting state is seen infinitely often if and only if there is a policy of incrementing and resetting that satisfies the accepting condition for unbounding counters.

3.2. Hierarchical automata. Hierarchical ωBS-automata are a special case of ωBS-automata where a stack-like discipline is imposed on the counter operations. An ωBS-automaton is called hierarchical if its set of counters is Γ = {1, . . . , n} and whenever a counter i > 1 is incremented or reset, the counters 1, . . . , i − 1 are reset. Therefore, in a hierarchical automaton, a transition (q, v, r) can be of three forms:
• v(1) = · · · = v(n) = ǫ, i.e., no counter is affected by the transition. In this case we write ǫ for v.
• v(1) = · · · = v(k) = r and v(k + 1) = · · · = v(n) = ǫ, i.e., the maximal affected counter is k, and it is reset by the transition. In this case we write R_k for v.
• v(1) = · · · = v(k − 1) = r, v(k) = i and v(k + 1) = · · · = v(n) = ǫ, i.e., the maximal affected counter is k, and it is incremented by the transition. In this case we write I_k for v.
It is convenient to define for a hierarchical automaton its counter type, defined as a word in {B + S}*. The length of this word is the number of counters; its i-th letter is the type of counter i.

Example 3.2.
Consider the following hierarchical ωBS-automaton:

[diagram: a one-state automaton with state q and self-loops labeled a(I_1), b(R_1), c(I_2), d(R_2)]

In this picture, we use once more the convention that the resets and increments are in parentheses. If this automaton has counter type T_1 T_2 with T_1, T_2 ∈ {B, S}, then it recognizes the language ((a^{T_1} b)* a^{T_1} c^{T_2} (a^{T_1} b)* a^{T_1} d)^ω.

3.3. Equivalence. The key result concerning the automata is that the hierarchical ones are equivalent to the non-hierarchical ones, and that both are equivalent to the ωBS-regular expressions. Furthermore, this equivalence also holds for the fragments where only B-counters or S-counters are allowed.

Theorem 3.3. Let T ∈ {BS, B, S}. The following are equivalent for a language L ⊆ Σ^ω.
(1) L is ωT-regular.
(2) L is recognized by a hierarchical ωT-automaton.
(3) L is recognized by an ωT-automaton.

Before establishing this result, we mention an important application.

Corollary 3.4. Let T ∈ {BS, B, S}. The class of ωT-regular languages is closed under intersection.

Proof. The corresponding automata (in their non-hierarchical form) are closed under intersection using the standard product construction.

The implication from (2) to (3) is straightforward since hierarchical automata are a special case of general automata. In the following sections, we show the two difficult implications in Theorem 3.3: that expressions are captured by hierarchical automata (Section 3.5) and that automata are captured by expressions (Section 3.6). First, we introduce in Section 3.4 the notion of word sequence automata.

3.4. Word sequence automata. In proving the equivalence of hierarchical ωBS-automata and ωBS-regular languages, we will use a form of automaton that runs over word sequences. A word sequence BS-automaton A is defined as an ωBS-automaton, except that we add a set of accepting states, i.e., it is a tuple (Q, Σ, q_I, F, Γ_B, Γ_S, δ) in which Q, Σ, q_I, Γ_B, Γ_S, δ are as for ωBS-automata, and F ⊆ Q is a set of accepting states.
A word sequence BS-automaton A accepts an infinite sequence of finite words u if there is a sequence ρ of finite runs of A such that:
• for all i ∈ N, the run ρ_i is a run over the word u_i that begins in the initial state and ends in an accepting state;
• for every counter α of type B, the sequence of natural numbers max(α(ρ_0)), max(α(ρ_1)), . . . (in which max is applied to a finite sequence of natural numbers with the obvious meaning) is bounded;
• for every counter α of type S, the sequence of natural numbers min(α(ρ_0)), min(α(ρ_1)), . . . (in which min is applied to a sequence of natural numbers with the obvious meaning) is strongly unbounded.
The variants of word sequence B-automata and word sequence S-automata are defined as expected. The same goes for the hierarchical automata.

An equivalent way for describing the acceptance of a word sequence by a BS-automaton A is as follows. Consider the ωBS-automaton A′ obtained from A by a) removing the set of accepting states F, b) adding a new symbol to the alphabet, and c) setting δ to contain, for this new symbol, the transitions (q, R, q_I) for q ∈ F with R(α) = r for every counter α. Then A accepts the word sequence v_0, v_1, . . . iff A′ accepts the ω-word obtained by concatenating v_0, v_1, . . . with the new symbol inserted between consecutive words.

3.5. From expressions to hierarchical automata. This section is devoted to showing one of the implications in Theorem 3.3:

Lemma 3.5. Every ωBS-regular (respectively, ωB-regular, ωS-regular) language can be recognized by a hierarchical ωBS-automaton (respectively, ωB-automaton, ωS-automaton).

There are two main difficulties:
• Our word sequence automata do not have ε-transitions, which are used in the proof for finite words. Instead of introducing a notion of ε-transition and establishing that such transitions can be eliminated from automata, we directly work on word sequence automata without ε-transitions.
• When taking the mix or the concatenation of two languages L, K defined by hierarchical word sequence automata, there are technical difficulties with combining the counter types of the automata for L and K.
We overcome these difficulties by first rewriting an expression into normal form before compiling it into a hierarchical word sequence automaton. The basic idea is that we move the mix + to the top level of the expression, and also remove empty words and empty iterations. To enforce that no empty words occur, we use the exponents L^+, L^{S+} and L^{B+}, which correspond to L · L*, L · L^S and L · L^B, respectively.

We say that a BS-regular expression is pure if it is constructed solely with the single-letter constants a and ā, concatenation ·, and the exponents +, B+ and S+. We say a BS-regular expression is in normal form if it is a mix e_1 + · · · + e_n of pure BS-regular expressions e_1, . . . , e_n. An ωBS-regular expression is in normal form if all the BS-regular expressions in it are in normal form.

In Section 3.5.1, we show that every BS-regular language (with no occurrence of ε) can be described by a BS-regular expression in normal form. Then, in Section 3.5.2, we show that every ωBS-regular expression in normal form can be compiled into a hierarchical word sequence automaton.

3.5.1. Converting an expression into normal form. Given a BS-regular language L, let us define clean(L) to be the set of word sequences in L that have nonempty words on all coordinates. Remark that, thanks to Fact 2.3, L^ω is the same as (clean(L))^ω. Therefore, we only need to work on sequence languages of the form clean(L). Using Fact 2.3 one obtains without difficulty:

Fact 3.6. For every BS-regular language L, either L = clean(L), L = ε̄ + clean(L), or L = ε + clean(L).

The following lemma concludes the conversion into normal form:

Lemma 3.7. For every BS-regular language L, clean(L) can be rewritten as L_1 + · · · + L_n, where each L_i is pure.
The proof of this lemma has a similar flavor as the proof of the analogous result for finite words, which says that union can be shifted to the topmost level in a regular expression.

Proof. The proof is by induction on the structure.
• For L = ∅, ε, ε̄, a, the claim is obvious.
• Case L = K · M. There are nine subcases (according to Fact 3.6):
  – If K is clean(K) and M is clean(M) then clean(K · M) = clean(K) · clean(M). We then use the induction assumption on clean(K) and clean(M), and concatenate the two unions of pure expressions. This concatenation is also a union of pure expressions, since (Σ_i K_i) · (Σ_j M_j) = Σ_{i,j} K_i · M_j.
  – If K is clean(K) and M is ε̄ + clean(M) then clean(K · M) = ε̄ · clean(K) + clean(K) · clean(M). In the above, the language ε̄ · clean(K) consists of finite prefixes of sequences in clean(K). A pure expression for this language is obtained from clean(K) by replacing every exponent with + and every letter a with ā.
• Case L = K*. We have clean(K*) = (clean(K))^+. By induction hypothesis, this becomes (L_1 + · · · + L_n)^+, for pure expressions L_i. We need to show how the mix + can be moved to the top level. For n = 2, we use:

(L_1 + L_2)^+ = L_1^+ + (L_2^+ · L_1^+)^+ + (L_2^+ · L_1^+)^+ · L_2^+ + (L_1^+ · L_2^+)^+ + (L_1^+ · L_2^+)^+ · L_1^+ + L_2^+.

The general case is obtained by an inductive use of this equivalence.
• Case L = K^S. This time, we use clean(K^S) = (clean(K))^{S+}, and get by induction an expression of the form (L_1 + · · · + L_n)^{S+}, for pure expressions L_i. We only do the case of n = 2; the general case is obtained by induction on n:

(L_1 + L_2)^{S+} = (L_1 + L_2)* · L_1^{S+} · (L_1 + L_2)* + (L_1 + L_2)* · L_2^{S+} · (L_1 + L_2)* + L_2* · (L_1^+ · L_2^+)^{S+} · L_1*.
The right side of the equation is not yet in the correct form, i.e., it is not a mix of pure expressions, but it can be made so using the mix, the concatenation and the * exponent cases described above (resulting in a mix of 102 pure expressions).
• Case L = K^B. Same as for case K*, in which the exponent * is replaced by B and the exponent + is replaced by B+.

3.5.2. From normal form expressions to automata.

Lemma 3.8. Let A be a hierarchical BS-automaton of counter type tt′. Then there is an equivalent hierarchical BS-automaton of counter type tBt′. If moreover the counter type of A is of the form tSt′, there is also an equivalent hierarchical BS-automaton of counter type tSSt′.

Proof. Let A be the automaton. For the first construction, we insert a new counter of type B at the correct position, i.e., between counter |t| and |t| + 1, and reset it as often as possible, i.e., we construct a new automaton A′ of counter type tBt′ which is similar to A in all respects but every transition (p, v, q) of A becomes a transition (p, v′, q) in A′ with:

v′ = R_1         if v = ǫ and |t| = 0
v′ = ǫ           if v = ǫ and |t| > 0
v′ = I_k         if v = I_k and k ≤ |t|
v′ = R_k         if v = R_k and k < |t|
v′ = R_{|t|+1}   if v = R_{|t|}
v′ = I_{k+1}     if v = I_k and k > |t|
v′ = R_{k+1}     if v = R_k and k > |t|

For every ω-word u, this translation gives a natural bijection between the runs of A over u and the runs of A′ over u. This translation of runs preserves the accepting condition. Hence the languages accepted by A and A′ are the same.

For the second construction, we split the S counter into two nested copies. The automaton chooses nondeterministically which one to increment. Formally, we transform every transition (p, v, q) of A into possibly multiple transitions in A′, namely the transitions (p, v′, q) with v′ ∈ V in which:

V = {ǫ}                      if v = ǫ
V = {I_k}                    if v = I_k and k ≤ |t|
V = {R_k}                    if v = R_k and k ≤ |t|
V = {I_{|t|+1}, I_{|t|+2}}   if v = I_{|t|+1}
V = {R_{|t|+2}}              if v = R_{|t|+1}
V = {I_{k+1}}                if v = I_k and k > |t| + 1
V = {R_{k+1}}                if v = R_k and k > |t| + 1

For every input ω-word u, this transformation induces a natural surjective mapping from runs of A′ onto runs of A. Accepting runs of A′ are mapped by this translation to accepting runs of A.
Hence the language accepted by A′ is a subset of the one accepted by A. For the converse inclusion, one needs to transform an accepting run ρ of A over u into an accepting run of A′ over u. For this, one needs to decide, each time a transition (p, I_{|t|+1}, q) is used by ρ, whether to use the transition (p, I_{|t|+1}, q) or the transition (p, I_{|t|+2}, q) of A′. To this end, every maximal subrun of ρ of the form

ρ′_0 (p_1, I_{|t|+1}, q_1) ρ′_1 · · · (p_n, I_{|t|+1}, q_n) ρ′_n

in which the counter |t| + 1 is never reset, and such that the counter |t| + 1 is never incremented in the runs ρ′_i, is replaced by the run of A′

ρ′′_0 (p_1, I_{x_1}, q_1) ρ′′_1 · · · (p_n, I_{x_n}, q_n) ρ′′_n

where x_i = |t| + 2 if i is a multiple of ⌈√n⌉ and x_i = |t| + 1 otherwise, and ρ′′_i is ρ′_i in which each counter k > |t| + 1 is replaced by counter k + 1. This operation transforms an accepting run of A into an accepting run of A′.

We will use the following corollary of Lemma 3.8, which says that any number of hierarchical BS-automata can be transformed into equivalent ones that have comparable counter types.

Corollary 3.9. Given hierarchical BS-automata A_1, . . . , A_n, there exist (respectively) equivalent hierarchical BS-automata A′_1, . . . , A′_n such that for all i, j = 1 . . . n the counter type of A′_i is a prefix of the counter type of A′_j or vice versa. Furthermore, if A_1, . . . , A_n are B-automata (respectively, S-automata), then so are A′_1, . . . , A′_n.

Recall that we want to compile a normal form expression M · (L_1 + · · · + L_n)^ω into a hierarchical ωBS-automaton (or ωB-, or ωS-automaton, as the case may be). This is done in the next two lemmas. First, Lemma 3.10 translates each L_i into a hierarchical word sequence automaton, and then Lemma 3.11 combines these word sequence automata into an automaton for the infinite words (L_1 + · · · + L_n)^ω.
Since prefixing the language M is a trivial operation for the automata, we thus obtain the desired Lemma 3.5, which says that expressions can be compiled into hierarchical automata.

Lemma 3.10. The language of word sequences described by a pure BS-regular (respectively, B-regular, S-regular) expression can be recognized by a hierarchical word sequence BS-automaton (respectively, B-automaton, S-automaton).

Proof. By induction on the operations that appear in a pure expression.
• Languages accepted by hierarchical word sequence automata are closed under concatenation. Let us compute an automaton recognizing L · L′, where L, L′ are languages recognized by hierarchical word sequence automata A, A′ respectively. Using Corollary 3.9, we can assume without loss of generality that the type of A is a prefix of the type of A′ (or the other way round, which is a symmetric case). Remark that since L and L′ are pure, no state in A or A′ is both initial and final. We do the standard concatenation construction for finite automata (by passing from a final state of A to the initial state of A′), except that when passing from A to A′ we reset the highest counter available to A.
• Languages accepted by hierarchical word sequence automata are closed under the + exponent. We use the standard construction: linking all final states to the initial state while resetting all counters. In order to have nonempty words on all coordinates, the initial state cannot be accepting (if it is accepting, we add a new initial state).
• Languages accepted by hierarchical word sequence automata are closed under the S+ exponent. We add a new counter of type S of rank higher than all others. We then proceed to the construction for the + exponent as above, except that we increment the new counter whenever looping from a final state to the initial one.
• For the B+ exponent, we proceed as above, except that the new counter is of type B instead of being of type S.
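The constructions above lean on the stack-like discipline of Section 3.2: acting on a counter implicitly resets everything below it. This expansion of the hierarchical actions R_k and I_k into the per-counter vectors of Section 3.1 is mechanical; a sketch (the function name is ours, and 'e' stands for ǫ):

```python
def hier_action(kind, k, n):
    """Expand a hierarchical action over counters 1..n into a per-counter
    vector: R_k resets counters 1..k; I_k resets counters 1..k-1 and
    increments counter k; counters above k are untouched."""
    v = {}
    for c in range(1, n + 1):
        if c < k:
            v[c] = 'r'
        elif c == k:
            v[c] = 'r' if kind == 'R' else 'i'
        else:
            v[c] = 'e'
    return v

hier_action('I', 2, 3)  # -> {1: 'r', 2: 'i', 3: 'e'}
```

This also makes visible why adding a fresh counter "of rank higher than all others" is harmless: the new top counter is untouched by all existing actions and resets everything below it whenever it is used.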
The compilation of normal form expressions into automata is concluded by the following lemma:

Lemma 3.11. Let L_1, . . . , L_n be sequence languages recognized by the hierarchical BS-automata A_1, . . . , A_n respectively. The language (L_1 + · · · + L_n)^ω is recognized by an ωBS-automaton. Likewise for ωB-automata and ωS-automata.

Proof. Thanks to Corollary 3.9, we can assume that the counter type of A_i is a prefix of the counter type of A_j, for i ≤ j. We use this prefix assumption to share the counters between the automata: we assume that the counters in A_i are a subset of the counters in A_j, for i ≤ j. Under this assumption, the hierarchical automaton for infinite words can nondeterministically guess a factorization of the infinite word into finite words, and nondeterministically choose one of the automata A_1, . . . , A_n for each factor.

3.6. From automata to expressions. This section is devoted to showing the remaining implication in Theorem 3.3:

Lemma 3.12. Every language recognized by an ωBS-automaton (respectively, an ωB-automaton, an ωS-automaton) can be defined using an ωBS-regular (respectively, ωB-regular, ωS-regular) expression.

Before we continue, we would like to remark that although long, the proof does not require any substantially original ideas. Basically, it consists of observations of the type: "if a word contains many a's and a b, then it either has many a's before the b or many a's after the b". Using such observations and Kleene's theorem for finite words, we obtain the desired result.

We begin by introducing a technical tool called external constraints. These constraints are then used in Section 3.6.2 to prove Lemma 3.12.

3.6.1. External constraints. External constraints provide a convenient way of constructing ωBS-regular languages. Let e be one of 0, +, S, B.
Given a symbol a and a word sequence language L over Σ ∪ {a}, we denote by L[a : e] the word sequence language L ∩ ((Σ* · a)^e · Σ*), with the standard convention that L^0 = [[ε]]. This corresponds to restricting the word sequences in L to ones where the number of occurrences of a satisfies the constraint e. Here we show that external constraints can be eliminated:

Lemma 3.13. For every BS-regular language L over Σ ∪ {a} and every e among 0, +, S, B, the language L[a : e] is BS-regular.

3.6.2. Controlling counters in BS-regular expressions. In this section we show how BS-regular languages can be intersected with languages of specific forms. We use this to write an ωBS-regular expression that describes successful runs of an ωBS-automaton, thus completing the proof of Lemma 3.12.

In the following lemma, the languages L should be thought of as describing runs of a BS-automaton. The idea is that the language K in the lemma constrains runs that are good from the point of view of one of the counters. The set of labels I can be thought of as representing the transitions which increment the counter, R as representing the transitions which reset the counter, and A as representing the transitions which do not modify it. The intersection in the lemma forces the counter operations to be consistent with the acceptance condition.

Given a word sequence language L and a subset A of the alphabet, denote by L ✶ A* the set of word sequences that give a word sequence in L if all letters from A are erased (i.e., a very restricted form of the shuffle operator). For instance B* ✶ A* is the same as (A + B)*.

Lemma 3.14. Let Σ be an alphabet partitioned into sets A, I, R. Let L be a BS-regular word sequence language over Σ. Then K ∩ L is also BS-regular, for K being one of:

(I^B R)^+ I^B ✶ A*,   (I^S R)^+ I* ✶ A*,   I* (R I^S)^+ ✶ A*,   I^S (R I^S)^+ ✶ A*.

Similarly, B-regular languages are closed under intersection with the first language, and S-regular languages are closed under intersection with the three other languages.

Proof.
In the proof we rewrite K ∩ L into an expression with external constraints, which will use and constrain letters from some new alphabet Σ′ disjoint from Σ. In the proof, we will consider equality up to the removal of letters from Σ′. We then eliminate the external constraints using Lemma 3.13, and then erase the letters from Σ′ (erasing letters is allowed, since languages described by our expressions are closed under homomorphic images).

ε[a : +] = ∅
ε[a : e] = ε for e ∈ {0, B}
ε[a : S] = ε̄
ε̄[a : e] = ε̄ for e ∈ {0, B, S}
ε̄[a : +] = ∅
b[a : +] = ∅ for b ≠ a
b[a : 0] = b for b ≠ a
b[a : S] = b̄ for b ≠ a
b[a : B] = b for b ≠ a
a[a : 0] = ∅
a[a : S] = ā

We begin by showing that (I^e ✶ A*) ∩ L is BS-regular, for e = 0, +, B, S. Let a ∈ Σ′ be a new symbol. The transformation is simple: replace everywhere in the expression the letters i in I by i · a, and constrain the resulting language by [a : e].

For K = (I^B R)^+ I^B ✶ A*, the construction is by induction on the size of the expression defining L. We use the following equalities (in which K′ = I^B ✶ A*):

K ∩ b = b if b ∈ R, and ∅ otherwise
K ∩ (L + L′) = (K ∩ L) + (K ∩ L′)
K ∩ (L · L′) = (K ∩ L) · ((K ∩ L′) + (K′ ∩ L′)) + (K′ ∩ L) · (K ∩ L′)
K ∩ L* = ((K′ ∩ L*) · (K ∩ L))^+ · (K′ ∩ L*) .

The remaining cases, namely K ∩ L^B and K ∩ L^S, can be reduced to the L* case as follows. First one rewrites L^B (respectively, L^S) as (La)*[a : B] (respectively, (La)*[a : S]), where a ∈ Σ′ is a new letter (recall that we consider here equality of languages up to removal of letters from Σ′). Second, we use the associativity of intersection, i.e.,

K ∩ ((La)*[a : e]) = (K ∩ (La)*)[a : e] , for e = B, S .

For the case where K is either ((I^S R)^+ I*) ✶ A* or (I* (R I^S)^+) ✶ A*, a slightly more tedious transformation is involved. This is also done by induction.
To make the induction pass, we generalize the result to languages K of the form

K_{e,f} = I^e R (I^S R)* I^f ✶ A* , where e, f ∈ {*, S} .

The transformations for L = b and L + L′ are as follows:

K_{e,f} ∩ b = b if b ∈ R and e = f = *, and ∅ otherwise,
K_{e,f} ∩ (L + L′) = (K_{e,f} ∩ L) + (K_{e,f} ∩ L′) .

For sequential composition L · L′, we use the convenient operation ⊔ over the exponents {*, S}, defined by * ⊔ * = *, and e ⊔ f = S otherwise. The transformation for sequential composition L · L′ is then the following:

K_{e,f} ∩ (L · L′) = ∑_{e′⊔f′=S} (K_{e,e′} ∩ L) · (K_{f′,f} ∩ L′)
                   + ∑_{e′⊔f′=e} (I^{e′} ∩ L) · (K_{f′,f} ∩ L′)
                   + ∑_{e′⊔f′=f} (K_{e,e′} ∩ L) · (I^{f′} ∩ L′)

The rule for L* is the most complex one. For conceptual simplicity we use an infinite sum. This can be transformed into a less readable but correct expression using standard methods for regular languages. For n ≥ 1, let L_n be the mix of all languages of the form

(I^{e_0} ∩ L*) · (K_{f_1,g_1} ∩ L) · (I^{e_1} ∩ L*) · · · (K_{f_n,g_n} ∩ L) · (I^{e_n} ∩ L*) ,

where the exponents e_i, f_i, g_i ∈ {*, S} satisfy

g_i ⊔ e_i ⊔ f_{i+1} = S , e_0 ⊔ f_1 = e , and g_n ⊔ e_n = f .

The language L_n corresponds to those word sequences where the reset is done in n separate iterations of L. The language K_{e,f} ∩ L* is then equal to the infinite mix L_1 + L_2 + · · ·

We now use the above lemma to complete the proof of Lemma 3.12. Consider an ωBS-automaton A. We will present an expression not for the recognized words, but for the accepting runs. The result then follows by projecting each transition onto the letter it reads. Without loss of generality we assume that a transition uniquely determines this letter. Given a counter α, let I_α represent the transitions that increment this counter, let R_α represent the transitions that reset it, and let A_α be the remaining transitions. Let Γ_B be the set of bounded counters of A and let Γ_S be its unbounded counters.
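The operation ⊔ is simply the join of the two-element order with * below S. A minimal sketch in Python (the encoding of * as the string "*" and the function names are ours):

```python
# Join of the exponents {*, S}: * |_| * = *, anything involving S gives S.
def join(e, f):
    assert e in ("*", "S") and f in ("*", "S")
    return "*" if e == "*" and f == "*" else "S"

# The decomposition of K[e,f] over a concatenation L . L' ranges over the
# ways of splitting an exponent e at the cut: the pairs (e', f') with
# e' |_| f' = e.
def splits(e):
    return [(x, y) for x in ("*", "S") for y in ("*", "S") if join(x, y) == e]
```

Note that splits("*") has a single element while splits("S") has three, which is why the sums in the concatenation rule above range over several pairs of exponents.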
Given a state q, we define Pref_q to be the language of finite partial runs starting in the initial state and ending in state q, and Loop_q to be the language of nonempty finite partial runs starting and ending in state q. Those languages are regular languages of finite words, and hence can be described via regular expressions. We use those expressions as word sequence expressions. The following lemma concludes the proof of Lemma 3.12, by showing how the operations from Lemma 3.14 can be used to check that a run is accepting.

Lemma 3.15. A run ρ visiting infinitely often a state q is accepting iff there is a partition of Γ_S into two sets Γ_{S,*} and Γ_{*,S} (either of which may be empty) such that

ρ ∈ Pref_q · (Loop_q ∩ L_B ∩ L_{S,*} ∩ L_{*,S})^ω ,

where the languages L_B, L_{S,*} and L_{*,S} are defined as follows:

L_B = ⋂_{α∈Γ_B} ((I_α^B R_α)^+ I_α^B) ✶ A_α* ,
L_{e,f} = ⋂_{α∈Γ_{e,f}} (I_α^e R_α (I_α^S R_α)* I_α^f) ✶ A_α* , for (e, f) = (*, S), (S, *) .

Proof. It is not difficult to show that membership in Pref_q · (Loop_q ∩ L_B ∩ L_{S,*} ∩ L_{*,S})^ω is sufficient for ρ to be accepting. For the other direction, consider an accepting run ρ. Let us consider an increasing infinite sequence of positions u_1, u_2, . . . in the run ρ such that for each n, all counters are reset between u_n and u_{n+1} and the run assumes state q at position u_n. Such a sequence can be found since each counter is reset infinitely often. Consider now an unbounded counter α. For each n we define b_n^α to be the number of increments of α in ρ happening between u_n and the last reset before u_n; likewise, we define a_n^α to be the number of increments of α in ρ happening between u_n and the next reset of α after u_n. By extracting a subsequence, we may assume that either b_n^α is always greater than a_n^α, or a_n^α is always greater than b_n^α. In the first case b_n^α is strongly unbounded and we put α into Γ_{*,S}; in the second case a_n^α is strongly unbounded and we put α into Γ_{S,*}.
We iterate this process for all unbounded counters.

4. Monadic Second-Order Logic with Bounds

In this section, we introduce the logic MSOLB. This is a strict extension of monadic second-order logic (MSOL), where a new quantifier U is added. This quantifier expresses that a property is satisfied by arbitrarily large sets. We are interested in the satisfiability problem: given a formula of MSOLB, decide if it is satisfied in some ω-word. We are not able to solve this problem in its full generality. However, the diamond properties from the previous sections, together with the complementation result from Section 5, give an interesting partial solution to the satisfiability problem. In Section 4.1 we introduce the logic MSOLB. In Section 4.2 we present some decidable fragments of the logic MSOLB, and restate the diamond picture in this new framework. In Section 4.3 we show how the unbounding quantifier can be captured by our automaton model. In Section 4.4 we present an application of our logical results; namely we provide an algorithm that decides if an ω-automatic graph has bounded degree.

4.1. The logic. Recall that monadic second-order logic (MSOL for short) is an extension of first-order logic where quantification over sets is allowed. Hence a formula of this logic is made of atomic predicates, boolean connectives (∧, ∨, ¬), first-order quantification (∃x.ϕ and ∀x.ϕ) and set quantification (also called monadic second-order quantification) (∃X.ϕ and ∀X.ϕ), together with the membership predicate x ∈ X. A formula of MSOL can be evaluated in an ω-word. In this case, the universe of the structure is the set N of word positions. The formula can also use the following atomic predicates: a binary predicate x ≤ y for the order on positions, and for each letter a of the alphabet, a unary predicate a(x) that tests if a position x has the label a.
This way, a formula that uses the above predicates defines a language of ω-words: this is the set of those ω-words for which it is satisfied. In the logic MSOLB we add a new quantifier, the existential unbounding quantifier U, which can be defined as the following infinite conjunction:

UX.ϕ := ⋀_{N∈N} ∃X. (ϕ ∧ |X| ≥ N) .

The quantified variable X is a set variable and |X| denotes its cardinality. Informally speaking, UX.ϕ(X) says that the formula ϕ(X) is true for sets X of arbitrarily large cardinality. If ϕ(X) is true for some infinite set X, then UX.ϕ(X) is immediately true. Note that ϕ may contain other free variables than just X. From this quantifier, we can construct other meaningful quantifiers:

• The universal above quantifier A is the dual of U, i.e., AX.ϕ is a shortcut for ¬UX.¬ϕ. It is satisfied if all the sets X above some threshold of cardinality satisfy property ϕ.
• Finally, the bounding quantifier B is syntactically equivalent to the negation of the U quantifier. Historically, this was the first quantifier to be studied, in [1]. It says that a formula BX.ϕ holds if there is a bound on the cardinality of sets satisfying property ϕ.

Over finite structures, MSOLB and MSOL are equivalent: a subformula UX.ϕ can never be satisfied in a finite structure, and consequently can be removed from a formula. Over infinite words, MSOLB defines strictly more languages than MSOL. For instance the formula

BX. [∀x ∈ X. a(x)] ∧ [∀x ≤ y ≤ z. (x, z ∈ X) → (y ∈ X)]

expresses that there is a bound on the size of contiguous segments made of a's. Over the alphabet {a, b}, this corresponds to the language (a^B b)^ω. As mentioned previously (recall Corollary 2.10), this language is not regular. Hence, this formula is not equivalent to any MSOL formula. This motivates the following decision problem: Is a given formula of MSOLB satisfied over some infinite word? We do not know the answer to this question in its full generality (this problem may yet be proved to be undecidable).
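The example language (a^B b)^ω constrains the lengths of the maximal blocks of a's. On a finite prefix these lengths are easy to extract; a small sketch (the function name is ours):

```python
def a_block_lengths(word):
    """Lengths of the maximal contiguous blocks of 'a' in a finite word."""
    lengths, run = [], 0
    for c in word:
        if c == "a":
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:                      # a block may end at the end of the word
        lengths.append(run)
    return lengths

# A prefix of a word in (a^B b)^omega with bound N has all blocks of length <= N;
# the boundedness itself is a property of the whole infinite word, not of any
# single prefix, which is exactly what the quantifier B adds over MSOL.
```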
However, using the diamond (Figure 1), we can solve this question for a certain class of formulas. This is the subject of Sections 4.2 and 4.3. In Section 4.4, we use the logic MSOLB to decide if a graph has bounded out-degree, for graphs interpreted in the natural numbers via monadic formulas.

4.2. A decidable fragment of MSOLB. A classical approach for solving satisfiability of monadic second-order logic is to translate formulas into automata (this is the original approach of Büchi [3, 4] for finite and infinite words, which has been later extended by Rabin to infinite trees [12]; see [14] for a survey). To every operation in the logic corresponds a language operation. As languages recognized by automata are effectively closed under those operations, and emptiness is decidable for automata, the satisfaction problem is decidable for MSOL. We use the same approach for MSOLB. Unfortunately, our automata are not closed under complement, hence we cannot use them to prove satisfiability for the whole logic, which is closed under complement. Note that in this definition, B-formulas and S-formulas are dual in the sense that the negation of an S-formula is logically equivalent to a B-formula, and vice versa. The above fragments are tightly connected to ωBS-regular languages according to the following fact:

Proof. (Sketch.) Thanks to the closure properties of ωBS-regular languages, each BS-formula can be translated into an ωBS-regular language. We use here the standard coding of the valuation of free variables in the alphabet of the word; see, e.g., [14]. Closure under ∨ and ∧ is a direct consequence of closure under ∪ and ∩. Closure under ∃ corresponds to closure under projection, which is straightforward for non-deterministic automata. Closure under universal quantification follows as the dual of existential quantification (using the complementation result, Theorem 5.1).
Closure under U of ωS-regular and ωBS-regular languages is the subject of Section 4.3, and more precisely Proposition 4.4. Closure under A of ωB-regular languages is obtained by duality (once more using Theorem 5.1). For the converse implication, a formula of the logic can use existential set quantifiers in order to guess a run of the automaton, and then check that this run is accepting using the new quantifiers. These fragments are summarized in the logical view of the diamond presented in Figure 3. Also, from this fact we get that BS-formulas are not expressively complete for MSOLB, since MSOLB is closed under negation, while ωBS-regular languages are not closed under complementation. Since emptiness for BS-regular languages is decidable by Fact 2.6, we obtain the following decidability result:

Theorem 4.3. The problem of satisfiability over (ω, <) is decidable for BS-formulas.

In the proof of Fact 4.2, we left out the proof of closure under U. We present it in the next section.

4.3. Closure under existential unbounding quantification (U). Here we show that the classes of ωS- and ωBS-regular languages are closed under application of the quantifier U. This closure is settled by Proposition 4.4. In order to show this closure, we need to describe the quantifier U as a language operation, in the same way existential quantification corresponds to projection. Let Σ be an alphabet, and consider a language L ⊆ (Σ × {0, 1})^ω. Given a word w ∈ Σ^ω and a set X ⊆ N, let w[X] ∈ (Σ × {0, 1})^ω be the word obtained from w by setting the second coordinate to 1 at the positions from X and to 0 at the other positions. We define U(L) to be the set of those words w ∈ Σ^ω such that for every N ∈ N there is a set X ⊆ N of at least N elements such that w[X] belongs to L. Restated in terms of this operation, closure under unbounding quantification becomes: We begin with a simple auxiliary result.
A partial sequence over an alphabet Σ is a word in ⊥*Σ^ω, in which ⊥ ∉ Σ is a fresh symbol. A partial sequence is defined at the positions where it does not have value ⊥; it is undefined at positions of value ⊥. We say that two partial sequences meet if there is some position where they are both defined and have the same value.

Lemma 4.5. Let I be an infinite set of partial sequences over a finite alphabet Σ. There is a partial sequence in I that meets infinitely many partial sequences from I.

Proof. A constrainer for I is an infinite word c over P(Σ) such that for each i ∈ N, the i-th position of every sequence in I is either undefined or belongs to c_i. The size of a constrainer is the maximal size of a set it uses infinitely often. We prove the statement of the lemma by induction over the size of a constrainer for I. Since every I admits a constrainer of size |Σ|, this concludes the proof. The base case is when I admits a constrainer of size 1; in this case, every two partial sequences in I meet, so any partial sequence in I satisfies the statement of the lemma. Consider now a set I with a constrainer c of size n. Take some sequence s in I. If s meets infinitely many sequences from I, then we are done. Otherwise let J ⊆ I be the (infinite) set of sequences that do not meet s. One can verify that d is a constrainer for J, where d is defined by d_i = c_i \ {s_i}. Moreover, d is of size n − 1 (since s_i = ⊥ can hold only for finitely many i). We then apply the induction hypothesis.

Let L be a language of infinite words over Σ × {0, 1} recognized by an ωBS-automaton. We want to show that the language U(L) is also recognized by a bounding automaton. Consider the following language:

K = {w[X] : for some Y ⊇ X, w[Y] ∈ L} .

This language is downward closed in the sense that if w[X] belongs to K, then w[Y] belongs to K for every Y ⊆ X. Furthermore, clearly U(L) = U(K). Moreover, if L is recognized by an ωBS-automaton (resp. ωS-automaton), then so is K.
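On finite approximations, the meeting relation of Lemma 4.5 is straightforward to state. Below is a small sketch where partial sequences are encoded as Python lists with None playing the role of ⊥ (the encoding and the function names are ours):

```python
def meet(s, t):
    """Two partial sequences meet if at some position both are defined
    and carry the same value."""
    return any(x is not None and x == y for x, y in zip(s, t))

def best_meeter(seqs):
    """Brute force over a finite family: the sequence meeting the largest
    number of other sequences (a finite stand-in for the lemma's conclusion)."""
    return max(seqs, key=lambda s: sum(meet(s, t) for t in seqs if t is not s))
```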
Let then A be an ωBS-automaton recognizing K. We will construct an ωBS-automaton recognizing U(K). Given a word w ∈ Σ^ω, we say that a sequence of sets X_1, X_2, . . . ⊆ N is an unbounding witness for K if for every i, the word w[X_i] belongs to K and the sizes of the sets X_i are unbounded. An unbounding witness is sequential if there is a sequence of numbers a_1 < a_2 < · · · such that for each i, all elements of the set X_i are between a_i and a_{i+1} − 1. The following lemma is a consequence of K being downward closed.

Lemma 4.6. A word that has an unbounding witness for K also has a sequential one.

Let X_1, X_2, . . . be a sequential unbounding witness and let a_1 < a_2 < · · · be the appropriate sequence of numbers. Let ρ_1, ρ_2, . . . be accepting runs of A over the words w[X_1], w[X_2], . . . respectively.

Proof. By Lemma 4.6, a word belongs to U(K) if and only if it admits a sequential witness. Let X_i, a_i and ρ_i be as above. For each i, let s_i be the partial sequence that has ⊥ at positions before a_{i+1} and agrees with ρ_i after a_{i+1}. By applying Lemma 4.5 to the set {s_1, s_2, . . .}, we can find a run ρ_i and a set J ⊆ N such that for every j ∈ J, the runs ρ_i and ρ_j agree on some position x_j after a_{j+1}. For j ∈ J, let ρ′_j be a run that is defined as ρ_j at positions before x_j and is defined as ρ_i at positions after x_j. Since modifying the counter values over a finite set of positions does not violate the acceptance condition, the run ρ′_j is also an accepting run over the word w[X_j]. For every j, k ∈ J, the runs ρ′_j and ρ′_k agree on almost all positions (i.e., positions after both x_j and x_k). Therefore the sequential witness obtained by using only the sets X_j with j ∈ J is a good witness.

Proof. Given a word w, the automaton is going to guess a sequential witness a_1 < a_2 < · · ·, X_1 ⊆ [a_1, a_2 − 1], X_2 ⊆ [a_2, a_3 − 1], . . .
and a run ρ of A over w, and verify the following properties:
• The run ρ is accepting;
• There is no bound on the size of the X_i's;
• For every i, some run over w[X_i] agrees with ρ on almost all positions.
The first property can be obviously verified by an ωBS-automaton. For the second property, the automaton nondeterministically chooses a subsequence of X_1, X_2, . . . where the sizes are strongly unbounded. The third property is a regular property. The statement of the lemma then follows by closure of bounding automata under projection and intersection.

4.4. Bounds on the out-degree of a graph interpreted on sets. In this section, somewhat disconnected from the rest of the paper, we show how to use the logic MSOLB for solving a non-trivial question concerning ω-automatic structures. An ω-automatic (directed) graph (of injective presentation) is a graph described by two formulas of MSOL, which are interpreted in the natural numbers (hence the term ω-automatic). The vertexes of the ω-automatic graph are sets of natural numbers. The first formula δ(X) has one free set variable, and says which sets of natural numbers will be used as vertexes of the graph. The second formula ϕ(X, Y) has two free set variables, and says which vertexes of the graph are connected by an edge (if ϕ(X, Y) holds, then both δ(X) and δ(Y) must hold). The original idea of automaticity has been proposed by Hodgson [9] via an automata-theoretic presentation. The more general approach that we use here, of logically defining a structure in the powerset of another structure, is developed in [6]. We show in this section that it is possible to decide, given the formulas δ(X) and ϕ(X, Y) of MSOL defining an ω-automatic graph, whether this graph has bounded out-degree or not. Let ϕ(X, Y) be a formula of MSOLB with two free set variables. This formula can be seen as an edge relation on sets, i.e., it defines a directed graph with sets as vertexes.
We show here that MSOLB can be used to say that this edge relation has unbounded out-degree. The formula presented below will work on any structure, not just (ω, <), but the decision procedure will be limited to infinite words, since it requires testing satisfiability of MSOLB formulas. In the following, we say that a set Y is a successor of a set X if ϕ(X, Y) holds. We begin by defining the notion of an X-witness. This is a set witnessing that the set X has many successors. (The actual successors of X form a set of sets, something MSOLB cannot talk about directly.) An X-witness is a set Y such that every two distinct elements x, y ∈ Y can be separated by a successor of X, that is:

∀x, y ∈ Y. x ≠ y → ∃Z. ϕ(X, Z) ∧ ¬(x ∈ Z ↔ y ∈ Z) .   (4.1)

We claim that the graph of ϕ has unbounded out-degree if and only if there are X-witnesses of arbitrarily large cardinality. This claim follows from the following fact:

Fact 4.9. If X has more than 2^n successors, then it has an X-witness of size at least n. If X has n successors, then all X-witnesses have size at most 2^n.

Proof sketch. For the first statement, one first shows that X has at least n successors that are boolean independent, i.e., there exist successors X_1, . . . , X_n of X such that for all i = 1 . . . n, X_i is not a boolean combination of X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_n. From n boolean independent successors one can then construct by induction an X-witness of size n. For the second statement, consider X with n successors as well as an X-witness. To each element w of the X-witness, associate the characteristic function of 'w ∈ Y' for Y ranging over the successors of X. If the X-witness had more than 2^n elements, then at least two would give the same characteristic function, contradicting the definition of an X-witness. As witnessed by (4.1), being an X-witness can be defined by an MSOL formula.
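Both halves of Fact 4.9 are easy to check on small finite instances. A sketch over a finite universe (the encodings and names are ours; `successors` stands for the successors of a fixed X):

```python
def separates(Z, x, y):
    # Z separates x and y if it contains exactly one of them.
    return (x in Z) != (y in Z)

def is_witness(Y, successors):
    # Condition (4.1): every two distinct elements of Y are separated
    # by some successor Z.
    return all(any(separates(Z, x, y) for Z in successors)
               for x in Y for y in Y if x < y)

def signature(x, successors):
    # Characteristic function of 'x in Z' from the proof of Fact 4.9.
    return tuple(x in Z for Z in successors)
```

With n = 2 successors, a witness can reach size 2^n = 4 (all four signatures are distinct), but never size 5, since two elements would then share a signature.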
Therefore, the existence of arbitrarily large X-witnesses is expressible by an MSOLB formula with a single U operator at an outermost position. This formula belongs to one of the classes with decidable satisfiability in Theorem 4.3. This shows:

Proposition 4.10. It is decidable if an ω-automatic graph has unbounded out-degree.

5. Complementation

5.1. The complementation result. The main technical result of this paper is the following theorem:

Theorem 5.1. The complement of an ωS-regular language is ωB-regular, while the complement of an ωB-regular language is ωS-regular.

The proof of this result is long, and takes up the rest of this paper. We begin by describing some proof ideas. For the sake of this introduction, we only consider the case of recognizing the complement of an ωS-regular language by an ωB-automaton. Consider first the simple case of a language described by an ωS-automaton A which has a single counter. Furthermore, assume that in every run, between any two resets, the increments form a single connected segment. In other words, between two resets of the counter, the counter is first left unaffected for some time, then during the n following transitions the counter is always incremented, then it is not incremented any more before reaching the second reset. Below, we use the name increment interval to describe an interval of word positions (a set of consecutive word positions) corresponding to a maximal sequence of increments. We do not, however, assume that the automaton is deterministic (the whole difficulty of the complementation result comes from the fact that we are dealing with nondeterministic automata). This means that the automaton resulting from the complementation construction must check that all possible runs of the complemented automaton are rejecting. The complement ωB-automaton B uses a single B-counter, which ticks as a clock along the ω-word (independently of any run of A). A tick of the clock is a reset of the counter.
Between every two ticks, the counter is constantly incremented. Since the counter is a B-counter, the ticks have to be at bounded distance from each other. We say that an interval of word positions is short (with respect to this clock) if it contains at most one tick of the clock. If the clock ticks at most every N steps, then short intervals have length at most 2N − 1. Reciprocally, if an interval has length at most N, then it is short with respect to every clock ticking with a tempo greater than N. According to these remarks, being short is a fair approximation of the length of an interval. The complement automaton B works by guessing the ticks of a clock using non-determinism together with a B-counter, and then checks the following property: every run of A that contains an infinite number of resets also contains an infinite number of short increment intervals. Once the clock is fixed, checking this is definable in monadic second-order logic, i.e., can be checked by a finite state automaton without bounding conditions. Using this remark it is simple to construct B.

MIKOŁAJ BOJAŃCZYK AND THOMAS COLCOMBET

It is easy to see that if B accepts an ω-word, then this word is not accepted by A. The converse implication is a consequence of the following compactness property: if no run of A is accepting, then there is a threshold N ∈ N such that every run of A either has less than N increments between two resets infinitely often, or does finitely many resets. Such a property can be established using Ramsey-like arguments. Consider now a single counter ωS-automaton, but without the constraint that the increments between two resets form a single interval. In our construction, we use an algebraic decomposition result, Simon's factorization forest theorem [13]. Using Simon's theorem, we reduce the complementation problem to a bounded number of instances of the above construction.
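The two remarks about short intervals can be made concrete with a deterministic clock that ticks every N positions (a simplification: in the construction, B guesses the tick positions nondeterministically; names are ours):

```python
def ticks(N, limit):
    """Positions of a clock that ticks every N steps, up to limit."""
    return set(range(0, limit, N))

def is_short(interval, tick_positions):
    """An interval is short w.r.t. a clock if it contains at most one tick."""
    lo, hi = interval
    return sum(1 for t in tick_positions if lo <= t <= hi) <= 1
```

For a clock of tempo N, a short interval spans at most 2N − 1 positions, and any interval of at most N positions is short for every clock of tempo greater than N; this is why shortness is a fair approximation of length.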
In this case, the complement ωB-automaton uses one counter for each level of the factorization, the result being a structure of nested clocks. As above, once the ticks of the clocks are fixed, checking if a run makes few increments can be done by a finite state automaton without any bounding conditions. Finally, for treating the general case of ωS-automata with more than one counter, we use automata in their hierarchical form and do an induction on the number of counters.

5.2. Overview of the complementation proof. Theorem 5.1 talks about complementing two classes, and two proofs are necessary. The two proofs share a lot of similarities, and we will try to emphasize common points. From now on, a hierarchical automaton A with states Q, input alphabet Σ and counters Γ is fixed. Either all the counters are of type S or all the counters are of type B. We will denote this type by T ∈ {B, S}, and by T̄ we will denote the other type. We say f ∈ N^N is a B-function if it is bounded, and an S-function if it is strongly unbounded. We set ⊑ to be ≤ if T = S and ≥ if T = B. The idea is that greater means better. For instance, if n ⊑ m, then replacing n increments by m increments leads to a run that is more likely to be accepting. In this terminology, a run of an ωT-automaton is accepting if its counter values (at the moment before they are reset) are greater than some T-function for the order ⊑. When complementing an ωT-automaton our goal is to show that all runs are rejecting, i.e., for every run, either a counter is reset a finite number of times, or infinitely often the number of increments between two resets is smaller than a T̄-function with respect to the order ⊑. Our complementation proof follows the scheme introduced by Büchi in his seminal paper [3] (we also refer the reader to [15]). Büchi establishes that languages recognized by nondeterministic Büchi automata are closed under complement.
In this proof Büchi did not determinize the automata as usual in the finite case (this is impossible for Büchi automata). Instead, he used a novel technique which allows one to directly construct a nondeterministic automaton for the complement of a recognizable language. The key idea is that, thanks to Ramsey's theorem, each ω-word can be cut into a prefix followed by an infinite sequence of finite words which are indistinguishable in any context by the automaton (we also say that those words have the same type). Whether or not the ω-word is accepted by the automaton depends only on the type of the prefix and the type appearing in the infinite sequence. The automaton accepting the complement guesses this cut and checks that the two types correspond to a rejected word. For this reason the proof can be roughly decomposed into two parts. The first part shows that each word can be cut in the way specified. The second part shows that a cut can be guessed and verified by a Büchi automaton. Our proof strategy is similar. That is why, in order to help the reader gather some intuition, we summarize below Büchi's complementation proof, in terms similar to our own proof for the bounding automata. Then, in Section 5.2.2, we outline our own proof.

5.2.1. The Büchi proof. In this section, we present a high-level overview of Büchi's complementation proof. A Büchi specification describes properties of a finite run of a given Büchi automaton. It is a positive boolean combination of atomic specifications, which have three possible forms:
(1) The run begins with state p;
(2) The run ends with state q;
(3) The run contains/does not contain an accepting state.
We will use the letter τ to refer to specifications, be they Büchi, or the more general form introduced later on for bounding automata. The following statement shows the Büchi strategy for complementation.
The statement could be simplified for Büchi automata, but we choose the presentation below to stress the similarities with our own strategy, as stated in Proposition 5.3.

Proposition 5.2. Let B be a Büchi automaton with input alphabet Σ. One can compute Büchi specifications τ_1, . . . , τ_n and regular languages L_1, . . . , L_n ⊆ Σ* such that the following statements are equivalent for every infinite word u ∈ Σ^ω:
(A) The automaton B rejects the word u;
(B) There is some i = 1, . . . , n such that u admits a factorization u = w v_1 v_2 · · · where:
 (i) The prefix w belongs to L_i;
 (ii) For every j, every run ρ over the finite word v_j satisfies τ_i.
Moreover, for each i = 1, . . . , n, the language K_i ⊆ Σ* of words where every run satisfies τ_i is regular.

In the above statement, the prefix w could also be described in terms of specifications, but we stay with just the regular languages L_i to stress the similarities with Proposition 5.3. Proposition 5.2 immediately implies closure under complementation of Büchi automata, since the complement of the language recognized by B is the union L_1 K_1^ω + · · · + L_n K_n^ω, which is clearly recognizable by a Büchi automaton. The "Moreover..." part of the proposition is straightforward, while the equivalence of conditions (A) and (B) requires an application of Ramsey's Theorem. Our proof follows similar lines, although all parts require additional work. In particular, our specifications will be more complex, and will require more than just a regular language to be captured. The appropriate definitions are presented in the following section.

Preliminaries. Below we define the type of a finite run. Basically, the type corresponds to the information stored in a Büchi specification, along with some information on the counter operations. The type contains: 1) the source and target state of the run, like in a Büchi specification; and 2) some information on the counter operations that happen in the run.
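For a plain nondeterministic Büchi automaton, the type of a finite word can be taken to be its transition profile: which pairs of states the word connects, and whether an accepting state can be seen on the way. Profiles compose like relations, which is what makes the Ramsey argument go through. A sketch (the automaton below is a hypothetical example; encodings are ours):

```python
def compose(s, t):
    """Compose transition profiles: sets of triples (p, q, sees_accepting)."""
    return {(p, r, a or b) for (p, q1, a) in s for (q2, r, b) in t if q1 == q2}

def profile(word, delta, accepting):
    """Transition profile of a nonempty finite word; delta maps
    (state, letter) to the set of possible next states."""
    prof = None
    for letter in word:
        step = {(p, q, p in accepting or q in accepting)
                for (p, a), qs in delta.items() if a == letter for q in qs}
        prof = step if prof is None else compose(prof, step)
    return prof
```

Two finite words with the same profile are indistinguishable in any context, and Ramsey's theorem yields the factorization of Proposition 5.2 into words of a single profile.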
Since we want to have a finite number of types, we can only keep limited information on the counter operations (in particular, we cannot keep track of the actual number of increments). Formally, a type t is an element of:

Q × ({∅, {-}, {-•, •-}, {-•, •-•, •-}})^Γ × Q .

To a given run, we associate its type in the following way. The two states contain respectively the source (i.e. first) and target (i.e. last) state of the partial run. (A partial run is a finite run that does not necessarily begin at the beginning of the word, and does not necessarily end at the end of the word.) The middle component associates to each counter α ∈ Γ a counter profile denoted, by slight abuse, t(α). The counter profile is t(α) = ∅ when there is no increment nor reset on counter α. The counter profile is t(α) = {-} when the counter is incremented but not reset. The counter profile is t(α) = {-•, •-} when the counter is reset just once, while t(α) = {-•, •-•, •-} is used for the other cases, when the counter is reset at least twice. Note that the counter profiles are themselves sets, and elements of these sets have a meaningful interpretation. Graphically, each symbol among -, -•, •-, •-• represents a possible kind of sequence of increments of a given counter. The circle • symbolizes a reset starting or ending the sequence, while the dash - represents the sequence of increments itself. For instance, -• identifies the segment that starts at the beginning of the run and ends at the first occurrence of a reset of the counter. Given a type t, we use the name t-event for any pair

(α, c) ∈ Γ × {-, -•, •-•, •-} , with c ∈ t(α) .

The set of t-events is denoted by events(t). Given a t-event (α, c) and a finite run ρ of type t, the value val(ρ, α, c) is the natural number defined below:

val(ρ, α, -): the number of increments on counter α in the run.
val(ρ, α, -•): the number of increments on counter α before the first reset of counter α.
val(ρ, α, •-) the number of increments on counter α after the last reset of counter α. val(ρ, α, •-•) the minimal (with respect to ≼) number of increments on counter α between two consecutive resets on counter α. We comment on the last value. When T = S, val (ρ, α, •-•) is the smallest number of increments on counter α that is done between two successive resets of α. When T = B, this is the largest number of increments. At any rate, this is the worst number of increments, as far as the acceptance condition is concerned. A specification is a property of finite runs. It is a positive boolean combination of the following two kinds of atomic specifications: (1) The run has type t. (2) The run satisfies val (ρ, α, c) ≼ K. The first atomic specification is defined by giving t. The second atomic specification is defined by giving α and c, but not K, which remains undefined. The number K is treated as a special parameter, or free variable, of the specification. This parameter is shared by all atomic specifications. Given a value of K ∈ N, we define the notion that a run ρ satisfies a specification τ under K in the natural manner. Unlike the Büchi proof, it is important that the boolean combination in the specification is positive. The reason is that we will use T̄ -automata to complement T -automata, and therefore we only talk about one type of behavior (bounded, or strongly unbounded). The decomposition result. In this section we present the main decomposition result, which yields Theorem 5.1. Proposition 5.3. For every ωT -automaton A one can effectively obtain regular languages L 1 , . . . , L n and specifications τ 1 , . . . , τ n such that the following statements are equivalent for every ω-word u: (A) The automaton A rejects u; (B) There is some i = 1, . . . , n such that u admits a factorization u = wv 1 v 2 · · · where: (i) The prefix w belongs to L i , and; (ii) There is a T̄ -function f such that for every j, every run ρ over v j satisfies τ i under f (j).
Moreover, for each i = 1, . . . , n, one can verify with a word sequence T̄ -automaton B i whether a sequence of words v 1 , v 2 , . . . satisfies condition (ii). (Equivalently, the set of word sequences satisfying (ii) is T̄ -regular.) First we note that the above proposition implies Theorem 5.1, since property (B) can be recognized by an ωT̄ -automaton. This is thanks to the "Moreover..." part and the closure of ωT̄ -automata under finite union and the prefixing of a regular language. The rest of this paper is devoted to showing the proposition. In Sections 5.3 and 5.4 we show the first part, i.e., the equivalence of (A) and (B). Section 5.3 develops extensions of Ramsey's theorem. Section 5.4 uses these results to show the equivalence. In Sections 5.5 and 5.6 we prove the "Moreover..." part. Section 5.5 contains the construction for a single counter, while Section 5.6 extends this construction to multiple counters. The difficulty in the "Moreover..." part is that (ii) talks about "every run ρ", and therefore the construction has to keep track of many simultaneous runs. 5.3. Ramsey's theorem and extensions. Ramsey-like statements (as we consider them in our context) are statements of the form "there is an infinite set D ⊆ N and some index i ∈ I such that the property P i (x, y) holds for any x < y in D". This statement is relative to a family of properties {P i } i∈I . In general, the family {P i } i∈I may be infinite. The classical theorem of Ramsey, as stated below, follows this scheme, but for a family of two properties: P 1 = R and P 2 = N 2 \ R. Theorem 5.4 (Ramsey). For every set R ⊆ N 2 and every infinite set E ⊆ N, there is an infinite set D ⊆ E such that either: • for all x < y in D, (x, y) ∈ R, or; • for all x < y in D, (x, y) ∉ R.
We can apply the two in cascade and obtain a new statement of the form "there is an infinite set D ⊆ N and indexes i ∈ I, j ∈ J such that both P i (x, y) and Q j (x, y) hold for any x < y in D". This is again a Ramsey-like statement. This composition technique is heavily used below. We will simply refer to it as the compositionality of Ramsey-like statements and shortcut the corresponding part of the proofs. The following lemma is our first Ramsey-like statement which uses an infinite (even uncountable) number of properties. Lemma 5. 5. For any h : N 2 → N there is an infinite set D ⊆ N such that either • there is a natural number M such that h(x, y) ≤ M holds for all x < y ∈ D, or; • there is an S-function g such that h(x, y) > g(x) holds for all x < y ∈ D. Note that in the above statement, only values h(x, y) for x < y are relevant. This will be the case in the other Ramsey-like statements below. Proof. By induction we construct a sequence of sets of natural numbers D 0 ⊇ D 1 ⊇ · · · . The set D 0 is defined to be N. For n > 0, the set D n is defined to be the infinite set D obtained by applying Ramsey's theorem to E = D n−1 \ {min D n−1 } with the binary property R = h(x, y) ≤ n. Two cases may happen. Either for some n, the value of h(x, y) is at most n for all x < y taken from D n . In this case the first disjunct in the conclusion of the lemma holds (with D = D n and M = n). Otherwise, for all n, the value h(x, y) is greater than n for all x < y taken from D n . In this case, we set D to be {min D i : i ∈ N} and g to satisfy g(min D i ) = i. The second conclusion of the lemma holds. Definition 5. 6. A separator is a pair (f, g) where f is aT -function and g is a T -function. The following lemma restates the previous one in terms of separators. Lemma 5. 7. For any h : N 2 → N there is an infinite set D ⊆ N and a separator (f, g) such that either; • for all x < y ∈ D, h(x, y) f (x), or; • for all x < y ∈ D, h(x, y) ≻ g(x). 
Lemma 5.8 below generalizes Lemma 5.7: instead of having a single element h(x, y) for each x < y, we have a set E x,y of vectors. The conclusion of the lemma describes which components of the input vectors from E x,y are bounded or unbounded simultaneously. When applied to complementing automata, each set h(x, y) will gather information relative to the possible runs of the automaton over the part of a word that begins in position x and ends in position y. Since the automaton is nondeterministic, h(x, y) contains not a single element, but a set of elements, one for each possible run. Since the automaton has many counters, and a counter may come with several events, elements of h(x, y) are vectors, with each coordinate corresponding to a single event. Before stating the lemma, we introduce some notation. Let C be a finite set, which will be used for coordinates in vectors. Given a vector v ∈ N C , a set of coordinates σ ⊆ C and a natural number M , the expression v ≼ σ M means that v(α) ≼ M holds for all coordinates α ∈ σ. We use a similar notation for ≻, i.e. v ≻ σ M means that v(α) ≻ M holds for all coordinates α ∈ σ. Note that ⋠ σ (the negation of ≼ σ ) is different from ≻ σ . The first says that ≻ {α} holds for some coordinate α ∈ σ, while the second says that ≻ {α} has to hold for all coordinates α ∈ σ. Lemma 5.8. Let C be a finite set, and for all natural numbers x < y, let E x,y ⊆ N C be a finite nonempty set of vectors. There is a family of coordinate sets Θ ⊆ P(C), an infinite set D ⊆ N and a separator (f, g) such that for all x < y in D, (1) for all v ∈ E x,y , there is a coordinate set σ ∈ Θ such that v ≼ σ f (x), and; (2) for all coordinate sets σ ∈ Θ, there is v ∈ E x,y such that v ≼ σ f (x) and v ≻ C\σ g(x). Proof. If we only take (1) into account, we can see Θ as a disjunction of conjunctions of boundedness constraints, i.e., a DNF formula. The property (1) says that each vector satisfies one of the disjuncts.
Keeping this intuition in mind, for two coordinate sets σ, σ ′ ⊆ C we write σ ⇒ σ ′ if σ ⊇ σ ′ . Given two families of coordinate sets Θ, Θ ′ ⊆ P(C), we write Θ ⇒ Θ ′ if for every disjunct σ ∈ Θ there is a disjunct σ ′ ∈ Θ ′ such that σ ⇒ σ ′ . This notation corresponds to the following property: if (1) holds for Θ and Θ ⇒ Θ ′ , then (1) also holds for Θ ′ . There is a minimum element for this preorder which is ∅ (the empty disjunction, equivalent to false), and a maximum element {∅} (a single empty conjunction, equivalent to true). The preorder ⇒ induces an equivalence relation ⇔, which corresponds to logical equivalence of DNF formulas. Let Θ ⊆ P(C) be a nonempty family of coordinate sets. For two natural numbers x < y, we define h Θ (x, y) ∈ N to be h Θ (x, y) = min {M : ∀ v ∈ E x,y . ∃ σ ∈ Θ. v ≼ σ M } . By applying Lemma 5.7 to the property h Θ (x, y), we obtain an infinite set D ⊆ N and a separator (f, g) such that either: (a) for all x < y ∈ D, h Θ (x, y) ≼ f (x), i.e., for all v ∈ E x,y there exists σ ∈ Θ such that v ≼ σ f (x), or; (b) for all x < y ∈ D, h Θ (x, y) ≻ g(x), i.e., there exists v ∈ E x,y such that for all σ ∈ Θ, v ⋠ σ g(x). Using compositionality of Ramsey-like statements, we can assume that the separator (f, g) works for all possible families Θ simultaneously. Note however, that the choice of item (a) or (b) may depend on the particular family Θ. Furthermore, remark that if to Θ corresponds property (a), and Θ ⇒ Θ ′ holds, then property (a) also corresponds to Θ ′ . By removing a finite number of elements of D, we can further assume that for any x in D we have f (x) ≼ g(x). Let Θ ⊆ P(C) be a family of coordinate sets that satisfies property (a), but is minimal in the sense that for every family Θ ′ satisfying (a), the implication Θ ⇒ Θ ′ holds. The family Θ exists since {∅} satisfies (a) for any separator (f, g) (it is even unique up to ⇔).
Without loss of generality, we assume that Θ does not contain two coordinate sets σ ′ ⊆ σ, since otherwise we can remove the larger coordinate set σ and still get an equivalent family. We will show that the properties (1) and (2) of the lemma hold for this family Θ. Property (1) directly comes from (a). Let us prove (2). Fix a coordinate set σ in Θ, as well as x < y in D. We need to show that for some v ∈ E x,y , both v ≼ σ f (x) and v ≻ C\σ g(x). Let Θ ′ be the family of coordinate sets obtained from Θ by removing σ and adding all coordinate sets of the form σ ∪ {β}, for β ranging over C \ σ. It is easy to see that Θ ′ ⇒ Θ, while Θ ⇔ Θ ′ does not hold. In particular, by minimality of Θ, the family Θ ′ cannot satisfy property (a), and therefore it must satisfy property (b). Let v be the vector of E x,y existentially introduced by property (b) applied to Θ ′ . We will show that this vector satisfies both v ≼ σ f (x) and v ≻ C\σ g(x). First, we show v ≼ σ f (x). Since the family Θ satisfies property (a), there must be a coordinate set σ ′ ∈ Θ such that v ≼ σ ′ f (x). We claim that σ ′ = σ. Indeed, otherwise σ ′ would belong to Θ ′ , and by (b) we would have v ⋠ σ ′ g(x). This is in contradiction with v ≼ σ ′ f (x) since f (x) ≼ g(x). Second, we show v ≻ C\σ g(x). Let then β be a coordinate in C \ σ. By definition of Θ ′ , the coordinate set σ ∪ {β} belongs to Θ ′ . By (b) for Θ ′ , we get v(α) ≻ g(x) for some α ∈ σ ∪ {β}. Since above we have shown v ≼ σ f (x) and f (x) ≼ g(x), it follows that α = β and v(β) ≻ g(x). Since this is true for all β ∈ C \ σ, we have v ≻ C\σ g(x). Descriptions. Recall the mapping val(ρ, α, c), which describes the number of increments that run ρ does on counter α in the event (α, c). By fixing a run ρ of type t, we can define val(ρ) as a mapping from the set of t-events events(t) to N, i.e., a vector of natural numbers.
This vector measures the number of increments in the run ρ for each event and each counter, keeping track of the numbers which are the worst for the acceptance condition. Note that the coordinates of val(ρ) depend on the type t of ρ (more precisely, on events(t)), and therefore the vectors val(ρ) and val(ρ ′ ) may not be directly comparable for runs ρ, ρ ′ of different types. The key property of val is that given a sequence of finite runs ρ 1 , ρ 2 , . . ., it is sufficient to know their respective types t 1 , t 2 , . . . and the vectors val(ρ 1 ), val(ρ 2 ), . . . in order to decide whether or not the infinite run ρ 1 ρ 2 . . . satisfies the acceptance condition. The descriptions defined below gather this information in a finite object. Definition 5.9 (description). A description is a set of pairs (t, γ) where t is a type and γ a set of t-events. The intuition is that γ is the set of events where the counter values are bad for the automaton's acceptance condition, i.e., small in the case when T = S and large in the case when T = B. A cut is any infinite set of natural numbers D, which are meant to be word positions. We also view a cut as a sequence of natural numbers, by ordering the numbers from D in increasing order. Given a cut D = {d 1 < d 2 < · · · } and an ω-word w ∈ Σ ω , we define w| D to be the infinite sequence of finite words obtained by cutting the word at all positions in D: w| D = w[d 1 , . . . , d 2 − 1], w[d 2 , . . . , d 3 − 1], . . . Definition 5.10 (strong description). Let w ∈ Σ ω be an input ω-word, τ a description and D ⊆ N a cut. We say that τ strongly describes w| D if there is a separator (f, g) such that for every x < y ∈ D the following conditions hold. • for every partial run ρ over w from position x to position y, there is a pair (t, γ) ∈ τ such that ρ has type t and val(ρ) ≼ γ f (x), and; • for every pair (t, γ) ∈ τ , there is a run ρ over w of type t from position x to position y such that val(ρ) ≼ γ f (x) and val(ρ) ≻ events(t)\γ g(x). Lemma 5.11.
For every ω-word w ∈ Σ ω , there is a description τ and a cut D such that τ strongly describes w| D . Proof. Using Ramsey's theorem and its compositionality, we find a cut D 0 and a set of types A such that for all x < y in D 0 the following conditions are equivalent: • there is a partial run of type t between positions x and y, and; • the type t belongs to A. The rest follows by applying Lemma 5.8 for all types t in A, and using compositionality of Ramsey-like statements. The problem with strong descriptions is that an ωT -automaton cannot directly check if a description strongly describes some cut word w| D . There are two reasons for this. First, we need to check the description for each x < y in D, and therefore deal with the overlap between the finite words w[x, . . . , y]. Second, the second condition of strong description involves guessing the T -function g, and this cannot be done using an ωT -automaton. Hence, we reduce the property to checking weak descriptions (see Definition 5.12) which are stable (see Definition 5.14). Lemma 5.15 shows that this approach makes sense. Definition 5.12 (weak description). Given a type t, a set of events γ and a natural number N , a finite run ρ is called consistent with (t, γ) under N if ρ has type t and val (ρ) ≼ γ N . Given a description τ , a run ρ is called consistent with τ under N if it is consistent with some (t, γ) ∈ τ under N . Let w ∈ Σ ω be an input ω-word, τ a description and D a cut. We say that τ weakly describes w| D if there is a T̄ -function f such that for all i ∈ N, every run ρ over the i-th word in w| D is consistent with τ under f (i). A weak description is a weakening of strong descriptions for two reasons: only runs between consecutive elements of the cut are considered, and only the first constraint of the strong description is kept. The following lemma shows that weak descriptions can be expressed by specifications. Lemma 5.13. For every weak description there is an equivalent specification.
In other words, for every description τ there is a specification τ ′ such that the following are equivalent for an ω-word w and a cut D: • the description τ weakly describes w| D , and; • there is a T̄ -function f such that for all i ∈ N, every run ρ over the i-th word in w| D satisfies τ ′ under f (i). Proof. All the conditions in the definition of weak descriptions can be expressed by a specification. In Definition 5.14, we present the notion of a stable description. The basic idea is to mimic the notion of idempotency used in the case of Büchi. Definition 5.14 (stable description). A description τ is stable if there is an S-function h such that for every N ∈ N, every finite run ρ that can be decomposed as ρ = ρ 1 · · · ρ k , where each ρ i is consistent with τ under N , is itself consistent with τ under h(N ). To illustrate the above definition, we present an example. In this particular example, the description will be stable for T = B, but it will not be stable for T = S. Let then T = B and consider an automaton with one counter α = 1, and one state q. We will show that for t = (q, {-}, q) (we write {-} instead of the mapping which to counter 1 associates {-}) and γ = {(1, -)}, the description τ = {(t, γ)} is stable. In the case of τ , the function h from the definition of stability will be the identity function h(N ) = N . To show stability of τ , consider any finite run ρ decomposed as ρ 1 · · · ρ k , with each ρ i consistent with τ under N . To show stability, we need to show that val (ρ) ≼ γ h(N ) = N . Since T = B, the relation ≼ is ≥, so we need to show that val (ρ) is at least N on all events in γ. Since γ has only one event (1, -), this boils down to proving val(ρ, 1, -) ≥ N . But this is simple, because val (ρ, 1, -) = val (ρ 1 , 1, -) + · · · + val (ρ k , 1, -) , and each ρ i satisfies val (ρ i , 1, -) ≥ N by assumption. Note that this reasoning would not go through with T = S, which is the reason why the above description is stable only in the case T = B. We now show that strong descriptions are necessarily stable. Lemma 5.15. If τ is a strong description of some w| D then τ is stable. Proof. We prove the statement for T = S first, and then for T = B. Case T = S. In this case ≼ is ≤. Let ρ 1 , . . .
, ρ k be runs such that ρ i is consistent with (t i , γ i ) ∈ τ under N for all i = 1, . . . , k. We need to show that the composition ρ = ρ 1 · · · ρ k of these runs is consistent with some (t, γ) ∈ τ under h(N ), for some S-function h independent of ρ 1 , . . . , ρ k . The function h will be a linear function, with the linear constant taken from the assumption that τ strongly describes w| D . Let then (f 0 , g 0 ) be the separator obtained by unraveling the definition of w| D being strongly described by τ . We can assume without loss of generality that f 0 is constant, equal to M . Since g 0 tends toward infinity, we can choose a natural number n such that g 0 (i) ≥ M holds for all i ≥ n. We now mimic each run ρ i by a similar run π i over the (i + n)-th word in the sequence w| D . By definition of a strong description, one can find for i = 1, . . . , k a run π i over the (i + n)-th word in w| D such that val (π i ) ≤ γ i M and val(π i ) > events(t i )\γ i g 0 (i + n). From the last inequality together with g 0 (i + n) ≥ M , we obtain: for all (α, c) ∈ events(t i ), val (π i , α, c) ≤ M implies (α, c) ∈ γ i . (♯) Let π be π 1 . . . π k . Since τ strongly describes w| D , the run π is consistent with (t, γ) under M for some (t, γ) in τ . Formally, val (π) ≤ γ M . (♯2) We will show that ρ is consistent with (t, γ) under N (M + 2), which establishes the stability of τ under the linear function h(N ) = N (M + 2). Let then (α, c) be an event in γ. We have to prove val (ρ, α, c) ≤ N (M + 2). This is done by a case distinction depending on c. c = - This means that π does not contain any reset of α. Let I ⊆ {1, . . . , k} be the set of indexes j for which π j contains an increment of counter α. By (♯2), and since (α, -) belongs to γ, the number of increments of counter α in π is at most M . In particular, the set I contains at most M indexes. Let now i ∈ I. Still by (♯2), the run π, and hence also π i , contains at most M increments of α.
By (♯), this means that (α, -) belongs to γ i . Hence, since ρ i is consistent with (t i , γ i ) under N , ρ i contains at most N increments of α. Furthermore, for i ∉ I, π i does not increment α, and π i has the same type as ρ i . Hence ρ i does not increment the counter α either. Summing up: at most M runs among ρ 1 , . . . , ρ k increment counter α, and those that do, do so at most N times. Overall there are at most M N increments of α in ρ, i.e., val (ρ, α, c) ≤ M N . c = -• Let j be the first index such that ρ j contains a reset of α. Using the previous case, but for k = j − 1, we infer that the prefix ρ 1 . . . ρ j−1 contains at most M N increments of α. By (♯2), there are at most M increments of α before the first reset in π. Since this first reset occurs in π j , the same holds for π j . By (♯) we obtain that (α, -•) belongs to γ j . Finally, since ρ j is consistent with (t j , γ j ) under N , there are at most N increments of α before the first reset in ρ j . Overall there are at most N (M + 1) increments of α before the first reset in ρ. c = •- As in the previous case. c = •-• As previously, but we need the bound N (M + 2) since both ends of the interval have to be considered. Case T = B. In this case ≼ is ≥. Let ρ = ρ 1 . . . ρ k be a run such that ρ i is consistent with (t i , γ i ) ∈ τ under N for all i = 1, . . . , k. We will show that ρ is consistent with τ under N , i.e. h is the identity function. Let then (f 0 , g 0 ) be the separator obtained by unraveling the definition of w| D being strongly described by τ . We can assume without loss of generality that g 0 is constant, equal to M . As previously, since f 0 tends toward infinity, we can choose a natural number n such that f 0 (n + 1) ≥ M (N + 2). As in the case of T = S, we mimic each run ρ i by a similar run π i over the (i + n)-th word in w| D . By definition of a strong description, one can find for i = 1, . . .
, k a run π i over the (i + n)-th word in w| D such that val (π i ) ≥ γ i f 0 (i + n) and val(π i ) < events(t i )\γ i M . From the last inequality we obtain: for all (α, c) ∈ events(t i ), val (π i , α, c) ≥ M implies (α, c) ∈ γ i . (♯) Let π be π 1 . . . π k . Since τ strongly describes w| D , the run π is consistent with (t, γ) under f 0 (n + 1) for some (t, γ) in τ . In combination with f 0 (n + 1) ≥ M (N + 2), we have: val (π) ≥ γ M (N + 2) . (♯2) We will show that ρ is consistent with (t, γ) under N , which establishes the stability of τ . For this, let (α, c) be an event in γ; we have to prove val (ρ, α, c) ≥ N . This is done by a case distinction depending on c. c = - In this case the run π does not contain any reset of α. Let I ⊆ {1, . . . , k} be the set of indexes j for which π j contains an increment of counter α. By (♯2) applied to (α, -), there are at least M N increments (actually, at least M (N + 2) increments, but we only need M N here) of α in π. Two cases can happen; either I contains at least N indexes, or there is some j ∈ I such that π j contains at least M increments of α. Consider first the case when I has at least N indexes. Since there is at least one increment of α in every ρ i for i ∈ I, there are at least N increments of α in ρ. Otherwise there is some j ∈ I such that π j contains at least M increments of α. By (♯), this means that (α, -) belongs to γ j . Finally, since ρ j is consistent with γ j under N , we obtain that ρ j , and by consequence also ρ, contains at least N increments of α. Overall there are at least N increments of α in ρ, i.e., val(ρ, α, c) ≥ N . c = -• Let j be the first index for which π j contains a reset of α. By (♯2), there are at least M (N + 1) increments (again, we do not need to use M (N + 2) here) of α before the first reset in π. Two cases are possible: either M N increments of α happen in π 1 . . . π j−1 , or π j contains M increments of α before the first reset of α.
In the first case we use the same argument as for c = - (note that we were just using a bound of M N in this case) over the run π 1 . . . π j−1 . We obtain that there are at least N increments of α in ρ 1 . . . ρ j−1 . Otherwise, there are M increments of α in π j before the first reset. Using (♯) we deduce that (α, -•) belongs to γ j . Since ρ j is consistent with (t j , γ j ) under N , we deduce that there are at least N increments of α in ρ j before the first reset. Overall there are at least N increments of α in ρ before the first reset, i.e., val (ρ, α, c) ≥ N . c = •- As in the previous case. c = •-• As previously (this time using the bound M (N + 2)). We will now show how to tell if a word is rejected by inspecting one of its descriptions. This notion of rejection will be parametrized by the set of states in which the cut may be reached. Note that rejecting loops only make sense for stable descriptions. Definition 5.16 (rejecting loop). We say a state q is a rejecting loop in a description τ , if for all (t, γ) ∈ τ where the source and target states of the type t are q, we have: Case T = S. There exists a counter α such that either: • t(α) = ∅ or t(α) = {-}, or; • (α, •-•) ∈ γ, or; • Both (α, -•) and (α, •-) belong to γ. Case T = B. There exists a counter α such that either: • t(α) = ∅ or t(α) = {-}, or; • (α, c) ∈ γ for some c ∈ {-•, •-•, •-}. The idea behind the above definition is as follows: if the description τ weakly describes a cut word w| D , and ρ is a run that assumes state q in every position from D, then the fact that q is a rejecting loop implies that ρ is not accepting. Since every infinite run can be decomposed into loops, this is the key information when looking for a witness of rejection. Given a description τ and a state p, we write pτ to denote the set of states q such that for some (t, γ) ∈ τ , the source state of t is p and the target state of t is q. This notation is extended to a set of states P τ in the natural manner.
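Definition 5.16 is a finite, directly checkable condition. Below is a Python sketch of the check; the encoding of a type as a (source, profiles, target) triple and of γ as a set of (counter, event) pairs is our own, not the paper's.

```python
# A type is encoded as (source, profiles, target), where profiles maps each
# counter to its profile: one of frozenset(), {'-'}, {'-•','•-'}, {'-•','•-•','•-'}.
# A description is a list of pairs (type, gamma), where gamma is a set of
# (counter, event) pairs.

def is_rejecting_loop(q, tau, counters, T):
    """Check whether state q is a rejecting loop in description tau
    (in the sense of Definition 5.16), for T = 'S' or T = 'B'."""
    for (src, profiles, tgt), gamma in tau:
        if src != q or tgt != q:
            continue                      # only pairs looping on q matter
        witnessed = False                 # does some counter satisfy the condition?
        for a in counters:
            prof = profiles[a]
            if prof <= {'-'}:             # t(a) is ∅ or {-}: the counter is never reset
                witnessed = True
                break
            if T == 'S':
                if (a, '•-•') in gamma or ((a, '-•') in gamma and (a, '•-') in gamma):
                    witnessed = True
                    break
            else:  # T == 'B'
                if any((a, c) in gamma for c in ('-•', '•-•', '•-')):
                    witnessed = True
                    break
        if not witnessed:
            return False                  # this (t, gamma) violates the condition
    return True
```

For instance, a loop whose only counter has profile {-•,•-•,•-} and an empty γ is not rejecting for T = S, but adding the event (α, •-•) to γ makes it rejecting.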
If D is a cut and w is an ω-word, then the D-prefix of w is defined to be the prefix of w that leads to the first position in D. Every ω-word w is decomposed into its D-prefix, and then the concatenation of words from w| D . Lemma 5.17. Let w| D be a cut ω-word strongly described by τ . Let P be the states reachable after reading the D-prefix of w. If w is rejected, then every state in P τ is a rejecting loop. Proof. We only do the proof for T = S, the case of T = B being similar. Let D = {d 1 , d 2 , . . .}. To obtain a contradiction, suppose that q ∈ P τ is not a rejecting loop. By definition, there must be a pair (t, γ) in τ -with q the source and target of t-such that for every counter α, none of the conditions from Definition 5.16 hold. That is, the value t(α) contains -• and •-, the event (α, •-•) is outside γ, and one of the events (α, -•), (α, •-) is outside γ. Without loss of generality, let us assume (α, -•) is outside γ. Let (f, g) be the separator appropriate to w| D obtained from the definition of strong descriptions. For each natural number i there is a run π i of type t between positions d i and d i+1 such that val(π i , α, c) > g(d i ) holds for all events (α, c) not in γ. Since t(α) contains -• and •-, the counter α is reset at least once in π i . Furthermore, since (α, •-•) is outside γ, every two consecutive resets of α in π i are separated by at least g(d i ) increments. Finally, since (α, -•) is outside γ, there are at least g(d i ) increments of α before the first reset in π i . Since this holds for every counter, we obtain that the run π 1 π 2 . . . satisfies the accepting condition. By assumption on q ∈ P τ , there is some state p ∈ P and a type in τ that has source state p and target state q. In particular, the state q can be reached in the second position of the cut D: by first reaching p after the D-prefix, and then going from p to q. From q in the second position of D, we can use the run π 2 π 3 . . . 
to get an accepting run over the word w. This contradicts our assumption that w was rejected by the automaton. The following lemma gives the converse of Lemma 5.17. The result is actually stronger than just the converse, since we use weaker assumptions (the description need only be weak and stable, which is true for every strong description, thanks to Lemma 5.15). Lemma 5.18. Let w be an ω-word, D ⊆ N a cut, and assume that the word sequence w| D is weakly described by a stable description τ . Let P be the states reachable after reading the D-prefix of w. If every state in P τ is a rejecting loop, then w is rejected. Proof. Let ρ be a run of the automaton over w. We will show that this run is not accepting. Let q i be the state used by ρ at position d i . For i < j, we denote by ρ i,j the subrun of ρ that starts in position d i and ends in position d j . Let t i,j be the type of this run. Consider now a run ρ i,j . This run can be decomposed as ρ i,j = ρ i,i+1 ρ i+1,i+2 · · · ρ j−1,j . Let f be the T̄ -function from the assumption that τ weakly describes w| D . By recalling the definition of weak descriptions, there must be sets of events γ i , . . . , γ j−1 such that (t i,i+1 , γ i ) ∈ τ, . . . , (t j−1,j , γ j−1 ) ∈ τ and val(ρ i,i+1 ) ≼ γ i f (d i ), . . . , val(ρ j−1,j ) ≼ γ j−1 f (d j−1 ). Let g be a function, which to every k ∈ N assigns the maximal, with respect to ≼, value among f (k), f (k + 1), . . . We claim that not only is g well defined, but it is also a T̄ -function. Indeed, when T̄ is B then there are finitely many values of f , so a maximal one exists, and g has also finitely many values. If, on the other hand, T̄ is S, then ≼ is ≥. In this case, g(k) is the least number among f (k), f (k + 1), . . . with respect to the standard ordering of the natural numbers. This number is well defined; furthermore, g is an S-function since f is an S-function. Since f (d k ) ≼ g(d i ) holds for any k ≥ i, we also have val(ρ i,i+1 ) ≼ γ i g(d i ), . . . , val(ρ j−1,j ) ≼ γ j−1 g(d i ).
By assumption on stability of τ , there is an S-function h such that for all i < j, there is (t i,j , γ i,j ) ∈ τ such that val(ρ i,j ) ≼ γ i,j h(g(d i )) . Using the Ramsey theorem in a standard way, we can assume without loss of generality that all the t i,j are equal to the same type t, and all γ i,j are equal to the same γ. If t(α) = ∅ or t(α) = {-} holds for some counter α, then this counter is reset only finitely often, so the run ρ is rejecting and we are done. Otherwise, t(α) = {-•, •-•, •-} holds for all counters α. For the rest of the proof, we only consider the case T = S, with B being treated in a similar way. The function g, by its definition, assumes some value M for all but finitely many arguments. Since t is a loop on a state of P τ , which by assumption is a rejecting loop, there exists a counter α ∈ Γ such that either (α, •-•) ∈ γ or both (α, -•) and (α, •-) belong to γ. In the first case, infinitely often there are at most h(M ) increments of α between two consecutive resets of α. In the second case, the same happens, but this time with at most 2h(M ) increments. In both cases the run is not accepting. We are now ready to establish the main lemma of this section. Lemma 5.19. Let A be an ωT -automaton. There exist regular languages L 1 , . . . , L n and stable descriptions τ 1 , . . . , τ n such that for every ω-word w the following items are equivalent: • A rejects w, and; • There is some i = 1, . . . , n and a cut D such that the D-prefix of w belongs to L i , and τ i weakly describes w| D . Proof. We need to construct a finite set of pairs (L i , τ i ). Each such pair (L P , τ ) comes from a set of states P and a description τ that is stable and such that every state in P τ is a rejecting loop. The language L P is the set of finite words v that give exactly states P (as far as reachability from the initial state is concerned) after being read by the automaton. The bottom-up implication is a direct application of Lemma 5.18, and therefore only the top-down implication remains. Let w be an ω-word rejected by A.
By Lemma 5.11, there exists a cut D and a strong (and therefore also weak) description τ of w| D . Let P be the set of states reached after reading the D-prefix of w. Clearly the D-prefix of w belongs to L P . By Lemma 5.15, the description τ is stable, and hence Lemma 5.17 can be applied to show that every state in P τ is a rejecting loop. The first part of Proposition 5.3 follows, since weak descriptions are captured by specifications thanks to Lemma 5.13. Finally, we need to show that our construction is effective: Lemma 5.20. It is decidable if a description is stable. Proof. Stability can be verified by a formula of monadic second-order logic over (N, ≤). 5.5. Verifying single events. We now begin the part of the complementation proof where we show that specifications can be recognized by automata. Our goal is as follows: given a specification τ , we want to construct a hierarchical sequence T̄ -automaton that accepts the word sequences that are consistent with τ . Recall that a specification is a positive boolean combination of two types of atomic conditions. In this section we concentrate solely on atomic specifications where the boolean combination consists of only one atomic condition of the form: (2) The run satisfies val (ρ, α, c) ≼ K. In particular, only one event (α, c) is involved in the specification. Furthermore, we also assume that the counter α is the lowest-ranking counter 1. However, the result is stated so that it can then be generalized to any specification. 5.5.1. Preliminaries and definitions. In this section, we define transition graphs, which are used to represent possible runs of an automaton, and then we present a decomposition result for transition graphs, which follows from a result of Simon on factorization forests [13]. Transition graph. For the rest of Section 5.5, we fix a finite set of states M .
The reason is that in Section 5.6, we will increase the state space of the complemented automaton inside an induction. The first concept we need is an explicit representation of the configuration graph of an automaton reading a word, called here an M -transition graph. Fix a finite set L of transition labels, and a finite set of states M . An M -transition graph G (labeled by L) of length k ∈ N is a directed edge-labeled graph, where the nodes—called configurations of the graph—are the pairs in M × {0, . . . , k}, and the edges are of the form ((q, i), l, (r, i + 1)) for q, r in M , 0 ≤ i < k and l ∈ L. The vertices of the graph are called configurations; their first component is called the state, while the second is called the position. The edges of the graph are called the transitions. We define the concatenation of M -transition graphs in a natural way. A partial run in a transition graph is just a path in the graph. A run in a transition graph is a path in the graph that begins in a configuration at the first position 0 and ends in a configuration at the last position. The label of a run is the sequence of labels on edges of the path. A transition graph of length k can also be seen as a word of length k over the alphabet P(M × L × M ). In this case, the concatenation of transition graphs coincides with the standard concatenation of words. When speaking of regular sets of transition graphs, we refer to this representation. Given an ωT -automaton A over the alphabet L with states Q, the product of an M -transition graph G with A—denoted G × A—is the (M × Q)-transition graph which has an l-labeled edge from ((p, q), i) to ((p ′ , q ′ ), i + 1) whenever G has an l-labeled edge from (p, i) to (p ′ , i + 1) and (q, l, q ′ ) is a transition of the automaton A. Furthermore, in the product graph only those configurations are kept that can be reached via a run that begins in a configuration where the second component of the state is the initial state of A.
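The product construction is straightforward to mechanize. The following Python sketch uses one concrete encoding of transition graphs (the word representation over P(M × L × M ) mentioned above); the names and the encoding are ours and serve only to illustrate the definition:

```python
# Sketch of the product G x A of an M-transition graph with an automaton.
# Encoding (ours): a transition graph of length k is a list of k "letters",
# each a set of edges (p, l, q) with p, q in M and l a label; the automaton
# is a set of transitions (q, l, q2) plus an initial state q0.

def product(graph, delta, q0):
    """(M x Q)-transition graph of G x A, keeping only configurations
    reachable from pairs whose automaton component is the initial state."""
    result = []
    # configurations reachable at position 0
    frontier = {(p, q0) for (p, _, _) in graph[0]} if graph else set()
    for letter in graph:
        new_letter, next_frontier = set(), set()
        for (p, l, p2) in letter:
            for (q, l2, q2) in delta:
                if l == l2 and (p, q) in frontier:
                    new_letter.add(((p, q), l, (p2, q2)))
                    next_frontier.add((p2, q2))
        result.append(new_letter)
        frontier = next_frontier
    return result
```

Pruning unreachable configurations on the fly, as above, matches the last clause of the definition: only configurations reachable from an initial pair survive.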
Factorization forest theorem of Simon, and decomposed transition graphs. From now on, we will only consider transition graphs where the first component of the labeling ranges over the actions of a hierarchical automaton over counters, i.e., the label alphabet is of the form Act × L, with Act = {ε, I 1 , R 1 , . . . , I n , R n } . The type of such a transition graph G is the set which contains those types t such that there is a run over G of type t; let S be the set of all types of transition graphs. This set S can be seen as a semigroup, when equipped with the product defined by: s 1 · s 2 = {t 1 · t 2 : t 1 ∈ s 1 , t 2 ∈ s 2 } . In the above, the concatenation of two types t 1 · t 2 is defined in the natural way: the source state of t 2 must agree with the target state of t 1 , and the counter operations are concatenated. (An example of how counter operations are concatenated is: {-•, •-} · {-•, •-} = {-•, •-•, •-} .) In particular, the mapping that assigns the type to a counter transition graph is a semigroup morphism, with transition graphs interpreted as words. Two M -transition graphs G, H are called equivalent if they have the same type. This equivalence is clearly a congruence of finite index with respect to concatenation. A transition graph G is idempotent if the concatenation GG is equivalent to G. We now define a complexity measure on graphs, which we call their Simon level. A transition graph G has Simon level 0 if it is of length 1. A transition graph G has Simon level at most k + 1 if it can be decomposed as a concatenation G = HG 1 · · · G n H ′ , where all the transition graphs H, G 1 , . . . , G n , H ′ have Simon level at most k and G 1 , . . . , G n are equivalent and idempotent. The following theorem, which in its original statement concerns semigroups, is presented here in a form adapted to our context. Theorem 5.21 (Simon [13], and [5] for the bound |S|). Given a finite set of states M , the Simon level of M -transition graphs is bounded by |S|. The Simon level is defined in terms of a nested decomposition of the graph into factors.
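The example {-•, •-} · {-•, •-} = {-•, •-•, •-} generalizes to a small case analysis on whether each side resets the counter at all. A Python sketch (the ASCII piece names "-*", "*-*", "*-" and the function are our own encoding, not the paper's notation):

```python
# One possible realization (our encoding) of the concatenation of
# per-counter operation summaries used in types.  "-*", "*-*" and "*-"
# stand for the pieces -., .-. and .- (increments before the first reset,
# between two resets, after the last reset); "-" marks a segment with no
# reset at all.

NO_RESET = frozenset({"-"})

def concat_ops(a, b):
    """Concatenate two operation summaries for a single counter."""
    if a == NO_RESET and b == NO_RESET:
        return NO_RESET                     # still no reset anywhere
    if a == NO_RESET:
        return b                            # a's increments join b's prefix
    if b == NO_RESET:
        return a                            # b's increments join a's suffix
    # both halves reset: a's suffix piece glued to b's prefix piece
    # yields a piece lying between two resets
    return frozenset({"-*", "*-*", "*-"})
```

Under this encoding the operation is associative, which is what makes the type assignment a semigroup morphism.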
We will sometimes need to refer explicitly to such decompositions; for this we will use symbols (, ), | and write G as (H|G 1 | . . . |G n |H ′ ), and so on recursively for the graphs H, G 1 , . . . , G n , H ′ . Theorem 5.21 shows that each graph admits a decomposition where the nesting of parentheses in this notation is bounded by |S|. We refer to the transition graphs written in this format as decomposed transition graphs. It is not difficult to see that the set of decomposed M -transition graphs is a regular language: a finite automaton can check that the symbols (, ), | indeed describe a Simon decomposition (thanks to Simon's theorem, the automaton does not need to count too many nested parentheses). A hint over a decomposed graph G is a subset of the positions labeled | in the decomposition of G. We call those positions hinted positions. (Note that a hint is relative not just to a transition graph G, but also to some decomposition of this transition graph.) For K ∈ N, a hint is said to be ≥ K if at the same nesting level of the decomposition, every two distinct hinted positions are separated by at least K non-hinted symbols |. Similarly, a hint is ≤ K if sequences of consecutive non-hinted positions at a given Simon level have length at most K − 1. Let G be a transition graph of length k (in the word representation) (with labels Act×L), along with a decomposition and a hint h. Let G 1 · · · G k ∈ (P(M × Act × L × M )) * be the interpretation of this graph as word. In the proofs below, it will be convenient to decorate the graph-by expanding the transition labels-so that the label of each run contains information about the decomposition and the hint. The decorated graph is denoted by (G, h) (we do not include a name for the decomposition in this notation, since we assume that the hint h also contains the information on the Simon decomposition). 
The graph (G, h) has the same configurations, states and transitions as G; only the labeling of the transitions changes. Instead of having a label in Act × L, as in the graph G, a transition in the graph (G, h) has a label in L S = Act × L × {⊥, 1, . . . , |S|} × {0, 1} . The first two coordinates are inherited from the graph G. The other coordinates are explained below. To understand the encoding, we need two properties. First, in the decomposition, there is exactly one symbol | between any two successive letters G i , G i+1 of the M -transition graph seen as a word. Second, the decomposition is entirely described by the nesting depths of those symbols with respect to the parentheses. Recall also that these nesting depths are bounded by |S|. For a transition in the graph G i , the coordinate {⊥, 1, . . . , |S|} stores the nesting depth (with respect to the parentheses) of the symbol | preceding the transition graph G i ; by the first property mentioned above, the undefined value ⊥ is used only for the first transition in transition graph. Finally, the coordinate {0, 1} says if the symbol | is included in the hint. Runs over decomposed graphs. As remarked above, the transition graph (G, h) only changes the labels of transitions in the graph G. Therefore with each run ρ in G we can associate the unique corresponding run over (G, h), which we denote by (ρ, h). By abuse of notation, we will sometimes write (ρ, h) ∈ L, where L ⊆ (L S ) * . The intended meaning is that the labeling of the run (ρ, h) belongs to the language L. Recall that we are only going to be verifying properties for the counter α = 1 in this section. We consider two runs over the same transition graph to be ≡ •--equivalent, if they agree before the last 1-reset (i.e. use the same transitions on all positions up to and including the last 1-reset). In the notation, the index shows where the runs can be different. Similarly, two runs are ≡ -• -equivalent if they agree after the first 1-reset. 
Two runs are ≡ •-• -equivalent if they agree before the first 1-reset, after the last 1-reset, and over all 1-resets (but do not necessarily agree between two successive 1-resets). Finally, two runs are ≡ - -equivalent if both increment counter 1 but do not reset it. The fundamental property of ≡ c -equivalence, for c ∈ {•-, -•, •-•, -}, is that two ≡ c -equivalent runs are indistinguishable in terms of resets and increments of counters greater or equal to 2, or in terms of events (1, c ′ ) with c ′ ≠ c. This means that as long as we are working inside a ≡ c -equivalence class, the values val (ρ, α, c ′ ) for (α, c ′ ) ≠ (1, c) are constant. (This also is the reason why we only work with counter 1 in a hierarchical automaton.) The key lemmas in this section are Lemmas 5.23 and 5.24. These talk about complementing S-automata and B-automata respectively. They both follow the same structure, which can be uniformly expressed in the following lemma, using the order ⊴ (which is ≤ for complementing S-automata and ≥ for complementing B-automata; ⊵ denotes the reverse order): Lemma 5.22. Let c be one of -•, •-•, •-, or -. There are two strongly unbounded functions f and g, and a regular language L c ⊆ (L S ) * such that for every natural number K, every run ρ over every M -transition graph labeled by L whose type contains the event (1, c) satisfies: • (correctness) if a hint h in the graph is ⊴ K and every run π ≡ c ρ satisfies (π, h) ∈ L c , then val (ρ, 1, c) ⊴ f (K), and; • (completeness) if a hint h in the graph is ⊵ g(K) and val (ρ, 1, c) ⊴ K then (ρ, h) belongs to L c . We would like to underline here that the regular language is a regular language of finite words in the usual sense, i.e., no counters are involved. The above lemma says that the language L c "approximates" the runs ρ satisfying val (ρ, 1, c) ⊴ K. This "approximation" is given by the two functions f and g. There is however a dissymmetry in this statement. The completeness clause states that if a run has a bad value on event (1, c), i.e.
it satisfies val (ρ, 1, c) ⊴ K, then it is detected by L c for all sufficiently good hints. On the other hand, in the correctness clause, all ≡ c -equivalent runs must be detected by L c in order to say that the value of ρ is bad on event (1, c). The reason for the weaker correctness clause is that we will not be able to check the value on event (1, c) for every run; we will only do it for runs of a special simplified form. And the simplification process happens to transform each run into a ≡ c -equivalent one. The proof of this lemma differs significantly depending on whether T = S or T = B. The two cases correspond to Lemmas 5.23 and 5.24, which are instantiations of Lemma 5.22. 5.5.2. Case of complementing an ωS-automaton. We consider first the case of complementing an S-automaton. Therefore, the order is ≤. To aid reading, below we restate Lemma 5.22 for the case when T = S. Lemma 5.23. Let c be one of -•, •-•, •-, or -. There are two strongly unbounded functions f and g, and a regular language L c ⊆ (L S ) * such that for every natural number K, every run ρ over every M -transition graph labeled by L whose type contains the event (1, c) satisfies: • (correctness) if a hint h in the graph is ≤ K and every run π ≡ c ρ satisfies (π, h) ∈ L c , then val (ρ, 1, c) ≤ f (K), and; • (completeness) if a hint h in the graph is ≥ g(K) and val (ρ, 1, c) ≤ K then (ρ, h) belongs to L c . Slightly ahead of time, we remark that g will be the identity function, while f will be a polynomial, whose degree is the maximal Simon level of M -transition graphs, taken from Theorem 5.21. Before proving the lemma, we would like to give some intuition about the language L c . The idea is that we want to capture the runs which do few increments on counter 1. However, this "few" cannot be encoded in the state space of the automaton, since it can be arbitrarily large. That is why we use the hint.
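For concreteness, the two width conditions on hints defined in Section 5.5.1 are easy to check level by level. A small sketch, under our own encoding of the decomposition (a list of the | symbols with their nesting level and hint mark), assuming separation is counted among bars of the same nesting level:

```python
from collections import defaultdict

# Checking the two width conditions on hints.  Encoding (ours): a
# decomposition is given by its "|" symbols in left-to-right order,
# each as a pair (nesting_level, hinted).

def _by_level(bars):
    levels = defaultdict(list)
    for level, hinted in bars:
        levels[level].append(hinted)
    return levels.values()

def hint_is_geq(bars, K):
    """True iff on every level, two distinct hinted bars are separated
    by at least K non-hinted bars of that level."""
    for seq in _by_level(bars):
        gap, seen = 0, False
        for hinted in seq:
            if hinted:
                if seen and gap < K:
                    return False
                seen, gap = True, 0
            else:
                gap += 1
    return True

def hint_is_leq(bars, K):
    """True iff maximal runs of consecutive non-hinted bars of a level
    have length at most K - 1."""
    for seq in _by_level(bars):
        run = 0
        for hinted in seq:
            run = 0 if hinted else run + 1
            if run > K - 1:
                return False
    return True
```

On this reading, a hint that is ≥ K behaves like a slow clock and a hint that is ≤ K like a quick one, matching the intuition below.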
One can think of the hint as a clock: if the hint is ≤ K, then the clock ticks quickly, and if the hint is ≥ K, then the clock ticks slowly. The language L c looks at a run and compares it to the clock. For the sake of the explanation, let us consider a piece of a run without resets of counter 1: we want to estimate if a lot of increments of counter 1 are done (at least K) or not (at most f (K)) by comparing it to a suitable clock (in the first case a slow clock, in the second case a quick one). The first case is when the counter is incremented in every position between two ticks of the clock; then the value of the counter concerned is considered 'big', since at least K increments are done if the ticks are ≥ K. For the second case, when there are few increments, we have a more involved argument that uses the idempotents from the Simon decomposition. Proof. The language L c is defined by induction on the Simon level of the transition graph. We construct a language L k c which has the stated property for all M -transition graphs of Simon level at most k. Since there is a bound on the Simon level, the result follows. For k = 0, the construction is straightforward, since the transition graphs are of length 1 and there is a finite number of possible runs to be considered. We now show how to define the language L k+1 c for runs in transition graphs of Simon level k + 1, based on the languages for runs in transition graphs of Simon level up to k. Consider a decomposed M -transition graph G = (H|G 1 | . . . |G n |H ′ ) . of Simon level k + 1. Let h be a hint for this decomposition, which we decompose into sub-hints (h 0 | · · · |h n+1 ) for the graphs H, G 1 , . . . , G n , H ′ . In the proof, instead of writing (ρ, h) ∈ L k+1 c , we write that ρ is c-captured. Let ρ be a run over G. The run ρ can also be decomposed into subruns (ρ 0 |ρ 1 | . . . |ρ n+1 ). Each of these runs ρ 0 , . . . , ρ n+1 is over a transition graph whose Simon level is at most k. 
By abuse of notation, we will talk about a subrun ρ i being c-captured, the intended meaning being that (ρ i , h i ) belongs to the appropriate language L k ′ c , with k ′ ≤ k being the Simon level of G i (or H if i = 0, or H ′ if i = n + 1). We now proceed to define the language L k+1 c . In other words, we need to say when ρ is c-captured. We only do the case of c = •-•, which is the most complex situation. The idea is that a run is •-•-captured if there are two consecutive resets between which the run does few increments. We define ρ to be •-•-captured if either: (1) some ρ i is •-•-captured, or; (2) for some i < j, the subrun ρ i is •--captured, the run ρ j is -•-captured, each of ρ i+1 , . . . , ρ j−1 is --captured, and one of the following holds: (a) there are m ≤ m ′ in {i, . . . , j} such that: (i) one of ρ m , ρ m+1 , . . . , ρ m ′ does not increment counter 1; and (ii) the source state of ρ m and the target state of ρ m ′ are the same state q, and G 1 admits a run from q to q that increments counter 1, or; (b) between any two hinted (by hints on level k + 1) positions in {i, . . . , j}, at least one of the runs ρ m does not increment counter 1. It is clear that this definition corresponds to a regular property L k+1 •-• of the sequence of labels in a hinted runs (once the hints are provided, the statement above is first-order definable). We try to give an intuitive description of the above conditions. The general idea is that a run gets •-•-captured if it does few increments, at least relatively to the size of the hint. The first reason why a run may do few increments between some two resets, is that it does it inside one of the component transition graphs of smaller Simon level. This is captured by condition 1. The second condition is more complicated. The idea behind 2(a) is that the run is-in a certain sense-suboptimal and can be converted into a ≡ •-• -equivalent one that does "more" increments. 
Then the more optimal run can be shown—using conditions 1 and 2(b)—to do few increments, which implies that the original suboptimal run also did few increments. We now proceed to show that the properties defined above satisfy the completeness and correctness conditions in the statement of the lemma. Completeness. For this, we set g(K) = K. Let ρ be a run such that val (ρ, 1, •-•) ≤ K and let h be a hint over G that is ≥ K. We have to show that ρ is •-•-captured. Consider a minimal subrun ρ i . . . ρ j that resets counter 1 twice and satisfies val (ρ i . . . ρ j , 1, •-•) ≤ K . In particular, the counter 1 is reset in the subruns ρ i and ρ j . If i = j, this means that val (ρ i , 1, •-•) ≤ K, and hence ρ i is •-•-captured on a level below k + 1 by induction hypothesis. We conclude with item 1. Otherwise i < j, and by minimality of the subrun: val (ρ i , 1, •-) ≤ K , val (ρ i+1 , 1, -) ≤ K , · · · , val (ρ j−1 , 1, -) ≤ K , and val (ρ j , 1, -•) ≤ K . By induction hypothesis, we obtain the header part of item 2. Since the run ρ i . . . ρ j contains less than K increments of 1, no more than K runs among ρ i , . . . , ρ j can increment counter 1. Since the hint h is ≥ K, we get item (b). Correctness. Let f ′ be the strongly unbounded function obtained by the induction hypothesis for Simon level k. We set f (K) = (2K|M | + 2)f ′ (K) . Assume now that the hint h is ≤ K and take a run ρ such that every run π ≡ •-• ρ is •-•-captured. We need to show that val (ρ, 1, •-•) ≤ f (K) . (5.1) As before, we decompose the run ρ into (ρ 0 | . . . |ρ n+1 ). We are going to first transform ρ into a new ≡ •-• -equivalent run π which, intuitively, is more likely to have many increments on counter 1. We will then show that the new run π satisfies inequality (5.1); moreover we will show that this inequality can then be transferred back to ρ. We begin by describing the transformation of ρ into π. This transformation is decomposed into two stages. In the first stage, which is called the local transformation, we replace the subruns ρ 0 , . . .
, ρ n+1 with new equivalent ones of the same type. Each such replacement step works as follows. We take some c = -•, •-•, •-, -and i = 0, . . . , n + 1. If there is a subrun π i ≡ c ρ i that is not c-captured, then we replace ρ i with π i . (We want the local transformation to keep the ≡ •-• -equivalence class, so we do not modify subruns before the first or after the last reset of 1 in ρ.) The local transformation consists of applying the replacement steps as long as possible. This process terminates, since the replacement steps for different c's work on different parts of the subrun and each step decreases the number of captured subruns. In the second stage, which is called the global transformation, we add increments on counter 1 to some subruns. The idea is that at the end of the global transformation, a run does not satisfy condition 2(a). Just as the local transformation, it consists of applying a replacement step as long as possible. The replacement step works as follows. We try to find a subrun ρ m . . . ρ m ′ as in item 2(a). By assumption 2(a) and since all the graphs G i 's are equivalent to G 1 , we can find new subruns π m , . . . , π m ′ in G m , . . . , G m ′ respectively, which increment counter 1 without resetting it (that is, of type -) and go from q to q. We use these runs instead of ρ m . . . ρ m ′ . The iteration of this replacement step terminates, since each time we add new subruns with increments. Neither the local nor the global transformation change the ≡ •-• -equivalence class of the run. Let π be a run obtained from ρ by applying first the local and then the global transformation. (Since the global transformation is nondeterministic, there may be more than one such run.) This run cannot satisfy 2(a), since the global transformation could still be applied. Since π is ≡ •-• -equivalent to ρ, it must be •-•-captured by assumption on ρ. There are two possible reasons: either because of 1, or because of 2(b). 
We will now do a case analysis on the reasons why this happens. In each case we will conclude that the original run ρ satisfies (5.1). As before, we decompose π into (π 0 | · · · |π n+1 ). Assume now item 1 holds for π, i.e., some subrun π i is •-•-captured for some i. We also know that any run π ′ i ≡ •-• π i would also be •-•-captured, since otherwise π i would be replaced in the local transformation process (the global transformation does not touch subruns with resets on counter 1). In particular, the induction hypothesis gives val(π i , 1, •-•) ≤ K . Moreover, the runs ρ i and π i agree on the part between the first and last reset of counter 1. This is because neither of the transformation processes touched this part. Indeed, the first local transformation process never modified ρ i for c = •-• (since otherwise it would cease being captured), while the global process only modifies subruns without resets of counter 1. This gives the desired (5.1), since val (ρ, 1, •-•) ≤ val (ρ i , 1, •-•) = val (π i , 1, •-•) ≤ f ′ (K) ≤ f (K) . Otherwise, π satisfies item 2(b). Let us fix i, j as in 2(b). Let I ⊆ {i + 1, . . . , j − 1} be the set of those indexes l where π l does at least one increment on counter 1. A maximal contiguous (i.e. containing consecutive numbers) subset of I is called an inc-segment. According to case 2(b), an inc-segment cannot contain two distinct hinted positions, its size is therefore at most twice the maximal width K of h. Assume now that there are more than |M | inc-segments. Then two inc-segments contain runs with the same source state, say state q, at respective positions l and l ′ with l < l ′ . This means that there is a run that goes from q to q in some graph G l+1 · · · G l ′ and does at least one increment but no resets on counter 1. Using idempotency and the equivalence of all the graphs G 1 , . . . , G n , this implies that the graph G l ′ admits such a run. 
Since ρ l ′ contains no increments of 1 by definition of an inc-segment, this means that 2(a) holds; a contradiction. Consequently there are at most |M | inc-segments, each of size at most 2K. It follows that at most 2K|M | subruns among π i , . . . , π j contain an increment of the counter 1. Let us come back to the original run ρ. As in the case of item 1, we use the induction hypothesis to show that val (ρ i , 1, •-) = val (π i , 1, •-) ≤ f ′ (K) , and val (ρ j , 1, -•) = val (π j , 1, -•) ≤ f ′ (K) . Since the global transformation only adds increments on counter 1, we use the remarks from the previous paragraph to conclude that there are at most 2K|M | subruns among ρ i+1 , . . . , ρ j−1 that increment counter 1. Furthermore, all runs ≡ - -equivalent to one of the ρ i+1 , · · · , ρ j−1 are - -captured, since otherwise the local transformation would be applied, and then one of the runs π i+1 , · · · , π j−1 would not be captured. Therefore, we can use the induction hypothesis to show that val (ρ i+1 , 1, -), · · · , val (ρ j−1 , 1, -) ≤ f ′ (K) . All this together witnesses the expected val (ρ, 1, •-•) ≤ (2K|M | + 2)f ′ (K) . 5.5.3. Case of complementing an ωB-automaton. We now turn to Lemma 5.24, which instantiates Lemma 5.22 for T = B; the order is therefore ≥. Completeness. Let g ′ be the strictly increasing function obtained by induction hypothesis for Simon level k. We set g(K) to be min g ′ ( √ K), ( √ K − 2)/(|M | + 1) . Assume now that the hint h is ≤ g(K) and that the run ρ satisfies val (ρ, 1, •-•) ≥ K. We want to show that ρ is •-•-captured. Let ρ i . . . ρ j be a minimal part of ρ for which val (ρ i . . . ρ j , 1, •-•) ≥ K . The general idea is very simple. There are two possible cases: either one of the subruns ρ i , . . . , ρ j does more than √ K increments on counter 1, or there are at least √ K subruns among ρ i , . . . , ρ j that increment counter 1. In either case the run ρ will be •-•-captured. In the first case, we use the induction hypothesis and g ′ ( √ K) ≥ g(K) to obtain item 1 in the definition of being •-•-captured. The more difficult case is the second one.
We will study the subruns ρ i+1 , . . . , ρ j−1 that do not reset counter 1. As in the previous lemma, we consider the set I ⊆ {i + 1, . . . , j − 1} of those indexes l where ρ l does at least one increment on counter 1. By our assumption, I contains at least √ K − 2 indexes (we may have lost 2 because of ρ i and ρ j ). A maximal contiguous subset of I is called an inc-segment. There are two possible cases. Either there are few (at most |M |) inc-segments, in which case one of the inc-segments must perform many increments and the run ρ is •-•-captured by item 2(b), or there are many inc-segments, in which case the run ρ is •-•-captured by item 2(a). The details are spelled out below. Consider first the case when there are at least |M | + 1 inc-segments. In this case, we can find two inc-segments that begin with indexes, respectively, m ′ > m > 1, such that the states q m−1 and q m ′ −1 are the same. Since inc-segments are maximal, the index m − 1 does not belong to I and hence the run ρ m−1 does not increment counter 1 and goes from state q m−2 to state q m−1 . Since the transition graphs G m−1 and G 1 are equivalent, there is such a run in G 1 , too. Since the run ρ m−1 · · · ρ m ′ −1 goes from q m−2 to q m ′ −1 = q m−1 , we obtain item 2(a) and thus ρ is •-•-captured. We are left with the case when there are at most |M | + 1 inc-segments. Recall that I contains at least √ K − 2 elements. Since there are at most |M | + 1 inc-segments, at least one of the inc-segments must have size at least ( √ K − 2)/(|M | + 1) . But by our assumption on the hint h, this inc-segment must contain two hinted positions, and thus ρ is •-•-captured by item 2(b). Correctness. We set f (K) = K. Assume that the hint h is ≥ K and consider a run ρ such that every run π ≡ •-• ρ is •-•-captured. We need to show that val (ρ, 1, •-•) ≥ f (K) = K .
(5.2) We proceed as in the previous lemma: we first transform the run ρ into an ≡ c -equivalent run π which is more likely to have fewer increments of counter 1. We then show that the run π satisfies property (5.2), which can then also be transferred back to ρ. As in the previous lemma, there are two stages of the transformation: a local one, and a global one. The local transformation works just the same as the local transformation in the previous lemma: if we can replace some subrun ρ i by an equivalent one that is not captured, then we do so. The global transformation makes sure that 2(a) is no longer satisfied. It works as follows. Assume that item 2(a) is satisfied, and let m and m ′ be defined accordingly. Since G 1 is idempotent and all the G's are equivalent, the transition graph G m+1 . . . G m ′ is equivalent to G 1 . It follows that we can find a run with neither increments nor resets that can be plugged in place of ρ m+1 . . . ρ m ′ . The new run obtained is ≡ c -equivalent to the original one and the value val (ρ, 1, •-•) is diminished during this process. We iterate this transformation until no more such replacements can be applied (this obviously terminates in fewer iterations than the number of increments of counter 1 in the original run). Consider now a run π obtained from ρ by first applying the local and then the global transformation. Since π ≡ c ρ holds, our hypothesis says that π is •-•-captured. Since π does not satisfy item 2(a) by construction, it must satisfy either item 1 or item 2(b). In case of item 1, we do the same reasoning as in the previous lemma and use the induction hypothesis to conclude that (5.2) holds. The remaining case is item 2(b), when there are two distinct positions m < m ′ where each of the runs π m , . . . , π m ′ all increment counter 1. Since the global transformation process only removed subruns that increment counter 1, this means that all the runs ρ m , . . . , ρ m ′ also increment counter 1. 
But since the hint was ≥ K, we have m ′ − m ≥ K and we conclude with the desired (5.2). 5.6. Verifying a specification. In this section, we conclude the proof of the "Moreover..." part of Proposition 5.2 (and therefore also the proof of Theorem 5.1). Recall that in the previous section, we did the proof only for atomic specifications, which talk about a single event. The purpose of this section is to generalize those results to all specifications, where positive boolean combinations of atomic specifications are involved. Given a specification τ , we want to construct a sequence T -automaton A τ that verifies if a sequence of words v 1 , v 2 , . . . satisfies: (*) For some T -function f , in every word v j every run satisfies τ under f (j). We will actually prove the above result in a slightly more general form, where the specification τ can be a generalized specification. The generalization is twofold. First, a generalized specification can describe runs in arbitrary transition graphs, and not just those that describe runs of a counter automaton. This is a generalization since transitions in a transition graph contain not only the counter actions, but also some additional labels (recall that labels in a transition graph are of the form Act × L). Second, in a generalized specification, atomic specifications of the form "the run has type t" can be replaced by more powerful atomic specifications of the form "the run, when treated as a sequence of labels in Act × L, belongs to a regular language L ⊆ (Act × L) * ". This more general form will be convenient in the induction proof. In our construction we will speak quantitatively of runs of counter automata. Consider a sequence counter automaton (we do not specify its acceptance condition) and a run ρ of this automaton over a finite word u. The only requirement on this run is that it starts in an initial state, ends in a final state, and is consistent with the transition function.
Given such a run ρ and a natural number K, we say that it is (≥ K)-accepting (respectively, (≤ K)-accepting) if for each counter, any two distinct resets of this counter are separated by at least K increments of it (respectively, at most K). The link with the acceptance condition of B- and S-sequence automata is obvious: a run sequence ρ 1 , ρ 2 , . . . is accepting for a B-automaton if there exists a natural number K such that every run ρ i is (≤ K)-accepting. Similarly, this run sequence is accepting for an S-automaton if there exists a strongly unbounded function f such that every run ρ i is (≥ f (i))-accepting. The heart of this section is the following lemma, which is established by repeated use of Lemma 5.22. Lemma 5.25. Given a set of states M and a generalized specification τ over M , there exist two strongly unbounded functions f, g, and a counter automaton A τ that reads finite M -transition graphs, such that for any M -transition graph G, • (correctness) if there is a (⊴ K)-accepting run ρ of A τ over G then all runs over G satisfy τ under f (K), and; • (completeness) if all runs in G satisfy τ under K then there is a (⊴ g(K))-accepting run ρ of A τ over G. In other words, acceptance by A τ and being captured by the specification are asymptotically the same thing. Before establishing this lemma, we show how it completes the proof. We only do the case of T = S, the other case being done in a similar manner. We want to verify if a sequence of words v 1 , v 2 , . . . satisfies the property (*). Since a B-function is essentially a constant K and the corresponding order is ≤, this boils down to verifying that there is some K such that all runs in v 1 , v 2 , . . . satisfy τ under ≤ K. We take the automaton A τ from Lemma 5.25 and set the acceptance condition so that all of its counters are bounded.
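The (≥ K)- and (≤ K)-accepting conditions on individual runs are easy to check directly. A minimal sketch under an illustrative encoding of runs (the encoding and names are ours):

```python
# Sketch of the (>= K)- and (<= K)-accepting conditions on a single run.
# Encoding (ours): a run is a list of actions ('I', c) and ('R', c) for
# increments and resets of counter c; any other action is ignored.

def is_accepting(run, K, mode):
    """mode '>=': any two distinct resets of a counter are separated by at
    least K increments of that counter; mode '<=': by at most K."""
    since_reset = {}      # counter -> increments since its last reset
    has_reset = set()     # counters reset at least once so far
    for act in run:
        if not isinstance(act, tuple):
            continue
        kind, c = act
        if kind == 'I':
            since_reset[c] = since_reset.get(c, 0) + 1
        elif kind == 'R':
            if c in has_reset:            # a gap between two resets of c
                gap = since_reset.get(c, 0)
                if (mode == '>=' and gap < K) or (mode == '<=' and gap > K):
                    return False
            has_reset.add(c)
            since_reset[c] = 0
    return True
```

Note that, as in the definition, increments before the first reset of a counter are not constrained: only gaps between two distinct resets are inspected.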
The automaton A τ works over transition graphs, while property (*) talks about input words for the complemented automaton A, but this is not a problem: the automaton A τ can be modified so that it treats an input letter as the appropriate transition graph, taken from the transition function of A. We claim that this is the desired automaton for property (*). Indeed, if the automaton A τ accepts a sequence v 1 , v 2 . . . then its counters never exceed some value K. But then by Lemma 5.25, all runs of A in all words v j satisfy τ under ≤ g(K). Conversely, if all runs of A in all words v j satisfy τ under ≤ K, then by Lemma 5.25, the automaton A τ has an accepting run where the counters never exceed the value f (K). Proof of Lemma 5.25. Let us fix a generalized specification τ , for which we want to construct the automaton A τ of Lemma 5.25. The proof is by induction on the number of pairs (α, c) such that the atomic specification val (ρ, α, c) ⊴ K appears in τ . If there are no such pairs, satisfying τ under K does not depend anymore on K and can be checked by a standard finite automaton, without counters. Up to a renumbering of counters, we assume that the minimum counter appearing in the generalized specification is 1, and we set c to be such that val (ρ, 1, c) ⊴ K appears in τ . Our objective is to get rid of all occurrences of val (ρ, 1, c) ⊴ K in τ . Given as input the M -transition graph G, the automaton A τ we are constructing works as follows. To aid reading, we present A τ as a cascade of nondeterministic automata, which is implemented using the standard Cartesian product construction. By induction hypothesis, every run in (G, h) satisfies τ ↑ under f (K), for some strongly unbounded function f . In particular, for every run ρ over G, (ρ, h) satisfies τ ↑ under f (K). Let ρ be a run over G. We will prove that ρ satisfies τ under f (K). Two cases can happen. • For all π ≡ c ρ, the run (π, h) belongs to L c . Then by Lemma 5.22, we get val (ρ, 1, c) ⊴ f (K).
Since the boolean combination in τ is positive, this means that ρ satisfies τ if it satisfies the specification τ′ obtained from τ by replacing each occurrence of val(ρ, 1, c) ⋈ K with true. However, by our assumption, the run (ρ, h) satisfies the specification τ↑ under ⋈ f(K), so also ρ satisfies τ′ under ⋈ f(K).
• Otherwise, for some π ≡c ρ, the run (π, h) does not belong to Lc. Since all runs (ρ, h) in (G, h) satisfy τ↑ under ⋈ f(K), this run π must satisfy the generalized specification τ′ obtained from τ by replacing every occurrence of val(ρ, 1, c) ⋈ K with false. But a property of the ≡c-equivalence is that no atomic specification different from val(ρ, 1, c) ⋈ K can see the difference between two ≡c-equivalent runs. In particular, ρ also satisfies τ′ under ⋈ f(K). By consequence, ρ satisfies τ under ⋈ f(K).

Future work

We conclude the paper with some open questions. The first set of questions concerns our proofs. In our proof of Theorem 3.3, the translation from non-hierarchical automata to hierarchical ones is very costly; in particular, it uses ωBS-regular expressions as an intermediate step. Is there a better and more direct construction? Second, our complementation proof is very complicated. In particular, our construction is non-elementary (it is elementary if the number of counters is fixed). It seems that a more efficient construction is possible, since the admittedly simpler, but certainly related, limitedness problem for nested distance desert automata is in PSPACE [10].

The second set of questions concerns the model presented in this paper. We provide here a raw list of such questions. Are ωBS-automata (resp. ωB-, resp. ωS-automata) equivalent to their deterministic form? (We expect a negative answer.) Is there a natural form of deterministic automata capturing ωBS-regularity (in the same way deterministic parity automata describe all ω-regular languages)? Are ωBS-automata equivalent to their alternating form?
(We expect a positive answer, at least for the class of ωB- and ωS-automata.) Does the number of counters induce a strict hierarchy of languages? (We expect a positive answer.) Similarly, does the nesting of B-exponents (resp. S-exponents) induce a strict hierarchy of languages? (We conjecture a positive answer.) Is there an algebraic model for ωBS-regular languages (resp. ωB-, ωS-regular languages), maybe extending ω-semigroups?

Other questions concern decidability. Is it decidable if an ωBS-regular language is ω-regular (resp. ωB-regular, resp. ωS-regular)? (We think that, at least, it is possible, given an ωB- or ωS-regular language, to decide whether it is ω-regular or not.) Are the hierarchies concerning the number of counters and the number of nestings of exponents decidable?

Other paths of research concern possible extensions of the model. As we have defined them, ωBS-regular languages are not closed under complementation. Can we find a larger class that is? What are the appropriate automata? Such an extension would lead to the decidability of the full logic MSOLB. Last, but not least, is it possible to extend our results to trees?

Fact 2.3. For every BS-regular language L, the following properties hold: (1) L + L = L; (2) L = { u_{f(0)}, u_{f(1)}, . . . : u ∈ L, f ∈ N∞, f ≤ |u|, f strongly unbounded }.

Figure 1: The diamond.

Corollary 3.4. The classes of ωBS-regular, ωB-regular and ωS-regular languages are closed under intersection.

- If K is clean(K) and M is ε + clean(M) then clean(K · M) = clean(K) + clean(K) · clean(M).
- The six other cases are similar.
• Case L = K + M. We have clean(K + M) = clean(K) + clean(M).

Lemma 3.13. BS-regular languages of word sequences are closed under external constraints. In other words, if L is a BS-regular language over Σ and a is a letter in Σ, then L[a : e] is a BS-regular language over Σ. B-regular languages are closed under external constraints of type 0, +, B.
S-regular languages are closed under external constraints of type 0, +, S.

Proof. Structural induction. The necessary steps are shown in Figure 2. For some of the equivalences, we use closure properties of Fact 2.3.

Figure 2: Elimination of external constraints.
a[a : e] = a for e ∈ {B, +}
(L + L′)[a : e] = L[a : e] + L′[a : e] for e ∈ {0, +, B, S}
(L · L′)[a : e] = L[a : e] · L′[a : e] for e ∈ {B, 0}
(L · L′)[a : e] = L[a : e] · L′ + L · L′[a : e] for e ∈ {+, S}
L*[a : 0] = (L[a : 0])*
L*[a : +] = L* · L[a : +] · L*
L*[a : B] = (L[a : B] · (L[a : 0])*)^B
L*[a : S] = L* · (L[a : +] · L*)^S + L* · L[a : S] · L*
L^B[a : 0] = (L[a : 0])^B
L^B[a : +] = L^B · L[a : +] · L^B
L^B[a : B] = (L[a : B])^B
L^B[a : S] = ε + L^B · L[a : S] · L^B
L^S[a : 0] = (L[a : 0])^S
L^S[a : +] = L^S · L[a : +] · L* + L* · L[a : +] · L^S
L^S[a : B] = (L[a : B] · (L[a : 0])*)^B · (L[a : 0])^S · (L[a : B] · (L[a : 0])*)^B
L^S[a : S] = L^S · L[a : S] · L* + L* · L[a : S] · L^S + (L* · L[a : +])^S · L*

Figure 3: Logical view of the diamond.

For this reason, we consider the following fragments of the logic MSOLB, which are not, in general, closed under complementation.

Definition 4.1. We distinguish the following syntactic subsets of MSOLB formulas:
• The B-formulas include all of MSOL and are closed under ∨, ∧, ∀, ∃ and A.
• The S-formulas include all of MSOL and are closed under ∨, ∧, ∀, ∃ and U.
• The BS-formulas include all B-formulas and S-formulas, and are closed under ∨, ∧, ∃ and U.

Fact 4.2. BS-formulas define exactly the ωBS-regular languages. Likewise for B-formulas and S-formulas, with the corresponding languages being ωB-regular and ωS-regular.

Proposition 4.4. Both ωS- and ωBS-regular languages are closed under U.

Let ρ1, ρ2, . . . be accepting runs of the automaton A over the words w[X1], w[X2], . . . Such runs exist by definition of the unbounding witness. A sequential witness X1, X2, . . .
is called a good witness if every two runs ρi and ρj agree on almost all positions.

Lemma 4.7. A word belongs to U(K) if and only if it admits a good witness.

Lemma 4.8. Words admitting a good witness can be recognized by a bounding automaton.

Theorem 5.4 (Ramsey). Given R ⊆ N² and an infinite set E ⊆ N, there is an infinite set D ⊆ E such that either every pair of distinct elements of D belongs to R, or no such pair does.

Note that the prefix w[0, . . . , d1 − 1] of w up to position d1 − 1 is not used here; as in the proof of Büchi, it is treated separately.

Definition 5.14 (stability). A description τ is stable if there is a T-function h such that for all natural numbers N and all finite runs ρ = ρ1 . . . ρk, if each ρi is consistent with τ under N for all i = 1 . . . k, then ρ is consistent with τ under h(N).

The type of a run is defined as in the previous section, i.e., a type gives the source and target states, as well as a mapping from the set of counters to {∅, {-}, {-•, •-}, {-•, •-•, •-}}. Given an M-transition graph, its type is the element of

S = P(M × {∅, {-}, {-•, •-}, {-•, •-•, •-}}^Γ × M),

Lemma 3.8. A hierarchical BS-automaton of counter type tt′ ∈ {B, S}* can be transformed into an equivalent one of counter type tBt′. A hierarchical BS-automaton of counter type tSt′ ∈ {B, S}* can be transformed into an equivalent one of counter type tSSt′.

From normal form to automata. Here we show that every expression in normal form can be compiled into a hierarchical automaton. Furthermore, if the expression is ωB-regular (respectively, ωS-regular), then the automaton has the appropriate counter type. We begin by showing how to expand counter types:

This work is licensed under the Creative Commons Attribution-NoDerivs License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/4.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.

(1) First, Aτ guesses a Simon decomposition of the transition graph G, as well as a hint h over this decomposition (the intention is that the hint h is ⋈ K; this will be verified in the next step). Note that neither the decomposition nor the hint need be unique. The automaton Aτ produces the relabeled transition graph (G, h), which is used as input of the next steps.
(2) In this step, the automaton Aτ checks that the hint is ⋈ K. This step requires (hierarchical) counters: the automaton follows the structure of the decomposition and uses a counter for each level to verify that the number of hinted positions is consistent with K.
(3) The automaton Aτ accepts the graph G if the M-transition graph (G, h) is accepted by Aτ↑, in which τ↑ is a new generalized specification constructed from τ as follows:
(a) The regular languages in the atomic specifications are adapted to the new larger transition alphabet (which is L_S instead of L). In other words, every atomic specification of the kind "the labeling of the run belongs to a regular language L" is replaced by "if the additional coordinates from L_S are removed, the labeling of the run belongs to L".
(b) Every atomic specification val(ρ, 1, c) ⋈ K is replaced by an atomic specification "the labeling of the run belongs to Lc", with Lc the regular language from Lemma 5.22. Note that this way we remove all atomic specifications that involve val(ρ, 1, c), and therefore the induction assumption can be applied to the generalized specification τ↑.

We now proceed to show that this automaton Aτ satisfies the statement of Lemma 5.25. This proof is by an induction parallel to the induction used in constructing Aτ.

Completeness.
Let g1 be the strongly unbounded function obtained from the completeness clause in Lemma 5.22, as applied to the event (1, c) that we are eliminating. By applying the induction assumption to the smaller specification τ↑, but a larger set of transition labels, we know there is some strongly unbounded function g2 such that if all runs in a graph (G, h) satisfy τ↑ under ⋈ K then there is a (⋈ g2(K))-accepting run of Aτ↑ over (G, h).

Let g be the coordinate-wise maximum (with respect to ⋈) of the functions g1, g2; this function is clearly strongly unbounded. We claim that the completeness clause of Lemma 5.25 holds for the function g. Indeed, let G be a graph where all runs satisfy τ under ⋈ K. We need to show that there is a run of Aτ that is (⋈ g(K))-accepting. The hint h guessed in step (1) of the construction is chosen so that every two hinted positions are separated by exactly g1(K) non-hinted positions at the same level. Since g is greater than g1 under ⋈, step (2) can be done in a run that is (⋈ g(K))-accepting. Thanks to the assumption on g1 and the definition of τ↑, if a run ρ over a transition graph G satisfies τ under ⋈ K, then the run (ρ, h) satisfies the specification τ↑ under ⋈ K. In particular, by the assumption that every run ρ over G satisfies τ under ⋈ K, we can use the completeness for Aτ↑ to infer that Aτ↑ has a run over (G, h) that is (⋈ g2(K))-accepting, which gives an accepting run of Aτ.

Correctness. The correctness proof essentially follows the same scheme, but requires more care, because of the stronger assumptions in the correctness clause of Lemma 5.22. Let G be an M-transition graph and let K be a natural number such that there is a (⋈ K)-accepting run of Aτ over G. This means that the hint h from step (2) is ⋈ K, and that (G, h) is (⋈ K)-accepted by Aτ↑.
5.3. Case of complementing an ωB-automaton. As when complementing an ωS-automaton, we restate the lemma with ⋈ expanded to its definition, which is ≥ in this case.

Lemma 5.24. Let c be one of -•, •-•, •- or -. There are two strongly unbounded functions f and g, and a regular language Lc ⊆ (L_S)* such that for every natural number K, every run ρ over every M-transition graph labeled by L whose type contains the event (1, c) satisfies:
• (correctness) if a hint h is ≥ K in the graph and every run π ≡c ρ satisfies (π, h) ∈ Lc, then val(ρ, 1, c) ≥ f(K), and;
• (completeness) if a hint h in the graph is ≤ g(K) and val(ρ, 1, c) ≥ K then (ρ, h) belongs to Lc.

We remark here that f will be the identity function, while g will be more or less a k-fold iteration of the square root, with k the maximal Simon level of M-transition graphs, taken from Theorem 5.21.

Proof. The structure of the proof follows the one in Lemma 5.23.
As in that lemma, the language Lc is defined via an induction on the Simon level of the transition graph. We also only treat the case of c = •-•; the other cases can be treated with the same technique. Consider a decomposed transition graph G = (H|G1|G2| . . . |Gn|H′) of Simon level k + 1, along with a corresponding hint h, decomposed as (h0| . . . |hn+1). Let ρ be a run over G, decomposed as (ρ0| . . . |ρn+1). As in Lemma 5.23, instead of saying that (ρ, h) belongs to the language L^{k+1}_c, we say that ρ is c-captured; likewise for the runs ρ0, . . . , ρn+1. We define ρ to be •-•-captured if either:
1. there are i ≤ j such that counter 1 is reset in both ρi and ρj but not in ρi+1, . . . , ρj−1, and either: ρi is •--captured, one of ρi+1, . . . , ρj−1 is --captured, or ρj is -•-captured;
2. there are i ≤ j such that counter 1 is reset in both ρi and ρj but not in ρi+1, . . . , ρj−1, and either (a) there are m ≤ m′ in {i . . . j − 1} such that there is an increment of 1 in one of ρm, . . . , ρm′, but G1 contains a path from the source state of ρm to the target state of ρm′ without any increment nor reset, or
The first reason for doing a lot of increments is that there are a lot of increments in a subrun inside one of the component transition graphs, which corresponds to item 1. The clause 2(b) is also self-explanatory: the number of increments is at least as big as the size of the hint. The first clause 2(a) is more involved. It says that the run is suboptimal in a sense, i.e., it can be converted into one that does fewer increments. We will then show-using 1 and 2(b)-that the run with fewer increments also does a lot of increments. j − 1} such that all the subruns ρ m , . . . , ρ m ′ increment counter 1. As in the case of T = S, this definition corresponds to a regular property L k+1 •-• of hinted runs. We now proceed to show that the above definition satisfies the completeness and corj − 1} such that all the sub- runs ρ m , . . . , ρ m ′ increment counter 1. As in the case of T = S, this definition corresponds to a regular property L k+1 •-• of hinted runs. The intuition is that a •-•-captured run does a lot of increments on counter 1, at least relative to the size of the hint. The first reason for doing a lot of increments is that there are a lot of increments in a subrun inside one of the component transition graphs, which corresponds to item 1. The clause 2(b) is also self-explanatory: the number of increments is at least as big as the size of the hint. The first clause 2(a) is more involved. It says that the run is suboptimal in a sense, i.e., it can be converted into one that does fewer increments. We will then show-using 1 and 2(b)-that the run with fewer increments also does a lot of increments. We now proceed to show that the above definition satisfies the completeness and cor- A bounding quantifier. M Bojańczyk, Computer Science Logic. 3210M. Bojańczyk. A bounding quantifier. In Computer Science Logic, volume 3210 of Lecture Notes in Computer Science, pages 41-55, 2004. Pushdown games with unboundedness and regular conditions. 
[2] A. Bouquet, O. Serre, and I. Walukiewicz. Pushdown games with unboundedness and regular conditions. In Foundations of Software Technology and Theoretical Computer Science, volume 2914 of Lecture Notes in Computer Science, pages 88-99, 2003.
[3] J. R. Büchi. Weak second-order arithmetic and finite automata. Z. Math. Logik Grundl. Math., 6:66-92, 1960.
[4] J. R. Büchi. On a decision method in restricted second-order arithmetic. In Proc. 1960 Int. Congr. for Logic, Methodology and Philosophy of Science, pages 1-11, 1962.
[5] T. Colcombet. Factorisation forests for infinite words, application to countable scattered linear orderings. In FCT, pages 226-237, 2007.
[6] T. Colcombet and C. Löding. Transforming structures by set interpretations. Logical Methods in Computer Science, 3(2), 2007.
[7] L. C. Eggan. Transition graphs and the star height of regular events. Michigan Math. J., 10:385-397, 1963.
[8] K. Hashiguchi. Algorithms for determining relative star height and star height. Inf. Comput., 78(2):124-169, 1988.
[9] B. R. Hodgson. Décidabilité par automate fini. Ann. Sci. Math. Québec, 7(3):39-57, 1983.
[10] D. Kirsten.
Distance desert automata and the star height problem. RAIRO, 3(39):455-509, 2005.
[11] F. Klaedtke and H. Ruess. Parikh automata and monadic second-order logics with linear cardinality constraints. Technical Report 177, Institute of Computer Science at Freiburg University, 2002.
[12] M. O. Rabin. Decidability of second-order theories and automata on infinite trees. Transactions of the AMS, 141:1-23, 1969.
[13] I. Simon. Factorization forests of finite height. Theoretical Computer Science, 72:65-94, 1990.
[14] W. Thomas. Languages, automata, and logic. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Language Theory, volume III, pages 389-455. Springer, 1997.
[15] W. Thomas. Complementation of Büchi automata revisited. In Jewels are forever. Contributions on Theoretical Computer Science in Honor of Arto Salomaa, pages 109-120. Springer, 1999.
[]
[ "A Geometric Approach to Harmonic Color Palette Design", "A Geometric Approach to Harmonic Color Palette Design" ]
[ "Carlos Lara-Alvarez \nCONACYT Research Fellow -Center for Mathematical Research CIMAT\nZacatecasMexico\n", "Tania Reyes \nCenter for Mathematical Research CIMAT\nSoftware Engineering Group\nZacatecasMexico\n" ]
[ "CONACYT Research Fellow -Center for Mathematical Research CIMAT\nZacatecasMexico", "Center for Mathematical Research CIMAT\nSoftware Engineering Group\nZacatecasMexico" ]
[ "Journal" ]
We address the problem of finding harmonic colors; this problem has many applications, from fashion to industrial design. The proposed approach makes it possible to evaluate and generate color combinations incrementally. In order to solve this problem, we consider that colors follow normal distributions in tone (chroma and lightness) and hue. The proposed approach relies on the CIE standard for representing colors and evaluating proximity. Other approaches to this problem use a set of rules. Experimental results show that lines with specific parameters (angles of inclination and distance from the reference point) are preferred over others, and that uncertain line patterns outperform non-linear patterns. *
10.1002/col.22292
[ "https://arxiv.org/pdf/1709.02252v1.pdf" ]
26,674,432
1709.02252
6e603641e09d71dc1797e4e5647d41f07a03e9d3
A Geometric Approach to Harmonic Color Palette Design
Journal XX, 1-5, 2013
Carlos Lara-Alvarez (CONACYT Research Fellow, Center for Mathematical Research CIMAT, Zacatecas, Mexico) and Tania Reyes (Center for Mathematical Research CIMAT, Software Engineering Group, Zacatecas, Mexico)
Keywords: Color Harmony, Uncertainty, Perception

We address the problem of finding harmonic colors; this problem has many applications, from fashion to industrial design. The proposed approach makes it possible to evaluate and generate color combinations incrementally. In order to solve this problem, we consider that colors follow normal distributions in tone (chroma and lightness) and hue. The proposed approach relies on the CIE standard for representing colors and evaluating proximity. Other approaches to this problem use a set of rules. Experimental results show that lines with specific parameters (angles of inclination and distance from the reference point) are preferred over others, and that uncertain line patterns outperform non-linear patterns.

Introduction
At first, clothing was created to protect people from weather and other uncomfortable conditions, but it has evolved to become a form of identity, expression, and creativity. Clothing is an essential part of social perception, i.e., how people form impressions of and make inferences about other people. For instance, several clothing categories have been created for different occasions, e.g., for formal events, work, and sport. In this sense, the color of clothing is a key factor that mainly drives these impressions. Colors seen together that produce a pleasing affective response are said to be in harmony [1]. Color harmony represents a satisfying balance or unity of colors. The study of harmonious colors has a long tradition in many areas, and it is attractive for artists, industrial applications, and scientists in different fields.
We envisage a color harmony model that integrates several divergent theories described in the literature. For this purpose, colors are represented in the CIE L*C*h (a.k.a. CIELCh) color scale. This scale is approximately uniform in a polar space with Lightness (L), chroma (C), and hue (h) being the height, radial, and azimuthal coordinates, respectively. Like other approaches, the proposed one considers that harmonious colors can be identified from the analysis of two components (Fig. 1): (i) the relationship of colors on the color wheel, and (ii) the relationship of colors in the plane of tones (chroma-lightness). This paper proposes a generalization of geometric approaches to color harmony using a normal distribution to describe color variability. Regarding the color relation in the hue circle, the proposed approach uses conventional techniques to find harmonious hue patterns. These patterns (adjacency, opposite, triad) are very common in the literature. On the other hand, the proposed approach considers that harmonious colors follow specific paths in the plane of tones; this paper explores the hypothesis that harmonic colors follow straight lines in the tone plane. We state that many theories can be explained by describing color variability in a proper manner; for instance, a simple straight line with uncertainty describes many patterns proposed by Matsuda [2]. The contribution of this paper is threefold:

Uncertainty of colors. A normal distribution is used to represent the uncertainty of colors both on the hue wheel and in the plane of tones. The formulation presented in [3] is the basis for obtaining an estimation of uncertainty. This representation allows us to compare the hue of colors for evaluating a given pattern (adjacency, opposite, or triad) in the color circle, or to compare colors for evaluating their distance in the tone plane; e.g., Fig. 2 illustrates a minimum distance constraint for colors with low chroma and lightness.

Neutral colors.
Neutral colors have small chroma; i.e., they are located in the vicinity of the white-black axis for any cylindrical representation; hence, they usually have inconsistencies in hue. Traditional approaches tackle this problem by treating near-neutral colors as a special case, whereas the proposed approach associates a large uncertainty with the hue of neutral colors, which allows them to be used without special treatment. Indeed, this approach is able to combine neutral colors with other colors.

Harmonization in the chroma-lightness plane. The normal distribution associated with each color in the plane of tones also allows evaluating whether the colors approximately follow some geometric pattern. This concept is illustrated in Fig. 3: first, a line joining the tones of the first two colors is calculated; if the following point (turquoise) is harmonic with the previous points, it must be in the vicinity of the line. The line primitive is simple but flexible enough to be adapted to different applications. Uncertainties of points can be propagated to estimate the line uncertainty; hence, the approach can be used incrementally, which is an advantage when selecting one color at a time. The approach also considers that certain line directions are preferred over others.

The rest of this paper is organized as follows: First, Section 2 reviews some of the color theories related to the geometric approach presented in this paper. Then, Section 3 explains the uncertainty representation and its use to discover hue and chroma-lightness patterns. Section 4 describes the experimental methods used to examine the proposed technique and discusses the results of the study. Finally, Section 5 concludes with a summary of the approach presented and the future areas of interest for this research work on color harmony.

Related Work
Plenty of theories of color harmony have been proposed in the literature, as summarized by Westland et al. [4].
The following paragraphs describe some of these theories and their relationship to the geometric approach presented here. Moon and Spencer [5] postulate that "harmonious combinations are obtained when: (i) the interval between any two colors is unambiguous, and (ii) colors are so chosen that the points representing them in a (metric color) space are related in a simple geometric manner". The basic aesthetic principle behind the first postulate is that "the observer should not be confused by the stimuli"; i.e., two harmonic colors must not be so close together that there is doubt as to whether they were intended to be identical or only similar. As shown in Fig. 4, Moon and Spencer's model considers that harmonic colors give sensations of identity, similarity, or contrast. The basic aesthetic principle behind the second postulate is that pleasure is experienced by the recognition of order. They suggest several patterns to order colors; e.g., points with constant hue must be on a straight line, circle, triangle, or rectangle. The relationship between harmony and color difference is described by Chuang et al. [6] as a cubic function in which it is possible to identify intervals related to the harmonic regions in the Moon and Spencer model; the study also reveals that the first ambiguity interval is the most critical, as the perceived color harmony there is substantially lower than that seen in the other intervals. In this paper, two close points in the chroma-lightness plane are considered not harmonious (Fig. 2), as it is difficult to recognize identical colors; e.g., the color of an object becomes different from one picture to another even with small changes in ambient illumination. On the contrary, Schloss et al. [7] show that pair preference (defined as how much an observer likes a given pair of colors as a Gestalt, or whole) decreases monotonically as a function of the difference in hue. In this paper, two hues with similar values are considered harmonious.
The uncertainty associated with color values presented here allows explicit representation of the degree of similarity between colors. Another widely used approach, proposed by Matsuda [2], suggests that harmonic colors in clothes follow certain patterns both on the hue wheel (Fig. 5a) and in the tone plane (Fig. 5b). We state that the majority of Matsuda's hue patterns, except Types L and T, can be represented by just three patterns (analog, opposite, and triad) with some degree of uncertainty; in addition, a satisfactory description of uncertainty allows combining the 'Type N' pattern with other types, e.g., combining neutral colors with lighter or brighter colors, which is a common practice in fashion and design. Analogously, all the value and chroma patterns proposed by Matsuda, except the maximum contrast pattern, follow to some extent a line. This idea also reproduces the second postulate of Moon and Spencer [5], that harmonic colors can be represented in a simple geometric manner. Moreover, we state that some of these linear patterns are preferred over others; for instance, Nemcsics [8] examined the variations in the extent of harmony content for compositions that follow lines in the Coloroid saturation-brightness plane. He found that a linear composition is more harmonic when its angle is between 30° and 155°, measured counterclockwise from the positive horizontal line perpendicular to the gray axis. When comparing vertical and horizontal lines for a given hue, he found that vertical lines with saturation between 25 and 45 and horizontal lines with brightness between 75 and 45 are felt to be more harmonic. Several authors propose to manage color preferences by representing uncertainty; for instance, Hsiao et al. [9] integrate fuzzy set theory to build an aesthetic-measure-based color design/selection system. Lu et al.
[10] categorize models to generate color harmony as: (i) empirical (defined by artists and designers), and (ii) learning-based methods (those that use machine learning); then they formulate a Bayesian color harmony model where empirical methods are used as the prior, and the patterns discovered from learning-based methods as the likelihood. One key advantage of the approach proposed here is that it does not require the outcomes of machine-learning techniques, and the uncertainty closely follows the widely accepted color difference formula [11].

Uncertain Harmonic Patterns
This section first describes the uncertainty estimation of each color, which is based on the color difference formula [11]; then each color is used to evaluate patterns in the hue circle and geometric patterns in the chroma-lightness plane.

Color Uncertainty
Hue variance. The standard deviation of hue is approximated as

σ_h = f(h, c) = k_h S_h + k_N γ²/(c² + γ²) = k_h (1 + 0.015 c H_T) + k_N γ²/(c² + γ²),   (1)

where k_h, k_N and γ are parameters, and H_T makes the hue space more uniform,

H_T = 1 − 0.17 cos(h − 30°) + 0.24 cos(2h) + 0.32 cos(3h + 6°) − 0.20 cos(4h − 65°).

The value of γ in the second term of (1) controls the set of colors perceived as 'neutral'. Henceforth, the hue of C is modeled as a normal distribution:

H ∼ N(h, σ_h²) = N(h, f²(h, c)).   (2)

Chroma-lightness covariance. The tone of a color, t(C), is modeled as a bivariate normal distribution:

T_j ∼ N([c, L]ᵀ, Σ_cL),   (3)

where

Σ_cL = diag(k_c² S_c², k_L² S_L²).   (4)

Hue Patterns
For simplicity, the proposed approach only considers three patterns in the chromatic circle (analog, opposite, and triad), as shown in Fig. 6. To evaluate them, hue values are standardized and compared. Let θ_m, θ_n ∈ [0°, 360°) be two angles, and let ∆θ(·, ·) be the central angle subtended by them.
The standardized $i$-th difference is defined as $\Delta_i\theta(\theta_m,\theta_n) = \Delta\theta\bigl(\alpha(\theta_m), \alpha(\theta_n)\bigr)/i$, where $\alpha(\theta) = i\,\bigl(\theta \bmod \tfrac{360}{i}\bigr)$; the values $i = 1, 2, 3$ are used to discover analog, opposite, and triad patterns, respectively. Let $H_j = N(h_j, \sigma_{h_j}^2)$ and $H_k = N(h_k, \sigma_{h_k}^2)$ be the distributions of the hue values of colors $\mathcal{C}_j$ and $\mathcal{C}_k$, respectively, and let $D_B(P, Q, d_{\mu_P\mu_Q})$ be the Bhattacharyya distance of two distributions, with $d_{\mu_P\mu_Q}$ a distance between $\mu_P$ and $\mu_Q$. Colors $\mathcal{C}_j, \mathcal{C}_k$ are harmonious on the chromatic circle for the $i$-th pattern if
$$D_B\bigl(H_j, H_k, \Delta_i\theta(h_j, h_k)\bigr) \le 3.\tag{5}$$
To obtain an incremental algorithm, it is necessary to fuse two hue distributions. Let $N(h_j, \sigma_{h_j}^2)$ and $N(h_k, \sigma_{h_k}^2)$ be two normal hue distributions that are harmonic; the estimated values of hue and chroma are obtained as
$$\hat h = \frac{v_j h_j + v_k h_k}{v_j + v_k}, \qquad \hat c = \frac{v_j c_j + v_k c_k}{v_j + v_k},$$
where $v_j = 1/\sigma_{h_j}^2$ and $v_k = 1/\sigma_{h_k}^2$. By using (2), the new hue follows a normal distribution:
$$\hat H = N\bigl(\hat h, f^2(\hat h, \hat c)\bigr).\tag{6}$$
Algorithm 1 is used to evaluate a list L of colors. The input list L can be sorted according to certain criteria, e.g., the area covered by each color. Although any harmony label can be assigned, the algorithm prioritizes 'analog' above 'opposite' and 'opposite' above 'triad' (line 2); as a consequence, the algorithm follows the Occam's razor principle from philosophy: if two explanations exist for an occurrence, the simpler one is usually better.

Chroma-Lightness Patterns

This section describes models for evaluating the spatial relations of colors in the chroma-lightness plane. As stated earlier, two harmonic colors must not have the same tone (Fig. 2). Instead of the color-difference formula [11], the Bhattacharyya distance $D_B$ for multivariate distributions is used here to evaluate the proximity of tones.
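The hue standardization and the precision-weighted fusion above can be sketched as follows; the division of the standardized difference by $i$ is our reading of the garbled definition, and the fusion assumes the two hues are not near the 0°/360° wrap-around.

```python
import math

def delta_theta(a, b):
    """Central angle between two hue angles, in [0, 180]."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def standardized_diff(theta_m, theta_n, i):
    """Standardized i-th hue difference: map both hues through
    alpha(theta) = i * (theta mod 360/i), compare, and rescale by i.
    i = 1, 2, 3 targets analog, opposite, and triad patterns."""
    alpha = lambda t: i * (t % (360.0 / i))
    return delta_theta(alpha(theta_m), alpha(theta_n)) / i

def fuse_hues(h_j, var_j, h_k, var_k):
    """Precision-weighted fusion of two harmonic hue estimates,
    as in the hat-h formula before Eq. (6). Simplification: treats
    hue as linear, i.e., assumes no wrap-around at 0/360 degrees."""
    v_j, v_k = 1.0 / var_j, 1.0 / var_k
    return (v_j * h_j + v_k * h_k) / (v_j + v_k)
```

For example, two hues 180° apart collapse to the same standardized value for i = 2, which is exactly what makes the opposite pattern detectable with a single threshold test.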
Two tones $t_i, t_j$ with distributions $T_i$ and $T_j$ are considered not ambiguous if
$$D_B(T_i, T_j) \ge 3.\tag{7}$$
This paper explores the hypothesis that harmonic colors follow a line in the chroma-lightness plane; hence, the following paragraphs introduce the estimation of the parameters and covariance of a line. A line $\ell$ in the plane is represented in its normal form, $\ell = (r, \varphi)$, where $r$ and $\varphi$ are the length and the angle of inclination of the normal, respectively. As shown in Fig. 7, the normal is the shortest segment between the line and the origin of a given coordinate frame. Points $t = (c, L)$ on the line $\ell = (r, \varphi)$ satisfy $r = c\cos\varphi + L\sin\varphi$. Given a set of points $D = \{t_i = (c_i, L_i) \mid i = 1, \ldots, n\}$, the maximum-likelihood line $\hat\ell = (\hat r, \hat\varphi)$, in the weighted least-squares sense [12], is calculated as
$$\hat r = \bar c\cos\hat\varphi + \bar L\sin\hat\varphi, \qquad \hat\varphi = \frac{1}{2}\arctan\frac{-2\sum_i w_i(\bar L - L_i)(\bar c - c_i)}{\sum_i w_i\bigl[(\bar L - L_i)^2 - (\bar c - c_i)^2\bigr]},\tag{8}$$
where $\bar c = (\sum_i w_i c_i)/(\sum_i w_i)$ and $\bar L = (\sum_i w_i L_i)/(\sum_i w_i)$ are the weighted means; as stated in [13], the individual weight $w_i$ of each measurement can be modeled as $w_i = (k_c S_c\, k_L S_L)^{-2}$. The covariance of the line is
$$C_\ell = \sum_{i=1}^{n} b_i\, C_i\, b_i^\top,\tag{9}$$
where $C_i$ is the covariance of the $i$-th point and $b_i$ is the Jacobian matrix
$$b_i = \begin{pmatrix} \partial\hat r/\partial c_i & \partial\hat r/\partial L_i \\ \partial\hat\varphi/\partial c_i & \partial\hat\varphi/\partial L_i \end{pmatrix}.$$
The distance of a tone $t = (c, L)$ to a given line $\ell = (\hat r, \hat\varphi)$ is $d_\perp = \hat r - c\cos\hat\varphi - L\sin\hat\varphi$; hence, the variance of this distance is
$$\sigma^2_{d_\perp} = J \begin{pmatrix} C_\ell & 0 \\ 0 & C_i \end{pmatrix} J^\top, \qquad J = \Bigl[\tfrac{\partial d_\perp}{\partial \hat r},\ \tfrac{\partial d_\perp}{\partial \hat\varphi},\ \tfrac{\partial d_\perp}{\partial c},\ \tfrac{\partial d_\perp}{\partial L}\Bigr].$$
Finally, the point is an inlier of $\ell$ if
$$|d_\perp| - 2\sqrt{\sigma^2_{d_\perp}} \le t_\ell.\tag{10}$$
Algorithm 2 discovers tone patterns in an incremental way. First, it calculates the tone distribution of each color in the sorted list L (line 1). The set T contains the harmonic tones; initially it only has the element $T_1$ (line 3). Then the rest of the colors in the list L are analyzed one by one in sequence (line 4).
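The line fit of Eq. (8) and the orthogonal distance $d_\perp$ can be sketched in a self-contained form; weights default to 1, and the paper's $w_i = (k_c S_c\, k_L S_L)^{-2}$ can be supplied through the `weights` argument.

```python
import math

def fit_line_polar(points, weights=None):
    """Weighted least-squares line (r, phi) in normal form, Eq. (8):
    points on the line satisfy r = c*cos(phi) + L*sin(phi)."""
    if weights is None:
        weights = [1.0] * len(points)
    W = sum(weights)
    c_bar = sum(w * c for w, (c, L) in zip(weights, points)) / W
    L_bar = sum(w * L for w, (c, L) in zip(weights, points)) / W
    num = -2.0 * sum(w * (L_bar - L) * (c_bar - c)
                     for w, (c, L) in zip(weights, points))
    den = sum(w * ((L_bar - L) ** 2 - (c_bar - c) ** 2)
              for w, (c, L) in zip(weights, points))
    phi = 0.5 * math.atan2(num, den)     # atan2 resolves the quadrant
    r = c_bar * math.cos(phi) + L_bar * math.sin(phi)
    return r, phi

def point_line_distance(c, L, r, phi):
    """Signed orthogonal distance d_perp = r - c*cos(phi) - L*sin(phi)."""
    return r - c * math.cos(phi) - L * math.sin(phi)
```

A constant-chroma set of tones yields $\hat\varphi = 0$ (normal along the chroma axis), while a constant-lightness set yields $\hat\varphi = 90°$; in both cases points on the fitted line have $d_\perp = 0$.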
Two or more harmonic tones must be sufficiently separated from each other (line 5) and must form a geometric line (line 7). Once a tone is accepted as harmonic, it is included in the list (line 8) and the parameters and covariance of the line are updated (lines 11 and 12).

Methods

Given a set of colors in the CIELCh model that follow the proposed uncertain geometric patterns, the purpose of this study is twofold: first, to assess the minimum conditions to obtain color harmony, in terms of hue patterns and the non-ambiguous condition (7). Once these conditions have been established, the second objective is to assess the effect of the line's parameters on color harmony and to compare colors that do or do not follow the proposed linear patterns.

Displaying color combinations. The circle shown in Fig. 8 was used to display combinations of three colors; the circle is placed on a neutral gray background. Although the proposed approach can be used for any number of colors, the study compares combinations of three colors because they are easier to generate and evaluate. Participants were asked to rate each combination on a 10-point Likert scale ranging from 'inharmonic' to 'harmonic'.

Subjects. Thirty volunteers participated in this study, fifty in each experiment (age range: 23-35 in Experiment 1; 21-38 in Experiment 2).

Minimum conditions (Experiment 1). This study was designed to assess the effect of the hue patterns and of the non-ambiguous condition (7) on harmony perception. Algorithm 3 was used to generate different color combinations by varying the parameters of the line (r, φ) within reasonable bounds.

Comparison of linear and non-linear patterns (Experiment 2). This study was designed to compare the harmony perception of combinations of colors that do or do not follow a linear pattern.
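The non-ambiguity test (7) used in Algorithm 2 needs a Bhattacharyya distance between tone distributions; for the diagonal covariances of (4), the general multivariate formula reduces to a simple per-axis form, sketched below under that diagonal assumption.

```python
import math

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two normals with diagonal
    covariances (per-axis variance lists). Used here for the
    non-ambiguity test (7): tones are distinct if D_B >= 3."""
    quad = 0.0      # Mahalanobis-like term
    log_term = 0.0  # covariance-mismatch term
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = 0.5 * (v1 + v2)                       # averaged variance
        quad += (m1 - m2) ** 2 / v
        log_term += math.log(v / math.sqrt(v1 * v2))
    return 0.125 * quad + 0.5 * log_term

def tones_unambiguous(t1, cov1, t2, cov2):
    """Condition (7): the two tones are sufficiently separated."""
    return bhattacharyya_diag(t1, cov1, t2, cov2) >= 3.0
```

Identical tone distributions give a distance of zero (maximally ambiguous), while well-separated tones quickly exceed the threshold of 3.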
Algorithm 3 was used to generate different color combinations by varying the parameters of the line (r, φ) within reasonable bounds. Lines were generated by setting k_c = k_L = 2. For each linear combination, we created a corresponding non-linear combination by simply moving one of the three colors. Combinations that do not satisfy the minimum conditions established in Experiment 1 were excluded from this experiment. The display order of linear and non-linear patterns was assigned randomly.

Algorithm 3 (continued), steps 4-6:
4 Determine the color point $\mathcal{C}_i$ in CIE L*c*h and its covariance $\Sigma_{cL}$: $\mathcal{C}_i$ is the closest point to the color $\mathcal{C}^*_i = (y_i, x_i, h_i)$, and $\Sigma_{cL}$ is calculated with (4). If the Mahalanobis distance between $N(t(\mathcal{C}_i), \Sigma_{cL})$ and $p_i$ is less than a threshold, $\mathcal{C}_i$ is included in the testing list L.
5 if |L| = k and ∀i ≠ j, $\mathcal{C}_i, \mathcal{C}_j$ ∈ L are unambiguous (7) then return L
6 return []

Results

Results of Experiment 1. A one-way between-subjects ANOVA was conducted to compare the effect of the hue patterns. There was a significant effect on color preferences at the p < 0.05 level for the four hue patterns [F(3, 582) = 5.231, p = 0.001]. Post-hoc comparisons using the Tukey HSD test indicated that the mean preference score for 'triad' (M = 6.81, SD = 1.14) was significantly different from 'analog' (M = 7.72, SD = 1.14) and 'incomplete triad' (M = 7.61, SD = 1.24); however, the 'opposite' pattern (M = 7.46, SD = 1.28) did not differ from the other patterns. There was an extremely significant difference in the preferences for combinations that satisfy the non-ambiguous condition (7.59 ± 1.2) in comparison to combinations that do not (6.07 ± 1.81), t(1026) = 16.18, p < 0.005.

Results of Experiment 2. There was an extremely significant difference in the preferences for combinations that follow a linear pattern (7.82 ± 1.036) and combinations that do not (6.02 ± 1.36), t(388) = 31.065, p < 0.005. Preferences according to the angle of inclination of the line are shown in Fig. 9.
Lower preferences were observed for lines at 90° in comparison to lines at other inclinations. A similar effect was observed for lines at φ = 60° and r < −25.

Conclusion and Future Work

The results show that the proposed uncertain harmonic patterns can generate harmonic combinations. The non-ambiguous condition (7) is mandatory to generate harmonic combinations. The results also show that linear patterns outperform other patterns. Color harmony depends on many intrinsic factors; e.g., a person may consider certain colors harmonious based on their culture, age, social status, etc. On the other hand, what is considered ugly or nice also depends on extrinsic factors such as climate, the type of emotion that the user wants to transmit, fashion, etc. The difference in the preference means for linear patterns found between Experiments 1 and 2 can be explained by these intrinsic (inhomogeneous groups) and extrinsic (a contrast effect caused by showing two similar patterns one after the other) factors. We are developing an app that helps people generate harmonic combinations for clothing; hence, more accurate information can be obtained from the users' preferences. Besides completing our theory, we are adapting a clothing search system to users' subjectivity.

Figure 1. Color harmony models should follow patterns both in the color wheel and in the tone plane. This paper studies harmonic colors in the tone (chroma-lightness) plane.

arXiv:1709.02252v1 [cs.CV] 4 Sep 2017

Figure 2. Color harmony in the tone plane: colors {C_1, C_2} are not harmonic because their distance is close to zero; on the other hand, {C_1, C_3} and {C_2, C_3} are harmonious.
Figure 3. Evaluating tone harmony: (a) the most vivid color (turquoise) is not harmonic with the other two because it does not follow a line; (b) three harmonic points can be described by a line, i.e., the turquoise point is in the vicinity of the previous line.

Figure 4. Difference between two colors in the chroma-lightness plane (for a given hue). Harmonic regions are interleaved with 'ambiguous' regions (gray).

Figure 5. Matsuda's harmonic templates in Munsell color space [2], defined (a) on the hue wheel and (b) in the value and chroma plane.

A color point $\mathcal{C} = (L, c, h)$ is given in CIE L*c*h coordinates, with lightness $L \in [0, 100]$, chroma $c \in [0, 100]$, and hue $h \in [0^\circ, 360^\circ)$. Henceforth, we refer to the hue of a color by $h$ and to the tone of a color $\mathcal{C} = (L, c, h)$ as $t(\mathcal{C}) = t = (L, c)$.

Figure 6. Simple patterns in the hue circle (shown from left to right): analog, colors are close to each other; opposite, colors are approximately 180° apart; and triad, colors are approximately 120° from each other.

Algorithm 1: Evaluating hue harmony
Data: L = [(L_1, c_1, h_1), ..., (L_N, c_N, h_N)]: a list of N colors.
Result: hue harmony label, l_H ∈ {0 → 'not harmonic', 1 → 'analog', 2 → 'opposite', 3 → 'triad'}
1 Calculate the hue distribution H_i using (2) for each color
2 for i ∈ {1, 2, 3} do
3   Ĥ ← N(h_1, σ²_{h_1})
4   for j ∈ {2, ..., N} do
5     if Ĥ and H_j are harmonious (Eq. 5) for pattern i then
6       Ĥ ← fuse Ĥ and H_j with (Eq. 6)

Figure 7. Line parameters in polar form. The shortest distance from the origin to the line is r = |OP|.

Algorithm 2: Evaluating tone harmony
Data: L = [(L_1, c_1, h_1), ..., (L_N, c_N, h_N)]: a list of N colors.
Result: tone harmony label, l_t ∈ {0 → 'not harmonic', 1 → 'point', 2 → 'line'}
1 Calculate the tone distribution T_j using (3) for each color
2 if N = 1 then return 1
3 T ← {T_1}
4 for j ∈ {2, ..., N} do
5   if ∃ T_i ∈ T ambiguous with T_j (Eq. 7) then ...
...
11 ℓ ← Estimate the line with the set T (Eq. 8)
12 C_ℓ ← Estimate the covariance of ℓ (Eq. 9)
13 return 2

Algorithm 3: Generate a test list of k colors that follow a given line in the tone plane
Data: (r, φ): parameters of the testing line ℓ; k: number of colors to be tested.
Result: a list of k colors for testing.
If the combination of colors is not found, then the testing list is empty.
1 if there are k points p_i = (x_i, y_i) on ℓ such that x_i ∈ [0, 100], y_i ∈ [0, 100], and ∀i ≠ j, d(p_i, p_j) ≥ 20 then
2   Draw a random value h ∈ [0, 360) from a uniform distribution. Draw a random hue pattern u from the discrete distribution P(U = 'analog') = 0.3, P(U = 'opposite') = 0.3, P(U = 'triad') = 0.1, P(U = 'incomplete triad') = 0.3.
3   Calculate the hue values h_i, i = 1, ..., k, using h and u. These hue values are randomly disturbed by a normal error with mean zero and standard deviation calculated from (1) with c_i = x_i.
4   ...

Figure 8. Circular pattern of three colors used to show combinations.

Figure 9. Average preferences for different inclinations φ. The reference point for lines is at c = 20, L = 60; angles are in degrees, measured counterclockwise from the positive horizontal axis.

1 CONACYT Research Fellow, Center for Mathematical Research (CIMAT), Zacatecas, Mexico
2 Software Engineering Group, Center for Mathematical Research (CIMAT), Zacatecas, Mexico
*Corresponding author.

References
[1] Burchett, K. E. Color harmony. Color Research & Application 2002;27:28-31.
[2] Matsuda, Y. Color design. Asakura Shoten 1995;2:10.
[3] Lindbloom, B. Delta E (CIE 1994). 1994.
[4] Westland, S., Laycock, K., Cheung, V., Henry, P., Mahyar, F. Colour harmony. JAIC - Journal of the International Colour Association 2012;1.
[5] Moon, P., Spencer, D. E. Geometric formulation of classical color harmony. JOSA 1944;34:46-59.
[6] Chuang, M.-C., Ou, L.-C., et al. Influence of a holistic color interval on color harmony. Color Research & Application 2001;26:29-39.
[7] Schloss, K. B., Palmer, S. E. Aesthetic response to color combinations: preference, harmony, and similarity. Attention, Perception, & Psychophysics 2011;73:551-571.
[8] Nemcsics, A. Experimental determination of laws of color harmony. Part 1: Harmony content of different scales with similar hue. Color Research & Application 2007;32:477-488.
[9] Hsiao, S.-W., Chiu, F.-Y., Hsu, H.-Y. A computer-assisted colour selection system based on aesthetic measure for colour harmony and fuzzy logic theory. Color Research & Application 2008;33:411-423.
[10] Lu, P., Peng, X., Li, R., Wang, X. Towards aesthetics of image: A Bayesian framework for color harmony modeling. Signal Processing: Image Communication 2015;39:487-498.
[11] Sharma, G., Wu, W., Dalal, E. N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application 2005;30:21-30.
[12] Draper, N. R., Smith, H. Applied Regression Analysis. Wiley, 3rd ed. 1998.
[13] Siegwart, R., Nourbakhsh, I. R., Scaramuzza, D. Introduction to Autonomous Mobile Robots. The MIT Press, 2nd ed. 2011.
title: Remarks on the pion-nucleon σ-term
author: Martin Hoferichter; Jacobo Ruiz de Elvira; Bastian Kubis; Ulf-G. Meißner
authoraffiliation: Institute for Nuclear Theory, University of Washington, Seattle, WA 98195-1550, USA (Hoferichter); Helmholtz-Institut für Strahlen- und Kernphysik (Theorie) and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany (Ruiz de Elvira, Kubis, Meißner); Institut für Kernphysik, Institute for Advanced Simulation, Jülich Center for Hadron Physics, JARA-FAME, JARA-HPC, Forschungszentrum Jülich, D-52425 Jülich, Germany (Meißner)
abstract: The pion-nucleon σ-term can be stringently constrained by the combination of analyticity, unitarity, and crossing symmetry with phenomenological information on the pion-nucleon scattering lengths. Recently, lattice calculations at the physical point have been reported that find lower values by about 3σ with respect to the phenomenological determination. We point out that a lattice measurement of the pion-nucleon scattering lengths could help resolve the situation by testing the values extracted from spectroscopy measurements in pionic atoms.
doi: 10.1016/j.physletb.2016.06.038
pdfurls: https://arxiv.org/pdf/1602.07688v2.pdf
corpusid: 9635035
arxivid: 1602.07688
pdfsha: 7c7f54f1e7bcaa1ca5ae3d16479c87837c17538b
Remarks on the pion-nucleon σ-term

29 Jun 2016

Martin Hoferichter (Institute for Nuclear Theory, University of Washington, Seattle, WA 98195-1550, USA), Jacobo Ruiz de Elvira, Bastian Kubis, Ulf-G. Meißner (Helmholtz-Institut für Strahlen- und Kernphysik (Theorie) and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany; Meißner also at Institut für Kernphysik, Institute for Advanced Simulation, Jülich Center for Hadron Physics, JARA-FAME, JARA-HPC, Forschungszentrum Jülich, D-52425 Jülich, Germany)

Keywords: Pion-baryon interactions; Dispersion relations; Chiral Lagrangians; Chiral symmetries
PACS: 13.75.Gx, 11.55.Fv, 12.39.Fe, 11.30.Rd

Introduction

The pion-nucleon (πN) σ-term σ_πN is a fundamental parameter of low-energy QCD. It measures the amount of the nucleon mass that originates from the up- and down-quarks, in contrast to the predominant contribution from the gluon fields, generated by means of the trace anomaly of the energy-momentum tensor.
A precise knowledge of the σ-term has become increasingly important over the last years, since it determines the scalar matrix elements $\langle N|m_q \bar q q|N\rangle$ for $q = u, d$ [1], which, in turn, are crucial for the interpretation of dark-matter direct-detection experiments [2-4] and of searches for lepton flavor violation in μ → e conversion in nuclei [5,6] in the scalar-current interaction channel. Despite its importance, the value of σ_πN has been under debate for decades. Phenomenologically, the σ-term can be extracted from πN scattering by means of a low-energy theorem that relates the scalar form factor of the nucleon evaluated at momentum transfer $t = 2M_\pi^2$ to an isoscalar πN amplitude at the Cheng-Dashen point [7,8], which unfortunately lies outside the region directly accessible to experiment. The necessary analytic continuation, performed in [9-11] based on the partial-wave analysis from [12,13], led to the classical value of σ_πN ∼ 45 MeV [10]. Within the same formalism, this result was later contested by a new partial-wave analysis [14] that implied a significantly larger value, σ_πN = (64 ± 8) MeV. Recently, a new formalism for the extraction of σ_πN has been suggested, relying on the machinery of Roy-Steiner equations [15-19], a framework designed in such a way as to maintain analyticity, unitarity, and crossing symmetry of the scattering amplitude within a partial-wave expansion.
One of the key results of this approach is a robust correlation between the σ-term and the S-wave scattering lengths,
$$\sigma_{\pi N} = (59.1 \pm 3.1)\,\text{MeV} + \sum_{I_s} c_{I_s}\bigl(a^{I_s} - \bar a^{I_s}\bigr),\tag{1}$$
$$c_{1/2} = 0.242\,\text{MeV} \times 10^3 M_\pi, \qquad c_{3/2} = 0.874\,\text{MeV} \times 10^3 M_\pi,$$
where the sum extends over the two s-channel isospin channels and $a^{I_s} - \bar a^{I_s}$ measures the deviation of the scattering lengths from their reference values extracted from pionic atoms,
$$\bar a^{1/2} = (169.8 \pm 2.0) \times 10^{-3} M_\pi^{-1}, \qquad \bar a^{3/2} = (-86.3 \pm 1.8) \times 10^{-3} M_\pi^{-1}.\tag{2}$$
In this way, the main drawback of the formalism from [9,10], the need for very precise input for a particular P-wave scattering volume, could be eliminated. In combination with the experimental constraints on the scattering lengths from pionic atoms, the low-energy theorem thus leads to a very stringent constraint [17],
$$\sigma_{\pi N} = (59.1 \pm 3.5)\,\text{MeV}.\tag{3}$$
Given that already 4.2 MeV of the increase originate from updated corrections to the low-energy theorem (thereof 3.0 MeV from the consideration of isospin-breaking effects), the net increase in the πN amplitude compared to [10] adds up to about 10 MeV, roughly half-way between [10] and [14]. While for a long time lattice calculations were hampered by large systematic uncertainties due to the extrapolation to physical quark masses, recently three calculations near or at the physical point appeared [20-22], with results collected in Table 1, about 3σ below the phenomenological value (3) (errors are added in quadrature). As we will argue in this Letter, this discrepancy should be taken very seriously, as it suggests that the lattice σ-terms are at odds with experimental data on pionic atoms. An analysis of flavor SU(3) breaking suggests a σ-term closer to the small values obtained on the lattice (cf.
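The linear relation (1) can be evaluated numerically; the following sketch (scattering lengths in units of $10^{-3} M_\pi^{-1}$, σ-term in MeV) reproduces the central value (3) at the pionic-atom reference point.

```python
# Sketch of the linear relation (1) between the S-wave scattering
# lengths and sigma_piN (central values only, no uncertainties).
C_COEFF = {"1/2": 0.242, "3/2": 0.874}   # MeV per 10^-3 / M_pi, Eq. (1)
A_REF = {"1/2": 169.8, "3/2": -86.3}     # pionic-atom reference values, Eq. (2)

def sigma_piN(a12, a32):
    """sigma_piN in MeV for scattering lengths in units of 10^-3 / M_pi."""
    return (59.1
            + C_COEFF["1/2"] * (a12 - A_REF["1/2"])
            + C_COEFF["3/2"] * (a32 - A_REF["3/2"]))
```

At the pionic-atom central values this returns 59.1 MeV, Eq. (3); smaller scattering lengths translate linearly into a smaller σ-term, which is the mechanism behind the lattice-phenomenology tension discussed in the text.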
the discussion in [23] and references therein): assuming violation of the OZI rule to be small, it should be not too far from the matrix element $\sigma_0 = \frac{m_u + m_d}{2}\,\langle N|\bar u u + \bar d d - 2\bar s s|N\rangle$, which can be related to the mass splitting in the baryon ground-state octet and is usually found to be of the order of 35 MeV [24,25]. However, significantly larger values have been obtained in the literature when including effects of the baryon decuplet explicitly in the loops, both in covariant and heavy-baryon approaches [26,27], making it unclear how well the perturbation series in the breaking of flavor SU(3) behaves, and the uncertainties difficult to quantify. A similar puzzle emerged recently in a lattice calculation of K → ππ [28], which quotes a value of the isospin-0 S-wave ππ phase shift at the kaon mass $\delta_0^0(M_K) = 23.8(4.9)(1.2)^\circ$, about 3σ smaller than the phenomenological result from ππ Roy equations [29,30]. A potential origin of this discrepancy could be that the strong ππ rescattering, known to be particularly pronounced in the isospin-0 S-wave, is not fully captured by the lattice calculation, given that the result for the isospin-2 phase shift $\delta_0^2(M_K) = -11.6(2.5)(1.2)^\circ$ is much closer to phenomenology. This explanation could be tested by a fully dynamical calculation of the corresponding scattering length $a_0^0$, which is predicted very accurately from the combination of Roy equations and Chiral Perturbation Theory [31], a prediction in excellent agreement with the available experimental information (see [23] for a review of the present situation). In the same way as $a_0^0$ provides a common ground where lattice, experiment, and dispersion theory can meet to resolve the discrepancy in the ππ case, a lattice measurement of the πN scattering lengths could help clarify the σ-term puzzle. In this Letter we present our arguments why we believe this to be the case.
πN scattering lengths from pionic atoms

The linear relation (1) between σ_πN and the πN scattering lengths proves to be a very stable prediction of πN Roy-Steiner equations: while derived as a linear expansion around the central values (2), we checked the potential influence of higher terms by additional calculations on a grid around $\bar a^{I_s}$ with a maximal extension of twice the standard deviation quoted in (2) in each direction, with the result that quadratic terms are entirely negligible in this extended region as well. The numbers for $c_{I_s}$ given in (1) refer to this extended fit and therefore differ slightly from the ones given in [17]. An additional check of the formalism is provided by the fact that if the scattering lengths from [13] are inserted, the lower σ-term from [10] is recovered. Irrespective of the details of the uncertainty estimates, this behavior clearly demonstrates that the origin of the upward shift in the central value is to be attributed to the updated input for the scattering lengths. The latter exercise also shows that the solution linearized around the pionic-atom reference point remains approximately valid in a much larger range of parameter space: for the scattering lengths from [13] it differs from the full solution by less than 2 MeV.

In pionic atoms, electromagnetic bound states of a π⁻ and a proton/deuteron core, strong interactions leave imprints in the level spectrum that are accessible to spectroscopy measurements [32]. In pionic hydrogen (πH) the ground state is shifted compared to its position in pure QED and acquires a finite width due to the decay to π⁰n (and nγ) final states. The corresponding observables are therefore sensitive to the π⁻p → π⁻p and π⁻p → π⁰n scattering channels.
Although the width in pionic deuterium (πD) is dominated by π⁻d → nn, the level shift measures the isoscalar combination of π⁻p → π⁻p and π⁻n → π⁻n, once few-body corrections are applied, and thus provides a third constraint on threshold πN physics. Experimentally, the level shifts have been measured with high accuracy at PSI [33,34], and a preliminary value for the πH width is reported in [35]. At this level of accuracy a consistent treatment of isospin-breaking [36-39] and few-body [40-46] corrections becomes paramount if all three constraints are to be combined in a global analysis of πH and πD [47,48].

In the isospin limit, the πN amplitude can be decomposed into two independent structures,
$$T^{ba} = \delta^{ba}\, T^+ + \frac{1}{2}\bigl[\tau^b, \tau^a\bigr]\, T^-,\tag{4}$$
where a and b refer to the isospin labels of the incoming and outgoing pion, τ^a to the isospin Pauli matrices, and T^± to the isoscalar/isovector amplitudes. Their threshold values are related to the S-wave scattering lengths by
$$T^\pm\big|_{\text{threshold}} = 4\pi\Bigl(1 + \frac{M_\pi}{m_N}\Bigr)\, a^\pm,\tag{5}$$
where M_π and m_N are the masses of the pion and nucleon, and the spinors have been normalized to 1. Conventionally, the combined analysis of pionic-atom data is not performed in terms of a⁺, but using [49]
$$\tilde a^+ = a^+ + \frac{1}{4\pi\bigl(1 + \frac{M_\pi}{m_N}\bigr)}\biggl[\frac{4\Delta_\pi}{F_\pi^2}\, c_1 - 2e^2 f_1\biggr]\tag{6}$$
instead, where $\Delta_\pi = M_\pi^2 - M_{\pi^0}^2$ denotes the pion mass difference, F_π the pion decay constant, e = √(4πα), and c₁ and f₁ are low-energy constants that yield a universal shift in a⁺ away from its isospin limit that cannot be resolved from pionic atoms alone. Moreover, we have defined the particle masses in the isospin limit to coincide with the charged-particle masses.
The central values for the s-channel isospin scattering lengths (2) have been obtained from such a combined analysis as follows [19]: first, we subtracted the contributions from virtual photons to avoid the presence of photon cuts, and second, we identified the $I_s = 1/2, 3/2$ channels from the physical π±p amplitudes,
$$a^{1/2} = \frac{1}{2}\bigl(3a_{\pi^- p \to \pi^- p} - a_{\pi^+ p \to \pi^+ p}\bigr), \qquad a^{3/2} = a_{\pi^+ p \to \pi^+ p}.\tag{7}$$
The main motivation for this convention is that $a_{\pi^- p \to \pi^- p}$ can be extracted directly from the πH level shift without any further corrections, while $a_{\pi^+ p \to \pi^+ p}$ can be reconstructed from $a_{\pi^- p \to \pi^- p}$ and $\tilde a^+$ with minimal sensitivity to a⁻ and thus to the preliminary value for the πH width. Of course, this convention has to be reflected in the precise form of the low-energy theorem for σ_πN [17,19], with uncertainties included in the error given in (1). To illustrate the tension between the phenomenological and lattice determinations of σ_πN, it is most convenient to revert this change of basis by means of
$$a^{1/2} = \tilde a^+ + 2a^- + \Delta a^{1/2}, \qquad a^{3/2} = \tilde a^+ - a^- + \Delta a^{3/2},\tag{8}$$
where
$$\Delta a^{1/2} = (-2.8 \pm 1.3) \times 10^{-3} M_\pi^{-1}, \qquad \Delta a^{3/2} = (-2.6 \pm 0.7) \times 10^{-3} M_\pi^{-1}.\tag{9}$$
The linear relation (1) can then be recast as
$$\bigl(c_{1/2} + c_{3/2}\bigr)\,\tilde a^+ + \bigl(2c_{1/2} - c_{3/2}\bigr)\, a^- = C(\sigma_{\pi N}),\tag{10}$$
where the right-hand side is given by
$$C(\sigma_{\pi N}) = \sigma_{\pi N} - (59.1 \pm 3.1)\,\text{MeV} - \sum_{I_s} c_{I_s}\bigl(\Delta a^{I_s} - \bar a^{I_s}\bigr) = \sigma_{\pi N} - (90.5 \pm 3.1)\,\text{MeV}.\tag{11}$$
The corresponding bands in the $\tilde a^+$-$a^-$ plane are shown in Fig. 1. As expected from the isoscalar nature of the σ-term, the constraint from the lattice results is largely orthogonal to a⁻, although non-linear effects in the Roy-Steiner solution generate some residual dependence on a⁻ as well. The overall picture reflects the core of the discrepancy between lattice and phenomenology: while the three bands from the pionic-atom measurements nicely overlap, the lattice σ-terms favor a considerably smaller value of $\tilde a^+$.¹ The exact significance again depends on whether and how the three lattice measurements are combined, but in any case the fact remains that there is a disagreement with pionic-atom phenomenology at around the 3σ level.
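The constant in (11) follows from (1), (2), and (9) by simple arithmetic; a quick numerical check:

```python
# Numerical check of Eq. (11): combining the coefficients of (1),
# the reference values (2), and the shifts (9) yields the 90.5 MeV
# constant, i.e., C(sigma_piN) = sigma_piN - 90.5 MeV.
c = {"1/2": 0.242, "3/2": 0.874}       # MeV per 10^-3 / M_pi, Eq. (1)
abar = {"1/2": 169.8, "3/2": -86.3}    # reference values, Eq. (2)
da = {"1/2": -2.8, "3/2": -2.6}        # shifts, Eq. (9)

const_MeV = 59.1 + sum(c[i] * (da[i] - abar[i]) for i in c)
```

Evaluating the sum gives const_MeV ≈ 90.5, consistent with the quoted value.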
1 The exact significance again depends on if and how the three lattice measurements are combined, but in any case the fact remains that there is a disagreement with pionic-atom phenomenology around the 3σ level. 1 In this context, it is also worth stressing that changing a 3/2 alone, where most of the difference between pionic atoms and [13] resides, is not an option: in doing so, one would infer, via the Goldberger-Miyazawa-Oehme sum rule [50] that is sensitive to the isovector combination a − , a value of the πN coupling constant significantly too large compared to extractions from both nucleon-nucleon [51,52] and pion-nucleon scattering [53]; see [48]. and from lattice σ-terms (orange: BMW [20], violet: χQCD [21], brown: ETMC [22]). Lattice calculation of the πN scattering lengths The discussion in the previous section makes it apparent that another independent determination of the πN scattering lengths would imply additional information on σ πN that could help isolate the origin of the σ-term puzzle. Since a lattice calculation of a I s would proceed directly in the isospin limit, we reformulate the relation (1) accordingly. First, we assume that the isospin limit would still be defined by the charged particle masses, 2 but due to the absence of electromagnetic effects the corresponding scattering lengths as extracted from pionic atoms become a 1/2 c = a 1/2 − ∆a 1/2 − ã + − a + = (178.8 ± 3.8) × 10 −3 M −1 π , a 3/2 c = a 3/2 − ∆a 3/2 − ã + − a + = (−77.5 ± 3.5) × 10 −3 M −1 π ,(12) where we have used c 1 = −1.07(2) GeV −1 [18] and | f 1 | ≤ 1.4 GeV −1 [36,54]. 
The size of the shifts compared to (2) is larger than one might naively expect from the chiral expansion, but the origin of the enhanced contributions is well understood: the bulk is generated by the term proportional to $4c_1\Delta_\pi/F_\pi^2$, see (6), which appears because the operator involving c₁ in the chiral Lagrangian generates a term proportional to the quark masses and thus, by the Gell-Mann-Oakes-Renner relation, to the neutral-pion mass, which results in a large tree-level shift. The remainder is mainly due to a particular class of loop topologies, so-called triangle diagrams, which are enhanced by a factor of π and an additional numerical factor. In view of these effects one might wonder about the potential impact of O(p⁴) isospin-breaking corrections. However, both enhancement mechanisms become irrelevant at higher orders, simply because the chiral SU(2) expansion converges with an expansion parameter $M_\pi/m_N \sim 0.15$, unless large chiral logarithms appear or additional degrees of freedom enhance the size of the low-energy constants. This leaves as potentially large O(p⁴) corrections loop diagrams with low-energy constants $c_i$, which are numerically enhanced due to saturation from the Δ(1232); however, at this order these cannot appear in triangle-type topologies and are therefore not sufficiently enhanced to become relevant. Finally, similarly to c₁ at tree level, there is another artifact from the definition of the operator accompanying c₂, which is conventionally normalized to the nucleon mass in the chiral limit. At O(p⁴) this generates a quark-mass correction proportional to c₁c₂ that renormalizes the aforementioned isospin-breaking correction involving c₁ by a factor $1 + 4c_2 M_\pi^2/m_N = 1.27$, resulting in an additional shift in $a_c^{I_s}$ by 1.6 units.
Given that we do not have a full O(p^4) calculation, we did not include this correction in the central values in (12) but, to stay conservative, in the quoted uncertainty, as an estimate of the potential impact of higher-order terms. If we finally rewrite (1) in terms of a_c^{I_s} in order to illustrate the impact of a lattice determination of the pion-nucleon scattering lengths on the σ-term, we obtain

  σ_πN = (59.1 ± 2.9) MeV + Σ_{I_s} c_{I_s} (a_c^{I_s} - ā_c^{I_s}),   (13)

where the new reference values ā_c^{I_s} refer to the central values given in (12). In this formulation the uncertainty even decreases slightly, because the electromagnetic shift proportional to f_1 cancels to a large extent a similar correction in the low-energy theorem. The final uncertainty in σ_πN for a given relative accuracy in the scattering lengths is shown in Fig. 2. For instance, if both isospin channels could be calculated at [5…10]%, one would obtain the σ-term with an uncertainty of [5.0…8.5] MeV. We therefore see that, to add conclusive information to the resolution of the σ-term puzzle by means of a lattice determination of the scattering lengths, a calculation at or below the 10% level would be required. However, more moderate lattice information may also be helpful, e.g. in case one of the scattering lengths can be obtained more accurately than the other: as Fig. 1 suggests, even a single additional band could point towards significant tension with the very precise overlap region of the three pionic-atom experimental constraints.

Conclusions

In this Letter we highlighted the current tension between lattice and phenomenological determinations of the πN σ-term.
We argued that the puzzle becomes particularly apparent when formulated at the level of the πN scattering lengths, which play a decisive role for the phenomenological value: a linear relation between the two scattering lengths of definite isospin and the σ-term allows one to reformulate any value for the latter as a constraint on the former, pointing towards a clear disagreement between lattice and pionic-atom data. In a similar way as a direct lattice calculation of the isospin-0 S-wave ππ scattering length could help resolve a comparable discrepancy between lattice and Roy equations in K → ππ, we suggested that a lattice calculation of the πN scattering lengths would amount to another independent determination of σ_πN that could help identify the origin of the discrepancy.

Note added in proof

While this paper was under review, another lattice calculation near the physical point appeared [55]. The quoted result σ_πN = 35(6) MeV lies within the range of [20-22].

Figure 1: Constraints on the πN scattering lengths from pionic atoms (black: level shift in πH, blue: width of the πH ground state, red: level shift in πD).

Figure 2: Uncertainty in σ_πN as a function of the relative accuracy in a_c^{I_s}.

² A similar analysis could be performed if the isospin limit were defined by the neutral-pion mass. In this case, one would need to take the chiral isospin-limit expressions for the scattering lengths to adjust the pion mass from the charged to the neutral one, analogously to a chiral extrapolation.

Acknowledgements

We thank Gilberto Colangelo, Jürg Gasser, Heiri Leutwyler, and Martin J. Savage for comments on the manuscript.

References

[1] A. Crivellin, M. Hoferichter and M. Procura, Phys. Rev. D 89 (2014) 054021 [arXiv:1312.4951 [hep-ph]].
[2] A. Bottino, F. Donato, N. Fornengo and S. Scopel, Astropart. Phys. 13 (2000) 215 [hep-ph/9909228].
[3] A. Bottino, F. Donato, N. Fornengo and S. Scopel, Astropart. Phys. 18 (2002) 205 [hep-ph/0111229].
[4] J. R. Ellis, K. A. Olive and C. Savage, Phys. Rev. D 77 (2008) 065026 [arXiv:0801.3656 [hep-ph]].
[5] V. Cirigliano, R. Kitano, Y. Okada and P. Tuzon, Phys. Rev. D 80 (2009) 013002 [arXiv:0904.0957 [hep-ph]].
[6] A. Crivellin, M. Hoferichter and M. Procura, Phys. Rev. D 89 (2014) 093024 [arXiv:1404.7134 [hep-ph]].
[7] T. P. Cheng and R. F. Dashen, Phys. Rev. Lett. 26 (1971) 594.
[8] L. S. Brown, W. J. Pardee and R. D. Peccei, Phys. Rev. D 4 (1971) 2801.
[9] J. Gasser, H. Leutwyler, M. P. Locher and M. E. Sainio, Phys. Lett. B 213 (1988) 85.
[10] J. Gasser, H. Leutwyler and M. E. Sainio, Phys. Lett. B 253 (1991) 252.
[11] J. Gasser, H. Leutwyler and M. E. Sainio, Phys. Lett. B 253 (1991) 260.
[12] R. Koch and E. Pietarinen, Nucl. Phys. A 336 (1980) 331.
[13] G. Höhler, Pion-Nukleon-Streuung: Methoden und Ergebnisse, in Landolt-Börnstein, 9b2, ed. H. Schopper, Springer Verlag, Berlin, 1983.
[14] M. M. Pavan, I. I. Strakovsky, R. L. Workman and R. A. Arndt, PiN Newslett. 16 (2002) 110 [hep-ph/0111066].
[15] C. Ditsche, M. Hoferichter, B. Kubis and U.-G. Meißner, JHEP 1206 (2012) 043 [arXiv:1203.4758 [hep-ph]].
[16] M. Hoferichter, C. Ditsche, B. Kubis and U.-G. Meißner, JHEP 1206 (2012) 063 [arXiv:1204.6251 [hep-ph]].
[17] M. Hoferichter, J. Ruiz de Elvira, B. Kubis and U.-G. Meißner, Phys. Rev. Lett. 115 (2015) 092301 [arXiv:1506.04142 [hep-ph]].
[18] M. Hoferichter, J. Ruiz de Elvira, B. Kubis and U.-G. Meißner, Phys. Rev. Lett. 115 (2015) 192301 [arXiv:1507.07552 [nucl-th]].
[19] M. Hoferichter, J. Ruiz de Elvira, B. Kubis and U.-G. Meißner, Phys. Rept. 625 (2016) 1 [arXiv:1510.06039 [hep-ph]].
[20] S. Dürr et al. [BMW Collaboration], Phys. Rev. Lett. 116 (2016) 172001 [arXiv:1510.08013 [hep-lat]].
[21] Y. B. Yang, A. Alexandru, T. Draper, J. Liang and K. F. Liu [χQCD Collaboration], arXiv:1511.09089 [hep-lat].
[22] A. Abdel-Rehim et al. [ETMC Collaboration], arXiv:1601.01624 [hep-lat].
[23] H. Leutwyler, PoS CD 15 (2015) 022 [arXiv:1510.07511 [hep-ph]].
[24] J. Gasser, Annals Phys. 136 (1981) 62.
[25] B. Borasoy and U.-G. Meißner, Annals Phys. 254 (1997) 192 [hep-ph/9607432].
[26] J. M. Alarcón, L. S. Geng, J. Martin Camalich and J. A. Oller, Phys. Lett. B 730 (2014) 342 [arXiv:1209.2870 [hep-ph]].
[27] D.-L. Yao et al., JHEP 1605 (2016) 038 [arXiv:1603.03638 [hep-ph]].
[28] Z. Bai et al. [RBC and UKQCD Collaborations], Phys. Rev. Lett. 115 (2015) 212001 [arXiv:1505.07863 [hep-lat]].
[29] G. Colangelo, J. Gasser and H. Leutwyler, Nucl. Phys. B 603 (2001) 125 [hep-ph/0103088].
[30] R. García-Martín, R. Kamiński, J. R. Peláez, J. Ruiz de Elvira and F. J. Ynduráin, Phys. Rev. D 83 (2011) 074004 [arXiv:1102.2183 [hep-ph]].
[31] G. Colangelo, J. Gasser and H. Leutwyler, Phys. Lett. B 488 (2000) 261 [hep-ph/0007112].
[32] J. Gasser, V. E. Lyubovitskij and A. Rusetsky, Phys. Rept. 456 (2008) 167 [arXiv:0711.3522 [hep-ph]].
[33] T. Strauch et al., Eur. Phys. J. A 47 (2011) 88 [arXiv:1011.2415 [nucl-ex]].
[34] M. Hennebach et al., Eur. Phys. J. A 50 (2014) 190 [arXiv:1406.6525 [nucl-ex]].
[35] D. Gotta et al., Lect. Notes Phys. 745 (2008) 165.
[36] J. Gasser, M. A. Ivanov, E. Lipartia, M. Mojžiš and A. Rusetsky, Eur. Phys. J. C 26 (2002) 13 [hep-ph/0206068].
[37] U.-G. Meißner, U. Raha and A. Rusetsky, Phys. Lett. B 639 (2006) 478 [nucl-th/0512035].
[38] M. Hoferichter, B. Kubis and U.-G. Meißner, Phys. Lett. B 678 (2009) 65 [arXiv:0903.3890 [hep-ph]].
[39] M. Hoferichter, B. Kubis and U.-G. Meißner, Nucl. Phys. A 833 (2010) 18 [arXiv:0909.4390 [hep-ph]].
[40] S. Weinberg, Phys. Lett. B 295 (1992) 114 [hep-ph/9209257].
[41] S. R. Beane, V. Bernard, E. Epelbaum, U.-G. Meißner and D. R. Phillips, Nucl. Phys. A 720 (2003) 399 [hep-ph/0206219].
[42] V. Baru, C. Hanhart, A. E. Kudryavtsev and U.-G. Meißner, Phys. Lett. B 589 (2004) 118 [nucl-th/0402027].
[43] V. Lensky et al., Phys. Lett. B 648 (2007) 46 [nucl-th/0608042].
[44] V. Baru et al., Phys. Lett. B 659 (2008) 184 [arXiv:0706.4023 [nucl-th]].
[45] S. Liebig, V. Baru, F. Ballout, C. Hanhart and A. Nogga, Eur. Phys. J. A 47 (2011) 69 [arXiv:1003.3826 [nucl-th]].
[46] V. Baru et al., Eur. Phys. J. A 48 (2012) 69 [arXiv:1202.0208 [nucl-th]].
[47] V. Baru et al., Phys. Lett. B 694 (2011) 473 [arXiv:1003.4444 [nucl-th]].
[48] V. Baru et al., Nucl. Phys. A 872 (2011) 69 [arXiv:1107.5509 [nucl-th]].
[49] V. Baru et al., eConf C 070910 (2007) 127 [arXiv:0711.2743 [nucl-th]].
[50] M. L. Goldberger, H. Miyazawa and R. Oehme, Phys. Rev. 99 (1955) 986.
[51] J. J. de Swart, M. C. M. Rentmeester and R. G. E. Timmermans, PiN Newslett. 13 (1997) 96 [nucl-th/9802084].
[52] R. Navarro Pérez, J. E. Amaro and E. Ruiz Arriola, arXiv:1606.00592 [nucl-th].
[53] R. A. Arndt, W. J. Briscoe, I. I. Strakovsky and R. L. Workman, Phys. Rev. C 74 (2006) 045205 [nucl-th/0605082].
[54] N. Fettes and U.-G. Meißner, Phys. Rev. C 63 (2001) 045201 [hep-ph/0008181].
[55] G. S. Bali et al. [RQCD Collaboration], Phys. Rev. D 93 (2016) 094504 [arXiv:1603.00827 [hep-lat]].
arXiv:1610.07019
On Ground States and Phase Transitions of λ-Model on the Cayley Tree

Farrukh Mukhamedov (Department of Mathematical Sciences, College of Science, The United Arab Emirates University, P.O. Box 15551, Al Ain, Abu Dhabi, UAE), Chin Hee Pah and Hakim Jamil (Department of Computational & Theoretical Sciences, Faculty of Science, International Islamic University Malaysia, P.O. Box 141, 25710 Kuantan)

22 October 2016

Abstract. In this paper, we consider the λ-model with spin values {1, 2, 3} on the Cayley tree of order two. We first describe the ground states of the model. Moreover, we prove the existence of translation-invariant Gibbs measures for the λ-model, which yields the existence of a phase transition. Lastly, we establish the existence of 2-periodic Gibbs measures for the model.

Introduction

The choice of a Hamiltonian for concrete systems of interacting particles represents an important problem of equilibrium statistical mechanics [1]. The matter is that, in considering concrete real systems with many (in abstraction, infinitely many) degrees of freedom, it is impossible to account for all properties of such a system without exception. The main problem consists in accounting only for the most important features of the system, consciously removing the other particularities. On the other hand, the main purpose of equilibrium statistical mechanics consists in describing all limit Gibbs distributions corresponding to a given Hamiltonian [7]. This problem is completely solved only in some comparatively simple cases. In particular, if there are only binary interactions in the system, then the problem of describing the limit Gibbs distributions simplifies. One of the important models of statistical mechanics is the Potts model. These models describe a special and easily defined class of statistical mechanics models.
Nevertheless, they are richly structured enough to illustrate almost every conceivable nuance of the subject. In particular, they are at the center of the most recent explosion of interest generated by the confluence of conformal field theory, percolation theory, knot theory, quantum groups and integrable systems [9,12]. The Potts model [17] was introduced as a generalization of the Ising model to more than two components. At present the Potts model encompasses a number of problems in statistical physics (see, e.g., [23]). Investigations of phase transitions of spin models on hierarchical lattices showed that they admit the exact calculation of various physical quantities [3], [13,14], [22]. Such studies on hierarchical lattices began with the development of the Migdal-Kadanoff renormalization group method, where the lattices emerged as approximants of the ordinary crystal ones. In [15,16] the phase diagrams of the q-state Potts models on the Bethe lattices were studied and the pure phases of the ferromagnetic Potts model were found. In [4,5], using those results, an uncountable number of pure phases of the 3-state Potts model were constructed. These investigations were based on a measure-theoretic approach developed in [18], [15,16]. The structure of the Gibbs measures of the Potts models has been investigated in [4,6]. It is natural to consider a model which is more complicated than the Potts one; in [10] we proposed to study the so-called λ-model on the Cayley tree (see also [19,20]). In the mentioned paper, for a special kind of λ-model, its disordered phase was studied (see [6,11]) and some of its algebraic properties were investigated. In the present paper, we consider the symmetric λ-model with spin values {1, 2, 3} on the Cayley tree of order two. This model is much more general than the Potts model, and it exhibits an interesting structure of ground states. We first describe the ground states of the model.
Moreover, we also prove the existence of translation-invariant Gibbs measures for the λ-model, which yields the existence of a phase transition. Lastly, we establish the existence of 2-periodic Gibbs measures for the model.

Preliminaries

Let Γ^k_+ = (V, L) be a semi-infinite Cayley tree of order k ≥ 1 with root x^0 (each vertex has exactly k + 1 edges, except for the root x^0, which has k edges). Here V is the set of vertices and L is the set of edges. The vertices x and y are called nearest neighbors, denoted l = ⟨x, y⟩, if there exists an edge connecting them. A collection of pairs ⟨x, x_1⟩, …, ⟨x_{d-1}, y⟩ is called a path from the point x to the point y. The distance d(x, y), x, y ∈ V, on the Cayley tree is the length of the shortest path from x to y. Set

  W_n = {x ∈ V | d(x, x^0) = n},  V_n = ⋃_{m=1}^{n} W_m,  L_n = {l = ⟨x, y⟩ ∈ L | x, y ∈ V_n}.

The set of direct successors of x is defined by

  S(x) = {y ∈ W_{n+1} : d(x, y) = 1},  x ∈ W_n.

Observe that any vertex x ≠ x^0 has k direct successors and x^0 has k + 1. Now we introduce a coordinate structure in Γ^k_+. Every vertex x (except for x^0) of Γ^k_+ has coordinates (i_1, …, i_n), where i_m ∈ {1, …, k}, 1 ≤ m ≤ n, and for the vertex x^0 we put (0) (see Figure 1). Namely, the symbol (0) constitutes level 0 and the sites (i_1, …, i_n) form level n of the lattice. In this notation, for x ∈ Γ^k_+ with x = (i_1, …, i_n), we have S(x) = {(x, i) : 1 ≤ i ≤ k}, where (x, i) means (i_1, …, i_n, i).

Let us define on Γ^k_+ a binary operation • : Γ^k_+ × Γ^k_+ → Γ^k_+ as follows: for any two elements x = (i_1, …, i_n) and y = (j_1, …, j_m) put

  x • y = (i_1, …, i_n) • (j_1, …, j_m) = (i_1, …, i_n, j_1, …, j_m)

and

  y • x = (j_1, …, j_m) • (i_1, …, i_n) = (j_1, …, j_m, i_1, …, i_n).

By means of the defined operation, Γ^k_+ becomes a noncommutative semigroup with a unit.
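To make the coordinate semigroup concrete, here is a small illustration of our own (not from the paper): vertices are modeled as tuples of indices, the root (0) as the empty tuple, and • as tuple concatenation, which is associative, has the root as unit, and is visibly noncommutative.

```python
# Vertices of the semi-infinite Cayley tree as tuples of indices in {1, ..., k};
# the root x^0 is represented by the empty tuple, which acts as the unit.

def compose(x, y):
    """The semigroup operation x • y: concatenation of coordinate tuples."""
    return x + y

x = (1, 2)   # a level-2 vertex
y = (2,)     # a level-1 vertex
root = ()    # x^0, the unit element

assert compose(x, root) == compose(root, x) == x                    # unit
assert compose(compose(x, y), root) == compose(x, compose(y, root)) # associativity
print(compose(x, y), compose(y, x))  # noncommutative: (1, 2, 2) vs (2, 1, 2)
```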
Using this semigroup structure one defines translations τ_g : Γ^k_+ → Γ^k_+, g ∈ Γ^k_+, by τ_g(x) = g • x. Let G ⊂ Γ^k_+ be a sub-semigroup of Γ^k_+ and h : V → R be a function. We say that h is G-periodic if h(τ_g(x)) = h(x) for all x ∈ V and g ∈ G. Any Γ^k_+-periodic function is called translation-invariant. Put

  G_m = {x ∈ Γ^k_+ : d(x, x^0) ≡ 0 (mod m)},  m ≥ 2.

One can check that G_m is a sub-semigroup with a unit. Let us consider an example. Let m = 2 and k = 2; then G_2 can be written as follows:

  G_2 = {(0), (i_1, i_2, …, i_{2n}), n ∈ N}.

In this case, a G_2-periodic function h has the following form:

  h(x) = { h_1, x = (i_1, i_2, …, i_{2n}),
           h_2, x = (i_1, i_2, …, i_{2n+1}),   (1)

for i_k ∈ {1, 2} and x ∈ V.

In this paper, we consider models where the spin takes values in the set Φ = {1, 2, …, q} and is assigned to the vertices of the tree. A configuration σ on V is then defined as a function x ∈ V → σ(x) ∈ Φ; the set of all configurations coincides with Ω = Φ^{Γ^k}. The Hamiltonian of the λ-model has the following form:

  H(σ) = Σ_{⟨x,y⟩∈L} λ(σ(x), σ(y)),   (2)

where the sum is taken over all pairs of nearest-neighbor vertices ⟨x, y⟩, σ ∈ Ω. From a physical point of view the interactions between particles do not depend on their locations; therefore, from now on we will assume that λ is a symmetric function, i.e. λ(u, v) = λ(v, u) for all u, v ∈ R. We note that a λ-model of this type can be considered as a generalization of the Potts model. The Potts model corresponds to the choice λ(x, y) = -Jδ_{xy}, where x, y, J ∈ R. In what follows, we restrict ourselves to the case k = 2 and Φ = {1, 2, 3}, and for the sake of simplicity, we consider the following function:

  λ(i, j) = { a, if |i - j| = 2,
              b, if |i - j| = 1,
              c, if i = j,          (3)

where a, b, c ∈ R are some given numbers.

Remark 2.1. We point out that the considered model is more general than the well-known Potts model [23], since if a = b = 0 and c ≠ 0, then this model reduces to the mentioned model.
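The coupling (3) and a finite-volume energy are straightforward to compute. The sketch below is our own illustration (the names `lam` and `energy` are ours, and only a one-level binary tree is used); setting a = b = 0 recovers the Potts-type energy of Remark 2.1, where only aligned neighboring spins contribute c = -J.

```python
def lam(i, j, a, b, c):
    """The symmetric coupling λ(i, j) of Eq. (3) for spins i, j in {1, 2, 3}."""
    return {2: a, 1: b, 0: c}[abs(i - j)]

def energy(sigma, a, b, c):
    """Energy of a configuration on a finite binary tree rooted at ().

    sigma maps vertex tuples to spins; edges join x to its successors (x, 1), (x, 2).
    """
    total = 0.0
    for x in sigma:
        for i in (1, 2):
            y = x + (i,)
            if y in sigma:
                total += lam(sigma[x], sigma[y], a, b, c)
    return total

# Potts reduction: a = b = 0, c = -J; only the aligned edge () - (1,) contributes.
sigma = {(): 1, (1,): 1, (2,): 3}
print(energy(sigma, a=0.0, b=0.0, c=-1.0))  # -> -1.0
```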
Ground States

In this section, we describe the ground states of the λ-model on the Cayley tree. For a pair of configurations σ and ϕ coinciding almost everywhere, i.e., everywhere except at finitely many points, we consider the relative Hamiltonian H(σ, ϕ) determining the energy difference of the configurations σ and ϕ:

  H(σ, ϕ) = Σ_{⟨x,y⟩, x,y∈V} (λ(σ(x), σ(y)) - λ(ϕ(x), ϕ(y))).   (4)

For each x ∈ V, the set {x} ∪ S(x) is called a ball, and it is denoted by b_x. The set of all balls is denoted by M. We define the energy of a configuration σ_b on a ball b as follows:

  U(σ_b) = (1/2) Σ_{⟨x,y⟩, x,y∈b} λ(σ(x), σ(y)).

From (4), we get the following lemma:

  H(σ, ϕ) = Σ_{b∈M} (U(σ_b) - U(ϕ_b)).

Lemma 3.2. The inclusion

  U(ϕ_b) ∈ {(α + β)/2 : α, β ∈ {a, b, c}}   (5)

holds for every configuration ϕ_b on b (b ∈ M).

A configuration ϕ is called a ground state of the relative Hamiltonian H if

  U(ϕ_b) = min{(α + β)/2 : α, β ∈ {a, b, c}}   (6)

for any b ∈ M. For any configuration σ_b, we have U(σ_b) ∈ {U_1, U_2, U_3, U_4, U_5, U_6}, where

  U_1 = a,  U_2 = (a + b)/2,  U_3 = (a + c)/2,  U_4 = b,  U_5 = (b + c)/2,  U_6 = c.   (7)

We denote

  A_m = {(a, b, c) ∈ R^3 | U_m = min_{1≤k≤6} U_k}.   (8)

Using (8), we obtain

  A_1 = {(a, b, c) ∈ R^3 | a ≤ b, a ≤ c},
  A_2 = {(a, b, c) ∈ R^3 | a = b ≤ c},
  A_3 = {(a, b, c) ∈ R^3 | a = c ≤ b},
  A_4 = {(a, b, c) ∈ R^3 | b ≤ a, b ≤ c},
  A_5 = {(a, b, c) ∈ R^3 | b = c ≤ a},
  A_6 = {(a, b, c) ∈ R^3 | c ≤ a, c ≤ b}.

Now we want to find ground states for each of the considered cases. To do so, we introduce some notation. For each sequence {k_0, k_1, …, k_n, …}, k_n ∈ {1, 2, 3}, n ∈ N ∪ {0}, we define a configuration σ on Ω by

  σ(x) = k_ℓ, if x ∈ W_ℓ, ℓ ≥ 0.

Proof (of Theorem 3.3). Let (a, b, c) ∈ A_1; then one can see that for this triple the minimal value is a, which is achieved by the configuration on b given in Figure 4. Now, using Figure 4, one can construct configurations on Ω defined by

  σ^{(2)}_1 = σ_{[1,3]},  σ^{(2)}_2 = σ_{[3,1]}.
Then we can see that for any b ∈ M one has U((σ^{(2)}_i)_b) = min_{1≤k≤6} U_k for i = 1, 2, which means σ^{(2)}_1 and σ^{(2)}_2 are ground states. Moreover, σ^{(2)}_1 and σ^{(2)}_2 are G_2-periodic. Note that all ground states coincide with these ones.

Proof (of Theorem 3.4). Let (a, b, c) ∈ A_2; then one can see that for this triple the minimal value is (a + b)/2, which is achieved by the configurations on b given in Figure 6.

(i) Now, using Figure 6, for each n ∈ N one can construct configurations on Ω as depicted in Figure 7 (the configuration for σ^{(2)}). Then we can see that for any b ∈ M one has U(σ^{(ξ)}) = min_{1≤k≤6} U_k, ξ ∈ {2n, 2n + 1}, which means σ^{(n)} is a G_n-periodic ground state.

(ii) To construct an uncountable number of ground states, we consider the set

  Σ_{1,2,3} = {(t_n) | t_n ∈ {1, 2, 3}, δ_{t_n, t_{n+1}} = 0, n ∈ N},   (9)

where δ is the Kronecker delta. One can see that the set Σ_{1,2,3} is uncountable. Take any t = (t_n) ∈ Σ_{1,2,3} and construct a configuration by

  σ^{(t)}(x) = { 1, x = (0),
                 t_k, x ∈ W_k, }  k ∈ N.

One can check that σ^{(t)} is a ground state, and the correspondence t ∈ Σ_{1,2,3} → σ^{(t)} shows that the set {σ^{(t)} : t ∈ Σ_{1,2,3}} is uncountable. This completes the proof.

Theorem 3.5. Let (a, b, c) ∈ A_3. Then the following statements hold:
(i) there are three translation-invariant ground states;
(ii) for every n ∈ N, there is a G_n-periodic ground state.

Proof. Let (a, b, c) ∈ A_3; then one can see that for this triple the minimal value is (a + c)/2, which is achieved by the configurations on b given in Figure 9.

(i) In this case, we have three configurations σ^{(k)} = σ_{[k]}, k ∈ {1, 2, 3}, which are translation-invariant ground states.

(ii) Using Figure 9, for each n ∈ N one can construct a configuration on Ω defined by σ^{(n)} = σ_{[1, 3, 3, …, 3]} (with n entries). Then we can see that for any b ∈ M one has U(σ^{(n)}) = min_{1≤k≤6} U_k, which means σ^{(n)} is a G_n-periodic ground state.

Theorem 3.6. Let (a, b, c) ∈ A_4. Then for every n ∈ N, there is a G_{3n+1}-periodic ground state.

Proof.
Let (a, b, c) ∈ A_4; then one can see that for this triple the minimal value is b, which is achieved by the configurations on b given in Figure 11. Now, using Figure 11, for each n ∈ N one can construct a configuration on Ω defined by

  σ^{(3n+1)} = σ_{[1, (2,3,2), …, (2,3,2)]}  (with n blocks (2, 3, 2)).

Then we can see that for any b ∈ M one has U(σ^{(3n+1)}) = min_{1≤k≤6} U_k, which means σ^{(3n+1)} is a G_{3n+1}-periodic ground state.

Theorem 3.7. Let (a, b, c) ∈ A_5. Then the following statements hold:
(i) there are three translation-invariant ground states;
(ii) for every n ∈ N, there is a G_n-periodic ground state;
(iii) there is an uncountable number of ground states.

Proof. Let (a, b, c) ∈ A_5; then one can see that for this triple the minimal value is (b + c)/2, which is achieved by the configurations on b given in Figure 13.

(i) In this case, we have three configurations σ^{(k)} = σ_{[k]}, k ∈ {1, 2, 3}, which are translation-invariant ground states.

(ii) Now, using Figure 13, for each n ∈ N one can construct a configuration on Ω defined by σ^{(n)} = σ_{[1, 2, 2, …, 2]} (with n entries). Then we can see that for any b ∈ M one has U(σ^{(n)}) = min_{1≤k≤6} U_k, which means σ^{(n)} is a G_n-periodic ground state.

(iii) To construct an uncountable number of ground states, we consider the set

  Σ_{2,3} = {(t_n) | t_n ∈ {2, 3}, n ∈ N},   (10)

which is uncountable. Take any t = (t_n) ∈ Σ_{2,3} and construct a configuration by

  σ^{(t)}(x) = { 2, x = (0),
                 t_k, x ∈ W_k, }  k ∈ N.

One can check that σ^{(t)} is a ground state, and the correspondence t ∈ Σ_{2,3} → σ^{(t)} shows that the set {σ^{(t)} : t ∈ Σ_{2,3}} is uncountable.

Theorem 3.8. Let (a, b, c) ∈ A_6. Then there are only three translation-invariant ground states.

Proof. Let (a, b, c) ∈ A_6; then one can see that for this triple the minimal value is c, which is achieved by the configurations on b given in Figure 16. In this case, we have three configurations σ^{(k)} = σ_{[k]}, k ∈ {1, 2, 3}, which are translation-invariant ground states.
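The case analysis of this section amounts to checking which of the six ball energies (7) is minimal for a given triple (a, b, c). A small sketch of our own (the function names are ours, not from the paper):

```python
def ball_energies(a, b, c):
    """The six possible ball energies U_1, ..., U_6 of Eq. (7)."""
    return [a, (a + b) / 2, (a + c) / 2, b, (b + c) / 2, c]

def minimizing_cases(a, b, c, tol=1e-12):
    """Indices m with U_m = min_k U_k, i.e. which sets A_m contain (a, b, c)."""
    U = ball_energies(a, b, c)
    m = min(U)
    return [i + 1 for i, u in enumerate(U) if abs(u - m) <= tol]

print(minimizing_cases(-1.0, 0.0, 0.0))  # a smallest -> [1], the region A_1
print(minimizing_cases(0.0, 0.0, 1.0))   # a = b <= c -> [1, 2, 4], U_1 = U_2 = U_4
```

On the boundary sets such as A_2 several of the U_m tie, which is why they are defined by equalities.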
Construction of Gibbs States for the λ-model

We define a finite-dimensional distribution of a probability measure μ^{(n)} in the volume V_n as

  μ^{(n)}(σ_n) = Z_n^{-1} exp{ βH_n(σ_n) + Σ_{x∈W_n} h_{σ(x),x} },  σ_n ∈ Φ^{V_n},   (11)

where β = 1/T, T > 0 is the temperature, and

  Z_n = Σ_{σ_n∈Φ^{V_n}} exp{ βH_n(σ_n) + Σ_{x∈W_n} h_{σ(x),x} }

is the normalizing factor. In (11), {h_x = (h_{1,x}, …, h_{q,x}) ∈ R^q, x ∈ V} is a collection of vectors, and

  H_n(σ_n) = Σ_{⟨x,y⟩∈L_n} λ(σ(x), σ(y)).

We say that a sequence of probability distributions {μ^{(n)}} is consistent if for all n ≥ 1 and σ_{n-1} ∈ Φ^{V_{n-1}} one has

  Σ_{ω_n∈Φ^{W_n}} μ^{(n)}(σ_{n-1} ∨ ω_n) = μ^{(n-1)}(σ_{n-1}).   (12)

Here σ_{n-1} ∨ ω_n denotes the concatenation of the configurations σ_{n-1} and ω_n. In this case, there is a unique measure μ on Φ^V such that for all n and σ_n ∈ Φ^{V_n} we have

  μ({σ|_{V_n} = σ_n}) = μ^{(n)}(σ_n).

Such a measure is called a splitting Gibbs measure corresponding to the Hamiltonian (2) and to the vector-valued function h_x, x ∈ V (see [20] for more information about splitting measures). The next statement describes the condition on h_x ensuring that the sequence {μ^{(n)}} is consistent.

Theorem 4.1. The measures μ^{(n)}, n = 1, 2, …, satisfy the consistency condition (12) if and only if for any x ∈ V the following equation holds:

  u_{k,x} = Π_{y∈S(x)} ( Σ_{j=1}^{q-1} exp{βλ(k,j)} u_{j,y} + exp{βλ(k,q)} ) / ( Σ_{j=1}^{q-1} exp{βλ(q,j)} u_{j,y} + exp{βλ(q,q)} ),   (13)

where u_{k,x} = exp{h_{k,x} - h_{q,x}}, k = 1, …, q - 1.

Proof. Necessity. According to the consistency condition (12), we have

  Σ_{ω∈Φ^{W_n}} (1/Z_n) exp{ βH_n(σ_{n-1} ∨ ω) + Σ_{x∈W_n} h_{ω(x),x} } = (1/Z_{n-1}) exp{ βH_{n-1}(σ_{n-1}) + Σ_{x∈W_{n-1}} h_{σ(x),x} }.

Keeping in mind that

  H_n(σ_{n-1} ∨ ω_n) = H_{n-1}(σ_{n-1}) + Σ_{x∈W_{n-1}} Σ_{y∈S(x)} λ(σ(x), ω(y)),

we have

  (Z_{n-1}/Z_n) Σ_{ω∈Φ^{W_n}} exp{ β Σ_{x∈W_{n-1}} Σ_{y∈S(x)} λ(σ(x), ω(y)) + Σ_{y∈W_n} h_{ω(y),y} } = exp{ Σ_{x∈W_{n-1}} h_{σ(x),x} },

which yields

  (Z_{n-1}/Z_n) Π_{x∈W_{n-1}} Π_{y∈S(x)} Σ_{ω(y)∈Φ} exp{ βλ(σ(x), ω(y)) + h_{ω(y),y} } = Π_{x∈W_{n-1}} exp{ h_{σ(x),x} }.   (14)

Considering configurations σ^{(k)} ∈ Φ^{V_{n-1}}, Φ = {1, …, q}, such that σ(x) = k for a fixed x ∈ V and k = 1, …, q, and dividing (14) at σ^{(k)} by (14) at σ^{(q)}, one gets

  Π_{y∈S(x)} ( Σ_{ω(y)∈Φ} exp{βλ(k, ω(y)) + h_{ω(y),y}} ) / ( Σ_{ω(y)∈Φ} exp{βλ(q, ω(y)) + h_{ω(y),y}} ) = exp{h_{k,x}} / exp{h_{q,x}}.   (15)

So,

  Π_{y∈S(x)} ( Σ_{j=1}^{q} exp{βλ(k,j) + h_{j,y}} ) / ( Σ_{j=1}^{q} exp{βλ(q,j) + h_{j,y}} ) = exp{h_{k,x} - h_{q,x}}.   (16)

Hence, denoting u_{k,x} = exp{h_{k,x} - h_{q,x}}, from (16) one finds

  Π_{y∈S(x)} ( Σ_{j=1}^{q-1} exp{βλ(k,j)} u_{j,y} + exp{βλ(k,q)} ) / ( Σ_{j=1}^{q-1} exp{βλ(q,j)} u_{j,y} + exp{βλ(q,q)} ) = u_{k,x}.   (17)

Sufficiency. Suppose now that (13) holds; then (16) holds as well, which yields

  Π_{y∈S(x)} Σ_{j=1}^{q} exp{βλ(k,j) + h_{j,y}} = a(x) exp{h_{k,x}},  k = 1, …, q,  x ∈ W_{n-1},   (18)

for some function a(x) > 0, x ∈ V. Taking the product of (18) over x ∈ W_{n-1}, we obtain

  Π_{x∈W_{n-1}} Π_{y∈S(x)} Σ_{j=1}^{q} exp{βλ(σ(x), j) + h_{j,y}} = Π_{x∈W_{n-1}} a(x) exp{h_{σ(x),x}}   (19)

for any configuration σ ∈ Φ^{V_{n-1}}. Denoting A_{n-1} = Π_{x∈W_{n-1}} a(x), from (19) one finds

  Π_{x∈W_{n-1}} Π_{y∈S(x)} Σ_{ω(y)∈Φ} exp{ βλ(σ(x), ω(y)) + h_{ω(y),y} } = A_{n-1} Π_{x∈W_{n-1}} exp{h_{σ(x),x}}.   (20)

Multiplying both sides of (20) by exp{βH_{n-1}(σ)}, we get

  exp{βH_{n-1}(σ)} Π_{x∈W_{n-1}} Π_{y∈S(x)} Σ_{ω(y)∈Φ} exp{ βλ(σ(x), ω(y)) + h_{ω(y),y} } = A_{n-1} exp{βH_{n-1}(σ)} Π_{x∈W_{n-1}} exp{h_{σ(x),x}},

which yields

  Z_n Σ_{ω_n∈Φ^{W_n}} μ^{(n)}(σ ∨ ω_n) = A_{n-1} Z_{n-1} μ^{(n-1)}(σ).   (21)

Since μ^{(n)} and μ^{(n-1)} are probability measures, summing (21) over σ ∈ Φ^{V_{n-1}} gives Z_n = A_{n-1} Z_{n-1}, and then (21) reduces to the consistency condition (12). This completes the proof.

Description of translation-invariant Gibbs measures

In this section, we are going to establish the existence of a phase transition for the λ-model given by (3). As before, in what follows we assume that k = 2 and q = 3. To establish a phase transition, we will find translation-invariant Gibbs measures. Here, by a translation-invariant Gibbs measure we mean a splitting Gibbs measure corresponding to a solution u_x of equation (13) which is translation-invariant, i.e. u_x = u_y for all x, y ∈ V. This means u_x = u, where u = (u_1, u_2), u_1, u_2 > 0.
Due to Theorem 4.1, u_1 and u_2 must satisfy the following equations:

  u_1 = ( (u_1 θ_1 + u_2 θ_2 + θ_3) / (u_1 θ_3 + u_2 θ_2 + θ_1) )^2,
  u_2 = ( (u_1 θ_2 + u_2 θ_1 + θ_2) / (u_1 θ_3 + u_2 θ_2 + θ_1) )^2,   (22)

where θ_1 = exp{βc}, θ_2 = exp{βb}, θ_3 = exp{βa} (here we have used (3)). From (22) one can see that u_1 = 1 is an invariant line of the system. Therefore, on this invariant line the equation reduces to

  u_2 = ( (θ_1 u_2 + 2θ_2) / (θ_2 u_2 + θ_1 + θ_3) )^2.   (23)

Denoting

  x = θ_1 u_2 / (2θ_2),  a = 2θ_2^3 / θ_1^3,  b = θ_1(θ_1 + θ_3) / (2θ_2^2),   (24)

we rewrite (23) as follows:

  a x = ( (1 + x) / (b + x) )^2.   (25)

Since x > 0, k ≥ 1, a > 0 and b > 0, [18, Proposition 10.7] implies the following lemma.

Lemma 5.1. (1) If b ≤ 9, then the solution of (25) is unique. (2) If b > 9, then there are η_1(b) and η_2(b) with 0 < η_1 < η_2 such that if η_1 < a < η_2, then (25) has three solutions. (3) If a = η_1 or a = η_2, then (25) has two solutions. The quantities η_1 and η_2 are determined from the formula

  η_i(b) = (1/x_i) ( (1 + x_i) / (b + x_i) )^2,  i = 1, 2.   (26)

Periodic Gibbs Measure

In this section, we are going to study 2-periodic Gibbs measures. Recall that a function u_x is 2-periodic if u_x = u_y whenever d(x, y) is divisible by 2 (see Section 2 for details). Let u_x be a 2-periodic function. Then, for the corresponding Gibbs measure to exist, the function u_x = (u_{x,1}, u_{x,2}) should satisfy the following equations:

  u_{x,1} = ( (u_{y,1} θ_1 + u_{y,2} θ_2 + θ_3) / (u_{y,1} θ_3 + u_{y,2} θ_2 + θ_1) )^2,
  u_{x,2} = ( (u_{y,1} θ_2 + u_{y,2} θ_1 + θ_2) / (u_{y,1} θ_3 + u_{y,2} θ_2 + θ_1) )^2,   (27)
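Returning to (25)-(26): the thresholds of Lemma 5.1 can be checked numerically. The sketch below is our own illustration (the function names are ours, and the root count uses a simple sign-change scan over a grid). It reproduces the values η_1 = 1/32 and η_2 = 4/125 quoted for b = 10 in Example 5.1, using the fact stated with that example that x_1 and x_2 solve x^2 + (3 - b)x + b = 0.

```python
import math

def eta(x, b):
    """Threshold (26): eta_i(b) = (1/x_i) * ((1 + x_i)/(b + x_i))**2."""
    return (1.0 / x) * ((1.0 + x) / (b + x)) ** 2

def count_roots(a, b, hi=50.0, n=200_000):
    """Count sign changes of g(x) = a*x - ((1 + x)/(b + x))**2 on (0, hi]."""
    roots, prev = 0, None
    for i in range(1, n + 1):
        x = hi * i / n
        positive = a * x - ((1.0 + x) / (b + x)) ** 2 > 0
        if prev is not None and positive != prev:
            roots += 1
        prev = positive
    return roots

b = 10.0
disc = math.sqrt((b - 3.0) ** 2 - 4.0 * b)           # from x^2 + (3 - b)x + b = 0
x1, x2 = (b - 3.0 - disc) / 2, (b - 3.0 + disc) / 2  # -> 2.0 and 5.0
print(eta(x1, b), eta(x2, b))    # eta_1 = 1/32, eta_2 = 4/125
print(count_roots(0.031625, b))  # a strictly between the thresholds: 3 solutions
print(count_roots(0.05, 5.0))    # b <= 9: unique solution
```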
Correspondingly, the associated configuration is denoted by σ^[k_0, k_1, …, k_{n−1}].

Theorem 3.3. Let (a, b, c) ∈ A_1; then there are only two G_2-periodic ground states.

Figure 4. Configurations for A_1.
Figure 5. Configuration.

Theorem 3.4. Let (a, b, c) ∈ A_2; then the following statements hold: (i) for every n ∈ N, there is a G_n-periodic ground state; (ii) there is an uncountable number of ground states.

Figure 6. Configurations for A_2.
Figure 8. Configuration for σ^(t).
Figure 9. For A_3.
Figure 10. Configuration for σ^(n).
Figure 11. Configurations for A_4.
Figure 12. Configuration for σ^(4).

U(σ^(3n+1)) = \min_{1\le k\le 6}\{U_k\}, which means σ^(3n+1) is a G_(3n+1)-periodic ground state.

Theorem 3.7. Let (a, b, c) ∈ A_5; then the following statements hold: (i) there are three translation-invariant ground states;

Figure 13. Configuration for A_5.
Figure 14. Configuration for σ^(n).
Figure 15. Configuration for σ^(t).
Figure 16. Configurations for A_6.

where x_1 and x_2 are solutions of the equation x² + (3 − b)x + b = 0. From Lemma 5.1 we conclude the following result.

Theorem 5.2. If condition (2) of Lemma 5.1 is satisfied, then a phase transition occurs.

Let us consider a concrete example.

Example 5.1. Let b = 10. We have x_1 = 2 and x_2 = 5. Then we have η_1 = 1/32 and η_2 = 4/125. So from Theorem 5.2 we can conclude that if 1/32 < a < 4/125, then a phase transition occurs.

…where d(x, y) = 2 for all x, y ∈ V. According to the previous section, u_{x,1} = 1 is an invariant line for equation (27). Therefore, in what follows, we assume u_{x,1} = 1 for all x ∈ V. Then (27) reduces to

u = f(f(u)),   (28)

where f(u) = \left(\frac{\theta_1 u + 2\theta_2}{\theta_2 u + \theta_1 + \theta_3}\right)^2. Roots of u = f(u) are clearly roots of Eq. (28). In order to find the other roots of Eq. (28) that differ from the roots of u = f(u), we must therefore consider the equation (f(f(u)) − u)/(f(u) − u) = 0, which yields the quadratic equation (29). Note that the positive roots of (29) generate periodic Gibbs measures.
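The counting in Lemma 5.1 and the numbers in Example 5.1 can be checked numerically. The sketch below is an illustrative aside, not part of the original computations: it brackets the three positive roots of (25) for b = 10 and a value of a strictly between η_1 = 1/32 and η_2 = 4/125, with the brackets [1, 2], [2, 5], [5, 10] obtained from the sign of the difference at the endpoints.

```python
# Numerical check of Lemma 5.1(2) / Example 5.1: for b = 10 the equation
# a*x = ((1 + x)/(b + x))**2 has three positive roots when 1/32 < a < 4/125.

def f(x, a, b):
    """Difference between the two sides of equation (25)."""
    return a * x - ((1.0 + x) / (b + x)) ** 2

def bisect(g, lo, hi, tol=1e-12):
    """Simple bisection; assumes g(lo) and g(hi) have opposite signs."""
    glo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0:
            hi = mid          # sign change in [lo, mid]
        else:
            lo, glo = mid, gm  # sign change in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

b = 10.0
a = 0.5 * (1.0 / 32.0 + 4.0 / 125.0)   # midpoint of (eta_1, eta_2)

# x1 = 2 and x2 = 5 indeed solve x^2 + (3 - b)x + b = 0:
assert 2.0 ** 2 + (3 - b) * 2.0 + b == 0.0
assert 5.0 ** 2 + (3 - b) * 5.0 + b == 0.0

# Sign changes on [1, 2], [2, 5], [5, 10] bracket the three roots.
roots = [bisect(lambda x: f(x, a, b), lo, hi) for lo, hi in [(1, 2), (2, 5), (5, 10)]]
print(roots)
```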
In general, the existence of two positive roots is given by the following conditions, where

B = θ_1²θ_3² + 6θ_2θ_1³ + 8θ_1²θ_2² + 8θ_1θ_2²θ_3 + 2θ_2θ_3³ + 2θ_1³θ_3 + 6θ_1²θ_2θ_3 − 4θ_2⁴ + 6θ_1θ_2θ_3² + θ⁴

References

R.J. Baxter, Exactly Solved Models in Statistical Mechanics, Academic Press, London/New York, 1982.
R.L. Dobrushin, The description of a random field by means of conditional probabilities and conditions of its regularity, Theor. Probab. Appl. 13 (1968), 197-224.
S.N. Dorogovtsev, A.V. Goltsev, J.F.F. Mendes, Potts model on complex networks, Eur. Phys. J. B 38 (2004), 177-182.
N.N. Ganikhodjaev, On pure phases of the three-state ferromagnetic Potts model on the second-order Bethe lattice, Theor. Math. Phys. 85 (1990), 1125-1134.
N.N. Ganikhodjaev, F.M. Mukhamedov, J.F.F. Mendes, On the three state Potts model with competing interactions on the Bethe lattice, J. Stat. Mech. (2006), P08012, 29 pp.
N.N. Ganikhodjaev, U.A. Rozikov, On disordered phase in the ferromagnetic Potts model on the Bethe lattice, Osaka J. Math. 37 (2000), 373-383.
H.O. Georgii, Gibbs Measures and Phase Transitions, Walter de Gruyter, Berlin, 1988.
S. Janson, E. Mossel, Robust reconstruction on trees is determined by the second eigenvalue, Ann. Probab. 32 (2004), 2630-2649.
M.C. Marques, Three-state Potts model with antiferromagnetic interactions: a MFRG approach, J. Phys. A: Math. Gen. 21 (1988), 1061-1068.
F.M. Mukhamedov, On a factor associated with the unordered phase of λ-model on a Cayley tree, Rep. Math. Phys. 53 (2004), 1-18.
F.M. Mukhamedov, U.A. Rozikov, Extremality of the disordered phase of the nonhomogeneous Potts model on the Cayley tree, Theor. Math. Phys. 124 (2000), 1202-1210.
M.P. Nightingale, M. Schick, Three-state square lattice Potts antiferromagnet, J. Phys. A: Math. Gen. 15 (1982), L39-L42.
F. Peruggi, Probability measures and Hamiltonian models on Bethe lattices. I. Properties and construction of MRT probability measures, J. Math. Phys. 25 (1984), 3303-3315.
F. Peruggi, Probability measures and Hamiltonian models on Bethe lattices. II. The solution of thermal and configurational problems, J. Math. Phys. 25 (1984), 3316-3323.
F. Peruggi, F. di Liberto, G. Monroy, Potts model on Bethe lattices. I. General results, J. Phys. A 16 (1983), 811-827.
F. Peruggi, F. di Liberto, G. Monroy, Phase diagrams of the q-state Potts model on Bethe lattices, Physica A 141 (1987), 151-186.
R.B. Potts, Some generalized order-disorder transformations, Proc. Cambridge Philos. Soc. 48 (1952), 106-109.
C. Preston, Gibbs States on Countable Sets, Cambridge University Press, London, 1974.
U.A. Rozikov, Description of limit Gibbs measures for λ-models on the Bethe lattice, Siberian Math. J. 39 (1998), 373-380.
U.A. Rozikov, Gibbs Measures on Cayley Trees, World Scientific, Singapore, 2013.
A.N. Shiryaev, Probability, Nauka, Moscow, 1980.
P.N. Timonin, Inhomogeneity-induced second order phase transitions in the Potts models on hierarchical lattices, JETP 99 (2004), 1044-1053.
F.Y. Wu, The Potts model, Rev. Mod. Phys. 54 (1982), 235-268.
[]
[ "Accelerated design of Fe-based soft magnetic materials using machine learning and stochastic optimization", "Accelerated design of Fe-based soft magnetic materials using machine learning and stochastic optimization" ]
[ "Yuhao Wang \nDepartment of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA\n", "Yefan Tian \nDepartment of Physics and Astronomy\nTexas A&M University\n77843College StationTXUSA\n", "Tanner Kirk \nDepartment of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA\n", "Omar Laris \nDepartment of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n", "Joseph H Ross Jr\nDepartment of Physics and Astronomy\nTexas A&M University\n77843College StationTXUSA\n\nDepartment of Materials Science and Engineering\nTexas A&M University\n77843College StationTXUSA\n", "Ronald D Noebe \nMaterials and Structures Division\nNASA Glenn Research Center\n44135ClevelandOHUSA\n", "Vladimir Keylin \nMaterials and Structures Division\nNASA Glenn Research Center\n44135ClevelandOHUSA\n", "Raymundo Arróyave \nDepartment of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA\n\nDepartment of Materials Science and Engineering\nTexas A&M University\n77843College StationTXUSA\n" ]
[ "Department of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA", "Department of Physics and Astronomy\nTexas A&M University\n77843College StationTXUSA", "Department of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA", "Department of Materials Science and Engineering\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Physics and Astronomy\nTexas A&M University\n77843College StationTXUSA", "Department of Materials Science and Engineering\nTexas A&M University\n77843College StationTXUSA", "Materials and Structures Division\nNASA Glenn Research Center\n44135ClevelandOHUSA", "Materials and Structures Division\nNASA Glenn Research Center\n44135ClevelandOHUSA", "Department of Mechanical Engineering\nTexas A&M University\n77843College StationTXUSA", "Department of Materials Science and Engineering\nTexas A&M University\n77843College StationTXUSA" ]
[]
Machine learning was utilized to efficiently boost the development of soft magnetic materials. The design process includes building a database composed of published experimental results, applying machine learning methods on the database, identifying the trends of magnetic properties in soft magnetic materials, and accelerating the design of next-generation soft magnetic nanocrystalline materials through the use of numerical optimization. Machine learning regression models were trained to predict magnetic saturation (B S ), coercivity (H C ) and magnetostriction (λ), with a stochastic optimization framework being used to further optimize the corresponding magnetic properties. To verify the feasibility of the machine learning model, several optimized soft magnetic materials -specified in terms of compositions and thermomechanical treatments -have been predicted and then prepared and tested, showing good agreement between predictions and experiments, proving the reliability of the designed model. Two rounds of optimization-testing iterations were conducted to search for better properties.
10.1016/j.actamat.2020.05.006
[ "https://arxiv.org/pdf/2002.05225v1.pdf" ]
68,190,637
2002.05225
b666ea22ab7b4ac17e4b797d7dd3283a67a90c41
Accelerated design of Fe-based soft magnetic materials using machine learning and stochastic optimization

4 Feb 2020

Yuhao Wang (Department of Mechanical Engineering, Texas A&M University, College Station, TX 77843, USA)
Yefan Tian (Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA)
Tanner Kirk (Department of Mechanical Engineering, Texas A&M University, College Station, TX 77843, USA)
Omar Laris (Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA)
Joseph H. Ross, Jr. (Department of Physics and Astronomy; Department of Materials Science and Engineering, Texas A&M University, College Station, TX 77843, USA)
Ronald D. Noebe (Materials and Structures Division, NASA Glenn Research Center, Cleveland, OH 44135, USA)
Vladimir Keylin (Materials and Structures Division, NASA Glenn Research Center, Cleveland, OH 44135, USA)
Raymundo Arróyave (Department of Mechanical Engineering; Department of Materials Science and Engineering, Texas A&M University, College Station, TX 77843, USA)

Keywords: machine learning, soft magnetic, nanocrystalline, materials design

Abstract. Machine learning was utilized to efficiently boost the development of soft magnetic materials. The design process includes building a database composed of published experimental results, applying machine learning methods on the database, identifying the trends of magnetic properties in soft magnetic materials, and accelerating the design of next-generation soft magnetic nanocrystalline materials through the use of numerical optimization. Machine learning regression models were trained to predict magnetic saturation (B_S), coercivity (H_C) and magnetostriction (λ), with a stochastic optimization framework being used to further optimize the corresponding magnetic properties.
To verify the feasibility of the machine learning model, several optimized soft magnetic materials, specified in terms of compositions and thermomechanical treatments, were predicted and then prepared and tested, showing good agreement between predictions and experiments and demonstrating the reliability of the designed model. Two rounds of optimization-testing iterations were conducted to search for better properties.

Introduction

Motivation

The pursuit of increased efficiency in energy conversion and transformation requires a new generation of energy materials. Soft magnetic materials are capable of rapidly switching their magnetic polarization under relatively small magnetic fields. They typically have small intrinsic coercivity and are used primarily to enhance or channel the flux produced by an electric current. These alloys are used in a large number of electromagnetic distribution, conversion, and generation devices, such as transformers, converters, inductors, motors, generators, and even sensors. In the current materials science community, the accelerated discovery and design of new energy materials has gained considerable attention in light of the many societal and environmental challenges we currently face. Soft magnetic materials are particularly important as essential elements of electromagnetic energy transformation technologies. For example, the power transformer is a critical component of the solar energy conversion system, whose performance is ultimately limited by the magnetic properties of the materials used to build the cores. In 1988, Yoshizawa et al. presented a new nanocrystalline soft magnetic material referred to as FINEMET, which exhibits extraordinary soft magnetic performance [1]. This alloy was prepared by partially crystallizing an amorphous Fe-Si-B alloy with minor additions of Cu and Nb.
This unusual combination of chemistry and processing conditions led to an ultrafine grain structure in an amorphous matrix, resulting in excellent soft magnetic properties. The soft magnetic properties of FINEMET-type alloys relevant to electromagnetic energy conversion devices are a unique combination of low energy losses, low magnetostriction, and high magnetic saturation, up to 1.3 T. This was achieved through an ultrafine composite microstructure of cubic DO3-structured Fe-Si grains with grain sizes of 10-15 nm in a continuous amorphous matrix, providing a new path for designing next-generation soft magnetic materials.

FINEMET-type soft magnetic materials

The target material system in this work is FINEMET-type soft magnetic nanocrystalline alloys, whose properties are categorized into two groups, intrinsic and extrinsic. Intrinsic properties include magnetic saturation (B_S), magnetocrystalline anisotropy (K_1), magnetostriction (λ), and Curie temperature (T_C). K_1 and λ indirectly influence the hysteretic behavior (B-H loop) for each type of core material by influencing the coercivity and core losses of the material. Extrinsic properties include permeability (µ), susceptibility (χ), coercivity (H_C), remanence (M_r), and core losses (P_cv). These are influenced not only by the microstructure, but also by the geometry of the material, the different forms of anisotropy, and the switching frequency of the applied fields [2]. Most of these soft magnetic properties can be obtained from the material's hysteresis loop, known as the B-H curve, shown in Fig. 1(a), where B is the flux density generated by an electromagnetic coil of the given material as a function of the applied magnetic field strength, H. From this curve the following terms can be defined: (a) Coercivity (H_C) is the intensity of the applied magnetic field required to reduce the residual flux density to zero after the magnetization of the sample has been driven to saturation.
Thus, coercivity measures the resistance of a ferromagnetic material to being demagnetized. (b) Magnetic saturation (B_S) is the limit to which the flux density can be generated by the core as the domains in the material become fully aligned. It can be determined directly from the hysteresis loop at high fields. Large values of flux density are desirable since most applications need a device that is light in mass and/or small in volume. (c) Permeability (µ = B/H = 1 + χ) is the parameter that describes the flux density, B, produced by a given applied field, H. Permeability can vary over many orders of magnitude and should be optimized for a given application. For example, EMI filters usually require large values to produce substantial changes in magnetic flux density in small fields. For other applications, such as filter inductors, permeability does not necessarily need to be high, but it needs to be constant so that the core does not saturate readily. (d) Core loss is one of the most essential properties of the material, as it is a direct measure of the heat generated by the magnetic material in AC applications. It is the area swept out by the hysteresis loop, which should be minimized to provide a high energy efficiency for the core. Contributions to the core loss include hysteretic sources from local and uniform anisotropies and eddy currents at high frequencies. Maximizing B_S and minimizing H_C are the most important design objectives for most applications requiring soft magnetic materials and were therefore the design goals in this study.

Figure 1: (c) Schematic representation of the random anisotropy model for grains embedded in an ideally soft ferromagnetic matrix. The double arrows indicate the randomly fluctuating anisotropy axes; the hatched area represents the ferromagnetic correlation volume determined by the exchange length L_ex. Reprinted from Ref. [3].
Since µ heavily depends on the application and can range over several orders of magnitude, even for a fixed composition, depending on secondary processing conditions, it was not a parameter that was optimized or considered in this study. Based on the nature of FINEMET-type nanocrystalline alloys, several constraints have been incorporated in this study. The magnetic transition metal element has been set to Fe in this work, and other elements, such as Co and Ni, are excluded because at relatively small additions they will tend to decrease B_S. The composition of Fe ranged from 60% to 90%. The percentages of the remaining elements in total varied from 10% to 40%. Although the early transition metal element is Nb in current commercial FINEMET alloys, other elements, such as Zr, Hf, Ta, Mo, or even combinations of different early transition metal elements, were considered. In commercial alloys, the metalloids B and Si are added to promote glass formation in the precursor, and we also allowed for P. The noble metal elements are selected from Cu, Ag, or Au, serving as nucleating agents for the ferromagnetic nanocrystalline phase. The random anisotropy model [4] provides a concise and explicit picture for understanding the soft magnetic properties of nanocrystalline ferromagnetic materials, such as FINEMET. As illustrated in Fig. 1, the microstructure is characterized by a random distribution of structural units or grains, with a scale D, in a ferromagnetic matrix with an effective magnetic anisotropy. For a finite number N of grains within the ferromagnetic correlation volume (V = L_{ex}^3), the corresponding average anisotropy constant ⟨K_1⟩ is given by

\langle K_1 \rangle \approx \frac{K_1}{\sqrt{N}} = K_1 \left(\frac{D}{L_{ex}}\right)^{3/2},   (1)

which is determined by the statistical fluctuations from averaging over the grains. If there are no other anisotropies, the coercivity H_C and the magnetic saturation B_S are directly related to the average anisotropy constant ⟨K_1⟩ by

H_C = p_c \frac{\langle K_1 \rangle}{B_S},   (2)

where p_c is a dimensionless pre-factor.
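The D⁶ dependence of H_C discussed in the text follows from (1) and (2) once the exchange length is treated self-consistently. A short derivation sketch, using the standard form L_ex = (A/⟨K₁⟩)^{1/2} with exchange stiffness A, which is implicit in the model but not written out here:

```latex
L_{ex} = \sqrt{A/\langle K_1\rangle}
\;\Rightarrow\;
\langle K_1\rangle = K_1\,D^{3/2}\,L_{ex}^{-3/2}
               = K_1\,D^{3/2}\left(\frac{\langle K_1\rangle}{A}\right)^{3/4}
\;\Rightarrow\;
\langle K_1\rangle = \frac{K_1^{4}\,D^{6}}{A^{3}},
\qquad
H_C = p_c\,\frac{\langle K_1\rangle}{B_S}
    = p_c\,\frac{K_1^{4}\,D^{6}}{B_S\,A^{3}}
\;\propto\; D^{6}.
```

Solving the self-consistency relation for ⟨K₁⟩ and inserting the result into (2) is what turns the (D/L_ex)^{3/2} scaling of (1) into the sixth-power grain-size law.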
These relations were initially derived for coherent magnetization rotation in conventional fine-particle systems. In the regime D < L_ex, however, they also apply for domain wall displacements. Accordingly, coercivity was shown to vary with grain size as H_C ∝ D⁶ for very fine-grained materials (Fig. 1(b)). For large grain sizes, it shows the typical 1/D dependence, and thus good soft magnetic properties require large grain sizes [5]. However, for small grain sizes, H_C shows an extraordinary D⁶-dependence, which provides another path to realizing excellent soft magnetic performance, through the generation of nanocrystalline microstructures [6,7].

FINEMET materials design

Machine learning, as one of the most popular and efficient statistical techniques, has enormous potential to bring the discovery and design of soft magnetic alloys to the next level. In this work, machine learning was used as the primary technique to expand the boundaries of the Fe-based FINEMET-type material space and design the next generation of soft magnetic materials. As a first step, we built an experimental database compiled from 76 journal articles published starting in 1988. The database was carefully curated to only include data entries in the nanocrystalline regime. Feature engineering was applied so that the original inputs were transformed into more general atomic properties. We built separate models to predict each magnetic property, and the number of predictive features was reduced to between 5 and 20 for each property. Various machine learning algorithms were applied to the selected data sets, and the predictions were evaluated using 20-fold cross-validation based on the coefficient of determination (R²). A stochastic optimization framework was used to guide the design of new and better soft magnetic nanocrystalline materials.
Data set overview

To start the soft magnetic materials design process, the first step was to build a database of alloy compositions, thermal treatments, and other types of secondary processing, with the relevant measurements of the various properties. Due to the limitation of not being able to access the original data directly, it was challenging to collect and organize all the data from the literature to interpret the relations between processing, compositions, structures, and properties. Nevertheless, it is a valuable task to create, maintain, and preserve such a data set to share with the materials science community for collaborative research purposes. The sources of our data set are the open literature and patent information from published experimental work. All publications assembled in the database are listed in Table 1. The essential components of the database are chemical compositions, thermal processing conditions applied to the amorphous precursor ribbon, and soft magnetic properties. Data contained in plots were extracted using WebPlotDigitizer [83]. The soft magnetic properties of nanocrystalline materials are heavily dependent on the chemical composition, and the different elements in the composition can be split into four different categories: magnetic transition metal (MTM), early transition metal (ETM), post-transition metal (PTM), and late transition metal (LTM), as shown in Fig. 2. MTM includes Fe, Co, and Ni, but in this first attempt to model behavior we focused on Fe-based materials; therefore, Co- and Ni-containing alloys were not considered in the database. ETM elements normally act as grain refiners, which help slow down the diffusion process during the thermal crystallization step and help maintain a fine grain size. PTM elements form the glassy phase.
In addition, Si serves the critical role of producing the Fe₃Si crystalline phase in FINEMET, which has a negative magnetostriction and reduces the overall magnetostriction of the alloy by balancing out the positive magnetostriction in the glassy phase. LTM elements like Cu, Ag, and Au can act as nucleants, which form clusters in the glassy phase responsible for nucleating a high density of grains. The distribution of Fe in the FINEMET data set is shown in Fig. 3(a); the dominating composition is 73.5 at.%, which is the original atomic percentage of Fe in FINEMET. The distribution of nucleant elements is shown in Figs. 3(b) and 3(c). Only 33 data entries contain Au, which is around 2% of the overall entry count. Cu occupies the majority of our studied compositions, and most of the entries contain 1% of Cu. For ETM elements, the distribution of our study is shown in Figs. 3(h), 3(j), 3(k), and 3(l). The majority of our studied compositions contain Nb as a grain refiner, and most of them have a Nb fraction around 3%. The distribution of PTM elements is shown in Figs. 3(d), 3(e), 3(f), 3(g), and 3(i). B occurs in most of the entries and sometimes appears along with P or Ge. The dominant composition for B is 9%. Si also occurs most of the time because of the need to generate the Fe₃Si crystalline phase with a negative magnetostriction and to serve as a potential glass former.

Table 2: Features in the database.
Chemical elements: Fe, Si, C, Al, B, P, Ga, Ge, Cu, Ag, Au, Zn, Ti, V, Cr, Zr, Nb, Mo, Hf, Ta, W, Ce, Pr, Gd, U.
Experimental measurements: annealing temperature, annealing time, primary crystallization onset, primary crystallization peak, secondary crystallization peak, longitudinal annealing field, transverse annealing field, ribbon thickness, magnetic saturation (B_S), coercivity (H_C), permeability (µ), magnetostriction (λ), core loss, electrical resistivity (ρ), Curie temperature (T_C), grain size (D).
We can also visualize the distribution of selected magnetic properties of interest in Fig. 4. The distribution of coercivity is heavily right-skewed, so we applied a log transformation to make it approximate a normal distribution. It is also evident that people tend to report good results, such as high magnetic saturation (larger than 1.5 T) and low magnetostriction (close to 0), rather than "unattractive" results. However, in terms of the machine learning process, the unattractive results can be just as beneficial to the overall process. In the course of constructing the database, several difficulties, which afflict most materials design problems, need to be clearly stated. First, we chose FINEMET-type nanocrystalline alloys as the material design space, and there is a vast composition parameter space where few researchers have done measurements. Several clusters of data in the material database represent trendy materials, which are only a small percentage of all candidate material compositions. Second, given the different research habits of various groups and the diversity of equipment, some researchers did not clarify everything, such as the manner in which magnetic saturation was defined. Finally, not all publications include every soft magnetic property of interest. For example, compared to coercivity, magnetic saturation is more difficult to measure, which leads to the fact that we have many data points for coercivity and fewer for magnetic saturation. As a result, considering that the primary target is to optimize both coercivity and magnetic saturation, it is helpful to split the database into separate sets, each with a different magnetic property as the objective. All features are listed in Table 2. The features mainly contain two different categories: chemical components and experimental measurements. The purpose of the study was to evaluate nanocrystalline FINEMET-type alloys. These alloys are generally produced by partial crystallization of an amorphous precursor by an annealing treatment, which also significantly increases the complexity of the problem. To ensure data consistency, some data points in the data sets were modified by the following procedures:

1. Data which were missing an annealing temperature or annealing time, all as-quenched data, and all data processed below room temperature were removed.
2. Annealing temperatures were rounded to every 5th degree Celsius.
3. Annealing times were rounded to the nearest hour or half hour, depending on the magnitude of the value.
4. Data points out of the nanocrystalline regime, i.e. grain diameters over 60 nm, were removed.
5. Any features which were unused after data reduction were removed.
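As a sketch, the cleaning rules above can be encoded as a small filter. The column names and the uniform half-hour rounding grid below are illustrative assumptions, not the paper's actual schema:

```python
# Illustrative cleaning pass implementing rules 1-4 (hypothetical column names).
rows = [
    {"T_anneal_C": 541.0, "t_anneal_h": 1.02, "D_nm": 12.0},  # kept, then rounded
    {"T_anneal_C": None,  "t_anneal_h": 1.00, "D_nm": 10.0},  # rule 1: missing temperature
    {"T_anneal_C": 15.0,  "t_anneal_h": 2.00, "D_nm": 15.0},  # rule 1: below room temperature
    {"T_anneal_C": 553.0, "t_anneal_h": 0.52, "D_nm": 75.0},  # rule 4: grain size > 60 nm
]

def clean(records, room_temp_c=25.0):
    out = []
    for r in records:
        if r["T_anneal_C"] is None or r["t_anneal_h"] is None:
            continue                                   # rule 1: missing processing info
        if r["T_anneal_C"] < room_temp_c:
            continue                                   # rule 1: processed below room T
        if r["D_nm"] is not None and r["D_nm"] > 60.0:
            continue                                   # rule 4: outside nanocrystalline regime
        r = dict(r)
        r["T_anneal_C"] = 5.0 * round(r["T_anneal_C"] / 5.0)  # rule 2: nearest 5 deg C
        r["t_anneal_h"] = 0.5 * round(r["t_anneal_h"] / 0.5)  # rule 3: half-hour grid
        out.append(r)
    return out

cleaned = clean(rows)
```

Rule 5 (dropping unused feature columns) would then be applied once per table rather than per row.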
These alloys are generally produced by partial crystallization of an amorphous precursor by an annealing treatment, which also significantly increases the complexity of the problem. To ensure data consistency, some data points in the data sets were modified by the following procedures: (a) (b) (c) (d) (g) (j) (e) (f) (h) (i) (k) (l) 1. Data which were missing an annealing temperature or annealing time, all as-quenched data, and all data processed below room temperature were removed. 2. Annealing Temperatures were rounded to every 5th degree Celsius. 3. Annealing Times were rounded to the nearest hour or half hour depending on the magnitude of the value. 4. Data points out of the nanocrystalline regime. -i.e., grain diameters over 60 nm, were removed. 5. Any features, which are unused after data reduction were removed. Machine learning model The next stage in our analysis was to build a series of predictive models using machine learning techniques to relate the magnetic properties with chemical components and processing conditions. The approach consisted of two main steps: feature selection and model selection. Feature selection We employed a five step process to identify features that could be removed in order to simplify the model, without incurring a significant loss of information. The procedure includes Specifically, step c) utilizes the Pearson correlation coefficient to identify pairs of collinear features. For each pair above the specified threshold (in absolute value), it finds one of the variables to be removed. Steps d) and e), required a supervised learning problem with labels to estimate the importance of features and in this work, a gradient boosting machine implemented in the LightGBM library [85] was utilized. A gradient boosting machine works by utilizing multiple "weak learners" (in our case, decision trees) and combining them into a strong learner. 
Trees are constructed in a greedy manner sequentially and the subsequent predictors can learn from the mistakes of the previous predictors. In a single tree, a decision node splits data into two parts each time based on one of the features, and the relative rank (i.e. depth) of a feature can be used to assess the relative importance of that feature with respect to the target value. Features used at the top level of the tree have a larger contribution to the final prediction based on the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. In a gradient boosting machine, one can average the importance of the features over a sequence of decision trees to reduce the variance of the estimation in order to use it for feature selection [86]. For step d), features with zero importance were removed. Step e) builds off the feature importance from step d) and removes the lowest important features not needed to reach a specified cumulative total feature importance specified by users. For example, if we input a cumulative total feature importance value 0.99, it can find the lowest important features that are not needed to reach 99% of the total feature importance and remove them. After the feature selection procedure, the remaining features are shown in Table 3. There are 13 features for coercivity; 6 features for Curie temperature; 6 features for grain size; 10 features for magnetic saturation; 7 features for magnetostriction and 6 features for Permeability. The following machine learning models were built based on the selected feature sets for each property. Machine learning algorithms and results Five different machine learning algorithms including linear regression, support vector machines, decision trees, k-nearest neighbors and random forest, were utilized and compared with each other in the process of building predictive models [86]. 
The comparison of the coefficient of determination (R²) scores from 20-fold cross-validation is shown in Fig. 5. Note that we performed a natural log transformation on coercivity to fix its skewness. Based on the R² score, it is evident that the random forest model is the best. Random forest regression (RFR) [87] is an ensemble of regression trees, induced from bootstrap samples of the training data, using random feature selection in the tree induction process. The prediction is then made by averaging the outputs of the ensemble of trees. Random forest generally exhibits a substantial performance improvement over single-tree classifiers, such as CART and C4.5. It yields a generalization error rate that compares favorably to AdaBoost, yet is more robust to noise [88]. Predicted values of magnetic saturation, coercivity, and magnetostriction from the random forest models are compared with actual experimental values in Fig. 6.

Table 3: Remaining features after feature selection for different materials properties. T_{c,0}, T_{c,1}, and T_{c,2} represent the primary crystallization onset, primary crystallization peak, and secondary crystallization peak, respectively. ΔT_0 and ΔT_1 are the differences of T_{c,0} and T_{c,1} with the annealing temperature, respectively.

Examining the regression data more closely, it is apparent that the RFR model underestimates coercivity within the low-coercivity region. This is potentially due to the high sensitivity of coercivity to the processing conditions. Furthermore, for all three properties, the data are not perfectly uniformly distributed across the entire range of interest. The discrepancy between predictions and measurements likely arises from a lack of data within the corresponding regions.
To achieve this objective, we performed two rounds of optimization using the differential evolution algorithm. Experimental measurements conducted after the first round of optimization were added to the data set to improve the results in the second round. For the first round, the input space of the optimization was the combination of features in the coercivity, magnetostriction, and magnetic saturation models shown in Table 3. For the second round, the input space was the combination of features in the coercivity and magnetic saturation models.

Differential Evolution

Differential evolution (DE) [89] is a stochastic, population-based method useful for global optimization problems. It maintains NP D-dimensional parameter vectors

x_{i,G}, i = 1, 2, ..., NP,    (3)

as a population for each generation G. The initial population is chosen randomly, assuming a uniform probability distribution. DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector, an operation called mutation. The mutated solution is then mixed with other candidate solutions to create a trial candidate. In this study, the "best1bin" strategy was utilized. Two members of the population are randomly chosen, and their difference is used to form the mutant

µ_{i,G+1} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}),    (4)

where r1, r2, r3 ∈ {1, 2, ..., NP} are distinct random indices and F ∈ [0, 2] is a constant parameter that controls the amplitude of the differential variation (x_{r2,G} − x_{r3,G}). To increase the diversity of the perturbed parameter vectors, a crossover is introduced: the trial candidate

ν_{i,G+1} = (ν_{1i,G+1}, ν_{2i,G+1}, ..., ν_{Di,G+1})    (5)

is formed, where each ν_{ji,G+1} (j = 1, 2, ..., D) is determined by a binomial distribution.
A random number in [0, 1) is generated; if it is less than the user-specified recombination constant, ν_{ji,G+1} is loaded from µ_{ji,G+1}, otherwise it is loaded from the original candidate x_{ji,G}. It is ensured that ν_{i,G+1} receives at least one parameter from µ_{i,G+1}. The choice between the trial candidate ν_{i,G+1} and the original candidate x_{i,G} is made by the greedy criterion: once the trial candidate is built, its fitness is assessed, and if the trial is better than the original candidate, it takes its place. If it is also better than the best overall candidate, it replaces that value too.

Optimization Results

Our primary goal in designing improved Fe-based soft magnetic alloys is to minimize the core loss to help reduce energy waste during operation; a secondary goal is to maximize the magnetic saturation. Both design targets contribute to high energy efficiency. Properties such as coercivity and magnetostriction, which directly affect the core loss, serve as our targets for loss minimization. To ensure the existence of the Fe-Si phase, a constraint was employed requiring the atomic percentage of Si to be no less than 3%.

Table 4: First round multi-objective strategy problem formulation, where T_a is the annealing temperature (K) and t_a is the annealing time (s). Constraints: Si > 3%; λ < 3 (×10^-6); Ta, Mo, Nb, Zr could be constrained to zero in certain cases.

The first choice is to formulate the problem as a single-objective optimization with magnetic saturation as the objective function. To satisfy the prerequisite of low core loss, we reformulated the other two objectives as constraints that restrict the design space while the objective function is maximized: ln(coercivity) (with H_C in A/m) must not exceed C_0, and magnetostriction must not exceed M_0 (×10^-6). The second choice is to formulate the problem as a multi-objective optimization.
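The best1bin mutation, binomial crossover, and greedy selection described above, applied to the first-choice single-objective formulation (maximize B_S subject to a ln(H_C) bound), can be sketched in a few lines of plain numpy. Everything below is illustrative: the two surrogate functions are hypothetical smooth stand-ins for the trained random forest models, the bounds describe a normalized two-feature toy space, and the constraint is enforced with a simple penalty term.

```python
import numpy as np

# Hypothetical smooth surrogates standing in for the trained RFR models;
# x is a 2-vector of normalized design features.
def bs_model(x):      # magnetic saturation (T), peaks at x = (0.6, 0.4)
    return 1.5 - (x[0] - 0.6) ** 2 - (x[1] - 0.4) ** 2

def ln_hc_model(x):   # ln(coercivity), grows away from x = (0.5, 0.5)
    return -1.0 + 4.0 * np.sum((x - 0.5) ** 2)

C0, PENALTY = -0.5, 1e3

def objective(x):
    """Maximize Bs (i.e. minimize -Bs) subject to ln(Hc) < C0,
    with the constraint folded in as a penalty."""
    return -bs_model(x) + PENALTY * max(0.0, ln_hc_model(x) - C0)

def de_minimize(f, bounds, NP=20, F=0.8, CR=0.7, gens=80, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    D = len(lo)
    pop = lo + rng.random((NP, D)) * (hi - lo)      # uniform initial population
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(NP):
            # best1bin mutation (indices not deduplicated, for brevity)
            r1, r2 = rng.choice(NP, size=2, replace=False)
            mutant = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            cross = rng.random(D) < CR               # binomial crossover
            cross[rng.integers(D)] = True            # >= 1 gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                         # greedy criterion
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

x_best, v_best = de_minimize(objective, bounds=[(0, 1), (0, 1)])
```

With these toy surrogates the feasible optimum sits at x = (0.6, 0.4) with B_S = 1.5 T, and the loop reliably converges near it; the real runs instead evaluate the trained models over the composition and heat-treatment space of Table 4.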
We only reformulate magnetostriction as a constraint and define a composite objective function to be minimized:

V = −α1 · B_S + α2 · ln(H_C),    (6)

where α1 and α2 weight the importance of each property to balance the conflicting objectives. The first round of optimization emphasized achieving a low coercivity, and the second round emphasized achieving a high magnetic saturation. For the first round, the optimization ran on the composition space of Fe, Si, Cu, Ta, Mo, Nb, Zr, B, and P, further constrained so that Si exceeded 3% to ensure the existence of the Fe3Si phase. We tried a total of four different strategies across the single-objective and multi-objective approaches. In the single-objective methods, C_0 was chosen to be −1.5 or −0.5 and M_0 was set to 3. In the multi-objective methods, M_0 was also set to 3, and two weight combinations were explored in our calculations: (α1 = α2 = 1) and (α1 = 4, α2 = 1). For the second round, we added the experimental results generated after the first round of optimization to our database and re-trained the machine learning model. The second round of optimization ran on a smaller composition space of Fe, Si, Cu, Mo, Nb, B, Ge, and P, again with Si larger than 3%. In this round, our model ran only on coercivity and magnetic saturation because of the high correlation between coercivity and magnetostriction. For the second round, we tried the single-objective approach and focused on maximizing magnetic saturation with C_0 set to two separate values, 0.5 or 0. The problem formulations of both the first and second round optimizations are shown in Tables 4 to 6. Fig. 7 shows the trade-off surface plots for the results of all the optimization methods we tried, including both single-objective and multi-objective methods. Fig.
7(a)-(c) are the plots of experimental measurements from the database, and 7(d)-(g) are the optimization results achieved by the RFR and DE methods. When there are two or more objectives, solutions rarely exist that optimize all of them at once: the objectives are normally measured in different units, and any improvement in one comes at the loss of another [90].

Table 5 constraints: Si > 3%; λ < 3 (×10^-6); ln(Hc) < -1.5 or 0.5; Ta, Mo, Nb, Zr could be constrained to zero in certain cases. Table 6 constraints: Si > 3%; Fe > 75%; ln(Hc) < 0 or 0.5; Mo, Ge, P could be constrained to zero in certain cases.

It can be seen in Fig. 7(d)-(g) that there is a systematic trade-off between coercivity and magnetostriction on the one hand and magnetic saturation on the other: increasing coercivity and magnetostriction generally led to an increase in magnetic saturation. The solutions marked by numbers and shown in Table 7 and Table 8 are identified as optimum solutions based on the trade-off surface described by the ln(coercivity) vs. magnetic saturation plot, as these two quantities are defined as our main objectives. Further selection within the optimum set could be based on different application scenarios and different weighting strategies for the two competing aspects.

Experimental validation

To experimentally validate our machine learning model, several predicted compositions near the Pareto front of the ln(coercivity) vs. magnetic saturation plots were synthesized. For the first round, we chose three points, No. 14, No. 17, and No. 19, from the region of intermediate coercivity and magnetic saturation shown in Figure 7(d). For the second round, we chose one point from each of the three point clusters, namely No. 2, No. 4, and No. 5, to probe the behavior of the different regions of point segregation in Figure 7(g). All alloys were melted from elemental constituents, remelted several times to ensure homogeneity, and then cast into cigar-shaped ingots weighing approximately 60 g.
The ingots were then used as melt stock for melt spinning with an Edward Buehler HV melt spinner, using a planar flow casting process and a wheel speed of 25.9 m/s. The cast ribbons were approximately 19.5 microns thick and ~16.5 mm wide. The compositions of all melt-spun ribbons were confirmed by inductively coupled plasma atomic emission spectroscopy (ICP-AES). The ribbon was wound into small cores, each wrapped with a piece of copper wire to hold the core together, and heat-treated in argon at the specified times and temperatures after first pulling a vacuum to approximately 1×10^-7 torr. The heating rate was 3 °C/min, and samples were cooled at a rate of 8 °C/min after the specified treatment. From the resulting B-H loops shown in Figure 7(h) and (i), we determined all magnetic properties, including B_S and H_C. The comparison of predicted and experimental properties of selected samples from both the first and second iterations is shown in Table 9. The compositions are the experimentally measured values of the prepared samples, and the heat treatment times and temperatures are the same as in the model predictions listed in Tables 7 and 8. It should be noted from Figure 7(h) and (i) that sample No. 19 in the first round and No. 2 in the second round perform similarly to commercial Finemet-like alloys, which shows that our approach is effective in identifying other compositions with very good properties. We collected a substantial portion of the reported experimental data, shown in Fig. 7(a)-(c) as Ashby plots. Although there are a large number of data points describing the ln(coercivity)-magnetic saturation property space in Fig. 7(a), it can still be observed that the high-B_S, low-H_C area (top left corner) is completely empty. The ultimate goal of the material design is to breach this boundary and reach the target area. To improve the machine learning model we need more data, and an efficient way to obtain it is to explore the property space that is missing, if at all possible.
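Selecting the "compositions near the Pareto front" discussed above amounts to keeping the non-dominated points of the trade-off plot. A minimal sketch of such a filter, assuming both columns are to be minimized (so magnetic saturation enters negated) and using hypothetical coordinates:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when every column is minimized
    (use -Bs for magnetic saturation, ln(Hc) as-is)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= p everywhere and < p somewhere
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

points = [(-1.42, -0.36), (-1.47, 0.47), (-1.40, 0.50)]  # (-Bs, ln Hc), hypothetical
front = pareto_front(points)  # -> [0, 1]: the third point is dominated
```

Candidates surviving this filter still span a range of (B_S, H_C) trade-offs, which is where the application-specific weighting mentioned above comes in.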
Another potential strategy is to continue to iterate the design process: by feeding the experimental results of proposed compositions with better properties back into the model, it may be possible to keep iterating until the target properties are obtained. As a complex material system, soft magnetic nanocrystalline alloys usually contain several elements, which makes it difficult to decide which elements should be included. Several principles can be considered: (a) For applications of soft magnetic materials, the prices of the elements are a non-negligible factor. In our design process, the selection of the late transition metal is a typical issue due to the hefty price difference between Au and Cu: Cu costs about 6 USD/kg, Ag about 500 USD/kg, and Au over 40,000 USD/kg. (b) Where possible, we avoid including multiple elements that serve the same function in the alloy. (c) In this work, the only magnetic element included was Fe, which can provide a pure α-Fe3Si nanocrystalline phase. Based on these principles, we proposed the optimized compositions listed in Table 7 and Table 8 for potential experimental validation, in addition to the selected alloys confirmed in Table 9.

Summary

In this work, we built a general database for Fe-based FINEMET-type soft magnetic nanocrystalline alloys using experimental data from the open literature. Based on it, machine learning techniques were applied to analyze the statistical inference of the different features and then to build predictive models relating materials properties to material compositions and processing conditions. We chose the random forest model as our modeling tool due to its better performance compared with several other machine learning methods. An optimization process was then performed to establish and solve the inverse problem, that is, to find a suitable combination of element components and processing conditions that achieves minimum loss and maximum magnetic saturation.
Experimental validation has been applied to several of the predicted materials, which showed that the predicted novel materials can have similar performance to commercial Finemet-like alloys. However, as shown in Table 9, there are notable discrepancies between predicted and measured values, which probably arise because we were predicting in a high-dimensional space without enough data. We believe that with more data collected in future iterations, the errors will become smaller as the models become more robust. Furthermore, the collected data set and analysis procedure can create more insight into how to design the next generation of optimized Fe-based soft magnetic nanocrystalline alloys motivated by various applications. The data set is available on Github.

Figure 1: (a) B-H loop. (b) Coercivity H_C vs. grain size D for various soft magnetic metallic alloys.

by collecting available properties from the open literature. There are hundreds of accessible publications in the open literature, which describe different types of soft magnetic nanocrystalline alloys in single or a small range of alloy compositions given one or several types of

Figure 2: Elemental components of FINEMET-like soft magnetic alloys, including magnetic transition metal (MTM), early transition metal (ETM), post-transition metal (PTM), and late transition metal (LTM) elements.

Figure 3: Distribution of various element atomic percentages (at.%) in the database.

Figure 4: Distribution of selected magnetic properties in the database. Solid lines are kernel density estimates of the probability density function of the data entries.

[84]: (a) Remove the features that have over 50% missing values. (b) Remove the features containing only one unique value. (c) Identify collinear features and remove them. (d) Remove zero-importance features using the gradient boosting decision tree algorithm. (e) Remove cumulative low-importance features using the gradient boosting decision tree algorithm.
Figure 5: R² value of different machine learning algorithms.

Figure 6: Comparison between predicted values from the machine learning model and experimental values for (a) magnetic saturation, (b) coercivity, and (c) magnetostriction.

Table 4 layout: input space Si, Cu, Ta, Mo, Nb, Zr, B, P, T_a, t_a, ln(H_C), λ, B_S; objective function V = −α1·B_S + α2·ln(H_C); constraints as listed in the caption.

Figure 7: Trade-off surface of different combinations of magnetic properties. (a)-(c) are from the experimental data set and (d)-(g) are from the machine learning optimizations; (d)-(f) are for the first iteration and (g) is for the second iteration. n is the number of points contained in the plots. The selected results marked by index are shown in Table 7. (h) and (i) are experimental measurements of B-H loops for different samples, compared with a commercial Finemet-like alloy composition processed similarly to the experimental alloys investigated; (h) is for the first iteration and (i) is for the second iteration.

Table 5 layout: input space Si, Cu, Ta, Mo, Nb, Zr, B, P, T_a, t_a, ln(H_C), λ, B_S; objective function V = −B_S; constraints as listed in the caption.

Table 6 layout: input space Si, Cu, Mo, Nb, B, Ge, P, T_a, t_a, ln(H_C), B_S; objective function V = −B_S; constraints as listed in the caption.

Table 2: Table of all features from the soft magnetic data set.

Figure 5 data (R² by property and algorithm):

                     Random Forest  kNN   Decision Tree  SVM   Linear Regression
Magnetic saturation  0.86           0.80  0.82           0.58  0.57
Magnetostriction     0.82           0.79  0.80           0.56  0.60
Coercivity           0.76           0.68  0.60           0.30  0.32
Curie temperature    0.78           0.64  0.64           0.15  0.09
Grain size           0.72           0.56  0.55           0.35  0.34
Permeability         0.58           0.55  0.41           0.12  0.11

Table 5: First round single-objective strategy problem formulation.

Table 6: Second round problem formulation.

Table 7: Selected first round optimization results obtained by DE using the Random Forest model, where T_a is the annealing temperature (K) and t_a is the annealing time (s).
α1, α2, and C_0 are optimization parameters; "Inf" means there is no constraint on coercivity. The constraint on magnetostriction (M_0) is always set to 3.

Table 8 data:

Index  Fe     Si     Cu    Mo    Nb    B     Ge    P     T_a  t_a   ln(H_C)  B_S   C_0
1      76.23  11.95  0.30  0.00  2.25  8.83  0.41  0.04  785  2862  -0.37    1.42  0
2      76.31  11.96  0.33  0.21  2.21  8.99  0.00  0.00  783  2585  -0.36    1.42  0
3      76.66  11.88  0.46  0.00  2.25  8.70  0.00  0.04  783  2938  -0.35    1.42  0
4      76.97  11.51  0.45  0.00  2.29  8.78  0.00  0.00  663  3150  0.44     1.47  0.5
5      77.04  11.29  0.35  0.00  2.35  8.86  0.12  0.00  661  3078  0.44     1.47  0.5
6      76.62  11.43  0.49  0.00  2.43  8.58  0.44  0.01  662  3021  0.47     1.47  0.5
7      76.67  11.67  0.38  0.18  2.26  8.56  0.28  0.00  663  2766  0.47     1.47  0.5
8      76.88  11.43  0.42  0.14  2.34  8.80  0.00  0.00  662  3132  0.47     1.47  0.5
9      76.74  11.87  0.48  0.02  2.29  8.61  0.00  0.00  663  3201  0.47     1.47  0.5

Table 8: Selected second round optimization results obtained by DE using the Random Forest model.

Properties of the wound cores were determined by IEEE Standard 393-1991 (Standard for Test Procedures for Magnetic Materials); testing was performed at 1000 Hz.

Table 9: Predicted and measured properties of soft magnetic alloys from the first and second round of optimization. The index corresponds to the value in Tables 7 and 8. Compositions are the experimental values from samples. Heat treatment times and temperatures are referenced from Tables 7 and 8.

Table 1 (papers mined): Saad et al. [8] 2002; Kataoka et al. [9] 1989; Skorvanek et al. [10] 2002; Herzer et al. [11] 1990; Mitrovic et al. [12] 2002; Suzuki et al. [13] 1990; Marin et al. [14] 2002; Yoshizawa et al. [15] 1991; Zorkovska et al. [16] 2002; Suzuki et al. [17] 1991; Sulictanu et al. [18] 2002; Fujii et al. [19] 1991; Cremaschi et al. [20] 2002; Makino et al. [21] 1991; Chau et al. [22] 2003; Lim et al. [23] 1993; Ponpandian et al.
[24] 2003; Tomida et al. [25] 1994; Kwapulinski et al. [26] 2003; Makino et al. [27] 1994; Crisan et al. [28] 2003; Kim et al. [29] 1995; Sovak et al. [30] 2004; Inoue et al. [31] 1995; Cremaschi et al. [32] 2004; Vlasak et al. [33] 1997; Ohnuma et al. [34] 2005; Lovas et al. [35] 1998; Chau et al. [36] 2006; Grossinger et al. [37] 1999; Ohta et al. [38] 2007; Yoshizawa et al. [39] 1999; Lu et al. [40] 2008; Kopcewicz et al. [41] 1999; Pavlik et al. [42] 2008; Frost et al. [43] 1999; Makino et al. [44] 2009; Franco et al. [45] 1999; Ohnuma et al. [46] 2010; Turtelli et al. [47] 2000; Butvin et al. [48] 2010; Xu et al. [49] 2000; Lu et al. [50] 2010; Todd et al. [51] 2000; Makino et al. [52] 2011; Borrego et al. [53] 2000; Kong et al. [54] 2011; Kemeny et al. [55] 2000; Urata et al. [56] 2011; Ilinsky et al. [57] 2000; Makino et al. [58] 2012; Varga et al. [59] 2000; Sharma et al. [60] 2014; Vlasak et al. [61] 2000; Liu et al. [62] 2015; Zorkovska et al. [63] 2000; Wen et al. [64] 2015; Solyom et al. [65] 2000; Xiang et al. [66] 2015; Lovas et al. [67] 2000; Sinha et al. [68] 2015; Kwapulinski et al. [69] 2001; Wan et al. [70] 2016; Borrego et al. [71] 2001; Dan et al. [72] 2016; Franco et al. [73] 2001; Li et al. [74] 2017; Mazaleyrat et al. [75] 2001; Jiang et al. [76] 2017; Wu et al. [77] 2001; Li et al. [78] 2017; Borrego et al. [79] 2001; Jia et al. [80] 2018;
Gorria et al. [81] 2001; Cao et al. [82] 2018.

Table 1: List of all the soft magnetic papers from which the experimental data were mined.

References

[1] Y. Yoshizawa, S. Oguma, K. Yamauchi, New Fe-based soft magnetic alloys composed of ultrafine grain structure, J. Appl. Phys. 64 (10) (1988) 6044-6046.
[2] M. A. Willard, M. Daniil, Nanocrystalline soft magnetic alloys two decades of progress, in: Handb. Magn. Mater., Vol. 21, Elsevier, 2013, pp. 173-342.
[3] G. Herzer, Modern soft magnets: Amorphous and nanocrystalline materials, Acta Mater. 61 (3) (2013) 718-734.
[4] G. Herzer, Grain structure and magnetism of nanocrystalline ferromagnets, IEEE Trans. Magn. 25 (5) (1989) 3327-3329.
[5] F. Pfeifer, C. Radeloff, Soft magnetic Ni-Fe and Co-Fe alloys-some physical and metallurgical aspects, J. Magn. Magn. Mater. 19 (1-3) (1980) 190-207.
[6] A. Manaf, R. A. Buckley, H. A. Davies, New nanocrystalline high-remanence Nd-Fe-B alloys by rapid solidification, J. Magn. Magn. Mater. 128 (3) (1993) 302-306.
[7] E. F. Kneller, R. Hawig, The exchange-spring magnet: a new material principle for permanent magnets, IEEE Trans.
Magn. 27 (4) (1991) 3588-3560.
[8] A. Saad, V. Cremaschi, J. Moya, B. Arcondo, H. Sirkin, Crystallization process of Fe based amorphous alloys: Mechanical and magnetic properties, Phys. Status Solidi A 189 (3) (2002) 877-881.
[9] N. Kataoka, T. Matsunaga, A. Inoue, T. Masumoto, Soft magnetic properties of bcc Fe-Au-X-Si-B (X=early transition metal) alloys with fine grain structure, Mater. Trans. JIM 30 (11) (1989) 947-950.
[10] I. Skorvanek, P. Svec, J.-M. Greneche, J. Kovac, J. Marcin, R. Gerling, Influence of microstructure on the magnetic and mechanical behaviour of amorphous and nanocrystalline FeNbB alloy, J. Phys. Condens. Matter 14 (18) (2002) 4717.
[11] G. Herzer, Grain size dependence of coercivity and permeability in nanocrystalline ferromagnets, IEEE Trans. Magn. 26 (5) (1990) 1397-1402.
[12] N. Mitrović, S. Roth, J. Eckert, C. Mickel, Microstructure evolution and soft magnetic properties of Fe72−xNbxAl5Ga2P11C6B4 (x = 0, 2) metallic glasses, J. Phys. D 35 (18) (2002) 2247.
[13] K. Suzuki, N. Kataoka, A. Inoue, A. Makino, T. Masumoto, High saturation magnetization and soft magnetic properties of bcc Fe-Zr-B alloys with ultrafine grain structure, Mater. Trans. JIM 31 (8) (1990) 743-746.
[14] P. Marin, M. Lopez, A. Hernando, Y. Iqbal, H. A. Davies, M. R. J. Gibbs, Influence of Cr additions in magnetic properties and crystallization process of amorphous iron based alloys, J. Appl. Phys. 92 (1) (2002) 374-378.
[15] Y. Yoshizawa, K. Yamauchi, Magnetic properties of nanocrystalline Fe-based soft magnetic alloys, Mater. Res. Soc. Symp. Proc. 232 (1991) 183-194.
[16] A. Zorkovská, P. Petrovič, P. Sovák, J. Kováč, On the role of aluminum in Finemet, Czech. J. Phys. 52 (2) (2002) 163-166.
[17] K. Suzuki, A. Makino, A. Inoue, T. Masumoto, Soft magnetic properties of nanocrystalline bcc Fe-Zr-B and Fe-M-B-Cu (M=transition metal) alloys with high saturation magnetization, J. Appl. Phys. 70 (10) (1991) 6232-6237.
[18] N. Suliţanu, Nanostructure formation and soft magnetic properties evolution in Fe91−xWxB9 amorphous alloys, Mater. Sci. Eng. B 90 (1-2) (2002) 163-170.
[19] Y. Fujii, H. Fujita, A. Seki, T. Tomida, Magnetic properties of fine crystalline Fe-P-C-Cu-X alloys, J. Appl. Phys. 70 (10) (1991) 6241-6243.
[20] V. Cremaschi, A. Saad, J. Moya, B. Arcondo, H. Sirkin, Evolution of magnetic, structural and mechanical properties in FeSiBNbCuAlGe system, Physica B 320 (1-4) (2002) 281-284.
[21] A. Makino, K. Suzuki, A. Inoue, T. Masumoto, Low core loss of a bcc Fe86Zr7B6Cu1 alloy with nanoscale grain size, Mater. Trans. JIM 32 (6) (1991) 551-556.
[22] C. Nguyen, H. L. Nguyen, X. C. Nguyen, Q. T. Phung, V. V. Le, Influence of P substitution for B on the structure and properties of nanocrystalline Fe73.5Si15.5Nb3Cu1B7−xPx alloys, Physica B 327 (2-4) (2003) 241-243.
[23] S. H. Lim, W. K. Pi, T. H. Noh, H. J. Kim, I. K. Kang, Effects of Al on the magnetic properties of nanocrystalline Fe73.5Cu1Nb3Si13.5B9 alloys, J. Appl. Phys. 73 (10) (1993) 6591-6593.
[24] N. Ponpandian, A. Narayanasamy, K. Chattopadhyay, M. M. Raja, K. Ganesan, C. N. Chinnasamy, B. Jeyadevan, Low-temperature magnetic properties and the crystallization behavior of FINEMET alloy, J. Appl. Phys.
93 (10) (2003) 6182-6187.
[25] T. Tomida, Crystallization of Fe-Si-B-Ga-Nb amorphous alloy, Mater. Sci. Eng. A 179 (1994) 521-525.
[26] P. Kwapuliński, A. Chrobak, G. Haneczok, Z. Stoklosa, J. Rasek, J. Lelatko, Optimization of soft magnetic properties in nanoperm type alloys, Mater. Sci. Eng. C 23 (1-2) (2003) 71-75.
[27] A. Makino, K. Suzuki, A. Inoue, Y. Hirotsu, T. Masumoto, Magnetic properties and microstructure of nanocrystalline bcc Fe-MB (M=Zr, Hf, Nb) alloys, J. Magn. Magn. Mater. 133 (1-3) (1994) 329-333.
[28] O. Crisan, J. M. Le Breton, G. Filoti, Nanocrystallization of soft magnetic Finemet-type amorphous ribbons, Sens. Actuator A-Phys. 106 (1-3) (2003) 246-250.
[29] B. G. Kim, J. S. Song, H. S. Kim, Y. W. Oh, Magnetic properties of very high permeability, low coercivity, and high electrical resistivity in Fe87Zr7B5Ag1 amorphous alloy, J. Appl. Phys. 77 (10) (1995) 5298-5302.
[30] P. Sovák, G. Pavlík, M. Tauš, A. Zorkovská, I. Gościańska, J.
Balcerski, Influence of substitutions on crystallization and magnetic properties of Finemet-based nanocrystalline alloys and thin films, Czech. J. Phys. 54 (4) (2004) 261-264.
[31] A. Inoue, Y. Miyauchi, T. Masumoto, Soft magnetic Fe-Zr-Si-B alloys with nanocrystalline structure, Mater. Trans. JIM 36 (5) (1995) 689-692.
[32] V. Cremaschi, G. Sánchez, H. Sirkin, Magnetic properties and structural evolution of FINEMET alloys with Ge addition, Physica B 354 (1-4) (2004) 213-216.
[33] G. Vlasak, Z. Kaczkowski, P. Švec, P. Duhaj, Influence of heat treatment on magnetostrictions of Finemet Fe73.5Cu1Nb3Si13.5B9, Mater. Sci. Eng. A 226 (1997) 749-752.
[34] M. Ohnuma, K. Hono, T. Yanai, M. Nakano, H. Fukunaga, Y. Yoshizawa, Origin of the magnetic anisotropy induced by stress annealing in Fe-based nanocrystalline alloy, Appl. Phys. Lett. 86 (15) (2005) 152513.
[35] A. Lovas, L. F. Kiss, B. Varga, P. Kamasa, I. Balogh, I. Bakonyi, Survey of magnetic properties during and after amorphous-nanocrystalline transformation, J. Phys. IV 8 (PR2) (1998) 291-293.
[36] N. Chau, N. Q. Hoa, N. D. The, L.
V. Vu, The effect of Zn, Ag and Au substitution for Cu in Finemet on the crystallization and magnetic properties, J. Magn. Magn. Mater. 303 (2) (2006) e415-e418.
[37] R. Grössinger, R. Sato Turtelli, V. H. Duong, C. Kuss, C. Polak, G. Herzer, Temperature dependence of the magnetostriction in α-Fe100−xSix and FINEMET type alloys, in: Mater. Sci. Forum, Vol. 307, Trans Tech Publications, Switzerland, 1999, pp. 135-142.
[38] M. Ohta, Y. Yoshizawa, New high-Bs Fe-based nanocrystalline soft magnetic alloys, Jpn. J. Appl. Phys. 46 (6L) (2007) L477.
[39] Y. Yoshizawa, Magnetic properties and microstructure of nanocrystalline Fe-based alloys, in: Mater. Sci. Forum, Vol. 307, Trans Tech Publications, Switzerland, 1999, pp. 51-62.
[40] W. Lu, B. Yan, Y. Li, R. Tang, Structure and soft magnetic properties of V-doped Finemet-type alloys, J. Alloys Compd. 454 (1-2) (2008) L10-L13.
[41] M. Kopcewicz, A. Grabias, I. Škorvánek, J. Marcin, B. Idzikowski, Mössbauer study of the magnetic properties of nanocrystalline Fe80.5Nb7B12.5 alloy, J. Appl. Phys. 85 (8) (1999) 4427-4429.
[42] G. Pavlík, P. Sovák, V. Kolesár, K. Saksl, J.
Füzer, Structure and magnetic properties of Fe, Rev. Adv. Mater. Sci 18 (2008) 522-526. Evolution of structure and magnetic properties with annealing temperature in novel Al-containing alloys based on Finemet. M Frost, I Todd, H A Davies, M R J Gibbs, R V Major, J. Magn. Magn. Mater. 2031-3M. Frost, I. Todd, H. A. Davies, M. R. J. Gibbs, R. V. Major, Evolution of structure and magnetic properties with annealing temperature in novel Al-containing alloys based on Finemet, J. Magn. Magn. Mater. 203 (1-3) (1999) 85-87. New Fe-metalloids based nanocrystalline alloys with high B s of 1.9 T and excellent magnetic softness. A Makino, H Men, T Kubota, K Yubuta, A Inoue, J. Appl. Phys. 1057A. Makino, H. Men, T. Kubota, K. Yubuta, A. Inoue, New Fe-metalloids based nanocrystalline alloys with high B s of 1.9 T and excellent magnetic softness, J. Appl. Phys. 105 (7) (2009) 07A308. Magnetic properties and nanocrystallization of a Fe 63. V Franco, C F Conde, A Conde, J. Magn. Magn. Mater. 2031-35 Cr 10 Si 13.5 B 9 Cu 1 Nb 3 alloyV. Franco, C. F. Conde, A. Conde, Magnetic properties and nanocrystallization of a Fe 63.5 Cr 10 Si 13.5 B 9 Cu 1 Nb 3 alloy, J. Magn. Magn. Mater. 203 (1-3) (1999) 60-62. Stress-induced magnetic and structural anisotropy of nanocrystalline Fe-based alloys. M Ohnuma, T Yanai, K Hono, M Nakano, H Fukunaga, Y Yoshizawa, G Herzer, J. Appl. Phys. 108993927M. Ohnuma, T. Yanai, K. Hono, M. Nakano, H. Fukunaga, Y. Yoshizawa, G. Herzer, Stress-induced magnetic and structural anisotropy of nanocrystalline Fe-based alloys, J. Appl. Phys. 108 (9) (2010) 093927. Contribution of the crystalline phase Fe 100−x Si x to the temperature dependence of magnetic properties of FINEMET-type alloys. R S Turtelli, V H Duong, R Grossinger, M Schwetz, E Ferrara, N Pillmayr, IEEE Trans. Magn. 362R. S. Turtelli, V. H. Duong, R. Grossinger, M. Schwetz, E. Ferrara, N. 
Pillmayr, Contribution of the crystalline phase Fe 100−x Si x to the temperature dependence of magnetic properties of FINEMET-type alloys, IEEE Trans. Magn. 36 (2) (2000) 508-512. Effects of substitution of Mo for Nb on less-common properties of Finemet alloys. P Butvin, B Butvinová, J M Silveyra, M Chromčíková, D Janičkovič, J Sitek, P Švec, G Vlasák, J. Magn. Magn. Mater. 32220P. Butvin, B. Butvinová, J. M. Silveyra, M. Chromčíková, D. Janičkovič, J. Sitek, P.Švec, G. Vlasák, Effects of substitution of Mo for Nb on less-common properties of Finemet alloys, J. Magn. Magn. Mater. 322 (20) (2010) 3035-3040. Structure and magnetic properties of Fe 73.5 Ag 1 Nb 3 Si 13.5 B 9 alloy. H Xu, K He, L Cheng, Y Dong, X Xiao, Q Wang, J. Shanghai Univ. 42H. Xu, K.-y. He, L.-z. Cheng, Y.-d. Dong, X.-s. Xiao, Q. Wang, Structure and magnetic properties of Fe 73.5 Ag 1 Nb 3 Si 13.5 B 9 alloy, J. Shanghai Univ. 4 (2) (2000) 159-162. Microstructure and magnetic properties of Fe 72.5 Cu 1 M 2 V 2 Si 13. W Lu, J Fan, Y Wang, B Yan, J. Magn. Magn. Mater. 59MoW)) nanocrystalline alloysW. Lu, J. Fan, Y. Wang, B. Yan, Microstructure and magnetic properties of Fe 72.5 Cu 1 M 2 V 2 Si 13.5 B 9 (M=Nb, Mo, (NbMo), (MoW)) nanocrystalline alloys, J. Magn. Magn. Mater. 322 (19) (2010) 2935- 2937. Magnetic properties of ultrasoft-nanocomposite FeAlSiBNbCu alloys. I Todd, B J Tate, H A Davies, M R J Gibbs, D Kendall, R V Major, J. Magn. Magn. Mater. 215I. Todd, B. J. Tate, H. A. Davies, M. R. J. Gibbs, D. Kendall, R. V. Major, Magnetic properties of ultrasoft-nanocomposite FeAlSiBNbCu alloys, J. Magn. Magn. Mater. 215 (2000) 272-275. Low core losses and magnetic properties of Fe 85−86 Si 1−2 B 8 P 4 Cu 1 nanocrystalline alloys with high B for power applications. A Makino, T Kubota, K Yubuta, A Inoue, A Urata, H Matsumoto, S Yoshida, J. Appl. Phys. 1097A. Makino, T. Kubota, K. Yubuta, A. Inoue, A. Urata, H. Matsumoto, S. 
Yoshida, Low core losses and magnetic properties of Fe 85−86 Si 1−2 B 8 P 4 Cu 1 nanocrystalline alloys with high B for power applications, J. Appl. Phys. 109 (7) (2011) 07A302. Devitrification process of FeSiBCuNbX nanocrystalline alloys: Mössbauer study of the intergranular phase. J M Borrego, C F Conde, A Conde, V A Peña-Rodríguez, J M Greneche, J. Phys. Condens. Matter. 12378089J. M. Borrego, C. F. Conde, A. Conde, V. A. Peña-Rodríguez, J. M. Greneche, Devitrification process of FeSiBCuNbX nanocrystalline alloys: Mössbauer study of the intergranular phase, J. Phys. Condens. Matter 12 (37) (2000) 8089. High B s Fe 84−x Si 4 B 8 P 4 Cu x (x = 0-1.5) nanocrystalline alloys with excellent magnetic softness. F Kong, A Wang, X Fan, H Men, B Shen, G Xie, A Makino, A Inoue, J. Appl. Phys. 1097F. Kong, A. Wang, X. Fan, H. Men, B. Shen, G. Xie, A. Makino, A. Inoue, High B s Fe 84−x Si 4 B 8 P 4 Cu x (x = 0-1.5) nanocrystalline alloys with excellent magnetic softness, J. Appl. Phys. 109 (7) (2011) 07A303. Structure and magnetic properties of nanocrystalline soft ferromagnets. T Kemény, D Kaptás, L F Kiss, J Balogh, I Vincze, S Szabó, D L Beke, Hyperfine Interact. 1301-4T. Kemény, D. Kaptás, L. F. Kiss, J. Balogh, I. Vincze, S. Szabó, D. L. Beke, Structure and magnetic properties of nanocrystalline soft ferromagnets, Hyperfine Interact. 130 (1-4) (2000) 181-219. Fe-B-P-Cu nanocrystalline soft magnetic alloys with high B s. A Urata, H Matsumoto, S Yoshida, A Makino, J. Alloys Compd. 509A. Urata, H. Matsumoto, S. Yoshida, A. Makino, Fe-B-P-Cu nanocrystalline soft magnetic alloys with high B s , J. Alloys Compd. 509 (2011) S431-S433. On determination of volume fraction of crystalline phase in partially crystallized amorphous and nanocrystalline materials. A G Ilinsky, V V Maslov, V K Nozenko, A P Brovko, J. Mater. Sci. 3518A. G. Ilinsky, V. V. Maslov, V. K. Nozenko, A. P. 
Brovko, On determination of volume fraction of crystalline phase in partially crystallized amorphous and nanocrystalline materials, J. Mater. Sci. 35 (18) (2000) 4495-4500. Nanocrystalline soft magnetic Fe-Si-B-P-Cu alloys with high B of 1.8-1.9 T contributable to energy saving. A Makino, IEEE Trans. Magn. 484A. Makino, Nanocrystalline soft magnetic Fe-Si-B-P-Cu alloys with high B of 1.8-1.9 T contributable to energy saving, IEEE Trans. Magn. 48 (4) (2012) 1331-1335. Effective magnetic anisotropy and internal demagnetization investigations in soft magnetic nanocrystalline alloys. L K Varga, L Novák, F Mazaleyrat, J. Magn. Magn. Mater. 2101-3L. K. Varga, L. Novák, F. Mazaleyrat, Effective magnetic anisotropy and internal demagnetization investigations in soft magnetic nanocrystalline alloys, J. Magn. Magn. Mater. 210 (1-3) (2000) 25-30. Influence of microstructure on soft magnetic properties of low coreloss and high B s Fe 85 Si 2 B 8 P 4 Cu 1 nanocrystalline alloy. P Sharma, X Zhang, Y Zhang, A Makino, J. Appl. Phys. 11517P. Sharma, X. Zhang, Y. Zhang, A. Makino, Influence of microstructure on soft magnetic properties of low coreloss and high B s Fe 85 Si 2 B 8 P 4 Cu 1 nanocrystalline alloy, J. Appl. Phys. 115 (17) (2014) 17A340. Magnetostriction of the Fe 73.5 Cu 1 Ta 2 Nb 1 Si 13.5 B 9 alloy. G Vlasák, Z Kaczkowski, P Duhaj, J. Magn. Magn. Mater. 215G. Vlasák, Z. Kaczkowski, P. Duhaj, Magnetostriction of the Fe 73.5 Cu 1 Ta 2 Nb 1 Si 13.5 B 9 alloy, J. Magn. Magn. Mater. 215 (2000) 476-478. Investigation of microstructure and magnetic properties of Fe 81 Si 4 B 12−x P 2 Cu 1 M x (M = Cr, Mn and V; x = 0, 1, 2, 3) melt spun ribbons. W L Liu, Y G Wang, F G Chen, J. Alloys Compd. 622W. L. Liu, Y. G. Wang, F. G. Chen, Investigation of microstructure and magnetic properties of Fe 81 Si 4 B 12−x P 2 Cu 1 M x (M = Cr, Mn and V; x = 0, 1, 2, 3) melt spun ribbons, J. Alloys Compd. 622 (2015) 751-755. Structure and magnetic behaviour of Fe-Cu-Nb-Si-B-Al alloys. 
A Zorkovská, J Kováč, P Sovák, P Petrovič, M Konč, J. Magn. Magn. Mater. 215A. Zorkovská, J. Kováč, P. Sovák, P. Petrovič, M. Konč, Structure and magnetic behaviour of Fe-Cu- Nb-Si-B-Al alloys, J. Magn. Magn. Mater. 215 (2000) 492-494. Structure and magnetic properties of Si-rich FeAlSiBNbCu alloys. L Wen, Z Wang, Y Han, Y Xu, J. Non-Cryst. Solids. 411L.-x. Wen, Z. Wang, Y. Han, Y.-c. Xu, Structure and magnetic properties of Si-rich FeAlSiBNbCu alloys, J. Non-Cryst. Solids 411 (2015) 115-118. Study of Fe-Zr-U-B and Fe-Zr-U-Cu-B nanocrystalline alloys. A Sólyom, P Petrovič, P Marko, J Kováč, G Konczos, J. Magn. Magn. Mater. 215A. Sólyom, P. Petrovič, P. Marko, J. Kováč, G. Konczos, Study of Fe-Zr-U-B and Fe-Zr-U-Cu-B nanocrystalline alloys, J. Magn. Magn. Mater. 215 (2000) 482-483. Effect of Nb addition on the magnetic properties and microstructure of FePCCu nanocrystalline alloy. R Xiang, S X Zhou, B S Dong, Y G Wang, J. Mater. Sci. Mater. Electron. 266R. Xiang, S. X. Zhou, B. S. Dong, Y. G. Wang, Effect of Nb addition on the magnetic properties and microstructure of FePCCu nanocrystalline alloy, J. Mater. Sci. Mater. Electron 26 (6) (2015) 4091-4096. Saturation magnetization and amorphous Curie point changes during the early stage of amorphous-nanocrystalline transformation of a FINEMET-type alloy. A Lovas, L F Kiss, I Balogh, J. Magn. Magn. Mater. 215A. Lovas, L. F. Kiss, I. Balogh, Saturation magnetization and amorphous Curie point changes during the early stage of amorphous-nanocrystalline transformation of a FINEMET-type alloy, J. Magn. Magn. Mater. 215 (2000) 463-465. A correlation between the magnetic and structural properties of isochronally annealed Cu-free FINEMET alloy with composition Fe 72 B 19. A K Sinha, M N Singh, A Upadhyay, M Satalkar, M Shah, N Ghodke, S N Kane, L K Varga, Appl. Phys. A. 11812 Si 4.8 Nb 4A. K. Sinha, M. N. Singh, A. Upadhyay, M. Satalkar, M. Shah, N. Ghodke, S. N. Kane, L. K. 
Varga, A correlation between the magnetic and structural properties of isochronally annealed Cu-free FINEMET alloy with composition Fe 72 B 19.2 Si 4.8 Nb 4 , Appl. Phys. A 118 (1) (2015) 291-299. Optimisation of soft magnetic properties in Fe-Cu-X-Si 13 B 9 (X= Cr, Mo, Zr) amorphous alloys. P Kwapuliński, J Rasek, Z Stok Losa, G Haneczok, J. Magn. Magn. Mater. 2342P. Kwapuliński, J. Rasek, Z. Stok losa, G. Haneczok, Optimisation of soft magnetic properties in Fe- Cu-X-Si 13 B 9 (X= Cr, Mo, Zr) amorphous alloys, J. Magn. Magn. Mater. 234 (2) (2001) 218-226. Development of FeSiBNbCu nanocrystalline soft magnetic alloys with high B s and good manufacturability. F Wan, A He, J Zhang, J Song, A Wang, C Chang, X Wang, J. Electron. Mater. 4510F. Wan, A. He, J. Zhang, J. Song, A. Wang, C. Chang, X. Wang, Development of FeSiBNbCu nanocrys- talline soft magnetic alloys with high B s and good manufacturability, J. Electron. Mater. 45 (10) (2016) 4913-4918. Nanocrystallite compositions for Al-and Mo-containing Finemet-type alloys. J M Borrego, A Conde, I Todd, M Frost, H A Davies, M R J Gibbs, J S Garitaonandia, J M Barandiaran, J M Greneche, J. Non-Cryst. Solids. 2871-3J. M. Borrego, A. Conde, I. Todd, M. Frost, H. A. Davies, M. R. J. Gibbs, J. S. Garitaonandia, J. M. Barandiaran, J. M. Greneche, Nanocrystallite compositions for Al-and Mo-containing Finemet-type alloys, J. Non-Cryst. Solids 287 (1-3) (2001) 125-129. Z Dan, Y Zhang, A Takeuchi, N Hara, F Qin, A Makino, H Chang, Effect of substitution of Cu by Au and Ag on nanocrystallization behavior of Fe. 833 Si 4 B 8 P 4 Cu 0.7 soft magnetic alloyZ. Dan, Y. Zhang, A. Takeuchi, N. Hara, F. Qin, A. Makino, H. Chang, Effect of substitution of Cu by Au and Ag on nanocrystallization behavior of Fe 83.3 Si 4 B 8 P 4 Cu 0.7 soft magnetic alloy, J. Alloys Compd. 683 (2016) 263-270. Mo-containing Finemet alloys: Microstructure and magnetic properties. V Franco, C F Conde, A Conde, P Ochin, J. Non-Cryst. Solids. 2871-3V. Franco, C. 
F. Conde, A. Conde, P. Ochin, Mo-containing Finemet alloys: Microstructure and magnetic properties, J. Non-Cryst. Solids 287 (1-3) (2001) 366-369. Core loss analysis of finemet type nanocrystalline alloy ribbon with different thickness. Z Li, K Yao, D Li, X Ni, Z Lu, Prog. Nat. Sci-Mater. 275Z. Li, K. Yao, D. Li, X. Ni, Z. Lu, Core loss analysis of finemet type nanocrystalline alloy ribbon with different thickness, Prog. Nat. Sci-Mater. 27 (5) (2017) 588-592. Thermo-magnetic transitions in two-phase nanostructured materials. F Mazaleyrat, L Varga, IEEE Trans. Magn. 374F. Mazaleyrat, L. Varga, Thermo-magnetic transitions in two-phase nanostructured materials, IEEE Trans. Magn. 37 (4) (2001) 2232-2235. Study on soft magnetic properties of Finemet-type nanocrystalline alloys with Mo substituting for Nb. D Jiang, B Zhou, B Jiang, B Ya, X Zhang, Phys. Status Solidi B. 214101700074D. Jiang, B. Zhou, B. Jiang, B. Ya, X. Zhang, Study on soft magnetic properties of Finemet-type nanocrystalline alloys with Mo substituting for Nb, Phys. Status Solidi B 214 (10) (2017) 1700074. Microstructure and properties of nanocrystalline Fe-Zr-Nb-B soft magnetic alloys with low magnetostriction. Y Q Wu, T Bitoh, K Hono, A Makino, A Inoue, Acta Mater. 4919Y. Q. Wu, T. Bitoh, K. Hono, A. Makino, A. Inoue, Microstructure and properties of nanocrystalline Fe-Zr-Nb-B soft magnetic alloys with low magnetostriction, Acta Mater. 49 (19) (2001) 4069-4077. Soft magnetic Fe-Si-B-Cu nanocrystalline alloys with high Cu concentrations. Y Li, X Jia, Y Xu, C Chang, G Xie, W Zhang, J. Alloys Compd. 722Y. Li, X. Jia, Y. Xu, C. Chang, G. Xie, W. Zhang, Soft magnetic Fe-Si-B-Cu nanocrystalline alloys with high Cu concentrations, J. Alloys Compd. 722 (2017) 859-863. Nb glassy alloys. J M Borrego, C F Conde, A Conde, Structural relaxation processes in FeSiB-Cu. Nb, X), X = Mo, V, Zr304J. M. Borrego, C. F. Conde, A. 
Conde, Structural relaxation processes in FeSiB-Cu(Nb, X), X = Mo, V, Zr, Nb glassy alloys, Mater. Sci. Eng. A 304 (2001) 491-494. Role of Mo addition on structure and magnetic properties of the Fe 85 Si 2 B 8 P 4 Cu 1 nanocrystalline alloy. X Jia, Y Li, G Xie, T Qi, W Zhang, J. Non-Cryst. Solids. 481X. Jia, Y. Li, G. Xie, T. Qi, W. Zhang, Role of Mo addition on structure and magnetic properties of the Fe 85 Si 2 B 8 P 4 Cu 1 nanocrystalline alloy, J. Non-Cryst. Solids 481 (2018) 590-593. Correlation between structure, magnetic properties and MI effect during the nanocrystallisation process of FINEMET type alloys. P Gorria, V M Prida, M Tejedor, B Hernando, M L Sánchez, Physica B. 299P. Gorria, V. M. Prida, M. Tejedor, B. Hernando, M. L. Sánchez, Correlation between structure, magnetic properties and MI effect during the nanocrystallisation process of FINEMET type alloys, Physica B 299 (3-4) (2001) 215-224. Local structure, nucleation sites and crystallization behavior and their effects on magnetic properties of Fe 81 Si x B 10 P 8−x Cu 1 (x=0-8). C C Cao, Y G Wang, L Zhu, Y Meng, X B Zhai, Y D Dai, J K Chen, F M Pan, Sci. Rep. 811243C. C. Cao, Y. G. Wang, L. Zhu, Y. Meng, X. B. Zhai, Y. D. Dai, J. K. Chen, F. M. Pan, Local structure, nucleation sites and crystallization behavior and their effects on magnetic properties of Fe 81 Si x B 10 P 8−x Cu 1 (x=0-8), Sci. Rep. 8 (1) (2018) 1243. . A Rohatgi, Webplotdigitizer , Version 4.1)A. Rohatgi, Webplotdigitizer (Version 4.1) (2018). URL https://automeris.io/WebPlotDigitizer . W Koehrsen, W. Koehrsen, feature-selector (2018). URL https://github.com/WillKoehrsen/feature-selector Lightgbm: A highly efficient gradient boosting decision tree. G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, T.-Y Liu, Advances in Neural Information Processing Systems. G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, T.-Y. 
Liu, Lightgbm: A highly efficient gradient boosting decision tree, in: Advances in Neural Information Processing Systems, 2017, pp. 3146-3154. Scikit-learn: Machine learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, J. Mach. Learn. Res. 12F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, J. Mach. Learn. Res. 12 (2011) 2825-2830. Random forests. L Breiman, Machine learning. 451L. Breiman, Random forests, Machine learning 45 (1) (2001) 5-32. Using random forest to learn imbalanced data. C Chen, A Liaw, L Breiman, 110BerkeleyUniversity of CaliforniaC. Chen, A. Liaw, L. Breiman, Using random forest to learn imbalanced data, University of California, Berkeley 110 (2004) 1-12. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. R Storn, K Price, J. Global Optim. 114R. Storn, K. Price, Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (4) (1997) 341-359. Multi-objective optimization in material design and selection. M F Ashby, Acta Mater. 481M. F. Ashby, Multi-objective optimization in material design and selection, Acta Mater. 48 (1) (2000) 359-369.
[ "https://github.com/WillKoehrsen/feature-selector" ]
Test-time collective prediction

Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan
University of California, Berkeley

arXiv:2106.12012

Abstract

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points. Agents wish to benefit from the collective expertise of the full set of agents to make better predictions than they would individually, but may not be willing to release their data or model parameters. In this work, we explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model without relying on external validation, model retraining, or data pooling. Our approach takes inspiration from the literature in social science on human consensus-making. We analyze our mechanism theoretically, showing that it converges to inverse mean-squared-error (MSE) weighting in the large-sample limit. To compute error bars on the collective predictions we propose a decentralized Jackknife procedure that evaluates the sensitivity of our mechanism to a single agent's prediction. Empirically, we demonstrate that our scheme effectively combines models with differing quality across the input space. The proposed consensus prediction achieves significant gains over classical model averaging, and even outperforms weighted averaging schemes that have access to additional validation data.
1 Introduction

Large-scale datasets are often collected from diverse sources, by multiple parties, and stored across different machines. In many scenarios centralized pooling of data is not possible due to privacy concerns, data ownership, or storage constraints. The challenge of doing machine learning in such distributed and decentralized settings has motivated research in areas of federated learning [Konečný et al., 2015, McMahan et al., 2017], distributed learning [Dean et al., 2012, Gupta and Raskar, 2018], as well as hardware and software design [Zaharia et al., 2016, Moritz et al., 2018]. While the predominant paradigm in distributed machine learning is collaborative learning of one centralized model, this level of coordination across machines at the training stage is sometimes not feasible. In this work, we instead aim for collective prediction at test time without posing any specific requirement at the training stage.

Combining the predictions of pre-trained machine learning models has been considered in statistics and machine learning in the context of ensemble methods, including bagging [Breiman, 1996a] and stacking [Wolpert, 1992]. In this work, we explore a new perspective on this aggregation problem and investigate whether insights from the social sciences on how humans reach a consensus can help us design more effective aggregation schemes that fully take advantage of each model's individual strength. At a high level, when a panel of human experts comes together to make collective decisions, they exchange information and share expertise through a discourse and then weigh their subjective beliefs accordingly. This gives rise to a dynamic process of consensus finding that is decentralized and does not rely on any external judgement.
We map this paradigm to a machine learning setting where experts correspond to pre-trained machine learning models, and show that it leads to an appealing mechanism for test-time collective prediction that does not require any data sharing, communication of model parameters, or labeled validation data.

Our work. We build on a model for human consensus finding that defines an iterative process of opinion pooling based on mutual trust scores among agents [DeGroot, 1974]. Informally speaking, a mutual trust score reflects an agent's willingness to adopt another agent's opinion. Thus, each individual's impact on the final judgment depends on how much it is trusted by the other agents. Mapping the DeGroot model to our learning scenario, we develop a scheme where each agent uses its own local training data to assess the predictive accuracy of the other agents' models and to determine how much they should be trusted. This assessment of mutual trust is designed to be adaptive to the prediction task at hand, using only a subset of the local data in the area around the current test point. Then, building on these trust scores, a final judgment is reached through an iterative procedure in which agents repeatedly share their updated predictions.

This aggregation scheme is conceptually appealing as it mimics human consensus finding, and it exhibits several practical advantages over existing methods. First, it is decentralized and does not require agents to release their data or model parameters. This can be beneficial in terms of communication costs and attractive from a privacy perspective, as it allows agents to keep their data local and control access to their model. Second, it does not require hold-out validation data to construct or train the aggregation function. Thus, there is no need to collect additional labeled data and coordinate shared access to a common database.
Third, it does not require any synchronization across agents during the training stage, so agents are free to choose their preferred model architecture and train their models independently.

A crucial algorithmic feature of our procedure is its adaptivity to the test point at hand, which allows it to deal with inherent variations in the predictive quality across models in different regions of the input space. Figure 1 illustrates how our mechanism adaptively upweights models with lower error depending on the test point's location in the input space. In fact, we prove theoretically that our aggregation procedure can recover inverse-MSE weighting in the large-sample limit, which is known to be optimal for variance reduction of unbiased estimators [Bates and Granger, 1969]. To assess our procedure's sensitivity to the predictions of individual agents we propose a decentralized Jackknife algorithm to compute error bars for the consensus predictions with respect to the collection of observed agents. These error bars offer an attractive target of inference since they are agnostic to how individual models have been trained, can be evaluated without additional model evaluations, and allow one to diagnose the vulnerability of the algorithm to malicious agents.

On the empirical side, we demonstrate the efficacy of our mechanism through extensive numerical experiments across different learning scenarios. In particular, we illustrate the mechanism's advantages over model averaging as well as model selection, and demonstrate that it consistently outperforms alternative non-uniform combination schemes that have access to additional validation data, across a wide variety of models and datasets.

Related work

The most common approach for combining expert judgment is to aggregate and average individual expert opinions [Hammitt and Zhang, 2013].
In machine learning this statistical approach of equally-weighted averaging is also known as the "mean rule" and underlies the popular bagging technique [Breiman, 1996a, Buja and Stuetzle, 2006, Efron, 2014]. It corresponds to an optimal aggregation technique for dealing with uncertainty in settings where individual learners are homogeneous and the local datasets are drawn independently from the same underlying distribution. While principled, and powerful due to its simplicity, an unweighted approach cannot account for heterogeneity in the quality, or the expertise, of the base learners.

To address this issue, performance-weighted averaging schemes have been proposed in various contexts. For determining an appropriate set of weights, these approaches typically rely on calibrating variables [Aspinall, 2010] to detect differences in predictive quality among agents. In machine learning these calibrating variables correspond to labeled cross-validation samples, while in areas such as expert forecasting and risk analysis they take the form of a set of agreed-upon seed questions [Cooke, 1991]. These weights can be updated over time using statistical information about the relative predictive performance [Rogova, 2008] or the posterior model probability [Wasserman et al., 2000]. More recently there have been several applications to classification that focus on dynamic selection techniques [Cruz et al., 2018b], where the goal is to select the locally best classifier from a pool of classifiers on the fly for every test point. At their core, these approaches all rely on constructing a good proxy for local accuracy in a region of the feature space around the test point and using this proxy to weight or rank the candidate models. Various versions of distance and accuracy metrics have been proposed in this context [e.g., Woods et al., 1997, Didaci et al., 2005, Cruz et al., 2018a].
However, all these methods rely crucially on shared access to labeled validation data to determine the weights. Similarly, aggregation rules such as stacking [Breiman, 1996b, Coscrato et al., 2020] or meta-models [Albardan et al., 2020], both of which have proved their value empirically, require a central trusted agency with additional data to train an aggregation function on top of the predictor outputs. A related model-based aggregation method, the mixture-of-experts model [Jacobs et al., 1991], requires joint training of all models as well as data pooling. To the best of our knowledge, there is no approach in the literature that can perform adaptive performance weighting without relying on a central agent to validate the individual models. Notably, our approach differs from general frameworks such as model parameter mixing [Zhang et al., 2013] and model fusion [Leontev et al., 2018, Singh and Jaggi, 2020] in that agents are not required to release their models.

Outside machine learning, the question of how to combine opinions from different experts to arrive at a better overall outcome has been studied for decades in risk analysis [Hanea et al., 2018, Clemen and Winkler, 1999] and management science [Morris, 1977]. Simplifying the complex mechanism of opinion dynamics, the celebrated DeGroot model [DeGroot, 1974] offers an appealing abstraction for reasoning mathematically about human consensus finding. It has been popular in the analysis of convergent beliefs in social networks, and it can be regarded as a formalization of the Delphi technique, which has been used to forecast group judgments related to the economy, education, healthcare, and public policy [Dalkey, 1972]. Extensions of the DeGroot model have been proposed to incorporate more sophisticated socio-psychological features that may be involved when individuals interact, such as homophily [Hegselmann and Krause, 2002], competitive interactions [Altafini, 2013], and stubbornness [Friedkin and Bullo, 2017].
2 Preliminaries

We are given K independently trained models and aim to aggregate their predictions at test time. Specifically, we assume there are K agents, each of which holds a local training dataset D_k of size n_k and a model f_k : X → Y. We do not make any prior assumption about the quality or structure of the individual models, other than that they aim to solve the same regression task. At test time, let x_* ∈ X denote the data point that we would like to predict. Each agent k ∈ [K] has her own prediction for the test point, corresponding to f_k(x_*). Our goal is to design a mechanism M that combines these predictions into an accurate single prediction,

    p^*(x_*) = M(f_1(x_*), ..., f_K(x_*)).

We focus on the squared loss as a measure of predictive quality. The requirements we pose on the aggregation mechanism M are: (i) agents should not be required to share their data or model parameters with other agents, or any external party; (ii) the procedure should not assume access to additional labeled data for an independent performance assessment or for training a meta-model.

2.1 The DeGroot consensus model

The consensus model proposed by DeGroot [1974] specifies how agents in a group influence each other's opinion on the basis of mutual trust. Formally, we denote the trust of agent i towards the belief of agent j as τ_{ij} ∈ (0, 1], with ∑_{j∈[K]} τ_{ij} = 1 for every i. Then, DeGroot defines an iterative procedure whereby each agent i repeatedly updates her belief p_i as

    p_i^{(t+1)} = ∑_{j=1}^{K} τ_{ij} p_j^{(t)}.    (1)

The procedure is initialized with the initial beliefs p_i^{(0)} and run until p_i^{(t)} = p_j^{(t)} for every pair i, j ∈ [K], at which point a consensus is deemed to have been reached. We denote the consensus belief by p^*. The DeGroot model has a family resemblance to the influential PageRank algorithm [Page et al., 1999], which is designed to aggregate information across webpages in a decentralized manner.
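To make the pooling update (1) concrete, the following is a minimal numerical sketch (not the authors' implementation) of the iteration: each round multiplies the belief vector by the row-stochastic trust matrix, and with all-positive trust scores the beliefs contract to a common value.

```python
import numpy as np

def degroot_consensus(T, p0, tol=1e-10, max_iter=10_000):
    """Repeat the pooling update p_i <- sum_j tau_ij * p_j (eq. (1))
    until all agents hold numerically identical beliefs; return that value.

    T  : (K, K) row-stochastic trust matrix with positive entries.
    p0 : (K,) vector of initial beliefs, e.g. f_1(x_*), ..., f_K(x_*).
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        p = T @ p                     # one round of opinion pooling
        if p.max() - p.min() < tol:   # consensus: all beliefs agree
            break
    return float(p.mean())

# Toy example: with uniform mutual trust the consensus is the plain average.
T_uniform = np.full((3, 3), 1.0 / 3.0)
p_star = degroot_consensus(T_uniform, [1.0, 2.0, 6.0])  # -> 3.0
```

With a non-uniform trust matrix the same iteration converges to a weighted average of the initial beliefs, with weights determined by the stationary distribution discussed next.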
Similar to the consensus in DeGroot, the PageRank corresponds to a steady state of equation (1), where the weights $\tau_{ij}$ are computed based on the fraction of outgoing links pointing from page i to page j.

Computing a consensus

The DeGroot model can equivalently be thought of as specifying a weight for each agent's prediction in a linear opinion pool [Stone, 1961]. This suggests another way to compute the consensus prediction. Let us represent the mutual trust scores in the form of a stochastic matrix, $T = \{\tau_{ij}\}_{i,j=1}^{K}$, which defines the state transition probability matrix of a Markov chain with K states. The stationary state $w$ of this Markov chain, satisfying $wT = w$, defines the relative weight of each agent's initial belief in the final consensus

$$p^* = \sum_{j=1}^{K} w_j\, p_j^{(0)}. \qquad (2)$$

The Markov chain is guaranteed to be irreducible and aperiodic since all entries of T are positive. Therefore, a unique stationary distribution $w$ exists, and consensus will be reached from any initial state [Chatterjee and Seneta, 1977]. We refer to the weights $w$ as consensus weights. To compute the consensus prediction $p^*$ one can solve the eigenvalue problem for $w$ numerically via power iteration. Letting $T^* = \lim_{m \to \infty} T^m$, the rows of $T^*$ are identical and correspond to the weights $w$ that DeGroot assigns to the individual agents [Durrett, 2019, Theorem 5.6.6]. The consensus is thus computed as a weighted average over the initial beliefs.

DeGroot aggregation for collective test-time prediction

In this section we discuss the details of how we implement the DeGroot model outlined in Section 2.1 to define a mechanism $\mathcal{M}$ for aggregating predictions across machine learning agents. We then analyze the resulting algorithm theoretically, proving its optimality in the large-sample limit, and we outline how the sensitivity of the procedure with respect to individual agents can be evaluated in a decentralized fashion using a jackknife method.
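As a concrete illustration, the consensus computation of Section 2.1 can be carried out with a few lines of NumPy. This is a minimal sketch; the toy trust matrix and function names are our own illustrative choices, not from the paper:

```python
import numpy as np

def consensus_weights(T, iters=50):
    """Stationary distribution w with wT = w, found by power iteration.

    T is a row-stochastic trust matrix with strictly positive entries,
    so the associated Markov chain is irreducible and aperiodic and the
    stationary distribution is unique.
    """
    K = T.shape[0]
    w = np.full(K, 1.0 / K)       # start from the uniform distribution
    for _ in range(iters):
        w = w @ T                 # one step of the chain
    return w / w.sum()            # guard against round-off drift

def degroot_consensus(T, predictions, iters=50):
    """Weighted average of the initial beliefs, as in eq. (2)."""
    return consensus_weights(T, iters) @ np.asarray(predictions)

# Toy example: three agents with asymmetric mutual trust.
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
p0 = [1.0, 2.0, 3.0]
p_star = degroot_consensus(T, p0)
```

Since the consensus weights sum to one, the consensus always lies in the convex hull of the initial beliefs, consistent with Propositions 3.1 and 3.3 below.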
Algorithm 1 DeGroot Aggregation

1: Input: K agents with pre-trained models $f_1, \cdots, f_K$ and local data $D_k$, $k \in [K]$; neighborhood size N; test point $x^\star$.
2: construct trust scores (based on local accuracy):
3: for $i = 1, 2, \ldots, K$ do
4:   Construct the local validation dataset $D_i(x^\star)$ using the N nearest neighbors of $x^\star$ in $D_i$.
5:   Compute the accuracy $\mathrm{MSE}_{ij}(x^\star)$ of the other agents $j = 1, \ldots, K$ on $D_i(x^\star)$ according to (3).
6:   Evaluate the local trust scores $\{\tau_{ij}\}_{j \in [K]}$ by normalizing $\mathrm{MSE}_{ij}$ as in (4).
7: end for
8: find consensus:
9: Run pooling iterations (1), initialized at $p_j^{(0)} = f_j(x^\star)$ for all j, until a consensus $p^*$ is reached.
10: Return: Collective prediction $p^*(x^\star) = p^*$.

Local cross-validation and mutual trust scores

A key component of DeGroot's model for consensus finding is the notion of mutual trust between agents, built by an exchange of information in the form of a discourse. Given our machine learning setup with K agents, each of which holds a local dataset and a model $f_k$, we simulate this discourse by allowing query access to predictions from other models. This enables each agent to validate another agent's predictive performance on its own local data, which operationalizes a proxy for trustworthiness. An important requirement for our approach is that it should account for model heterogeneity and define trust in light of the test sample $x^\star$ we want to predict. Therefore, we design an adaptive method where the mutual trust $\tau_{ij}$ is reevaluated for every query via a local cross-validation procedure. Namely, agent i evaluates all other agents' predictive accuracy using a subset $D_i(x^\star) \subseteq D_i$ of the local data points that are most similar to the test point $x^\star$ [cf. Woods et al., 1997].
More precisely, agent i evaluates

$$\mathrm{MSE}_{ij}(x^\star) = \frac{1}{|D_i(x^\star)|} \sum_{(x, y) \in D_i(x^\star)} \big(f_j(x) - y\big)^2, \qquad (3)$$

locally for every agent $j \in [K]$, and then performs normalization to obtain the trust scores:

$$\tau_{ij} = \frac{1/\mathrm{MSE}_{ij}}{\sum_{j' \in [K]} 1/\mathrm{MSE}_{ij'}}. \qquad (4)$$

There are various ways one can define the subset $D_i(x^\star)$; see, e.g., Cruz et al. [2018b] for a discussion of relevant distance metrics in the context of dynamic classifier selection. Since we focus on tabular data in our experiments, we use the Euclidean distance and assume $D_i(x^\star)$ has fixed size. Other distance metrics, or alternatively a kernel-based approach, could readily be accommodated within the DeGroot aggregation procedure.

Algorithm procedure and basic properties

We now describe our overall procedure. We take the trust scores from the previous section and use them to aggregate the agents' predictions via the DeGroot consensus mechanism. See Algorithm 1 for a full description of the DeGroot aggregation procedure. In words, after each agent evaluates her trust in the other agents, she repeatedly pools their updated predictions using the trust scores as relative weights. The consensus to which this procedure converges is returned as the final collective prediction, denoted $p^*(x^\star)$. In the following we discuss some basic properties of the consensus prediction found by DeGroot aggregation. In the context of combining expert judgement, the seminal work by Lehrer and Wagner [1981] on social choice theory defined unanimity as a general property any such aggregation function should satisfy. Unanimity requires that when all agents have the same subjective opinion, the combined prediction should be no different. Algorithm 1 naturally satisfies this condition.

Proposition 3.1 (Unanimity). If all agents agree on the prediction, then the consensus prediction from Algorithm 1 agrees with the prediction of each agent: $p^*(x^\star) = f_i(x^\star)$ for every $i \in [K]$.
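The local cross-validation defining equations (3) and (4) can be sketched as follows; the helper names and the toy data are illustrative assumptions, not from the paper:

```python
import numpy as np

def local_validation_set(D_x, D_y, x_star, N):
    """N nearest neighbors of x_star in the agent's local data (Euclidean)."""
    dist = np.linalg.norm(D_x - x_star, axis=1)
    idx = np.argsort(dist)[:N]
    return D_x[idx], D_y[idx]

def trust_row(D_x, D_y, x_star, models, N):
    """One row of trust scores tau_{i,.}: inverse local MSE, normalized (eqs. 3-4)."""
    Vx, Vy = local_validation_set(D_x, D_y, x_star, N)
    mse = np.array([np.mean((f(Vx) - Vy) ** 2) for f in models])
    inv = 1.0 / np.maximum(mse, 1e-12)   # guard against a zero local MSE
    return inv / inv.sum()

# Toy data: near x* = 0.05 the labels are 0, so the constant-zero model
# earns (almost) all of the trust.
models = [lambda X: np.zeros(len(X)), lambda X: np.ones(len(X))]
D_x = np.array([[0.0], [0.1], [5.0], [5.1]])
D_y = np.array([0.0, 0.0, 1.0, 1.0])
tau = trust_row(D_x, D_y, np.array([0.05]), models, N=2)
```

Evaluating `trust_row` once per agent yields the full row-stochastic matrix $T$ used in the pooling iterations of Algorithm 1.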
In addition, our algorithm naturally preserves a global ordering of the models, and the consensus weight of each agent is bounded from below and above by the minimal and maximal trust she receives from any of the other agents.

Proposition 3.2 (Ranking-preserving). Suppose all agent rankings have the same order: for all $j_1, j_2 \in [K]$, if $\tau_{i j_1} \geq \tau_{i j_2}$ for some $i \in [K]$, then $\tau_{i' j_1} \geq \tau_{i' j_2}$ for all $i' \in [K]$. Then, Algorithm 1 finds a consensus $p^* = \sum_i w_i f_i(x^\star)$ where for pairs $j_1, j_2$ such that $\tau_{i j_1} \geq \tau_{i j_2}$, we have $w_{j_1} \geq w_{j_2}$.

Proposition 3.3 (Bounded by min and max trust). The final weight assigned to agent i lies between the minimal and maximal trust assigned to it by the set of agents: $\max_j \tau_{ji} \geq w_i \geq \min_j \tau_{ji}$.

Finally, Algorithm 1 recovers equally-weighted averaging whenever each agent receives a constant amount of combined total trust from the other agents. Thus, if all agents perform equally well on the different validation sets, averaging is naturally recovered. However, the trust does not necessarily need to be allocated uniformly; there may be multiple T that result in a uniform stationary distribution.

Proposition 3.4 (Averaging as a special case). If the columns of T sum to 1, then Algorithm 1 returns an equal weighting of the agents: $w_i = 1/K$ for $i = 1, \ldots, K$.

Together these properties serve as a basic sanity check that Algorithm 1 implements a reasonable consensus-finding procedure, from a decision-theoretic as well as an algorithmic perspective.

Optimality in a large-sample limit

In this section we analyze the large-sample behavior of the DeGroot consensus mechanism, showing that it is optimal under the assumption that agents are independent. For our large-sample analysis, we suppose that the agents' local datasets $D_k$ are drawn independently from (different) distributions $P_k$ over $\mathcal{X} \times \mathcal{Y}$. In order for our prediction task to be well-defined, we assume that the conditional distribution of Y given X is the same under all $P_k$.
In other words, the distributions $P_k$ are all covariate shifts of each other. For simplicity, we assume the distributions $P_k$ are all continuous and have the same support. We consider the large-sample regime in which each agent has a growing amount of data, growing at the same rate. That is, if $n = |D_1| + \cdots + |D_K|$ is the total number of training points, then $|D_k|/n \to c_k > 0$ as $n \to \infty$ for each agent $k = 1, \ldots, K$. Lastly, we require a basic consistency condition: for any compact set $A \subset \mathcal{X}$, $f_k \to f_k^*$ uniformly over $x \in A$ as $n \to \infty$, for some continuous function $f_k^*$.

Theorem 3.5 (DeGroot converges to inverse-MSE weighting). Let $x^\star \in \mathcal{X}$ be some test point. Assume that $P_k$ is supported in some ball of radius $\delta_0$ centered at $x^\star$, and that the first four conditional moments of Y given $X = x$ are continuous functions of x in this neighborhood. Next, suppose we run Algorithm 1 choosing $N = n^c$ nearest neighbors for some $c \in (0, 1)$. Let $\mathrm{MSE}_k^* = \mathbb{E}\,(Y - f_k^*(x^\star))^2$ denote the asymptotic MSE of model k at the test point $x^\star$, where the expectation is over a draw of Y from the distribution of $Y \mid X = x^\star$. Then, the DeGroot consensus mechanism yields weights

$$w_k \to w_k^* = \frac{1/\mathrm{MSE}_k^*}{1/\mathrm{MSE}_1^* + \cdots + 1/\mathrm{MSE}_K^*}.$$

In other words, in the large-sample limit DeGroot aggregation yields inverse-MSE weighting and thus recovers inverse-variance weighting for unbiased estimators, a canonical weighting scheme from classical statistics [Hartung et al., 2008] that is known to be the optimal way to linearly combine independent, unbiased estimates. This can be extended to our setting to show that DeGroot averaging leads to the optimal aggregation when we have independent agents, as stated next.

Theorem 3.6 (DeGroot is optimal for independent, unbiased agents). Assume that the conditional mean and variance of Y given $X = x$ are continuous functions of x.
For some $\delta > 0$, consider drawing $\tilde{X}$ uniformly from a ball of radius $\delta$ centered at $x^\star$, and $\tilde{Y}$ from the distribution of $Y \mid X = \tilde{X}$. Under this sampling distribution, suppose the residuals $\tilde{Y} - f_k^*(\tilde{X})$ from the agents' predictions have mean zero and each pair has correlation zero. Then the optimal weights,

$$\tilde{w} := \arg\min_{w \in \mathbb{R}^K : \|w\|_1 = 1} \mathbb{E}_{(\tilde{X}, \tilde{Y})}\Big(\tilde{Y} - \sum_{k \in [K]} w_k f_k^*(\tilde{X})\Big)^2,$$

approach the DeGroot weights: $\tilde{w} \to w^*$ as $\delta \to 0$.

In summary, Theorem 3.5 shows that the DeGroot weights asymptotically converge to a meaningful (sometimes provably optimal) aggregation of the models, locally for each test point $x^\star$. This is a stronger notion of adaptivity than that usually considered in the model-averaging literature.

Error bars for predictions

Finally, we develop a decentralized jackknife algorithm [Quenouille, 1949, Efron and Tibshirani, 1993] to estimate the standard error of the consensus prediction $p^*(x^\star)$. Our proposed procedure measures the impact of excluding a random agent from the ensemble and returns error bars that quantify how stable the consensus prediction is with respect to the observed collection of agents. Formally, let $p^*_{-i}(x^\star)$ denote the consensus reached by the ensemble after removing agent i. Then, the jackknife estimate of the standard error at $x^\star$ is

$$\widehat{SE}(x^\star) = \sqrt{\frac{K-1}{K} \sum_{i \in [K]} \big(p^*_{-i}(x^\star) - \bar{p}^*(x^\star)\big)^2}, \qquad (5)$$

where $\bar{p}^*(x^\star) = \frac{1}{K} \sum_{i=1}^{K} p^*_{-i}(x^\star)$ is the average delete-one prediction. In the collective prediction setting, this is an attractive target of inference because it can be computed in a decentralized manner, is entirely agnostic to the data collection or training mechanism employed by the agents, and requires no additional model evaluations beyond those already carried out in Algorithm 1. Furthermore, it allows one to diagnose the impact of a single agent on the collective prediction and thus assess the vulnerability of the algorithm to malicious agents. A detailed description of the DeGroot jackknife procedure can be found in Algorithm 2 in Appendix B.
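A minimal sketch of the delete-one jackknife in equation (5); function names and the uniform toy trust matrix are our own illustrative choices:

```python
import numpy as np

def consensus_weights(T, iters=100):
    """Stationary distribution of the row-stochastic trust matrix T."""
    w = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        w = w @ T
    return w / w.sum()

def degroot_jackknife_se(T, preds):
    """Delete-one-agent jackknife standard error of the consensus, eq. (5)."""
    preds = np.asarray(preds, dtype=float)
    K = len(preds)
    loo = np.empty(K)
    for i in range(K):
        keep = [j for j in range(K) if j != i]
        Ti = T[np.ix_(keep, keep)]
        Ti = Ti / Ti.sum(axis=1, keepdims=True)   # renormalize rows of T^(i)
        loo[i] = consensus_weights(Ti) @ preds[keep]
    return np.sqrt((K - 1) / K * np.sum((loo - loo.mean()) ** 2))

# Toy check: with uniform trust, each delete-one consensus is simply the
# mean of the remaining agents' predictions.
T = np.full((4, 4), 0.25)
se = degroot_jackknife_se(T, [1.0, 2.0, 3.0, 4.0])
```

Note that only the trust matrix and the agents' predictions at $x^\star$ enter the computation, so no model queries beyond those of the main aggregation are needed.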
Experiments

We investigate the efficacy of the DeGroot aggregation mechanism empirically for various datasets, partitioning schemes, and model configurations. We start with a synthetic setup to illustrate the strengths and limitations of our method. Then, we focus on real datasets to see how these gains surface in more practical applications with natural challenges, including data scarcity, heterogeneity of different kinds, and scalability. We compare our Algorithm 1 (DeGroot) to the natural baseline of equally-weighted model averaging (M-avg) and also to the performance of the individual models composing the ensemble. In Section 4.2 we include an additional comparison with two reference schemes that have access to shared validation data and a central agent that determines an (adaptive) weighting scheme, information DeGroot does not have.

Synthetic data

First, we investigate a synthetic two-dimensional setting where each agent's features are drawn from a multivariate Gaussian distribution, $x \sim \mathcal{N}(\mu_k, \Sigma_k)$. The true labeling function is given as $y = [1 + \exp(\alpha^\top x)]^{-1}$ and we add white noise of variance $\sigma_Y^2$ to the labels in the training data. Unless stated otherwise, we use K = 5 agents and let each agent fit a linear model to her local data. See Figure 4 in the Appendix for a visualization of the models learned by the individual agents, and how they accurately approximate the true labeling function in different regions of the input space. To evaluate the DeGroot aggregation procedure we randomly sample 200 test points from the uniform mixture distribution spanned by the K training data distributions, and compare the prediction of DeGroot to M-avg. We visualize the squared error of the collective prediction on the individual test points in Figure 2a, where we use $\xi := \alpha^\top x^\star$ as a summary of the location of the test point on the logistic curve. We observe that DeGroot consistently outperforms M-avg across the entire range of the feature space.
The exceptions are three singularities where the errors of the individual agents happen to cancel out under averaging. Overall, DeGroot achieves an MSE of 4.6e−4, an impressive reduction of over 50× compared to M-avg. Not shown in the figure is the performance of the best individual model, which achieves an MSE of 5e−2 and performs worse than the M-avg baseline. Thus, DeGroot outperforms an oracle model selection algorithm, and every agent strictly benefits from participating in the ensemble. The power of adaptive weighting such as that used in DeGroot is that it can trace out a nonlinear function given only linear models, whereas any static weighting (or model selection) scheme will always be bound to the best linear fit. In Figure 2b we show how individual agents' predictions improve with the number of pooling iterations performed in Step 9 of Algorithm 1. The influence of each local model on the consensus prediction at $x^\star$, and how it changes depending on the location of the test point, is illustrated in Figure 1 in the introduction. An interesting finding that we expand on in Appendix D.1 is that for the given setup the iterative approach of DeGroot is superior to the more naive approach of combining the individual trust scores or local MSE values into a single weight vector. In particular, DeGroot reduces the MSE by 20× compared to using the average trust scores as weights. In Figure 2c we visualize the error bars returned by our DeGroot jackknife procedure. As expected, the intervals are large in regions where only a small number of agents possess representative training data, and small in regions where there is more redundancy across agents. Finally, in Appendix D.1 we conduct further qualitative investigations of the DeGroot algorithm. First, we vary the covariance of the local data feature distribution.
We find that the gain of DeGroot over M-avg becomes smaller as the covariance is increased; models become less specialized and there is little for DeGroot to exploit. At the other extreme, if the variance in the data is extremely small, the adaptivity mechanism of DeGroot becomes less effective, as the quality of the individual models can no longer be reliably assessed. However, we need to go to extreme values in order to observe these phenomena. In a second experiment we explore DeGroot's sensitivity to the choice of N. We find that when N is chosen too large, adaptivity can suffer, and for very small values of N performance can degrade if the labels are highly noisy. However, there is a broad spectrum of values for which the algorithm performs well, with between 1% and 10% of the local data usually being a good choice.

Real datasets

Turning to real datasets, we first investigate how DeGroot deals with three different sources of heterogeneity. We work with the abalone dataset [Nash et al., 1994] and train a lasso model on each agent with regularization parameter $\lambda_k = \lambda$ that achieves a sparsity of ~0.8. For our first experiment, shown in Figure 3a, we follow a popular setup from federated learning to control heterogeneity in the data partitioning [see, e.g., Shen et al., 2021]. We partition the training data in the following manner: a fraction 1 − p of the training data is partitioned randomly, and a fraction p is first sorted by the outcome variable y and then partitioned sequentially. For the second experiment, shown in Figure 3b, we use a similar partitioning scheme, but sort the data along an important feature dimension instead of y. This directly leads to heterogeneity in the input domain. Finally, we investigate model heterogeneity arising from different hyper-parameter choices of the regularization parameter λ, given a random partitioning.
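The partial-sorting partition scheme used in the first two experiments can be sketched as follows; the function name and interface are illustrative assumptions:

```python
import numpy as np

def heterogeneous_partition(y, K, p, seed=0):
    """Split indices among K agents: a fraction p of the data is sorted by
    the label y and handed out sequentially (label-skew heterogeneity),
    while the remaining fraction 1 - p is partitioned randomly."""
    rng = np.random.default_rng(seed)
    n = len(y)
    perm = rng.permutation(n)
    n_sorted = int(p * n)
    srt = perm[:n_sorted]
    srt = srt[np.argsort(y[srt])]          # sorted block, ordered by label
    rnd = perm[n_sorted:]                  # i.i.d. remainder
    srt_chunks = np.array_split(srt, K)    # sequential (skewed) chunks
    rnd_chunks = np.array_split(rnd, K)    # random chunks
    return [np.concatenate([s, r]) for s, r in zip(srt_chunks, rnd_chunks)]
```

With p = 1 each agent receives a contiguous slice of the label range; with p = 0 the split is i.i.d. Sorting by a feature column instead of y yields the input-domain variant of the second experiment.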
To control the level of heterogeneity, we start with the same $\lambda$ [Fan, 2011] on each agent, and let the parameters diverge increasingly, such that $\lambda_k = \lambda\big(1 + \frac{k - k_0}{K}\big)^q$ with $k_0 = 3$. The results are depicted in Figure 3c. In all three experiments the gain of DeGroot over M-avg is more pronounced with increasing degree of heterogeneity. When there is heterogeneity in the partitioning, DeGroot clearly outperforms the best individual model by taking advantage of models' local areas of expertise. For the third experiment, where the difference is in model quality, DeGroot almost recovers the performance of the best single model.

(Table 1 caption: We use ridge and lasso as baselines and train a decision tree regressor (DTR) for discrete outcome variables, and a neural net (NN) with two hidden layers for continuous outcomes. A positive relative gain means a reduction in MSE compared to DeGroot.)

Next, we conduct a broad comparison of DeGroot to additional reference schemes across various datasets and models, reporting the results in Table 1. To the best of our knowledge there is no existing generic adaptive aggregation scheme that does not rely on external validation data to either build the aggregation function or determine the weights w. Thus, we include a comparison with approaches that use a validation dataset to compute the weighting, where the external validation data has the same size as the local partitions. Our two references are static inverse-MSE weighting (CV-static), where the weights are determined based on the models' average performance on the validation data, and adaptive inverse-MSE weighting (CV-adaptive), where the performance is evaluated only in the region around the current test point $x^\star$. We achieve heterogeneity across local datasets by partially sorting a fraction p = 0.5 of the labels, and we choose N to be 1% of the data partition for all schemes (with a hard lower bound of 2).
Each scheme is evaluated on the same set of local models, and confidence intervals are over the randomness of the data splitting into partitions, test, and validation data across evaluation runs. The results demonstrate that DeGroot can effectively take advantage of the proprietary data to find a good aggregate prediction, significantly outperforming M-avg and CV-static, and achieving better or comparable performance to CV-adaptive, which works with additional data. Finally, we verify that our aggregation mechanism scales robustly with the number of agents in the ensemble and the simultaneously decreasing partition size. Results in Appendix D.2 show that DeGroot consistently outperforms model averaging on two different datasets over the full range from 2 up to 128 agents.

Discussion

We have shown that insights from the literature on human consensus via discourse suggest an effective aggregation scheme for machine learning. In settings where no additional validation data is available for assessing individual models' quality, our approach offers an appealing mechanism to take advantage of individual agents' proprietary data to arrive at an accurate collective prediction. We believe that this approach has major implications for the future of federated learning, since it circumvents the need to exchange raw data or model parameters between agents. Instead, all that is required for aggregation is query access to the individual models.

A Proofs

For some of our proofs we will use a Markov chain interpretation of the DeGroot mechanism, where the matrix T defines the transition probabilities of the Markov chain. By definition of the trust scores, the rows of T sum to one. Moreover, the entries are all positive, so this is an aperiodic, irreducible Markov chain and hence has a unique stationary distribution, given by the solution $w$ to the linear system $wT = w$ [e.g., Häggström, 2002]. Furthermore, T has a unique largest left-eigenvector, with eigenvalue 1.
It will be convenient for us to consider a Markov chain $Z_0, Z_1, \ldots$ with transition matrix T, and $Z_0$ drawn from the stationary distribution $w$.

Proof of Proposition 3.1. We prove the claim by induction on the number of consensus steps t. First, we have that $p_i^{(0)} = f_1(x^\star)$ for all $i \in [K]$ by assumption. Suppose $p_i^{(t)} = f_1(x^\star)$ for all $i \in [K]$. Then, $p_i^{(t+1)} = \sum_{j=1}^{K} \tau_{i,j}\, p_j^{(t)} = f_1(x^\star)$, since the weights sum to 1.

Proof of Proposition 3.2. Using our Markov chain notation above, it suffices to show that $P(Z_1 = j_1) \geq P(Z_1 = j_2)$. It is easy to see that this is the case, since $P(Z_1 = j_1 \mid Z_0 = i) \geq P(Z_1 = j_2 \mid Z_0 = i)$ for all $i \in [K]$.

Proof of Proposition 3.3. Using our Markov chain notation above, note that $\max_j P(Z_1 = i \mid Z_0 = j) \geq P(Z_1 = i) \geq \min_j P(Z_1 = i \mid Z_0 = j)$. The result follows.

Proof of Proposition 3.4. Note that in this case $\mathbf{1}^\top T = \mathbf{1}^\top$ (where $\mathbf{1}$ is the K-vector of all ones), so $w = \mathbf{1}/K$ is the stationary state.

Proof of Theorem 3.5. Choose any $i \in [K]$. We will show that $\tau_{ij} \to 1/\mathrm{MSE}_j^*$ in probability as $n \to \infty$. First, let $x_1, \ldots, x_N \in D_i$ be the N nearest neighbors of the test point $x^\star$ in agent i's data. We claim that

$$\max_{m \in \{1, \ldots, N\}} \|x_m - x^\star\| \to 0 \qquad (6)$$

in probability as $n \to \infty$. To see this, for $\delta_0 > \delta > 0$, note that $P(\|x_m - x^\star\| < \delta) = \epsilon$ for some $\epsilon > 0$, since the points in $D_i$ are an i.i.d. sample from a distribution supported in this ball. But then,

$$P\Big(\max_{m \in \{1, \ldots, N\}} \|x_m - x^\star\| \geq \delta\Big) = P(B < N),$$

where B is distributed as a binomial with $|D_i|$ draws and success probability $\epsilon$. But

$$P(B < N) = P\left(\frac{B - \epsilon |D_i|}{\sqrt{|D_i|\,\epsilon(1-\epsilon)}} < \frac{N - \epsilon |D_i|}{\sqrt{|D_i|\,\epsilon(1-\epsilon)}}\right).$$

By assumption, the term $(N - \epsilon |D_i|)/\sqrt{|D_i|\,\epsilon(1-\epsilon)} \to -\infty$ as $n \to \infty$, since $N/|D_i| \to 0$. Then, by the central limit theorem we conclude that $P(B < N) \to 0$ as $n \to \infty$, and hence (6) holds. Next, let $\mu(x) = \mathbb{E}[Y \mid X = x]$ and $\sigma^2(x) = \mathrm{Var}(Y \mid X = x)$ be the true mean and variance functions, respectively.
For convenience, let $B_\delta(x^\star)$ be the ball of radius $\delta$ centered at $x^\star$. We have

$$\inf_{x \in B_\delta(x^\star)} \big\{(\mu(x) - f_j(x))^2 + \sigma^2(x)\big\} \;\leq\; \mathbb{E}\Big[1/\tau_{i,j} \,\Big|\, \max_{m \in \{1,\ldots,N\}} \|x_m - x^\star\| < \delta\Big] \;\leq\; \sup_{x \in B_\delta(x^\star)} \big\{(\mu(x) - f_j(x))^2 + \sigma^2(x)\big\}.$$

Note that the lower and upper bounds both converge to $\mathrm{MSE}_j^*$ as $\delta \to 0$, by the continuity of $\mu(x)$, $f_j(x)$ and $\sigma^2(x)$. Similarly, the conditional variance of $1/\tau_{i,j}$ is bounded by $C/N$ for some constant C. Thus, by Chebyshev's inequality, there exist sequences $c_1(n) \to 0$ and $c_2(n) \to 0$ such that

$$P\Big(|1/\tau_{ij} - \mathrm{MSE}_j^*| \geq c_1(n) \,\Big|\, \max_{m \in \{1,\ldots,N\}} \|x_m - x^\star\| < \delta\Big) \leq c_2(n).$$

Since the event $\{\max_{m \in \{1,\ldots,N\}} \|x_m - x^\star\| < \delta\}$ has probability tending to 1 as n grows, this implies that $1/\tau_{i,j} \to \mathrm{MSE}_j^*$ in probability, as desired.

Proof of Theorem 3.6. For this distribution, we wish to find weights $\tilde{w}$ solving the following:

$$\tilde{w} := \arg\min_{w \in \mathbb{R}^K,\, \mathbf{1}^\top w = 1} \mathbb{E}_{(\tilde{X}, \tilde{Y})}\Big(\tilde{Y} - \sum_{k=1}^{K} w_k f_k^*(\tilde{X})\Big)^2.$$

Expanding the right side, we have

$$\mathbb{E}_{(\tilde{X}, \tilde{Y})}\Big(\tilde{Y} - \sum_{k=1}^{K} w_k f_k^*(\tilde{X})\Big)^2 = \mathbb{E}_{(\tilde{X}, \tilde{Y})}\Big(\sum_{k=1}^{K} w_k \big(\tilde{Y} - f_k^*(\tilde{X})\big)\Big)^2 = \mathrm{Var}\Big(\sum_{k=1}^{K} w_k \big(\tilde{Y} - f_k^*(\tilde{X})\big)\Big)$$
$$= \sum_{k=1}^{K} \mathrm{Var}\big(w_k (\tilde{Y} - f_k^*(\tilde{X}))\big) + 2 \sum_{1 \leq k_1 < k_2 \leq K} \mathrm{Cov}\big(w_{k_1} (\tilde{Y} - f_{k_1}^*(\tilde{X})),\, w_{k_2} (\tilde{Y} - f_{k_2}^*(\tilde{X}))\big) = \sum_{k=1}^{K} w_k^2 \,\mathrm{Var}\big(\tilde{Y} - f_k^*(\tilde{X})\big),$$

where the first step uses $\sum_k w_k = 1$, the second uses that the residuals have mean zero, and the last uses that the residuals are pairwise uncorrelated. With this expression, one can easily verify that the optimal weights satisfy

$$\tilde{w}_k \propto 1/\mathrm{Var}\big(\tilde{Y} - f_k^*(\tilde{X})\big).$$

Lastly, we claim that $\mathrm{Var}(\tilde{Y} - f_k^*(\tilde{X})) \to \mathrm{MSE}_k^*$ as $\delta \to 0$. This follows from the decomposition

$$\mathrm{Var}\big(\tilde{Y} - f_k^*(\tilde{X})\big) = \mathbb{E}\big[\mathrm{Var}\big(\tilde{Y} - f_k^*(\tilde{X}) \mid \tilde{X}\big)\big] + \mathrm{Var}\big(\mathbb{E}[\tilde{Y} - f_k^*(\tilde{X}) \mid \tilde{X}]\big).$$

By assumption, the conditional mean and variance are continuous functions of x, so the first term in the sum converges to $\mathrm{MSE}_k^*$ as $\delta \to 0$. Likewise, the second term in the sum converges to 0. This completes the proof.
B Estimating the Standard Error

In this section, we develop a standard error estimator for the collective prediction $p^*(x^\star)$ resulting from Algorithm 1. Our proposal is a special case of jackknife standard error estimates, where we consider the agents to be independent samples from a super-population of agents (this probabilistic interpretation is useful for discussing standard errors precisely, but is not needed elsewhere in our work). Our error bars thus measure how stable the consensus prediction is with respect to the observed collection of agents. In the collective prediction setting, this is an attractive target of inference because it can be estimated without requiring any assumptions about how the agents gather data, their models, or their training procedures. As a result, our standard error estimate is entirely agnostic to the behavior of the agents, is decentralized, and requires minimal communication; it requires no additional model queries beyond what was already done in Algorithm 1.

Algorithm 2 DeGroot Jackknife

1: Input: K pre-trained agents $f_1, \cdots, f_K$ with local training datasets $D_k$, $k \in [K]$; neighborhood size N; test point $x^\star$.
2: Construct the trust matrix T as in Algorithm 1.
3: for $i = 1, 2, \ldots, K$ do
4:   Create the submatrix $T^{(i)}$.
5:   Find $v$ such that $v T^{(i)} = v$ by power iteration.
6:   Form the collective prediction $p^*_{-i}(x^\star) = \sum_{j \neq i} v_j f_j(x^\star)$.
7: end for
8: Return: Jackknife estimate of the standard error at $x^\star$:

$$\widehat{SE}(x^\star) = \sqrt{\frac{K-1}{K} \sum_{i=1}^{K} \big(p^*_{-i}(x^\star) - \bar{p}^*(x^\star)\big)^2},$$

where $\bar{p}^*(x^\star) = \frac{1}{K} \sum_{i=1}^{K} p^*_{-i}(x^\star)$ is the average delete-one prediction at $x^\star$.

Turning to the details, recall that T is the matrix of trust scores, and let $T^{(i)}$ be the principal submatrix of T formed by deleting row i and column i, renormalized so that each row sums to one. That is, $T^{(i)}$ is the trust matrix when agent i is removed from our collection of agents. The idea is to run DeGroot aggregation with agent i deleted; i.e., we find the collective prediction $p^*_{-i}(x^\star)$ of the remaining K − 1 agents.
Then, by looking at the variability of these predictions with different agents removed, we can quantify the stability of the procedure. Specifically, we take the sample standard deviation of these quantities, scaled by $\sqrt{(K-1)^2/K}$: the scaling prescribed by the theory of the jackknife estimator [Efron and Tibshirani, 1993]. We state this procedure formally in Algorithm 2. Importantly, this calculation does not require any additional model evaluations beyond what is required to perform the consensus prediction; the algorithm only requires constructing the matrix T and knowing the predictions $f_j(x^\star)$, which were already needed in Algorithm 1. Thus, estimating the standard error has no additional information-sharing requirement; it requires only a modest extra amount of computation. See Figure 8 for an empirical demonstration of these error bars on our synthetic example from Section 4.1.

C Details on algorithm implementation

In this section we discuss some practical considerations when implementing Algorithm 1.

Step 4. For our experiments we have chosen the Euclidean distance metric and a fixed number N of data points to determine $D_i(x^\star)$. To find the N nearest neighbors we have used the unsupervised NearestNeighbors classifier from sklearn.neighbors. Note that this step could be adjusted to use other distance metrics specific to the modality of the data. In particular, for higher-dimensional data such as images, a careful choice of distance metric becomes crucial.

Step 9. In this step the agents iteratively refine their predictions by pooling the other agents' predictions to reach a consensus, as described in Section 2.1. These pooling iterations can be executed in a fully decentralized manner, or the consensus weights can be evaluated via the power method at one of the nodes, which gets access to all the trust scores in order to construct the matrix T from $\{\tau_{i,j}\}_{i,j \in [K]}$. The best implementation will depend on the requirements of the application.
For our experiments we have chosen the centralized evaluation for convenience. Around $t_p = 30$ power iterations are usually more than sufficient to reach a consensus. Given that the number of agents is usually small (< 1000), this operation comes with little computational cost, and we recommend picking the number of iterations $t_p$ with sufficient margin to take advantage of the full potential of DeGroot aggregation. Alternatively, one might also implement a stopping criterion of the form $|p_j^{(t)} - p_j^{(t-1)}| \leq \mathrm{tol}$ that is checked independently by each agent $j \in [K]$ after every round.

Step 10. For every test point the algorithm returns the consensus prediction $p^*(x^\star)$. If Step 9 has fully converged we have $p_j = p^*$ for all j, and we can return any of the individual agents' predictions. However, to obtain a robust procedure even if the individual beliefs have not fully converged, we return $p^* = \frac{1}{K} \sum_j p_j$ instead.

D Additional Experimental Details and Evaluations

The code is written in Python; for model training, as well as the nearest-neighbor computation, we use the built-in functionality of Sklearn [Pedregosa et al., 2011].

D.1 Evaluation on Synthetic Data

For the synthetic setup the true label is determined by a logistic function, as described in Section 4.1, where we choose $\alpha = [1, 1]$. The features of agent k are distributed according to a multivariate Gaussian distribution with mean $\mu_k$, and each model then fits a linear function to its training data. We use the following default experimental configuration: we choose the means as $\mu_1 = [-3, -4]$, $\mu_2 = [-2, -2]$, $\mu_3 = [-1, -1]$, $\mu_4 = [0, 0]$, $\mu_5 = [3, 2]$, the covariance of the local data as $\Sigma_k = \Sigma = I$, and the label-noise level as $\sigma_Y = 0.1$. The resulting local linear fits are illustrated in Figure 4, where we see that the individual models approximate the true labeling function in different regions of the input space. The accuracy as a function of $\xi$ is illustrated in Figure 1 in the introduction.
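The default synthetic configuration above can be reproduced in outline as follows; the parameter values match those reported here, while the per-agent sample size (200) and the least-squares fitting routine are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.array([1.0, 1.0])                      # logistic direction
sigma_y = 0.1                                     # label-noise level
means = [[-3, -4], [-2, -2], [-1, -1], [0, 0], [3, 2]]

def true_label(X):
    """Logistic labeling function y = [1 + exp(alpha^T x)]^{-1}."""
    return 1.0 / (1.0 + np.exp(X @ alpha))

agents = []                                       # per-agent linear fits
for mu in means:
    X = rng.normal(mu, 1.0, size=(200, 2))        # Sigma_k = I
    y = true_label(X) + sigma_y * rng.standard_normal(200)
    A = np.hstack([X, np.ones((200, 1))])         # add an intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    agents.append(beta)

def predict(beta, X):
    """Linear model f_k(x) = [x, 1] @ beta."""
    return np.hstack([X, np.ones((len(X), 1))]) @ beta
```

Each entry of `agents` is one local linear model; feeding these models into the trust-score and pooling routines of Algorithm 1 reproduces the setup behind Figures 2 and 4.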
Additional Evaluations:

A) Sensitivity of DeGroot to hyperparameter N. In Figure 5 we demonstrate how the performance of Algorithm 1 in this synthetic setting changes with the number of neighbors N used for local validation, for two different levels of label noise. Overall we find that the performance of DeGroot is not very sensitive to the choice of N, whereas the optimal regime of values for N increases with the noise in the data. In general, we recommend choosing N to correspond to approximately 1–10% of the available data, with a hard lower bound on N, bounding it away from 1. For later investigations on real data we pick N to be 1% of the partition size.

B) Varying overlap in local data partitions. In Figure 6 we investigate how the performance of DeGroot changes in comparison to M-avg for varying overlap in the local data partitions. To this end we vary the covariance $\Sigma_k = I\sigma^2$ of the feature distribution of the individual agents, while keeping the means $\mu_k$ fixed. We find that for larger values of $\sigma^2$ there is less to gain for DeGroot, since models are less specialized, although the gains remain significant up to $\sigma^2 = 10$. We further see that if the variance in the data is very small and the local datasets do not sufficiently cover the feature space, the local cross-validation procedure and the adaptivity of the DeGroot procedure start to suffer.

C) Alternative pooling operations. In Figure 7 we aim to provide additional insight into the effect of DeGroot's iterative procedure for finding a consensus and aggregating the individual predictions. In Figure 2b we have shown how the accuracy of individual agents improves with the number of pooling iterations in the DeGroot procedure. After around 20 iterations the agents have reached consensus. In Figure 7 we compare the accuracy of the predictions obtained through DeGroot to alternative ways of aggregating the local MSE values and trust scores into a single weight vector.
These baselines are constructed for diagnostic purposes, to demonstrate the value of using power iterations instead of an alternative procedure. The first method, denoted τ-avg, takes the mean of the trust scores τ_ij across all agents i to obtain the weight w_j of agent j on the final prediction. As shown in Figure 7a, τ-avg performs significantly worse than DeGroot on our synthetic example, and achieves an overall MSE of 9e-3, which is 20× larger than that of DeGroot. This shows that the DeGroot agents find a better collective prediction through iterative pooling than they would by averaging their beliefs after only a single round of pooling. The second method first aggregates the local MSE values of each agent across the different datasets as M̄SE_j = Σ_i MSE_{i,j} and then obtains the weights through inverse-MSE weighting as w_j = 1/M̄SE_j. Similar to τ-avg, this alternative procedure performs significantly worse than DeGroot, and achieves an MSE of 1e-2.

D) DeGroot Jackknife. Finally, in Figure 8, we evaluate the error bars for our predictions using the Jackknife procedure proposed in Algorithm 2 for the test points evaluated in Figure 2a. Here, we report the predictions as a function of ξ, as well as the standard error (5). We find that the standard errors are relatively small in the center, but increase at the edges of the space. This makes sense: at the edges of the space there is only one agent with good prediction accuracy, so the final consensus prediction is not stable to the deletion of any one agent. To validate this explanation further, in the right panel we increase the spread of the training points for each agent, so that the agents' training data overlap more. As expected, the standard errors are now much smaller: because there are multiple agents with training data in most regions, the procedure is more stable.

D.2 Evaluation on Real Data

Datasets.
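The delete-one-agent Jackknife behind these error bars can be sketched as follows (a simplified sketch: the names are ours, and a plain average of the remaining agents' predictions stands in for re-running the DeGroot consensus without the deleted agent, as Algorithm 2 would):

```python
def jackknife_se(preds):
    """Delete-one jackknife standard error at a single test point.
    preds: list of the K individual agents' predictions.
    Each leave-one-out value is the consensus recomputed without agent j;
    here that consensus is approximated by a plain average."""
    K = len(preds)
    loo = [sum(preds[:j] + preds[j + 1:]) / (K - 1) for j in range(K)]
    mean_loo = sum(loo) / K
    # textbook jackknife variance: (K - 1)/K * sum_j (loo_j - mean(loo))^2
    return ((K - 1) / K * sum((x - mean_loo) ** 2 for x in loo)) ** 0.5

print(jackknife_se([1.0, 1.0, 1.0, 1.0]))  # agents agree -> prints 0.0
print(jackknife_se([0.0, 2.0]))            # maximal disagreement -> prints 1.0
```

Note that this is the textbook jackknife estimator; the exact standard-error formula (5) of the paper may differ in its normalization.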
The specifications of the datasets used for our experiments in Section 4.2 can be found in Table 2.

Details on Benchmark Study (Table 1):

Baselines. We compare DeGroot to three baselines: M-avg, CV-static, and CV-adaptive. The three methods describe different approaches to determining the weight w_j of each model on the final prediction p(x′) = Σ_j w_j f_j(x′). (M-avg) corresponds to the most natural baseline of equally weighted averaging, with w_j = 1/K for all j ∈ [K]. This baseline is optimal if the models are unbiased and have equal variance. (CV-static) and (CV-adaptive) are two methods that have access to additional hold-out data to evaluate the predictive accuracy MSE_j of each model j ∈ [K] and determine the weights w_j. As the names suggest, (CV-static) uses static weights and (CV-adaptive) uses an adaptive weighting scheme. Similar to DeGroot, both methods use weights that are inversely proportional to the MSE of the models:

w_j = (1/MSE_j) / Σ_j (1/MSE_j).

In CV-static, MSE_j is evaluated on the hold-out data; in CV-adaptive, MSE_j is evaluated on a subset of the hold-out data consisting of the N points closest to x′. We use the same distance measure and number of neighbors N as for the local cross-validation of the individual agents in DeGroot. Our goal is to minimize potential confounding, for the most meaningful comparative investigation.

Evaluation. In all experiments the data is first randomly partitioned into train, test, and validation data. The validation data is completely ignored by DeGroot and M-avg and only used for CV-static and CV-adaptive. The size of the hold-out data is equal to the size of one data partition. We use n_test = max(0.15 n, 500) test samples. For every random split all methods are evaluated on the same set of local models.
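The CV-adaptive weighting can be sketched as follows (a sketch under our own naming: models are represented as callables on a single feature vector, and the guard against a zero MSE is our addition):

```python
import math

def cv_adaptive_weights(models, X_val, y_val, x_star, N=5):
    """Inverse-MSE weights for CV-adaptive: each model's MSE is evaluated
    on the N hold-out points closest to the test point x_star, and
    w_j = (1/MSE_j) / sum over j of (1/MSE_j)."""
    order = sorted(range(len(X_val)), key=lambda i: math.dist(X_val[i], x_star))
    nn = order[:N]  # indices of the N nearest hold-out points
    inv = []
    for m in models:
        mse = sum((m(X_val[i]) - y_val[i]) ** 2 for i in nn) / len(nn)
        inv.append(1.0 / max(mse, 1e-12))  # guard against a perfect (zero) MSE
    total = sum(inv)
    return [w / total for w in inv]

# toy check: the model that is accurate near x_star gets almost all the weight
X_val, y_val = [[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0]
good = lambda x: x[0]        # exact on this hold-out data
bad = lambda x: x[0] + 1.0   # off by one everywhere
w = cv_adaptive_weights([good, bad], X_val, y_val, [0.0], N=2)
print([round(v, 6) for v in w])  # prints [1.0, 0.0]
```

CV-static corresponds to the same computation with `nn` replaced by the full hold-out set, so the weights no longer depend on x_star.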
We tune the hyperparameters of each model using a rough hyperparameter search upfront, but our main goal is not to get optimal accuracy with each individual model, but rather to investigate how the different aggregation procedures deal with (reasonable) given models. Hyperparameters are reported in Table 1.

Models. For training the models we use the built-in model classes of scikit-learn [Pedregosa et al., 2011]. Ridge and Lasso are imported from sklearn.linear_model, and the hyperparameter λ corresponds to the regularizer strength. For the DTR we use the DecisionTreeRegressor from sklearn.tree, where the hyperparameter max_depth corresponds to the maximum depth of the regression tree. For the neural network (NN) we use the MLPRegressor class from sklearn.neural_network; it uses relu activation by default, we choose two hidden layers with 8 neurons each, and α corresponds to the regularization strength. All other parameters are set to their default values. Whenever an NN model has not converged within the given number of iterations (500 for E2006 and 200 for boston), the model is nevertheless included in the ensemble, since the purpose of this study is to evaluate how the different schemes deal with given models that might be heterogeneous in terms of quality.

Confidence Intervals. The reported confidence intervals are over the randomness in the data splitting and partitioning; they show one standard deviation from the mean.

Additional Evaluations: Heterogeneity. We perform additional evaluations with the different approaches to achieving heterogeneity described in Section 4.2. In Figure 9 and Figure 10 we show results for ridge regression on the abalone, boston, and cpusmall datasets. The gain of DeGroot over M-avg consistently grows with the degree of heterogeneity.

Scalability. In Figure 11 we investigate the scaling of the DeGroot method on two different datasets. We show results for 1 up to 128 agents.
The training data (after test-set hold-out) is partitioned across the agents by partially sorting the labels with p = 0.5. This means that with an increasing number of agents K the partitions get smaller. We choose the number of neighbors in DeGroot as N = max(2, 0.01 · n_local), where n_local denotes the size of a local partition. We find that DeGroot scales robustly with the number of agents in the ensemble and outperforms averaging by a large margin. Remarkably, the adaptive weighting of DeGroot is still effective for 128 agents on the abalone dataset, where each agent only has access to 30 samples.

Figure 1: (Test-time aggregation of heterogeneous models). The proposed method upweights more accurate models (lower MSE) adaptively in different regions of the input space. Constructed from a regression task described in Section 4.1.

Figure 2: (Synthetic data experiment). Comparison of DeGroot to M-avg. (a) Predictive accuracy on individual test points. (b) Performance for varying number of pooling iterations during DeGroot consensus finding. (c) Confidence intervals for individual test points returned by DeGroot jackknife.

Figure 3: (Data and model heterogeneity for the abalone data). Relative gain over M-avg. Confidence intervals of DeGroot are over randomness in data partitioning, and for the individual agents the shaded area spans the performance from the worst to the best model, the line indicating average performance.

Figure 4: (Agents' local models). Local linear models fit by individual agents for the default experimental configuration.

Figure 5: (Performance vs.
number of neighbors). Synthetic data experiment comparing DeGroot to model averaging (M-avg) for two different noise levels (σ_Y) in the training data labels. The gray dashed line indicates the default value in our experiments.

Figure 6: (Varying overlap in data partitions). Comparing DeGroot to classical model averaging (M-avg) for varying covariance Σ_k = Iσ² of the feature distribution, keeping the means μ_k fixed. The gray dashed line indicates the default value in our experiments.

Figure 7: (Alternative aggregation schemes). (a) Comparison of DeGroot to τ-avg, which averages the trust scores across all agents to obtain the weights of the individual agents. (b) Comparison of DeGroot to MSE-avg, which performs inverse-MSE weighting based on the MSE values aggregated across the local datasets. The experiments serve a diagnostic purpose and are conducted on the synthetic setup outlined in Section 4.1.

Figure 8: (DeGroot Jackknife). Evaluation of error bars using the decentralized Jackknife procedure proposed in Algorithm 2. The test points correspond to the same 200 test points visualized in Figure 2a. For (a) we use the standard configuration, and for (b) we increase the covariance of the local training datasets by a factor of 5, i.e., Σ_k = 5I.

Figure 9: Cpu-small (λ = 1e-5). (Label heterogeneity). Relative gain of DeGroot over M-avg. Same setting as Figure 3a, for ridge regression on different datasets. We depict the performance of the individual models in the ensemble using the shaded green area; the upper edge corresponds to the best and the lower edge to the worst performing model, with the green line indicating the average performance across the models.

Figure 10: Cpu-small (λ = 1e-5). (Feature heterogeneity). Relative gain of DeGroot over M-avg. Same setting as Figure 3b, for ridge regression on different datasets. We partially sort the training data along feature 8 (Abalone), feature 9 (Boston), and feature 11 (cpusmall). The performance of the individual models in the ensemble is depicted using the shaded green area; the upper edge corresponds to the best and the lower edge to the worst model, with the green line indicating the average performance across the models.

Figure 11: Abalone, lasso (λ = 1e-5). (Scaling behavior of DeGroot). Relative gain of DeGroot aggregation over model averaging (M-avg) for different numbers of agents.
Experiments performed for two different configurations.

Table 1: (Benchmark comparison). Performance for different combinations of datasets and models. Datasets have been downloaded from [Fan, 2011].

The datasets have been downloaded from libsvm [Fan, 2011] and are used without any additional preprocessing, apart from the boston dataset, which is normalized for neural network training. The boston housing price data [Dua and Graff, 2017] and the E2006 dataset [Kogan et al., 2009] have continuous outcome variables. Abalone [Dua and Graff, 2017] has integer-valued outcomes from 1-29, YearPrediction [Dua and Graff, 2017] has integer-valued year numbers from 1922-2011, and the labels of cpusmall correspond to integers from 0-99.

Table 2: Regression datasets used for evaluation, downloaded from [Fan, 2011].

                    # samples   # features
  Boston                  506           13
  E2006                 16087         3308
  Abalone                4077            8
  cpusmall               8192           12
  YearPrediction       463715           90

Footnotes:

While independence is a condition that we can't readily check in practice, it may hold approximately if the agents use different models constructed from independent data. In any case, the primary takeaway is that DeGroot is acting reasonably in this tractable case.

If not stated otherwise we spread the means as μ = {[-3, -4], [-2, -2], [-1, -1], [0, 0], [3, 2]} and let Σ_k = I. We use 200 training samples on each agent, for the labeling function we use α = [1, 1], and for the label noise in the training data we set σ_Y = 0.1. We use N = 5 for local cross-validation in DeGroot.

Similar phenomena can be observed for other datasets and models (see Appendix D.2).

Acknowledgements

We would like to thank Jacob Steinhardt and Alex Wei for feedback on this manuscript.

References

Mahmoud Albardan, J. Klein, and O. Colot. Spocc: Scalable possibilistic classifier combination - toward robust aggregation of classifiers.
Expert Syst. Appl., 150:113332, 2020.

Claudio Altafini. Consensus problems on networks with antagonistic interactions. IEEE Transactions on Automatic Control, 58(4):935-946, 2013.

Aspinall. A route to more tractable expert advice. Nature, 463(7279):294-295, 2010.

M. Bates and C. W. J. Granger. The combination of forecasts. Operational Research, 20(4):451-468, 1969.

Leo Breiman. Bagging predictors. Mach. Learn., 24(2):123-140, August 1996a.

Leo Breiman. Stacked regressions. Mach. Learn., 24(1):49-64, July 1996b.

Andreas Buja and Werner Stuetzle. Observations on bagging. Statistica Sinica, 16(2):323-351, 2006.

S. Chatterjee and E. Seneta. Towards consensus: Some convergence theorems on repeated averaging. Journal of Applied Probability, 14(1):89-97, 1977.

Robert T. Clemen and Robert L. Winkler. Combining probability distributions from experts in risk analysis. Risk Analysis, 19(2):187-203, 1999.

Roger M. Cooke. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, 1991.
Victor Coscrato, Marco Henrique de Almeida Inacio, and Rafael Izbicki. The NN-Stacking: Feature weighted linear stacking through neural networks. Neurocomputing, 399:141-152, 2020.

Rafael M. Cruz, Robert Sabourin, and George D. Cavalcanti. Prototype selection for dynamic classifier and ensemble selection. Neural Comput. Appl., 29(2):447-457, 2018a.

Rafael M. Cruz, Robert Sabourin, and George D. C. Cavalcanti. Dynamic classifier selection: Recent advances and perspectives. Information Fusion, 41:195-216, 2018b.

N. Dalkey. Studies in the quality of life; DELPHI and decision-making. 1972.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc Le, and Andrew Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, volume 25, 2012.

Morris H. DeGroot. Reaching a consensus. Journal of the American Statistical Association, 69(345):
Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 561-577, 2018. Software available at https://ray.io/.

Peter A. Morris. Combining expert judgments: A bayesian approach. Management Science, 23(7):679-693, 1977.

Warwick Nash, T. L. Sellers, S. R. Talbot, A. J. Cawthorn, and W. B. Ford. The population biology of abalone (haliotis species) in tasmania. I. Blacklip abalone (h. rubra) from the north coast and islands of bass strait. Sea Fisheries Division, Technical Report No. 48, 1994.

Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab, November 1999.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research, 12:2825-2830, 2011.

M. H. Quenouille. Approximate tests of correlation in time-series. Journal of the Royal Statistical Society. Series B (Methodological), 11(1):68-84, 1949.

Galina Rogova. Combining the Results of Several Neural Network Classifiers, pages 683-692. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. ISBN 978-3-540-44792-4.

Zebang Shen, Hamed Hassani, Satyen Kale, and Amin Karbasi. Federated functional gradient boosting. arXiv preprint arXiv:2103.06972, 2021.

Sidak Pal Singh and Martin Jaggi. Model fusion via optimal transport. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 22045-22055, 2020.

M. Stone. The Opinion Pool. The Annals of Mathematical Statistics, 32(4):1339-1342, 1961.

Larry Wasserman et al. Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44(1):92-107, 2000.

David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
K. Woods, W. P. Kegelmeyer, and K. Bowyer. Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):405-410, 1997.

Matei Zaharia, Reynold S. Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph Gonzalez, Scott Shenker, and Ion Stoica. Apache Spark: A Unified Engine for Big Data Processing. Commun. ACM, 59(11):56-65, 2016. Software available at https://spark.apache.org/.

Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14(68):3321-3363, 2013.
Bars and secular evolution in disk galaxies: Theoretical input

E. Athanassoula
LAM (Laboratoire d'Astrophysique de Marseille), UMR 7326, Aix Marseille Université, CNRS, 13388 Marseille, France

arXiv:1211.6752, 28 Nov 2012. doi: 10.1017/cbo9781139547420.006

Abstract. Bars play a major role in driving the evolution of disk galaxies and in shaping their present properties. They cause angular momentum to be redistributed within the galaxy, emitted mainly from (near-)resonant material at the inner Lindblad resonance of the bar, and absorbed mainly by (near-)resonant material in the spheroid (i.e., the halo and, whenever relevant, the bulge) and in the outer disk. Spheroids delay and slow down the initial growth of the bar they host, but, at the later stages of the evolution, they strengthen the bar by absorbing angular momentum. Increased velocity dispersion in the (near-)resonant regions delays bar formation and leads to less strong bars. When bars form they are vertically thin, but soon their inner parts puff up and form what is commonly known as the boxy/peanut bulge. This gives a complex and interesting shape to the bar, which explains a number of observations and also argues that the COBE/DIRBE bar and the Long bar in our Galaxy are, respectively, the thin and the thick part of a single bar. The value of the bar pattern speed may be set by optimising the balance between emitters and absorbers, so that a maximum amount of angular momentum is redistributed. As they evolve, bars grow stronger and rotate slower. Bars also redistribute matter within the galaxy, create a disky bulge (pseudo-bulge), increase the disk scale-length and extent, and drive substructures such as spirals and rings. They also affect the shape of the inner part of the spheroid, which can evolve from spherical to triaxial.

† A number of concepts and results discussed in this section are much more general, and can be applied to a more general class of potentials.
Introductory remarks

In the ΛCDM model, galaxies are formed in dark matter haloes and, at early times, merge frequently with their neighbours. As time evolves (and redshift decreases), the rate of mergers decreases and the evolution of galaxies changes from being merger-driven to a more internally driven one. This change is progressive and the transition is very gradual. Generally, the internally driven evolution is on a much longer timescale than the merger-driven one. It is now usually termed secular (for slow), a term introduced by Kormendy (1979), who in that paper made the first steps in linking this evolution with galaxy morphology.

In the sixties, and partly through the seventies as well, theoretical work on galaxy dynamics was mainly analytical. The working hypothesis usually was that potentials are steady-state, or quasi-steady-state. Thus, given a potential or type of potential, theoretical work would follow the motions of individual particles, or would study collective effects aiming for self-consistent solutions, by following, e.g., the Boltzmann equation (Binney & Tremaine 2008). In this way, the basis of orbital structure theory was set and a considerable understanding of many dynamical effects was obtained. The advent of numerical simulations, however, made it clear that galaxies evolve with time, so that a quasi-steady-state approach cannot give the complete picture.

Secular evolution was the general subject of this series of lectures, which were given in November 2011 in the XXIII Canary Islands Winter School of Astrophysics. My specific subject was bar-driven secular evolution, presented from the theoretical viewpoint, although I included in many places comparisons with observations. In this written version I concentrate on a few specific topics, such as the angular momentum redistribution within the galaxy, the role of resonances in this redistribution, and its results on bar evolution and boxy/peanut bulges.
I will discuss elsewhere the effects of gas and of halo triaxiality and clumpiness. The main tool I used was N-body simulations, and, albeit to a somewhat lesser extent, analytic work and orbital structure theory. It is only by coupling several independent approaches that the answer to complex questions, such as the ones we have tackled, can be obtained. Introductory material, useful for a better appreciation of some aspects of bar evolution, can be found in Binney & Tremaine (2008), while further related material can be found in the reviews by Athanassoula (1984 - on spiral structure), Contopoulos & Grosbøl (1989 - on orbits), and Sellwood & Wilkinson (1993 - on bars).

Introduction

N-body simulations have clearly shown that bars form spontaneously in galactic disks. An example is given in Fig. 4.1, displaying the face-on (upper panels), side-on † (middle panels), and end-on ‡ (lower panels) views of the disk component at three different times during the formation and evolution.

† For the side-on view, the galaxy is viewed edge-on, with the direction of the bar minor axis coinciding with the line of sight.
‡ The end-on view is also edge-on, but now the line of sight coincides with the bar major axis.

The left-hand panel shows the initial conditions of the simulation, the right-hand one a snapshot at a time near the end of the simulation, and the central panel a snapshot at an intermediate time. Before plotting, I rotated the snapshots so that the major axis of the bar coincides with the x axis. Note that between the times of the central and right panels both the bar
From the face-on and the side-on views we can infer that this concentration is simply the bar seen end-on. In a real galaxy, however, where knowledge about the two other views would be unavailable, this could be mistaken for a classical bulge, unless supplementary photometric and/or kinematic information is available. Athanassoula (2005b) showed that this error could occur only if the angle between the bar major axis and the line of sight was less that 5-10 degrees, i.e., within a rather restricted range of viewing angles. Such bar formation and evolution processes had already been witnessed in the pioneering N -body simulations of the early seventies and onward (e.g., Miller et al. 1970;Hohl 1971;Ostriker & Peebles 1973;Sellwood 1980Sellwood , 1981Combes et al. 1990;Pfenniger & Friedli 1991). Although technically these simulations were not up to the level we are used to now (due to lower number of particles, lower spatial and temporal resolution, absence or rigidity of the halo component, a 2D geometry, etc.), they came to a number of interesting results, two of which are closely related to what we will discuss here. Ostriker & Peebles (1973), using very simple simulations with only 150 to 500 particles, came to the conclusion that haloes can stabilise bars. This number of particles is too low to describe adequately the bar-halo interaction and particularly its effect on the bar growth. It is thus no surprise that their result is partly flawed. Nevertheless, this paper, together with the subsequent one by Ostriker et al. (1974), gave a major impetus to research on dark matter haloes, focusing both observational and theoretical effort on them. , using 2D simulations with 40 000 particles only, showed that bars grow slower in hotter disks (i.e., in disks with larger velocity dispersions). 
They also confirmed a result which had already been found in analytical mode calculations (e.g., Toomre 1981), namely that a higher relative halo mass decreases the bar growth rate, so that bars grow slower in disk galaxies with a larger M_H/M_D ratio, where M_H and M_D are the halo † and disk masses, respectively. These results will be discussed further in Section 4.6.8.

† There is some ambiguity in general about what is meant by the term 'halo mass'. In some cases it is the total halo mass, but in others it is the mass within a radius encompassing the relevant part of the simulated galaxy. In this case, since the simulations were 2D and therefore the halo rigid, only a small-sized halo was considered, so the two definitions coincide.

Orbits and resonances

Before starting on our quest for understanding the main bar formation and evolution processes, let me first give a brief and considerably simplified description of some basic notions of orbital structure theory. Readers interested in more thorough and rigorous treatments can consult Arnold (1989) and Lichtenberg & Lieberman (1992).

Let me consider a very simple potential composed of an axisymmetric part (including all axisymmetric components) and a rigid bar rotating with a constant angular velocity Ω_p. It is in general more convenient to work in a frame of reference which co-rotates with the bar, in order to have a time-independent potential (Binney & Tremaine 2008), and I will simplify things further by restricting myself to 2D motions. Any regular galactic orbit in this potential † can be characterised by two fundamental frequencies, Ω_i, i = 1, 2. In the epicyclic approximation these are Ω, the angular frequency of rotation around the galactic centre, and κ, the epicyclic frequency, i.e., the frequency of radial oscillations. We say that an orbit is resonant if there are two integers l and m such that

lκ + m(Ω − Ω_p) = 0.
(4.1) The most important resonances for our discussions here will be the Lindblad resonances (inner and outer) and the corotation resonance. The inner Lindblad resonance (hereafter ILR) occurs for l = −1 and m = 2. Therefore, in a frame of reference co-rotating with the bar, such orbits will close after one revolution around the centre and two radial oscillations ( Fig. 4.2). Similarly, the outer Lindblad resonance (hereafter OLR) occurs for l = 1 and m = 2. For l = 0 we have the corotation resonance (hereafter CR), where the angular frequency is equal to the bar pattern speed, i.e., the particle corotates with the bar. Contrary to regular orbits, chaotic orbits (often also called irregular orbits) do not have two fundamental frequencies and this property can be used to distinguish them from regular orbits with the help of what is often called a frequency analysis (Binney & Spergel 1982;Laskar 1990). Let us also briefly mention the so-called sticky orbits. Information on the dynamics and properties of such orbits can be found in Contopoulos (2002). Here we will only mention that, classified by eye, such orbits can be seen as being, say, regular over a given interval of time and then, within a relatively short time, turning to chaotic. Not too many years ago the existence and effect of non-regular orbits on the structure and dynamics of galaxies was generally neglected, but it is becoming progressively clear that this was wrong, so that such orbits are now known to play a considerable role in many fields of galactic dynamics. By definition, resonant orbits close after a certain number of revolutions and a certain number (not necessarily the same) of radial oscillations, and are often referred to as periodic orbits. Several studies of such orbits in various bar potentials have been made in 2D cases † (e.g., Contopoulos & Papayannopoulos 1980;Athanassoula et al. 1983;Contopoulos & Grosbøl 1989). 
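As an aside, the resonance condition above can be solved numerically once Ω(R), κ(R) and Ω_p are given. A minimal sketch, assuming an illustrative flat rotation curve and an invented pattern speed (none of the numbers below come from the simulations discussed in these notes):

```python
import numpy as np

# Illustrative model: a flat rotation curve v(R) = v0, so Omega = v0/R and
# the epicyclic frequency follows from kappa^2 = R dOmega^2/dR + 4 Omega^2.
# v0 and Omega_p are invented numbers, not taken from any simulation here.
v0 = 200.0       # circular speed, km/s
Omega_p = 40.0   # bar pattern speed, km/s/kpc

R = np.linspace(0.1, 20.0, 20000)            # radii in kpc
Omega = v0 / R
kappa = np.sqrt(R * np.gradient(Omega**2, R) + 4.0 * Omega**2)

def crossing(f):
    """Radius where f changes sign, by linear interpolation on the grid."""
    i = np.flatnonzero(np.diff(np.sign(f)))[0]
    return R[i] - f[i] * (R[i + 1] - R[i]) / (f[i + 1] - f[i])

R_CR  = crossing(Omega - Omega_p)                 # l = 0: corotation
R_ILR = crossing(Omega - kappa / 2.0 - Omega_p)   # l = -1, m = 2: ILR
R_OLR = crossing(Omega + kappa / 2.0 - Omega_p)   # l = +1, m = 2: OLR
print(R_ILR, R_CR, R_OLR)   # roughly 1.46, 5.0 and 8.54 kpc here
```

For a flat rotation curve κ = √2 Ω, so the ILR, CR and OLR fall at (1 − 1/√2), 1 and (1 + 1/√2) times v0/Ω_p, which the grid search recovers.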
These studies show that, in the equatorial plane, the main supporters of the bar are a family of orbits elongated along the bar, named x 1 and having l = −1 and m = 2. Examples of members of this family can be seen in Fig. 4.3 here, or in Fig. 7 of Contopoulos & Papayannopoulos (1980), or Fig. 2 of Skokos et al. (2002a). In most cases there is another family of orbits with l = −1 and m = 2, but which are oriented perpendicularly to the bar and are named x 2 . These play a crucial role in determining the gas flow in the bar and the morphology of the inner kpc region in the centre of the galaxy and will be discussed further by Isaac Shlosman (this volume). Finally, there are also two main families of periodic orbits at CR, examples of which can be seen, e.g., in Figs. 3 and 4 of Contopoulos & Papayannopoulos (1980). Periodic orbits can be stable or unstable and this can be tested by considering another orbit very near the periodic one in phase space, i.e., with very similar values of positions and velocities. If the periodic orbit is stable, then the new orbit will stay in the immediate surroundings of the periodic one and 'wrap' itself around it. It can then be said that this new orbit is 'trapped' by the periodic one. Examples of trapped orbits can be seen in Binney & Tremaine (2008). († 3D cases will be discussed in Section 4.8.2.) The bar can then be considered as a superposition of such orbits, trapped around members of the x 1 family, which will thus be the backbone of the bar. On the other hand, if the periodic orbit is unstable, then this second orbit will leave the vicinity of the periodic orbit, and the distance between the two orbits in phase space will increase with time, even though initially they were very near. The calculation of periodic orbits is straightforward, yet such orbits can reveal crucial information on galactic structure and dynamics.
A good example is the work of Contopoulos (1980), who, with simple considerations on closed orbits, was able to show that bars cannot extend beyond their CR. Further work on periodic orbits coupled to hydrodynamic simulations gave an estimate of the lower limit to the bar length, and the ratio R of the corotation radius to the bar length was found to be in the range of 1.2 ± 0.2 (Athanassoula 1992a, 1992b). Note, however, that the lower limit is only an estimate, and not a strict limit like the upper one. Nevertheless, several other methods and works, including observational, gave results within the above-quoted range, as reviewed by Elmegreen (1996) and by Corsini (2011). The bars for which 1.0 < R < 1.4 are called fast, contrary to bars with R > 1.4, which are called slow. Finally, a straightforward superposition (with some smoothing) of stable periodic orbits offers a very simple, yet most useful tool for studying morphological or kinematical structures in disk galaxies and has been successfully applied to bars, box/peanuts and rings (e.g., Patsis et al. 1997; Patsis et al. 2002, 2003; Patsis 2005; Patsis et al. 2010). N -body simulations The N -body simulations that we will discuss were tailored specifically for the understanding of bar formation and evolution in a gas-less disk embedded in a spherical spheroid. That is, the initial conditions were built so as to exclude, as much as possible, other instabilities, thus allowing us to focus on the bar. Such initial conditions are often called dynamical (because they allow us to concentrate on the dynamics), or simplified, controlled, or idealised (because they exclude other effects so as to focus best on the one under study). They allow us to make 'sequences' of models, in which we vary only one parameter and keep all the others fixed.
For example, it is thus possible to obtain a sequence of models with initially identical spheroids and identical disk density profiles, but different velocity dispersions in the disk. The alternative to these simulations is cosmological simulations, and, more specifically, zoom re-simulations. In such re-simulations a specific halo (or galaxy), having the desired properties, is chosen from the final snapshot of a full cosmological simulation. The simulation is then rerun with a higher resolution for the parts which end up in the chosen galaxy or which come to a close interaction with it, and also after having replaced a fraction of the dark matter particles in those parts by gas particles. Zoom simulations are more general than the dynamical ones because the former include all the effects that dynamical simulations have, deliberately, neglected. However, they do not allow us to build sequences of models; they also have lower resolution than the dynamical ones and necessitate much more computer time and memory. Furthermore, some care is necessary because cosmological simulations are known to have a few problems when compared with nearby galaxy observations, concerning, e.g., the number and distribution of satellites, the inner halo radial density profile, the formation of bulge-less galaxies, or the Tully-Fisher relation (see, e.g., Silk & Mamon 2012 for a review). Thus, the zoom re-simulations could implicitly contain some non-realistic properties, which are not in agreement with what is observed in nearby galaxies, and therefore reach flawed results. Moreover, since many effects take place simultaneously, it is often difficult to disentangle the contribution of each one separately, which very strongly hampers the understanding of a phenomenon. For example, it is impossible to fully understand the bar formation instability if the model galaxy in which it occurs is continuously interacting or merging with other galaxies.
A more appropriate way would be to first understand the formation and evolution of bars in an isolated galaxy, and then understand the effect of the interactions and mergings as a function of the properties of the intruder(s). Thus, zoom simulations should not yet be considered as a replacement of dynamical simulations, but rather as an alternative approach, allowing comparisons with dynamical simulations after the basic instabilities have been understood. A few studies using cosmological zoom simulations have already been made and have given interesting results on the formation and properties of bars (Romano-Díaz et al. 2008; Scannapieco & Athanassoula 2012; Kraljic et al. 2012). A non-trivial issue about dynamical N -body simulations is the creation of the initial conditions. These assume that the spheroid and the disk are already in place and, most important, that they are in equilibrium. This is very important, since a system which is not in equilibrium will undergo violent relaxation and transients, which can have undesirable secondary effects, such as spurious heating of the disk or altering of its radial density profile. At least three different classes of methods to create initial conditions have been developed so far. (a) In the case of multi-component systems, e.g., galaxies with a disk, a bulge and a halo, the components are built separately and then either simply superposed (e.g., Hernquist 1993), or the potential of the one is adiabatically grown in the other before superposition (e.g., Barnes 1988; Shlosman & Noguchi 1993; Athanassoula 2003, 2007; McMillan & Dehnen 2007). The former can be dangerous, as the resulting model can be considerably off equilibrium. The latter is strongly preferred to it, but still has the disadvantage that the adiabatic growing of one component can alter the density profiles of the others, which is not desirable when one wishes to make sequences of models.
It is also not trivial to devise a method for assigning the velocities to the disk particles without relying on the epicyclic approximation (but see Dehnen 1999). Last but not least, this class of methods is not useful for complex systems such as triaxial bulges or haloes. (b) The Schwarzschild method (Schwarzschild 1979) can also be used for making initial conditions, but has hardly been used for this, because the application is rather time consuming and not necessarily straightforward. (c) A very promising method for constructing equilibrium phase models for stellar systems is the iterative method (Rodionov et al. 2009). It relies on constrained, or guided, evolution, so that the equilibrium solution has a number of desired parameters and/or constraints. It is very powerful, to a large extent due to its simplicity. It can be used for mass distributions with an arbitrary geometry and a large variety of kinematical constraints. It has no difficulty in creating triaxial spheroids, and the disks it creates do not follow the epicyclic approximation, unless this has been imposed by the user. It has lately been extended to include a gaseous component (Rodionov & Athanassoula 2011). Its only disadvantage is that it is computer intensive, so that in some cases the time necessary to make the initial conditions is a considerable fraction of the simulation time. I would also like to stress here a terminology point which, although not limited to simulations, is closely related to them. In general the dynamics of haloes and bulges are very similar, with of course quantitative differences due to their respective extent, mass and velocity dispersion values. For this reason, I will sometimes use in these lecture notes the terms 'halo' and 'bulge' specifically, while in others I will use the word 'spheroid' in a generic way, to designate the halo and/or the bulge component.
The reasons for this are sometimes historic (i.e., how it was mentioned in the original paper), or quantitative (e.g., if the effect of the halo is quantitatively much stronger than that of the bulge), or just for simplicity. The reader can mentally interchange the terms as appropriate. On angular momentum exchange and the role of resonances: the analytic approach Two papers are the pillars of the analytical work on angular momentum redistribution in disk galaxies - namely Lynden-Bell & Kalnajs (1972) and Tremaine & Weinberg (1984) - while further useful information can be found in, e.g., Kalnajs (1971), Dekker (1974), Weinberg (1985, 1994), Athanassoula (2003), Fuchs (2004), Fuchs & Athanassoula (2005). In order to reach tractable analytic expressions, it is necessary to consider the disk and the spheroid components separately, and use different approximations in the two cases. For the disk we can use the epicyclic approximation (i.e., we will assume that the disk orbits can be reasonably well approximated by epicycles), while for the spheroid we will assume that the distribution function depends only on the energy, as is the case for spherical isotropic systems. The main results obtained in the papers listed above are: (a) Angular momentum is emitted or absorbed mainly at resonances. It is, however, also possible to emit or absorb away from resonances if the potential is not stationary, but grows or decays with time. Nevertheless, the contribution of the non-resonant material to the total emission or absorption should remain small, unless the growth or decay of the potential is important. (b) In the disk component, angular momentum is emitted from the ILR and at other l < 0 resonances and absorbed at the OLR and at other l > 0 resonances. It is also absorbed at CR, but, all else being equal, in lesser quantities than at the Lindblad resonances. (c) The spheroid absorbs angular momentum at all its resonances.
(d) The global picture is thus that angular momentum is emitted from the bar region and absorbed by the CR and OLR in the disk, and by all resonances in the spheroid. Thus, angular momentum is transported from the inner parts of the disk, to the part of the disk outside CR and to the spheroid resonant regions. (e) For both the disk and the spheroid components it is possible to show that, for the same perturbing potential and the same amount of resonant material, a given resonance will emit or absorb more angular momentum if the material there is colder (i.e., has a lower velocity dispersion). Therefore, since the disk is always colder than the spheroid, it will absorb more angular momentum per unit resonant mass. Nevertheless, the spheroid is much more massive than the outer disk, so the amount of angular momentum it absorbs may exceed that absorbed by the outer disk. (f) Since the bar is inside corotation, it has negative energy and angular momentum and as it emits angular momentum it gets destabilised, i.e., it grows stronger. It is thus expected that the more angular momentum is emitted, the stronger the bar will become. On angular momentum exchange and the role of resonances: input from simulations General comments It is not possible to compare the analytical work mentioned in the previous section directly with observations, because each galaxy is observed only at a single time during its evolution, and neither angular momentum exchange nor individual orbits can be directly observed. One should thus include an intermediate step in the comparisons, namely N -body simulations. In these, it is possible to follow directly not only the evolution in time, but also the angular momentum exchange and the individual orbits, i.e., it is possible to make direct comparisons of simulations with analytical work. Furthermore, one can 'observe' the simulation results using the same methods as for real galaxies and make comparisons (Section 4.10). 
Simulations thus provide a meaningful and necessary link between analytical work and observations. In order to show that the analytical results discussed in Section 4.5 do apply to simulations it is necessary to go through a number of intermediate steps, i.e., to show (a) that there is a reasonable amount of mass at (near-)resonance both for the disk and the spheroid components, (b) that angular momentum is emitted from the resonances in the bar region and absorbed by all the spheroid resonances and the outer disk resonances, (c) that the contribution of the spheroid in the angular momentum redistribution is important, (d) that, as a result of this angular momentum transfer, the bar becomes stronger and slows down, (e) that stronger bars are found in simulations in which more angular momentum has been exchanged within the galaxy, (f) and that more (less) angular momentum can be exchanged when the emitting or absorbing material is colder (hotter). This sequence of steps was followed in two papers (Athanassoula 2002, hereafter A02, and Athanassoula 2003, hereafter A03) whose techniques and results I will review in the next subsections, giving, whenever useful, more extended information (particularly on the techniques) than in the original papers, so that the work can be more easily followed by students and non-specialists. Calculating the orbital frequencies Our first step will be to calculate the fundamental orbital frequencies. Since we are interested in the redistribution of L z , we will focus on the angular and the epicyclic frequency. The epicyclic frequency κ can be calculated with the help of the frequency analysis technique (Binney & Spergel 1982; Laskar 1990), which relies on a Fourier analysis of, e.g., the cylindrical radius R(t) along the orbit. The desired frequency is then obtained as the frequency of the highest peak in the Fourier transform.
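This κ estimate can be sketched in a few lines on a synthetic epicyclic orbit. The orbit parameters below are invented for illustration only, and a window is applied before the transform to reduce spectral leakage:

```python
import numpy as np

# Synthetic epicyclic orbit: R(t) oscillates with a single (assumed)
# frequency kappa_true. All numbers are invented for illustration.
kappa_true = 1.3                                  # radians per time unit
T = 40 * 2 * np.pi / kappa_true                   # follow ~40 oscillations
t = np.linspace(0.0, T, 4096, endpoint=False)
R = 5.0 + 0.7 * np.cos(kappa_true * t + 0.3)      # invented orbit

# Frequency analysis: remove the mean, apply a window to reduce leakage,
# Fourier transform and take the frequency of the highest peak.
sig = (R - R.mean()) * np.hanning(len(R))
power = np.abs(np.fft.rfft(sig)) ** 2
freq = np.fft.rfftfreq(len(R), d=t[1] - t[0])     # cycles per time unit
kappa_est = 2 * np.pi * freq[np.argmax(power)]
print(kappa_est)   # recovers kappa_true to within the frequency-grid step
```

The accuracy of the recovered frequency is set by the grid step of the transform, i.e., by the length of the time series, which is why the orbit must be followed for many oscillation periods.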
The angular frequency Ω is more difficult to estimate, and in A02 and A03 I supplemented frequency analysis with other methods, such as following the angle with time. Several technical details are important for the frequency analysis. It is necessary to use windowing before doing the Fourier analysis, to improve the accuracy. It is also necessary to keep in mind that some of the peaks of the power spectrum are not independent frequencies, but simply harmonics of the individual fundamental frequencies, or their combination. Furthermore, if one needs considerable accuracy, one has to worry about the fact that in standard Fast Fourier Transforms the step dω between two adjacent frequencies is constant, while the fundamental frequencies Ω i will not necessarily fall on a grid point. Besides the inaccuracy thus introduced, this will complicate the handling of the harmonics. Frequency analysis can be applied to orbits in any analytic stationary galactic potential, thus allowing the full calculation of the resonances and their occupation (e.g., Papaphilipou & Laskar 1996; Carpintero & Aguilar 1998; Valluri et al. 2012). Contrary to such potentials, however, simulations include full time evolution, so that the galactic potential, the bar pattern speed, as well as the basic frequencies Ω and κ of any orbit are time-dependent. Thus, strictly speaking, the spectral analysis technique cannot be applied, at least as such. It is, nevertheless, possible to estimate the frequencies of a given orbit at any given time t by using the potential and bar pattern speed at this time t (which are thus considered as frozen), as I did in A02 and A03. After freezing the potential, I chose a number of particles at random from each component of the simulation and calculated their orbits in the frozen potential, using as initial conditions the positions and velocities of the particles in the simulation at time t.
It is necessary to take a sufficient number of particles (of the order of 100 000) in order to be able to define clearly the main spectral lines. It is also necessary to follow the orbit for a sufficiently long time (e.g., 40 orbital rotation patterns), in order to obtain narrow lines in the spectrum. By doing so I do not assume that the potential stays unevolved over such a long time. What I describe here just amounts to linking the properties of a small part of the orbit calculated in the evolving simulation potential (hereafter simulation orbit) to an equivalent part of the corresponding orbit calculated in the frozen potential. The frequencies are then calculated for the orbit in the frozen potential and attributed to the small part of the simulation orbit in question (and not to the whole of the simulation orbit). This technique makes it possible to apply the frequency analysis method, as described in A02, A03 and above, and thus to obtain the main frequencies of each orbit at a given time. It is, furthermore, possible to follow the evolution by choosing a number of snapshots during the simulation and performing the above exercise separately for each one of them. The evolution can then be witnessed from the sequence of the results, one for each chosen time. Material at resonance Having calculated the fundamental frequencies as described in the previous section, it is now possible to plot histograms of the number of particles -or of their total mass, if particles of unequal mass are used in the simulation - as a function of the ratio of their frequencies measured in a frame of reference co-rotating with the bar, i.e., as a function of (Ω − Ω p )/κ. This can be carried out separately for the particles describing the various components, i.e., the disk, the halo, and the bulge. It was first carried out in A02 and the results, for two different simulations, are shown in Fig. 4.4. Before making this histogram, it is necessary to eliminate chaotic orbits. 
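One deliberately crude way to sketch such an elimination of chaotic orbits, in the spirit of the frequency analysis above, is to measure how concentrated the Fourier power is in a few bins: a regular orbit puts essentially all its power into a handful of isolated lines, while a noise-like chaotic signal spreads it over many bins. The two signals and the threshold below are invented for illustration, not taken from A02 or A03:

```python
import numpy as np

def spectral_concentration(x, n_top=5):
    """Fraction of the Fourier power contained in the n_top strongest bins."""
    p = np.abs(np.fft.rfft((x - x.mean()) * np.hanning(len(x)))) ** 2
    return np.sort(p)[::-1][:n_top].sum() / p.sum()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 4096, endpoint=False)

# Two invented signals: a 'regular orbit' made of two isolated lines, and a
# noise-like stand-in for a chaotic spectrum with many non-isolated lines.
regular = np.cos(2 * np.pi * 41 * t / 200.0) + 0.4 * np.cos(2 * np.pi * 12 * t / 200.0)
chaotic = rng.standard_normal(len(t))

c_reg = spectral_concentration(regular)
c_cha = spectral_concentration(chaotic)
THRESHOLD = 0.8   # arbitrary cut; a real classifier needs careful calibration
print(c_reg > THRESHOLD, c_cha > THRESHOLD)   # True False
```

The noise-like signal is flagged as chaotic because its power is spread over many bins; real analyses use more careful criteria, but the contrast between the two concentrations illustrates why the two classes of spectra are separable.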
Their spectra differ strongly from those of regular orbits, consisting of a very large number of non-isolated lines. They of course always have a 'highest peak', but this has no physical significance and is not a fundamental frequency of the orbit. Eliminating chaotic orbits is non-trivial because of the existence of sticky orbits (see Section 4.3) for which the results of the classification as regular or chaotic may well depend on the chosen integration time. Thus, although for regular orbits it is recommended to use a long integration time in order to obtain narrow, well defined spectral peaks, for sticky orbits integration times must be of the order of the characteristic timescale of the problem. For instance, if the sticky orbit shifts from regular to chaotic only after an integration time of the order of say ten Hubble times, it will be of no concern to galactic dynamic problems and this orbit can for all practical purposes be considered as regular. It is clear from Fig. 4.4 that the distribution of particles in frequency is not homogeneous. In fact it has a few very strong peaks and a number of smaller ones. The peaks are not randomly distributed; they are located at the positions where the ratio (Ω − Ω p )/κ is equal to the ratio of two integers, i.e., when the orbit is resonant and closes after a given number of rotations and radial oscillations. The highest peak is at (Ω − Ω p )/κ = 0.5, i.e., at the ILR. A second important peak is located at Ω = Ω p , i.e., at CR where the particle co-rotates with the bar. Other peaks, of lesser relative height, can be seen at other resonances, such as the −1/2 (OLR), the 1/4 (often referred to as the ultraharmonic resonance -UHR), the 1/3, the 2/3, etc. In all runs with a strong bar the ILR peak dominates, as expected. But the height of these peaks differs from one simulation to another and even from one time to another in the same simulation. 
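The construction of such a histogram can be sketched as follows, with invented frequency ratios standing in for the per-particle simulation measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in for the per-particle ratios (Omega - Omega_p)/kappa:
# a population trapped near the ILR (ratio 1/2), one near CR (ratio 0),
# and a smooth non-resonant background.
ratio = np.concatenate([
    rng.normal(0.5, 0.01, 5000),     # ILR population
    rng.normal(0.0, 0.01, 2500),     # CR population
    rng.uniform(-0.7, 0.9, 2500),    # non-resonant background
])

counts, edges = np.histogram(ratio, bins=np.arange(-0.71, 0.91, 0.02))
centres = 0.5 * (edges[:-1] + edges[1:])
peak = centres[np.argmax(counts)]
print(peak)   # the highest peak sits at ratio ~0.5, i.e., at the ILR
```

In a real analysis the ratios would of course come from the frequency analysis of the simulation particles, and the relative peak heights would carry the physical information.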
This richness of structures in the resonance space could have been expected for the disk component. What, however, initially came as a surprise was the existence of strong resonant peaks in the spheroid. Two examples can be seen in the right-hand panels of Fig. 4.4. In both, the strongest peak is at corotation, and other peaks can be clearly seen at ILR, at (Ω − Ω p )/κ = −0.5 (OLR) and at other resonances. As was the case for the disk, the absolute and relative heights of the peaks differ from one simulation to another, as well as with time. Thus, the results of A02 that we have discussed in this section show that, both for the disk and the spheroid component, a very large fraction of the simulation particles is at (near-)resonance. Note that this result is backed by a large number of simulations. I have analysed the orbital structure and the resonances of some 50 to 100 simulations and for a number of times per simulation. The results of these, as yet unpublished, analyses are in good qualitative agreement with what was presented and discussed in A02, A03 and here. Further confirmation was brought by a number of subsequent and independent analyses (Martínez-Valpuesta et al. 2006; Ceverino & Klypin 2007; Dubinski et al. 2009; Wozniak & Michel-Dansac 2009; Saha et al. 2012). These studies include many different models, with very different spheroid mass profiles or distribution functions, as well as disks with different velocity dispersions. Also different simulation codes were used, including the Marseille GRAPE-3 and GRAPE-5 codes (Athanassoula et al. 1998), Gyr-Falcon (Dehnen 2000, 2002), FTM (Heller & Shlosman 1994, Heller 1995), ART (Kravtsov et al. 1997), Dubinski's treecode (Dubinski 1996) and GADGET (Springel et al. 2001, Springel 2005). Note also that Ceverino & Klypin (2007) have used a somewhat different approach, and did not freeze the potential before calculating the orbits.
Instead, they followed the particle orbits through a part of the simulation during which the galaxy potential (more specifically the bar potential and pattern speed) does not change too much. In this way they obtain a power spectrum with much broader peaks than in the studies that analyse the orbits in a sequence of frozen potentials. Nevertheless, the peaks are well-defined and confirm the main A02 results - namely that they are located at the main resonances - without the use of potential freezing. Note also that this version of the frequency analysis is not suitable for deciding whether a given orbit is regular or chaotic, but is considerably faster in computer time than the one relying on a sequence of frozen potentials. Angular momentum exchange In A03 I used N -body simulations to show that angular momentum is emitted at the resonances within CR, i.e., in the bar region, and that it is absorbed at resonances either in the spheroid, or in the disk from the CR outwards, as predicted by analytic calculations. For this I calculated the angular momentum of all particles in the simulation at two chosen times t 1 and t 2 and plotted their difference, ∆J = J 2 − J 1 , as a function of the frequency ratio (Ω − Ω p )/κ of the particle orbit at time t 2 . An example of the result can be seen in Fig. 1 of A03. Note that particles in the disk with a positive frequency ratio, and particularly particles at ILR, have ∆J < 0, i.e., they emit angular momentum. On the contrary, particles in the spheroid have ∆J > 0, i.e., they absorb angular momentum, particularly at the CR, followed by the ILR and OLR. Further absorption can be seen at the disk CR, but it is considerably less than the amount absorbed by the spheroid. The amount of angular momentum emitted or absorbed at a given resonance is of course both model- and time-dependent, as were the heights of the resonant peaks (Section 4.6.3).
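The ∆J measurement can be sketched with invented stand-in data (in a real analysis J would be computed per particle from the snapshot positions and velocities, J = x v_y − y v_x per unit mass; the populations below are assumptions chosen to mimic the described behaviour):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in data: a disk population near the ILR that loses angular
# momentum between t1 and t2, and a spheroid population near CR that gains it.
ratio = np.concatenate([rng.normal(0.5, 0.01, 4000),    # disk, ILR
                        rng.normal(0.0, 0.01, 4000)])   # spheroid, CR
dJ = np.concatenate([rng.normal(-1.0, 0.2, 4000),       # emitters
                     rng.normal(+1.0, 0.2, 4000)])      # absorbers

# Sum Delta J in bins of the frequency ratio measured at the later time.
edges = np.arange(-0.275, 0.78, 0.05)
sum_dJ, _ = np.histogram(ratio, bins=edges, weights=dJ)
centres = 0.5 * (edges[:-1] + edges[1:])

ilr = sum_dJ[np.argmin(np.abs(centres - 0.5))]
cr = sum_dJ[np.argmin(np.abs(centres - 0.0))]
print(ilr < 0, cr > 0)   # True True: the ILR bin emits, the CR bin absorbs
```

The sign pattern of the binned sums is what carries the physics: negative at the emitting resonances, positive at the absorbing ones.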
On the contrary, whether a given resonance absorbs or emits is model-independent, and in good agreement with analytic predictions (Section 4.5). Thinking of the bar as an ensemble of orbits, it becomes clear that there are many ways in which angular momentum can be lost from the bar region. The first possibility is that the orbits in the bar, and therefore the bar itself, will become more elongated. The second one is that stars initially on circular orbits closely outside the bar region will lose angular momentum and become elongated and part of the bar, which will thus get longer and more massive. In both cases the bar will become stronger in the process. The third alternative is that the bar will rotate slower, i.e., its pattern speed will decrease. These three possibilities were presented and discussed in A03, where it was shown that they are linked and occur concurrently. Thus evolution should make bars longer, and/or more elongated, and/or more massive and/or slower rotating (A03). Simulations agree fully with these predictions and go further, establishing that all these occur concurrently, but not necessarily at the same pace. 4.6.5 Types of models 4.6.5.1 Models with maximum and models with sub-maximum disks Since the spheroid plays such a crucial role in the angular momentum redistribution within the galaxy, it must also play a crucial role in the formation and evolution of the bar. Athanassoula & Misiriotis (2002, hereafter AM02) tested this by analysing the bar properties in two very different types of simulations, which they named MH (for Massive Halo) and MD (for Massive Disc), respectively. Both types have a halo with a core, which is big in the MD types and small in the MH ones. Thus, in MH models, the halo plays a substantial role in the dynamics within the inner four or five disk scale lengths, while not being too hot, so as not to impede the angular momentum absorption.
On the contrary, in MD models the disk dominates the dynamics within that radial range. The circular velocity curves of these two types of models are compared in Fig. 4.5. For the MD model (upper panel) the disk dominates the dynamics in the inner few disk scale lengths, while this is not the case for the MH model. MD-type models are what observers call maximum disk models, while the MH types have sub-maximum disks. It is not yet clear whether disks in real galaxies are maximum or sub-maximum, because different methods reach different conclusions, as reviewed, e.g., by Bosma (2002). As shown in AM02 and illustrated in Fig. 4.6, the observable properties of the bars which grow in these two types of models are quite different. MH-type bars are stronger (longer, thinner and/or more massive) than MD-type bars. Viewed face-on, they have a near-rectangular shape, while MD-type bars are more elliptical. Viewed side-on, they show stronger peanuts and sometimes (particularly towards the end of the simulation) even 'X' shapes. On the other hand, bars in MD-type models are predominantly boxy when viewed side-on. Thinking in terms of angular momentum exchange, it is easy to understand why MH-type bars are stronger than MD-type ones. Indeed, the radial density profile of MH-type haloes is such that, for reasonable values of the pattern speed, they have more material at resonance than do MD-types. Thus, all else being similar, there will be more angular momentum absorbed. This, in good agreement with analytical results, should lead to stronger bars. It should be stressed that the above discussion does not imply that all real galaxies are either of MH type or of MD type. The two models illustrated here were chosen as two examples, enclosing a useful range of halo radial density profiles, which could actually be smaller than what is set by the two above examples. Real galaxies can well be intermediate, i.e., somewhere in between the two.
It is nevertheless useful to describe the two extremes separately, since this gives a better understanding of the effects of the spheroid. Models of MH- or MD-type which also have a classical bulge can be termed MHB and MDB, respectively. The effect of the bulge in MD models is quite strong, so that the bars in MDB models have a strength and properties which are intermediate between those of MD and those of MH types (AM02). Furthermore, A03 and Athanassoula (2007) showed that an initially non-rotating bulge absorbs a considerable amount of angular momentum - thereby spinning up - and thus a bar in a model with a bulge slows down more than in a similar model with no bulge. All this can be easily understood from the frequency analysis, which shows that there are considerably more particles at resonance in cases with strong bulges (A03; Saha et al. 2012). On the other hand, the effect of the bulge on the bars of MH types is much less pronounced. Models with cusps The two models we have discussed above have a core, more or less extended. There is a further possibility, namely that the central part has a cusp. It has indeed been widely debated whether haloes have a cusp or a core in their central parts. Cosmological CDM, dark matter only simulations produce haloes with strong cusps. Thus, Navarro et al. (1996) find a universal halo profile, dubbed the NFW profile, which has a cusp with a central density slope (β = d ln ρ/d ln r) of −1.0, while Moore et al. (1999), with a higher resolution, find a slope of −1.5. Increasing the resolution yet further, Navarro et al. (2004) found that this slope decreases with decreasing distance from the centre, but not sufficiently to give a core. Finally, the simulations with the highest resolution (Navarro et al. 2010) argue for a lower central slope of the order of −0.7, but still too high to be compatible with a core. On the other hand, very extensive observational and modelling work (de Blok et al.
2001; de Blok & Bosma 2002; de Blok et al. 2003; Simon et al. 2003; Kuzio de Naray et al. 2006; de Blok et al. 2008; Oh et al. 2008; Battaglia et al. 2008; de Blok 2010; Walker & Peñarrubia 2011; Amorisco & Evans 2012; Peñarrubia et al. 2012) argues that the central parts of haloes should have a core, or a very shallow cusp, the distribution of inner slopes in the various observed samples of galaxies being strongly peaked around a value of ∼ 0.2. This discrepancy between the pure dark matter CDM simulations and observations may be resolved with more recent cosmological simulations which have high resolution and include baryons and appropriate star formation and feedback recipes. Indeed, such simulations start to produce rotation curves approaching those of observations (Governato et al. 2010; Macciò et al. 2012; Oh et al. 2011; Stinson et al. 2012). In order to stay in agreement with observations, I will here not discuss models with cusps. Readers interested in such models can consult, e.g., Valenzuela & Klypin (2003), Holley-Bockelmann et al. (2005), Sellwood & Debattista (2006), or Dubinski et al. (2009). Let me also mention that it is possible to study models with cusps using the same functional form for the halo density as for the MH- and MD-type models (AM02), but now taking a very small core radius, preferably of an extent smaller than the softening length.

The effect of the spheroid-to-disk mass ratio

In the two models we discussed above, it is clear that it is the one with the highest spheroid mass fraction within the disk region that makes the strongest bar. Is that always the case? The following discussion, taken from A03, shows that the answer is more complex than a simple yes or no. Assume we have a sequence of models, all with the same total mass, i.e., that the sum of the disk and the spheroid mass within the disk region is the same. How should we distribute the mass between the spheroid and the disk in order to obtain the strongest bar?
What must be maximised is the amount of angular momentum redistribution, or, equivalently, the amount of angular momentum taken from the bar region. For this it makes sense to have strong absorbers, which can absorb all the angular momentum that the bar region can emit. Past a certain limit, however, there will not be sufficient material in the bar region to emit all the angular momentum that the spheroid can absorb, and it will be useless to increase the spheroid mass further. So the strongest bar will not be obtained by the most massive spheroid, but rather at a somewhat lower mass value, such that the equilibrium between emitters and absorbers is optimum and the angular momentum exchanged is maximum. For the models discussed in AM02 and A03, this occurs at a spheroid mass value such that the disk, in the initial conditions, is sub-maximum. In Section 4.9.3 we will discuss how disks may evolve from sub-maximum to maximum during the simulation.

Live versus rigid halo

In the previous sections I reviewed the very strong evidence accumulated so far showing that many particles in the simulations, both in the disk and the spheroid component, are on (near-)resonant orbits and that the angular momentum exchanged between them is as predicted by the analytic calculations, i.e., from the bar region outwards (Section 4.5). The next step should be to clarify the importance of the halo resonances in the evolution. For this we have to compare two simulations, one in which the halo resonances are at work and another where they are not, as was first done in A02, whose main results will be reviewed here. Each of Figs. 4.7 and 4.8 compares two models with initially identical disks. In other words, the particles in the disk initially have identical positions and velocities in the two compared simulations.
The models of the haloes were also identical, but in one of the simulations (right-hand panels) the halo was rigid (represented only by the forces that it exerts on the disk particles) and thus did not evolve. In the other one, however, the halo was represented by particles, i.e., was live (left-hand panels). These particles move around as imposed by the forces and can emit or absorb angular momentum, as required. Figure 4.7 compares the disk evolution in the live and in the rigid halo when the model is of MH type. The difference between the results of the two simulations is stunning. In the case with a live halo a strong bar has formed, while in the case with a rigid halo there is just a very small inner oval-like perturbation. This shows that the contribution of the halo to the angular momentum exchange can play an important role; in the example shown here it actually plays the preponderant role. Figure 4.8 shows the results of a similar experiment, but now in an MD-type halo. The difference is not as stunning as in the previous example, but is still quite important. In the live halo case the bar is considerably longer and somewhat thinner than in the case with a rigid halo. It is thus possible to conclude that the role of the halo in the angular momentum redistribution is important. In fact in the MH-type models the role of the halo is preponderant, but it is still quite important even in the MD-types. It is thus strongly advised to work with live, rather than with rigid, haloes in simulations.

Distribution of frequencies for MD- and MH-type models

It is now useful to compare the distribution of frequencies for the two simulations used in Fig. 4.4. Figure 4.4 displays the frequency histograms for two models, one MD-type (upper panels) and one MH-type (lower panels). The properties of these two types of models were discussed in Section 4.6.5.1, where their initial rotation curve, as well as their bar morphology, are also displayed.
Starting with the disk, we note that the ILR peak is about 50% higher in the MH than in the MD model, while the CR peak is considerably lower. Also the MD model has an OLR peak, albeit a small one, while none can be seen in the MH one. For the spheroid, the strongest peak for both models is the CR one, which is much stronger in the halo than in the disk. It is, furthermore, stronger in the MH than in the MD model. Also the MH spheroid has a relatively strong ILR peak, which is absent from the MD one. On the other hand, the MD model has a much stronger OLR peak than the MH one. All these properties can be easily understood. From Fig. 4.6 it is clear that the bar in the MH model is stronger than in the MD one, as discussed already in Section 4.6.5.1, and this accounts for the much stronger ILR peak for the disk of the MH model. Also, from the initial circular velocity curves (Fig. 4.5) it is clear that the halo of the MH model has much more mass than that of the MD model within the radial extent where one would expect the CR and particularly the ILR to be. This explains why the halo CR peaks are stronger in the MH model and why the halo ILR peak is absent in the MD one. At larger radii the order between the masses of the MH and the MD haloes is reversed, the halo mass in the outer parts being relatively larger in the MD model. This explains why the halo OLR peak is stronger for the MD halo than for the MH one.

Bar strength

4.6.8.1 Evolution of the bar strength with time

Figure 4.9 shows the evolution of the bar strength† with time, comparing an MH-type and an MD-type model. It clearly illustrates how important the differences between these two types can be, as expected. It also shows that, in both cases, one can distinguish several evolutionary phases. By construction, both simulations start axisymmetric and this lasts all through what we can call the pre-growth phase. The duration of this phase, however, is about half a Gyr for the MD model, while for the MH one it lasts about 2 Gyr.
The second phase is that of bar growth, and lasts considerably less than a Gyr for the MD model and much longer (about 2 Gyr) for the MH one. In total, we can say that the bar takes less than 1 Gyr to reach the end of its growth phase in the MD model, compared to about 4 Gyr in the MH one. This is in good qualitative agreement with what was already found by , using simpler 2D simulations. From this and many other such comparisons, it becomes clear that the presence of a massive spheroid can very considerably both delay and slow down the initial bar formation due to its strong contribution to the total gravitational force.

† The definition of bar strength is not unique. The one used in the analysis of simulations is usually based on the m = 2 Fourier component, but precisely how this is used varies from one study to another. We will refrain from giving a list of precise definitions here, as this would be long and tedious. Furthermore, we will anyway only need qualitative information for our discussions here, which is the same, or very similar, for all definitions used in simulations. We will thus talk only loosely about 'bar strength' here and use arbitrary units in the plots (see also Section 4.10).

After the end of the bar growth phase, both models undergo a steep drop of the bar strength. This is due to the buckling instability (Raha et al. 1991). The final phase - which can also be called the secular evolution phase - starts somewhat after 5 Gyr for the MH model and after about 3 Gyr for the MD one. The corresponding bar strength increase which takes place during this phase is much more important for the MH than for the MD model. By the end of the evolution, MH models have a much stronger bar than MD ones. As already mentioned, this is due to the more important angular momentum redistribution in the former type of models. As in Section 4.6.5.1, let me stress that we are comparing two models which display strong differences.
Real galaxies can be of either type, but, most probably, can also be intermediate, in which case their bar strength evolution would also be intermediate between the two shown in Fig. 4.9.

Spheroid mass and bar strength

(a) In the early evolutionary stages, before and while the bar grows, the spheroid delays and slows down bar formation. This is due to the fact that the gravitational forcing of the spheroid 'dilutes' the non-axisymmetric forcing of the bar. Thus, this delay and slowdown occurs even in cases with a rigid spheroid, or with an insufficient number of particles (e.g., Ostriker & Peebles 1973).

(b) In the late evolutionary stages, e.g., when the secular evolution is underway, the presence of a massive and responsive spheroid will make the bar much stronger. This is due to the help of the spheroid resonances, which absorb a considerable fraction of the emitted angular momentum, thus inducing the bar region to emit yet more and (since it is within the CR and of negative energy) to become stronger. In order for this phase to be properly described the spheroid has to be live and contain a sufficient number of particles for the resonances to be properly described (A02).

Velocity dispersion and bar strength

Analytical works for the disk and/or the spheroidal component (Lynden-Bell & Kalnajs 1972; Tremaine & Weinberg 1984; A03) predict that the hotter the (near-)resonant material is, the less angular momentum it can emit or absorb. This was verified with the help of N-body simulations in A03. Contrary to the spheroid mass, velocity dispersion has the same effect on the bar strength evolution both in the early bar formation stages and in the later secular evolution stages. In the early stages a high velocity dispersion in the disk slows down bar formation, as shown initially by and later in A03. This is illustrated also in Fig. 4.10, where I compare two MD-type models with different velocity dispersions.
The first one has a Toomre Q parameter (Toomre 1964) of 1.3 and the second one of 1.7. This difference has a considerable impact on the growth and evolution of the bar. In the former the bar starts growing after roughly half a Gyr, its growth phase lasts about 1.5 Gyr and the secular increase of the bar strength starts around 4.5 Gyr. For the latter (hotter) model the beginning and end of each phase are much less clear, so that one can only very roughly say that the bar growth starts at about 4, or 5 Gyr and ends at about 9 Gyr. During the later evolutionary stages also, a high velocity dispersion will work against an increasing bar strength because, as shown by analytic work and verified by N-body simulations, material at resonance will emit or absorb less angular momentum per unit mass when it is hot. Thus, increasing the velocity dispersion in the disk and/or the spheroid leads to a delayed and slower bar growth and to weaker bars. This has important repercussions on the fraction of disk galaxies that are barred as a function of redshift and on their location on the Tully-Fisher relation (Sheth et al. 2008, 2012).

Bar strength and redistribution of angular momentum

One of the predictions of the analytic work is that there is a strong link between the angular momentum which is redistributed within the galaxy and the bar strength. One may thus expect a correlation between the two if the distribution functions of the disks and spheroids of the various models are not too dissimilar. This was tested in A03, using a total of 125 simulations, and was found to be true. Here we repeat this test, using a somewhat larger number of simulations (about 400 instead of 125) and a more diverse set of models, and again a good correlation is found. The result is shown in Fig. 4.11, where each symbol represents a separate simulation. It is clear that this correlation is tight, but still has some spread, due to the diversity of the models used.
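As an aside on the Q values quoted above, Toomre's criterion for a stellar disk can be written Q = σ_R κ / (3.36 G Σ), where σ_R is the radial velocity dispersion, κ the epicyclic frequency and Σ the disk surface density. The sketch below evaluates it for purely illustrative numbers (the circular velocity, radius and surface density are assumptions of this sketch, not values from the simulations discussed here):

```python
import math

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / M_sun

def toomre_q(sigma_R, kappa, Sigma):
    """Toomre (1964) stability parameter for a stellar disk:
    Q = sigma_R * kappa / (3.36 * G * Sigma)."""
    return sigma_R * kappa / (3.36 * G * Sigma)

# Illustrative values at a few disk scale lengths (assumed for this sketch):
# a flat rotation curve gives an epicyclic frequency kappa = sqrt(2) V_c / r.
V_c, r = 200.0, 5.0                 # circular velocity (km/s) and radius (kpc)
kappa = math.sqrt(2.0) * V_c / r    # km/s/kpc
Sigma = 1.0e8                       # disk surface density, M_sun / kpc^2
cool = toomre_q(30.0, kappa, Sigma)
hot = toomre_q(40.0, kappa, Sigma)
# Q is linear in sigma_R, so the 'hot' disk sits further above the
# instability threshold Q = 1 than the 'cool' one.
assert hot > cool > 1.0
```

Since Q scales linearly with σ_R, raising the dispersion from 30 to 40 km/s raises Q by a third, which is the kind of difference separating the Q = 1.3 and Q = 1.7 models compared above.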
Note also that we have not actually used the total amount of angular momentum emitted from the bar region (or, equivalently, the amount of angular momentum absorbed in the outer disk and in the spheroid), but rather the fraction of the total initial angular momentum that was deposited in the spheroid, which proves to be a good proxy for the required quantity. Finally, note that the points are not homogeneously distributed along the trend. This has no physical significance, but is simply due to the way that I chose my simulations. Indeed I tried to study the MH-type and the MD-type models and was relatively less interested in the intermediate cases.

Bar slowdown

Results from N-body simulations

Another prediction of the analytic work is that the bar pattern speed will decrease with time, as the bar strength increases (Section 4.5). This has been confirmed by a large number of N-body simulations (e.g., Little & Carlberg 1991a, b; Hernquist & Weinberg 1992; Debattista & Sellwood 2000; A03; O'Neil & Dubinski 2003; Martínez-Valpuesta et al. 2006; Villa-Vargas et al. 2009). The amount of this decrease was found to vary considerably from one simulation to another, depending on the mass as well as on the velocity distribution in the disk and the spheroidal (halo plus classical bulge) components, consistent with the fact that these mass and velocity distributions will condition the angular momentum exchange and therefore the bar slowdown. There is a notable exception to the above very consistent picture. Valenzuela & Klypin (2003) found in their simulations a counter-example to the above, where the pattern speed of a strong bar hardly decreases over a considerable period of time. The code they use, ART, includes adaptive mesh refinement, and thus reaches high resolution in regions where the particle density is high.
According to these authors, the difference between their results and those of other simulations is due to the high resolution (20-40 pc) and the large number of particles (up to 10^7) they use. Sellwood & Debattista (2006) examined cases where the bar pattern speed fluctuates upward. After such a fluctuation, the density of resonant halo particles will have a local inflection created by the earlier exchanges, so that bar slowdown can be delayed for some period of time. They show that this is more likely to occur in simulations using adaptive refinement and propose that this explains the evolution of the pattern speed in the simulation of Valenzuela & Klypin (2003). Klypin et al. (2009) did not agree and replied that Sellwood & Debattista did not have the same adaptive refinement implementation as ART. Sellwood (2010) stressed that such episodes of non-decreasing pattern speed are disturbed by perturbations, such as a halo substructure, and thus are necessarily short-lived. He thus concludes that simulations where the pattern speed does not decrease have simply not been run long enough. At the other extreme, Villa-Vargas et al. (2009) find a similar stalling of the pattern speed for prolonged time periods when the simulation is run so long that the corotation radius gets beyond the edge of the disk. Dubinski et al. (2009) published a series of simulations, all with the same model but with increasing resolution. They use between 1.8×10^3 and 18×10^6 particles in the disk and between 10^4 and 10^8 particles in the halo. They also present results from a multi-mass model with an effective resolution of ∼10^10 particles. They have variable, density-dependent softening, with a minimum of the order of 10 pc. Their Fig. 18 shows clearly that the decrease of the pattern speed with time does not depend on the resolution and that it is present in all of their simulations, even the ones with the highest resolution, much higher than that used by Valenzuela & Klypin.
They conclude that 'the bar displays a convergent behavior for halo particle numbers between 10^6 and 10^7 particles, when comparing bar growth, pattern speed evolution, the DM [dark matter] halo density profile and a nonlinear analysis of the orbital resonances'. This makes it clear that, at least for their model, the pattern speed decreases with time for all reasonable values of particle numbers.

A schematic view

Figure 4.12 shows very schematically an interesting effect of the bar slowdown. The solid line shows the radial profile of Ω(r) for a very simple model with a constant circular velocity, but the following holds for any realistic circular velocity curve. Let us assume that at time t = t_1 the pattern speed is given by the dashed horizontal line and drops by t = t_2 to a lower value given by the dotted horizontal line, as shown by the vertical arrow. This induces a change in the location of the resonances. For example the CR, which at t_1 is located at 5 kpc, as given by the vertical dashed line, will move by t_2 considerably outwards to a distance beyond 6 kpc, as given by the dotted vertical line and shown by the horizontal arrow. This also increases the region over which the bar is allowed to extend, since, as shown by orbital theory (Contopoulos 1980 and Section 4.3), the bar size is limited by the CR. This schematic plot also makes it easy to understand how a 'fast' bar can slow down considerably while remaining 'fast'. As we saw in Section 4.3, a bar is defined to be 'fast' if the ratio R of the corotation radius to the bar length is less than 1.4. Thus, a bar that slows down and whose corotation radius increases can still have R < 1.4, provided the bar length increases accordingly. This occurs in a number of simulations, see, e.g., Dubinski et al. (2009).

What sets the pattern speed value?

What sets the value of the pattern speed in a simulation (and thus also presumably in real galaxies)?
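The geometry behind the Fig. 4.12 schematic is simple enough to sketch numerically. For a flat circular-velocity curve, Ω(r) = V_c/r, so corotation sits at r_CR = V_c/Ω_p and moves outwards as Ω_p drops. The numbers below are assumptions chosen to match the 5 kpc example of the schematic:

```python
def corotation_radius(V_c, Omega_p):
    """CR radius for a flat rotation curve: Omega(r) = V_c / r = Omega_p."""
    return V_c / Omega_p

def is_fast(r_CR, bar_length):
    """A bar is conventionally 'fast' if R = r_CR / L_bar < 1.4."""
    return r_CR / bar_length < 1.4

V_c = 200.0                        # circular velocity in km/s (assumed)
Om1, Om2 = 40.0, 32.0              # pattern speed drops from t1 to t2 (km/s/kpc)
r1 = corotation_radius(V_c, Om1)   # 5.0 kpc, as in the schematic at t1
r2 = corotation_radius(V_c, Om2)   # 6.25 kpc: CR moves outwards as the bar slows
# If the bar meanwhile grows from 4 to 5 kpc, R stays below 1.4 throughout,
# i.e., the bar slows down considerably while remaining 'fast':
assert is_fast(r1, 4.0) and is_fast(r2, 5.0)
```

The same bar length at both times would not do: with the bar still 4 kpc long at t2, R = 6.25/4 ≈ 1.56 and the bar would no longer qualify as 'fast'.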
The value of the pattern speed is set by the value of the corotation radius, which is in fact the borderline between emitters and absorbers. Thus, if the galaxy wants to maximise the amount of angular momentum it pushes outwards (i.e., the amount of angular momentum that it redistributes), it has to set this boundary, and therefore its bar pattern speed, appropriately. If the spheroid is massive, i.e., if it has sufficient mass in the resonant regions, then the bar can lower its pattern speed in order to have more emitters, since the absorbers are anyway strong. On the other hand, if the spheroid is not sufficiently massive, then the bar should not lower its pattern speed overly, because it needs the absorption it can get from the outer disk. Thus, indirectly, it could be the capacity of the spheroid resonances to absorb angular momentum that sets the value of the bar pattern speed. This would mean that properties of the dark matter halo and of the classical bulge, such as their mass relative to that of the disk and their velocity dispersion at the resonant regions, will have a crucial role in setting the bar pattern speed.

Boxy/peanut bulges

Peanuts: input from simulations, orbits and observations

When bars form in N-body simulations they have a thin vertical density profile, similar to that of the disk. In other words, it is the in-plane rearrangement of the disk material that creates the bar, when initially near-planar and near-circular orbits become more elongated and material gets trapped around the stable periodic orbits of the x1 family, as already discussed in Sections 4.3 and 4.6.4. This configuration, however, lasts for only a short while, after which the bar buckles out of the plane and becomes asymmetric with respect to the equatorial plane, as shown, e.g., in Combes et al. (1990) and Raha et al. (1991), and as is illustrated in the middle central panel of Fig. 4.1.
This evolutionary phase, which can be called the asymmetry phase, is also very short-lived and soon the side-on view displays a clear peanut or boxy shape. During the peanut formation phase the strength of the bar decreases considerably (Combes et al. 1990; Raha et al. 1991; Debattista et al. 2004; Martínez-Valpuesta & Shlosman 2004; Martínez-Valpuesta et al. 2006; Athanassoula 2008a). Two examples of this decrease can be seen in Fig. 4.9, one for an MH-type simulation (where the bar strength decrease starts only at roughly 6 Gyr) and another for an MD-type simulation (where it already starts at roughly 3 Gyr). This decrease can sometimes be very important, so that it could get erroneously interpreted as a bar destruction. These boxy/peanut structures had been observed in real galaxies many times, well before being seen in simulations. Due to the fact that they extend vertically well outside the disk, they were called bulges. More specifically, if they have a rectangular-like (box-like) outline they are called boxy bulges, and if their outline is more reminiscent of a peanut, they are called peanut bulges. Sometimes, however, this distinction is not made and the words 'boxes' or 'peanuts' are used indiscriminately, or the more generic term 'boxy/peanut' is used instead. A number of kinematical and photometrical observations followed, and comparisons of their results with orbits and with simulations established the link of boxy/peanut bulges to bars (Kuijken & Merrifield 1995; Bureau & Freeman 1999; Merrifield & Kuijken 1999; Lütticke et al. 2000; Aronica et al. 2003; Chung & Bureau 2004; Athanassoula 2005b; Bureau & Athanassoula 2005; Bureau et al. 2006).

Peanut-related orbital structure

Considerable information on boxy/peanut structures can be obtained with the help of orbital structure theory. In 3D the orbital structure is much more complex than in 2D, as expected.
Thus, the x1 family has many sections (i.e., energy ranges) where its members are vertically unstable, and, at the energies where there is a transition from stability to instability, a 3D family can bifurcate (i.e., emerge). The orbits that are trapped around the stable l = 1, m = 2, n = 0 periodic orbits of this family can participate in the boxy/peanut structure (Patsis et al. 2003). They were discussed by Pfenniger (1984) and by Skokos et al. (2002a, b), who presented and described a number of relevant families. Since these orbits bifurcate from the x1 family and create vertically extended structures, they were named by Skokos et al. (2002a) by appending 'vi', i = 1, 2, ..., to the x1, i.e., x1v1, x1v2, ..., where i is the order of the bifurcation. Projected on the (x, y) plane, their shape is very similar to that of the members of the planar x1 family. Good examples of such periodic orbits can be seen in Fig. 9 of Pfenniger (1984), or Figs. 7 to 10 of Skokos et al. (2002a).

Peanuts as parts of bars: shape and extent

Contrary to what has been very often said and written, boxy/peanut bulges are not bars seen edge-on. The correct statement is that boxy/peanut bulges are the inner parts of bars seen edge-on. The evidence for this was put together and discussed in Athanassoula (2005b) and I will only summarise it briefly here. Orbital structure theory shows that not all planar periodic orbits of the x1 family are vertically unstable. In fact, the ones in the outer part of the bar are stable. Therefore, the outer part of the bar will stay thin and only the part within a given radius will thicken, so that the peanut will be shorter than the bar. This gives the bar an interesting form. As a very rough approximation, one can think of the bar as a rectangular parallelepiped box (like a shoe box), from the two smallest sides of which (perpendicular to the bar major axis) stick out thin extensions.
Of course this is a very rough picture and the shape of the 'box' is in fact much more complex than a rectangular parallelepiped, while the extensions have shapes which are difficult to describe. The best approach is to look at an animation†, where one can see a bar from a simulation from various viewing angles. How much longer is the bar than the peanut? The answer to this question is not unique and depends on which one of the x1vi families sets the end of the peanut, on the galactic potential and on the bar pattern speed (Patsis et al. 2003). Figure 4.13 gives an estimate of the ratio of boxy/peanut to thin bar length, for one of my simulations. In general, it is much easier to obtain an estimate of this ratio for simulations than for observed galaxies, because one can view snapshots from any desired angle. Thus, the length of the bar can be obtained from the face-on view (lower panel) as the major axis of the largest isophotal contour that has a bar shape. This of course introduces an uncertainty of a few to several percent, but is about as good as one can achieve with difficult quantities such as the bar length‡. The size of the boxy/peanut part can be found from the edge-on view (upper panel). This also introduces an uncertainty, probably much larger than that of the bar length (AM02), but even so one can get reasonable estimates of the ratio of the two extents, and certainly make clear that the thin part of the bar can be much longer than the thick boxy/peanut part. Further discussion of this, and further examples, can be found in Athanassoula (2005b), Athanassoula & Beaton (2006) and Athanassoula (2008a). Estimates of the ratio of boxy/peanut to thin bar length can also be obtained from observations. Nevertheless, information for observed galaxies can be obtained from only one viewing angle and these estimates are less precise than the corresponding simulation ones. Figure 4.14 allows us to get an estimate for NGC 2654. Lütticke et al.
(2000) made a cut along the major axis of this edge-on disk galaxy and from the projected surface density profile along it they obtained the thin bar length (compare with method (vi) from AM02). They also made cuts parallel to this, offset above or below it, and from them could obtain the extent of the boxy/peanut part. In this way, Lütticke et al. (2000) were able to measure the ratio of the extent of the thin part of the bar to the extent of the thick boxy/peanut part and show clearly that the former can be much longer than the latter.

The boxy/peanut system in the Milky Way

The bar shape described in the previous section has important implications for the structure of the Milky Way. It is now well established that our Galaxy is barred (e.g., de Vaucouleurs 1964; Blitz & Spergel 1991). The thick component which can be seen to extend outside the Galactic plane in the near-infrared COBE (Cosmic Background Explorer) image is often referred to as the COBE/DIRBE (Diffuse Infrared Background Experiment) bar, or the thick bar. About ten years later, further evidence started accumulating and was initially interpreted as due to the existence of a second bar, longer than the first one and considerably thinner (Hammersley et al. 2000; Benjamin et al. 2005; Cabrera-Lavers et al. 2007; López-Corredoira et al. 2007; Cabrera-Lavers et al. 2008; Churchwell et al. 2009). This second bar has been named the Long bar. The existence of a second bar is very common in barred galaxies, and about a fourth or a fifth of disk galaxies have both a primary or main bar and a secondary or inner bar (Erwin & Sparke 2002; Laine et al. 2002; Erwin 2011). However, the ratio of the lengths of the two presumed Milky Way bars is totally incompatible with what is observed in double-barred external galaxies (Romero-Gómez et al. 2011), and it would be very dangerous to assume that our Galaxy has morphological characteristics so different from those of external galaxies.
There are two very important clues that can help us understand the nature of the bar system in the Milky Way. The first one is that the COBE/DIRBE bar is thick while the Long bar is thin, their ratios of minor (z-) axis to major axis being of the order of 0.3 and 0.03, respectively. The second one is that the COBE/DIRBE bar is shorter than the Long bar by a factor of roughly 0.8.

Fig. 4.14. Upper panel: Isophotes for the edge-on disk galaxy NGC 2654 in the near-infrared. Lower panel: Surface brightness profiles from cuts along, or parallel to, the major axis of this edge-on disk galaxy. From the cut along the major axis (uppermost curve), it is possible to obtain an estimate of the projected bar length - BAL on the plot. The size of the boxy/peanut bulge is obtained from cuts parallel to the major axis, but offset from it above or below the equatorial plane - BPL on the plot. (Figure 1 of Lütticke et al. 2000, reproduced with permission © ESO.)

These clues, taken together with the discussion in Sections 4.8.2 and 4.8.3, point clearly to a solution where the thick COBE/DIRBE bar and the thin Long bar are just parts of the same single bar, the former being its thick boxy/peanut part and the latter being its outer thin part. This alternative was first proposed for our Galaxy by Athanassoula (2006, 2008b) and first tested by Cabrera-Lavers et al. (2007) using their red-clump giant measurements. This suggestion was disputed at the time, because a number of observations (Hammersley et al. 2000; Benjamin et al. 2005; Cabrera-Lavers et al. 2008) argued that the position angles of the COBE/DIRBE bar and of the Long bar are considerably different, with values between 15 and 30 degrees for the former and around 43 degrees for the latter. Yet this difference in orientations is not a very strong argument.
First, due to our location within the Galaxy, the estimates of the Galactic bar position angles are much less accurate than the corresponding estimates for external galaxies. Thus, Zasowski et al. (2012) find the position angle of the Long bar to be around 35 degrees, i.e., much closer to that of the COBE/DIRBE bar than the 43 degrees estimated in previous works. Second, if the shape of the outer isodensity contours of the thin part of the bar is, in the equatorial plane, more rectangular-like than elliptical-like - as is often the case in external galaxies (e.g., Athanassoula et al. 1990; Gadotti 2008, 2011) - the Long bar position angle will appear to be larger than it actually is. A third argument is that our Galaxy could well have an inner ring, of the size of the bar. N-body simulations have shown that, in such cases, there is often within the ring a short, leading segment near the end of the bar. Examples can be found in Fig. 2 in AM02, Fig. 3. In view of all the above comments, the small difference between the position angle of the COBE/DIRBE bar and that of the Long bar should not be a major concern. I thus still believe that my initial proposal, that the COBE/DIRBE and the Long bar are parts of the same bar, is correct.

Secular evolution of the disk and of its substructures

The presence of a bar induces not only the redistribution of angular momentum within the host galaxy (Sections 4.5 and 4.6), but also the redistribution of the material within it. The torques it exerts are such that material within the CR is pushed inwards, while material outside the CR is pushed outwards. As a result, there is a considerable redistribution of the disk mass.
Redistribution of the disk mass: formation of the disky bulge

It is well known that gas will concentrate to the inner parts of the disk under the influence of the gravitational torque of a bar, thus forming an inner disk whose extent is of the order of a kpc (Athanassoula 1992b; Wada & Habe 1992, 1995; Friedli & Benz 1993; Heller & Shlosman 1994; Sakamoto et al. 1999; Sheth et al. 2003; Regan & Teuben 2004). When this gaseous disk becomes sufficiently massive it will form stars, which should be observable as a young population in the central part of disks. Kormendy & Kennicutt (2004) estimate that the star formation rate density in this region is 0.1-1 M⊙ yr^-1 kpc^-2, i.e., one to three orders of magnitude higher than the star formation rate averaged over the whole disk. Such disks can harbour a number of substructures, such as spirals, rings, bright star-forming knots, dust lanes and even (inner) bars, as discussed, e.g., in Kormendy (1993), Carollo et al. (1998) and Kormendy & Kennicutt (2004). Furthermore, a considerable amount of old stars is pushed inwards, so that this inner disk will also contain a considerable fraction of old stars (Grosbøl et al. 2004). Such disks are thus formed in N-body simulations even when the models have no gas, as seen, e.g., in AM02, or Athanassoula (2005b). Such inner disks are evident in projected surface luminosity radial profiles, as extra light in the central part of the disk, above the exponential profile fitting the remaining (non-central) part. Since this is one of the definitions for bulges, such inner disks have been linked to bulges. When fitting them with an r^(1/n) law, commonly known as Sérsic's law (Sérsic 1968), the values found for n are of the order of or less than 2 (Kormendy & Kennicutt 2004 and references therein). They are thus often called disky bulges, or disk-like bulges (Athanassoula 2005b), or pseudobulges (Kormendy 1993; Kormendy & Kennicutt 2004).
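The r^(1/n) profile mentioned above is easy to sketch numerically. The snippet below uses the common approximation b_n ≈ 2n − 1/3 for the Sérsic shape constant; the radii and amplitudes are hypothetical illustration values, not fits to any galaxy.

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)),
    with the common approximation b_n ~ 2n - 1/3 (valid for n >~ 0.5)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# n = 1 is an exponential profile, typical of disky bulges (n <~ 2);
# n = 4 is the de Vaucouleurs profile characteristic of classical bulges.
r = np.linspace(0.01, 5.0, 100)            # radii in units of r_e (hypothetical grid)
disk_like = sersic(r, I_e=100.0, r_e=1.0, n=1.0)
classical = sersic(r, I_e=100.0, r_e=1.0, n=4.0)
print(disk_like[0] < classical[0])         # higher n -> steeper central rise
```

By construction I(r_e) = I_e for any n, which is a quick sanity check on any Sérsic implementation.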
Redistribution of the disk mass: the disk scale-length and extent

Due to the bar torques and the resulting mass redistribution, the parts of the disk beyond corotation become more extended and the disk scale length increases considerably (e.g., Hohl 1971; Athanassoula & Misiriotis 2002; O'Neill & Dubinski 2003; Valenzuela & Klypin 2003; Debattista et al. 2006; Minchev et al. 2011). Debattista et al. (2006) showed that the value of the Toomre Q parameter (Toomre 1964) of the disk can strongly influence how much this increase will be. Important extensions of the disk can also be brought about by flux-tube manifold spiral arms (Romero-Gómez et al. 2006, 2007; Athanassoula et al. 2009a, 2009b, 2010), as shown by Athanassoula (2012), who reported a strong extension of the disk size, by as much as 50%, after two or three episodes of spiral arm formation within a couple of Gyrs.

Sackett (1997) and Bosma (2000) discuss a simple, straightforward criterion allowing us to distinguish maximum from sub-maximum disks. Consider the ratio S = V_d,max / V_tot, where V_d,max is the circular velocity due to the disk component and V_tot is the total circular velocity, both calculated at a radius equal to 2.2 disk scalelengths. According to Sackett (1997), this ratio has to be at least 0.75 for the disk to be considered maximum. Of course, in the case of strongly barred galaxies the velocity field is non-axisymmetric and one should consider azimuthally averaged rotation curves, or 'circular velocity' curves. Furthermore, in the case of strongly barred galaxies it is not easy to define a disk scalelength, so it is better to calculate S at the radius at which the disk rotation curve is maximum, which is a well-defined radius and is roughly equal to 2.2 disk scalelengths in the case of an axisymmetric exponential disk. After these small adjustments, we can apply this criterion to our simulations.
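As a rough illustration of the criterion, the sketch below evaluates S at the peak of the disk rotation curve, as described above, for an entirely synthetic pair of rotation curves; the functional forms and numbers are assumptions for illustration, not taken from the simulations discussed here.

```python
import numpy as np

def sackett_parameter(r, v_disk, v_total):
    """Sackett (1997) ratio S = V_d,max / V_tot, evaluated at the radius
    where the disk rotation curve peaks (~2.2 scalelengths for an
    axisymmetric exponential disk)."""
    i = np.argmax(v_disk)                  # radius of the disk curve maximum
    return v_disk[i] / v_total[i]

# Toy rotation curves (hypothetical): a disk curve peaking near r ~ 3 kpc
# plus a halo contribution, combined in quadrature.
r = np.linspace(0.1, 20.0, 400)                        # kpc
v_disk = 150.0 * (r / 3.0) * np.exp(1.0 - r / 3.0)     # km/s
v_halo = 120.0 * r / np.sqrt(r**2 + 5.0**2)            # km/s
v_total = np.sqrt(v_disk**2 + v_halo**2)

S = sackett_parameter(r, v_disk, v_total)
print(S > 0.75)   # True: this toy disk would be classified as maximum
```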
Redistribution of the disk mass: maximum versus sub-maximum disks

In Section 4.6.5.1 we saw that the disks in MH models are sub-maximum in the beginning of the simulation, and in Section 4.9.1 that the bar can redistribute the disk material and in particular push material inwards and create a disky bulge. Is this redistribution sufficient to change sub-maximum disks? The answer is that this can indeed be true in some cases, as was shown in and is illustrated in Fig. 4.15. This shows the Sackett parameter S and the bar strength as a function of time for one such simulation. Note that the disk is initially sub-maximum and that it stays so during the bar growth phase. Then the value of S increases very abruptly to a value larger than 0.75, so that the disk becomes maximum. After this abrupt increase the S parameter hardly changes, although the bar strength increases considerably.

Secular evolution of the halo component

The halo also undergoes some secular evolution, albeit not as strong as that of the disk. The most notable feature is that an initially axisymmetric halo becomes elongated in its innermost parts and forms what is usually called the 'halo bar', or the 'dark matter bar', although the word 'bar' in this context is rather exaggerated, and 'oval' would have been more appropriate. This structure was already observed in a number of simulations (e.g., Debattista & Sellwood 2000; O'Neill & Dubinski 2003; Holley-Bockelmann et al. 2005; Berentzen & Shlosman 2006) and its properties have been studied in detail by Hernquist & Weinberg (1992), Athanassoula (2005a, 2007) and Colín et al. (2006). It is considerably shorter and its ellipticity is much smaller than that of the disk bar, while rotating with roughly the same angular velocity. It is due to the particles in the halo ILR (Athanassoula 2003, 2007).
A less clear-cut and certainly much more debated issue concerns the question whether secular evolution due to a strong bar could erase the cusp predicted by cosmological simulations and turn it into a core, which would lead to an agreement with observations. A few authors (e.g., Hernquist & Weinberg 1992; Weinberg & Katz 2002; Holley-Bockelmann et al. 2005; Weinberg & Katz 2007a, 2007b) argued that indeed such a flattening was possible, while a larger consensus was reached for the opposite conclusion (e.g., Sellwood 2003; McMillan & Dehnen 2005; Colín et al. 2006; Sellwood 2008). We refer the reader to these papers for more information.

Comparison with observations

Technically, comparison between observations and simulations is relatively straightforward. From the coordinates of the particles in the luminous components, and after choosing the viewing angles and taking into account the observational conditions, it is possible to obtain an image that can be output in the standard format used by observers, namely FITS (Flexible Image Transport System). This image can then be analysed as are observations, using standard packages, such as IRAF (Image Reduction and Analysis Facility). Similarly, one can create data cubes, containing velocity information, which again will be analysed with the same software packages as observations. Taking into account the limitations of the instruments and, more generally, those due to observational conditions is an important feature here, as is the fact that it is the simulation data that must be transformed into observations and not the other way round. There is, nevertheless, one subtle point concerning a limitation of dynamical simulations that should be kept in mind. It concerns the simulation time to be chosen for the comparison. As already mentioned, in dynamical simulations the disk is assumed to be in place and in equilibrium before the bar starts forming.
On the contrary, in cosmological simulations the bar should start forming as soon as the relative disk mass is sufficiently high to allow the bar instability to proceed. One must add to this the uncertainty about when disks can be considered as being in place. All this taken together makes it very difficult to pinpoint the simulation time to be used for the comparisons. The best is to try a range of times and then describe how the fit evolves with time.

Summary and discussion

Angular momentum can be redistributed within a barred galaxy. It is emitted from the (near-)resonant stars in the bar region and absorbed by the (near-)resonant material in the spheroid and the outer disk. By following the orbits in a simulation and measuring their frequencies, it is possible to determine whether they are (near-)resonant or not, and, if so, at which resonance. For strong bar cases, the most populated disk resonance is the inner Lindblad resonance. Simulations confirm the theoretical prediction that this emits angular momentum, and that the corotation and outer Lindblad resonances absorb it. In the spheroid the three most populated resonances are the corotation, the outer Lindblad and the inner Lindblad resonance, and, in many cases, it is corotation that is the most populated. Again, simulations confirm the theoretical prediction that angular momentum is absorbed at the spheroid resonances. In order for bars to evolve uninhibited in a simulation, it is necessary that the angular momentum exchange is not artificially restrained, as would be the case if the halo in the simulation were rigid, e.g., represented by an axisymmetric force incapable of emitting or absorbing angular momentum. It is thus necessary to work with live haloes in simulations, and, more generally, to avoid the use of any rigid component. Note also that the effect of the spheroid on bar growth is different in the early and in the late phases of the evolution.
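The frequency measurement used to classify orbits as (near-)resonant can be illustrated on a toy orbit with known frequencies. The sketch below recovers κ from a Fourier transform of R(t) and Ω from the mean azimuthal drift, then forms the ratio (Ω − Ω_p)/κ that labels the resonance; all frequency values are hypothetical, and the orbit is built analytically rather than integrated from a simulation.

```python
import numpy as np

# Toy orbit with known frequencies (arbitrary units, hypothetical values):
Omega, kappa, Omega_p = 1.0, 1.4, 0.3     # circular, epicyclic, pattern frequencies
T = 2.0 * np.pi * 45 / kappa              # span an integer number of epicyclic periods
t = np.arange(4096) * (T / 4096)
R = 1.0 + 0.1 * np.cos(kappa * t)         # radial epicyclic oscillation
phi = Omega * t + 0.05 * np.sin(kappa * t)

# Recover kappa as the dominant Fourier frequency of R(t), and Omega from
# the mean azimuthal drift; then form the resonance-labelling ratio.
freqs = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
power = np.abs(np.fft.rfft(R - R.mean()))
kappa_est = freqs[np.argmax(power)]
Omega_est = (phi[-1] - phi[0]) / (t[-1] - t[0])

ratio = (Omega_est - Omega_p) / kappa_est
print(round(ratio, 2))   # 0.5 -> inner Lindblad resonance, (Omega - Omega_p)/kappa = 1/2
```

In practice the same ratio, computed for every particle, gives histograms like the N_R distributions discussed in the text, with spikes at 0.5 (ILR), 0 (corotation) and -0.5 (OLR).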
During the initial phases of the evolution, the spheroid, due to the strong axisymmetric force it exerts, delays and slows down the bar growth. Thus, bars will take longer to form in galaxies with a large ratio of spheroid-to-disk mass. On the other hand, at later stages, after the secular evolution has started, the spheroid can increase the bar strength by absorbing a large fraction of the angular momentum emitted from the bar region. Thus, stronger bars will be found in galaxies with a larger spheroid-to-disk mass ratio. Contrary to the spheroid mass, the velocity dispersion in the disk always has the same effect on the bar growth. During the initial phases it slows down the bar growth. Thus, bars will take longer to form in galaxies with hot disks. During the secular evolution phase, a higher velocity dispersion in the disk component will make its resonances less active, since it decreases the amount of angular momentum that a resonance can emit or absorb. A similar comment can be made about the velocity dispersion of the spheroid (near-)resonant material. Thus, increasing the velocity dispersion in the disk and/or the spheroid will lead to less angular momentum redistribution and therefore weaker bars. As the bar loses angular momentum, its pattern speed decreases, so that the resonant radii will move outwards with time. Since the corotation radius provides an absolute limit to the bar length, this increase implies that the bar can become longer. Indeed, this occurs in simulations. It is thus possible for the pattern speed to decrease while the bar stays 'fast', provided the bar becomes longer in such a way that the ratio R of corotation radius to bar length stays within the bracket 1.2 ± 0.2. As the bar loses angular momentum it also becomes stronger, so that there is a correlation between the bar strength and the amount of angular momentum absorbed by the spheroid.
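For a flat rotation curve, the corotation radius follows directly from the pattern speed, R_CR = V / Ω_p, so the fast-bar criterion can be checked in a few lines. All numerical values below are hypothetical illustration values.

```python
# Fast-bar check for a flat rotation curve (hypothetical numbers).
V = 220.0            # circular speed, km/s
Omega_p = 40.0       # bar pattern speed, km/s/kpc
a_B = 4.5            # bar length, kpc

R_CR = V / Omega_p   # corotation radius: 220/40 = 5.5 kpc
R = R_CR / a_B       # ratio of corotation radius to bar length
print(1.0 <= R <= 1.4)   # 'fast' bar if R = 1.2 +/- 0.2
```

If Ω_p later decreases while a_B grows proportionally, R stays in this bracket, which is exactly the evolution path described above.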
In general, as bars become stronger they also become longer and their shape gets more rectangular-like. They redistribute mass within the disk and create the disky bulge (more often referred to as pseudo-bulge) in the central region. They also increase the disk scalelength. All these changes brought about by the evolution can also strongly influence the form of the rotation curve and change an initially sub-maximum disk to a maximum one. The strongest bars will be found in cases where the maximum amount of angular momentum has been redistributed within the galaxy, and not when the spheroid mass is maximum. A further parameter which is crucial in trying to maximise the angular momentum redistribution is the bar pattern speed. Indeed, this is set by the location of the corotation radius and therefore by the balance between emitters and absorbers in the disk. When bars form they are vertically thin, but soon their inner parts puff up and form what is commonly known as the boxy/peanut bulge. This is well understood with the help of orbital structure theory. It gives a complex and interesting shape to the bar, i.e., vertically extended only over a radial extent from the centre to a maximum radius of the order of (0.7 ± 0.3)a_B, where a_B is the bar length, and then very thin outside that range. This shape explains a number of observations and also argues that the COBE /DIRBE bar and the Long bar in our Galaxy are, respectively, the thick and the thin part of a single bar. From the above it is thus possible to conclude that there is a continuous redistribution of angular momentum in disks with strong bars and that this drives a secular evolution. It is secular because the timescales involved are long, contrary to, e.g., a merging, which occurs in a very short time interval.

…(on bars), Kormendy & Kennicutt (2004, on secular evolution) and Athanassoula (2008a, on boxy/peanut and disky bulges), as well as in other chapters of this book.

Fig. 4.1. Three snapshots showing the formation and evolution of a bar. Each column corresponds to a given time (increasing from left to right), and each row corresponds to a different viewing geometry, namely (from top to bottom) face-on, side-on and end-on. The time, in Gyr, is given in the upper-right corner of each upper panel. See text for further descriptions.

Fig. 4.2. Examples of epicyclic orbits. Left panel: A non-resonant orbit. Central panel: Orbit at inner Lindblad resonance, i.e., for (l, m) = (-1, 2). Right panel: Orbit at corotation resonance, i.e., for l = 0. In all three panels, the dashed line gives the circular (guiding) orbit.

Fig. 4.3. Examples of orbits of the x1 family. The outline of the bar is given by a dashed line (reproduced from Athanassoula 1992a).

(a) A wide variety of methods are based on Jeans's theorem (e.g., Zang 1976; Athanassoula & Sellwood 1986; Kuijken & Dubinski 1995; Widrow & Dubinski 2005; McMillan & Dehnen 2007), or on Jeans's equations (e.g., Hernquist 1993).

Fig. 4.4. Number density, N_R, of particles as a function of the frequency ratio (Ω − Ω_p)/κ, for two different simulations (upper and lower panels, respectively) at a time near the end of the simulation. The left panels correspond to the disk component and the right ones to the halo. This figure is reproduced from A02.

Fig. 4.5. Circular velocity curves of the initial condition of two models, one of type MD (upper panel) and the other of type MH (lower panel). The solid line gives the total circular velocity, while the dashed and the dotted ones give the contributions of the disk and halo, respectively.

Fig. 4.6. Morphology of the disk component, viewed face-on, for an MD-type (left panels) and an MH-type halo (right panels). The time in Gyr is given in the upper right corner of each panel.

Fig. 4.7. Comparison of the evolution of the bar in a live halo (left-hand panel) to that in a rigid halo (right-hand panel) for an MH-type halo.

Fig. 4.8. Comparison of the evolution of the bar in a live halo (left-hand panel) to that in a rigid halo (right-hand panel) for an MD-type halo.

Fig. 4.9. Evolution of the bar strength with time for two models, one of the MD type (dashed line) and the other of the MH type (solid line).

Figure 4.9 illustrates another important point, first argued in A02, namely that the effect of the spheroid on the bar strength is very different, in fact opposite, in the early and late evolutionary stages.

Fig. 4.10. Evolution of the bar strength with time for two models with different Toomre Q parameters. Both models are of MD type.

Fig. 4.11. Bar strength, S_B, as a function of the amount of angular momentum absorbed by the spheroid, ΔL_z,s, expressed as a fraction of the total z component of the angular momentum, L_z,t. The two quantities are measured at the same time, towards the end of the simulation, when all bars are in their secular evolution phase. Each symbol in this plot represents a separate simulation.

Fig. 4.12. Schematic plot of the effects of the decrease of the bar pattern speed with time for a very simple model with a constant circular velocity. The solid line gives Ω(r) and the horizontal dashed and dotted lines give the pattern speed Ω_p at two instants of time. The vertical dashed and dotted lines show the CR radius at these same two instants of time. The vertical arrow indicates the decrease of the bar pattern speed and the horizontal one the increase of the corotation radius.

Fig. 4.13. Three views of the baryonic components (disk and bulge) of a simulation. For each panel the inclination angle is given in the upper right corner, the value of 77 degrees corresponding to the inclination of the Andromeda galaxy (M 31). The solid vertical line gives an estimate of the bar length from the face-on view, while the dashed vertical line gives an estimate of the length of the boxy/peanut structure, as obtained from the side-on view.

Fig. 4.15. Evolution of the Sackett parameter, S, as a function of time for an MH-type simulation (solid line). The two horizontal dotted lines give the limits within which S must lie for the disk to be considered as maximum. The dashed line gives a measure of the bar strength as a function of time, for the same simulation.

† http://lam.oamp.fr/recherche-14/dynamique-des-galaxies/scientific-results/milky-way/bar-bulge/how-many-bars-in-mw, or in http://195.221.212.246:4780/dynam/movie/MWbar.

‡ A discussion of the various methods that can be used to measure a bar length in simulations, in ensembles of orbits and/or in observations, and of their respective advantages and disadvantages can be found in AM02, Patsis et al. (2002), Michel-Dansac & Wozniak (2006) and Gadotti et al. (2007).

References

(1989). Berlin, Heidelberg, New-York: Springer
Aronica, G., Athanassoula, E., Bureau, M. et al. (2003), ApSS, 284, 753
Athanassoula, E. (1984), Phys. Repts., 114, 319
Athanassoula, E. (1992a), MNRAS, 259, 328
Athanassoula, E. (1992b), MNRAS, 259, 345
Athanassoula, E. (2002a), ApJL, 569, 83 (A02)
Athanassoula, E. (2002b), in Disks of Galaxies: Kinematics, Dynamics and Perturbations, E. Athanassoula, A. Bosma & R. Mujica, eds., ASP Conf. Ser., 275, p. 141
Athanassoula, E. (2003), MNRAS, 341, 1179 (A03)
Athanassoula, E. (2005a), Cel. Mech. and Dyn. Astr., 91, 9
Athanassoula, E. (2005b), MNRAS, 358, 1477
Athanassoula, E. (2006), astro.ph.10113
Athanassoula, E. (2007), MNRAS, 377, 1569
Athanassoula, E. (2008a), in Formation and Evolution of Galaxy Bulges, IAU Symp. 245, M. Bureau, E. Athanassoula & B. Barbuy, eds., Cambridge: Cambridge University Press, p. 93
Athanassoula, E. (2008b), in Mapping the Galaxy and Nearby Galaxies, K. Wada & F. Combes, eds., New York: Astrophys. & Space Sci. Proc., Springer, p. 47
Athanassoula, E. (2012), MNRAS, 426, 46
Athanassoula, E., Beaton, R. (2006), MNRAS, 370, 1499
Athanassoula, E., Bienaymé, O., Martinet, L., Pfenniger, D. (1983), A&A, 127, 349
Athanassoula, E., Bosma, A., Lambert, J.-C., Makino, J. (1998), MNRAS, 293, 369
Athanassoula, E., Bureau, M. (1999), ApJ, 522, 699
Athanassoula, E., Misiriotis, A. (2002), MNRAS, 330, 35 (AM02)
Athanassoula, E., Morin, S., Wozniak, H. et al. (1990), MNRAS, 245, 130
Athanassoula, E., Romero-Gómez, M., Bosma, A., Masdemont, J. J. (2009b), MNRAS, 400, 1706
Athanassoula, E., Romero-Gómez, M., Bosma, A., Masdemont, J. J. (2010), MNRAS, 407, 1433
Athanassoula, E., Romero-Gómez, M., Masdemont, J. J. (2009a), MNRAS, 394, 67
Athanassoula, E., Sellwood, J. A. (1986), MNRAS, 221, 213
Barnes, J. (1988), ApJ, 331, 699
Battaglia, G., Helmi, A., Tolstoy, E. et al. (2008), ApJL, 681, 13
Benjamin, R. A., Churchwell, E., Babler, B. L. et al. (2005), ApJL, 630, 149
Berentzen, I., Shlosman, I. (2006), ApJ, 648, 807
Binney, J., Gerhard, O. E., Stark, A. A., Bally, J., Uchida, K. I. (1991), MNRAS, 252, 210
Binney, J. J., Spergel, D. S. (1982), ApJ, 252, 308
Binney, J., Tremaine, S. (2008), Galactic Dynamics. Second edition. Princeton: Princeton University Press
Blitz, L., Spergel, D. N. (1991), ApJ, 370, 205
Bosma, A. (1983), in Internal Kinematics and Dynamics of Galaxies, IAU Symp. 100, E. Athanassoula, ed., Dordrecht: Reidel, p. 253
Bosma, A. (2000), in Dynamics of Galaxies: from the Early Universe to the Present, F. Combes, G. A. Mamon & V. Charmandaris, eds., ASP Conf. Ser., 197, p. 91
Bosma, A. (2002), in Disks of Galaxies: Kinematics, Dynamics and Perturbations, E. Athanassoula et al., eds., ASP Conf. Ser., 275, p. 23
Bureau, M., Aronica, G., Athanassoula, E. et al. (2006), MNRAS, 370, 753
Bureau, M., Athanassoula, E. (1999), ApJ, 522, 686
Bureau, M., Athanassoula, E. (2005), ApJ, 626, 159
Bureau, M., Freeman, K. C. (1999), AJ, 118, 126
Cabrera-Lavers, A., González-Fernández, C., Garzón, F. et al. (2008), A&A, 491, 781
Cabrera-Lavers, A., Hammersley, P. L., González-Fernández, C. et al. (2007), A&A, 465, 825
Carollo, C. M., Stiavelli, M., Mack, J. (1998), AJ, 116, 68
Carpintero, D. D., Aguilar, L. A. (1998), MNRAS, 298, 1
Ceverino, D., Klypin, A. (2007), MNRAS, 379, 1155
Chung, A., Bureau, M. (2004), AJ, 127, 3192
Churchwell, E., Babler, B. L., Meade, M. R. et al. (2009), PASP, 121, 213
Colín, P., Valenzuela, O., Klypin, A. (2006), ApJ, 644, 687
Combes, F., Debbasch, F., Friedli, D., Pfenniger, D. (1990), A&A, 233, 82
Contopoulos, G. (1980), A&A, 81, 198
Contopoulos, G. (2002), Order and Chaos in Dynamical Astronomy. Berlin: Springer
Contopoulos, G., Grosbøl, P. (1989), A&A Review, 1, 261
Contopoulos, G., Papayannopoulos, T. (1980), A&A, 92, 33
Corsini, E. M. (2011), Mem. Soc. Astr. Ital. Suppl., 18, 23
Debattista, V. P., Carollo, C. M., Mayer, L., Moore, B. (2004), ApJL, 604, 93
Debattista, V. P., Mayer, L., Carollo, C. M. et al. (2006), ApJ, 645, 209
Debattista, V. P., Sellwood, J. A. (2000), ApJ, 543, 704
de Blok, W. J. G. (2010), Advances in Astronomy, article id. 789293
de Blok, W. J. G., Bosma, A. (2002), A&A, 385, 816
de Blok, W. J. G., Bosma, A., McGaugh, S. S. (2003), MNRAS, 340, 657
de Blok, W. J. G., McGaugh, S. S., Bosma, A., Rubin, V. (2001), ApJL, 552, 23
de Blok, W. J. G., Walter, F., Brinks, E. et al. (2008), AJ, 136, 2648
Dehnen, W. (1999), AJ, 118, 1201
Dehnen, W. (2000), ApJL, 536, 39
Dehnen, W. (2002), J. Comp. Phys., 179, 27
Dekker, E. (1974), A&A, 34, 255
de Vaucouleurs, G. (1964), in The Galaxy and the Magellanic Clouds, IAU Symp. 20, F. Kerr, ed., Canberra: Australian Academy of Science, p. 195
Dubinski, J. (1996), New Astronomy, 1, 133
Dubinski, J., Berentzen, I., Shlosman, I. (2009), ApJ, 697, 293
Elmegreen, B. G. (1996), in Barred Galaxies, R. Buta, D. A. Crocker & B. G. Elmegreen, eds., ASP Conf. Ser., 91, p. 197
Erwin, P. (2011), Mem. Soc. Astron. Ital. Suppl., 18, 145
Erwin, P., Sparke, L. (2002), AJ, 124, 65
Friedli, D., Benz, W. (1993), A&A, 268, 65
Fuchs, B. (2004), A&A, 419, 941
Fuchs, B., Athanassoula, E. (2005), A&A, 444, 455
Gadotti, D. A. (2008), in Chaos in Astronomy, G. Contopoulos & P. Patsis, eds., Berlin: Springer, p. 159
Gadotti, D. A. (2011), MNRAS, 415, 3308
Gadotti, D. A., Athanassoula, E., Carrasco, L. et al. (2007), MNRAS, 381, 943
Governato, F., Brook, C., Mayer, L. et al. (2010), Nature, 463, 203
Governato, F., Zolotov, A., Pontzen, A. et al. (2012), MNRAS, 422, 1231
Grosbøl, P., Patsis, P. A., Pompei, E. (2004), A&A, 423, 849
Hammersley, P. L., Garzón, F., Mahoney, T. J., López-Corredoira, M., Torres, M. A. P. (2000), MNRAS, 317, 45
Heller, C. H. (1995), ApJ, 455, 252
Heller, C. H., Shlosman, I. (1994), ApJ, 424, 84
Hernquist, L. (1993), ApJS, 86, 389
Hernquist, L., Weinberg, M. (1992), ApJ, 400, 80
Hohl, F. (1971), ApJ, 168, 343
Holley-Bockelmann, K., Weinberg, M., Katz, N. (2005), MNRAS, 363, 991
Kalnajs, A. J. (1971), ApJ, 166, 275
Klypin, A., Valenzuela, O., Colín, P., Quinn, T. (2009), MNRAS, 398, 1027
Kormendy, J. (1979), ApJ, 227, 714
Kormendy, J. (1993), in Galactic Bulges, IAU Symp. 153, H. de Jonghe & H. J. Habing, eds., Dordrecht: Kluwer, p. 209
Kormendy, J., Kennicutt, R. C., Jr. (2004), ARA&A, 42, 603
Kravtsov, A. V., Klypin, A. A., Khokhlov, A. M. (1997), ApJS, 111, 73
Kraljic, K., Bournaud, F., Martig, M. (2012), ApJ, in press (arXiv:1207.0351)
Kuijken, K., Dubinski, J. (1995), MNRAS, 277, 1341
Kuijken, K., Merrifield, M. R. (1995), ApJL, 443, 13
Kuzio de Naray, R., McGaugh, S. S., de Blok, W. J. G. (2008), ApJ, 676, 920
Kuzio de Naray, R., McGaugh, S. S., de Blok, W. J. G., Bosma, A. (2006), ApJS, 165, 461
Laine, S., Shlosman, I., Knapen, J. H., Peletier, R. F. (2002), ApJ, 567, 97
Laskar, J. (1990), Icarus, 88, 266
Lichtenberg, A. J., Lieberman, M. A. (1992), Regular and Chaotic Dynamics. New York, NY: Springer
Little, B., Carlberg, R. G. (1991a), MNRAS, 250, 161
Little, B., Carlberg, R. G. (1991b), MNRAS, 251, 227
López-Corredoira, M., Cabrera-Lavers, A., Mahoney, T. J. et al. (2007), AJ, 133, 154
Lütticke, R., Dettmar, R. J., Pohlen, M. (2000), A&A, 362, 435
Lynden-Bell, D., Kalnajs, A. J. (1972), MNRAS, 157, 1
Macciò, A. V., Stinson, G., Brook, C. B. et al. (2012), ApJL, 744, 9
Martínez-Valpuesta, I., Gerhard, O. (2011), ApJL, 734, 20
Martínez-Valpuesta, I., Shlosman, I. (2004), ApJL, 613, 29
Martínez-Valpuesta, I., Shlosman, I., Heller, C. (2006), ApJ, 637, 214
McMillan, P. J., Dehnen, W. (2005), MNRAS, 363, 1205
McMillan, P. J., Dehnen, W. (2007), MNRAS, 378, 541
Merrifield, M. R., Kuijken, K. (1999), A&A, 345, 47
Michel-Dansac, L., Wozniak, H. (2006), A&A, 452, 97
Miller, R. H., Prendergast, K. H., Quirk, W. (1970), ApJ, 161, 903
Minchev, I., Famaey, B., Combes, F. et al. (2011), A&A, 527, A147
Moore, B., Quinn, T., Governato, F., Stadel, J., Lake, G. (1999), MNRAS, 310, 1147
Navarro, J. F., Frenk, C. S., White, S. D. M. (1996), ApJ, 462, 563
Navarro, J. F., Hayashi, E., Power, C. et al. (2004), MNRAS, 349, 1039
Navarro, J. F., Ludlow, A., Springel, V. et al. (2010), MNRAS, 402, 21
Oh, S.-H., Brook, C., Governato, F. et al. (2011), AJ, 142, 24
Oh, S.-H., de Blok, W. J. G., Walter, F., Brinks, E., Kennicutt, R. C., Jr. (2008), AJ, 136, 2761
O'Neill, J. K., Dubinski, J. (2003), MNRAS, 346, 251
Ostriker, J. P., Peebles, P. J. E. (1973), ApJ, 186, 467
Ostriker, J. P., Peebles, P. J. E., Yahil, A. (1974), ApJL, 193, 10
Papaphilipou, I., Laskar, J. (1996), A&A, 307, 427
Papaphilipou, I., Laskar, J. (1998), A&A, 329, 451
Patsis, P. A. (2005), MNRAS, 358, 305
Patsis, P. A., Athanassoula, E., Quillen, A. C. (1997), ApJ, 483, 731
Patsis, P. A., Kalapotharakos, C., Grosbøl, P. (2010), MNRAS, 408, 22
Patsis, P. A., Skokos, Ch., Athanassoula, E. (2002), MNRAS, 337, 578
Patsis, P. A., Skokos, Ch., Athanassoula, E. (2003), MNRAS, 342, 69
Peñarrubia, J., Pontzen, A., Walker, M. G., Koposov, S. E. (2012), ApJL, in press (arXiv:1207.2772)
Pfenniger, D. (1984), A&A, 134, 373
Pfenniger, D., Friedli, D. (1991), A&A, 252, 75
Pontzen, A., Governato, F. (2012), MNRAS, 421, 3464
Raha, N., Sellwood, J. A., James, R. A., Kahn, F. D. (1991), Nature, 352, 411
Regan, M. W., Teuben, P. J. (2004), ApJ, 600, 595
Rodionov, S. A., Athanassoula, E. (2011), A&A, 529, 98
Rodionov, S. A., Athanassoula, E., Sotnikova, N. Y. (2009), MNRAS, 392, 904
Romano-Díaz, E., Shlosman, I., Heller, C., Hoffman, Y. (2008), ApJ, 687, 13
Romero-Gómez, M., Athanassoula, E., Antoja, T., Figueras, F. (2011), MNRAS, 418, 1176
Romero-Gómez, M., Athanassoula, E., Masdemont, J. J., García-Gómez, C. (2007), A&A, 472, 63
Romero-Gómez, M., Masdemont, J. J., Athanassoula, E., García-Gómez, C. (2006), A&A, 453, 39
Sackett, P. D. (1997), ApJ, 483, 103
Saha, K., Martínez-Valpuesta, I., Gerhard, O. (2012), MNRAS, 421, 333
Sakamoto, K., Okumura, S. K., Ishizuki, S., Scoville, N. Z. (1999), ApJ, 525, 691
Scannapieco, C., Athanassoula, E. (2012), MNRAS, 425, 10
Schwarzschild, M. (1979), ApJ, 232, 236
Sellwood, J. A. (1980), A&A, 89, 296
Sellwood, J. A. (1981), A&A, 99, 362
Sellwood, J. A. (2003), ApJ, 587, 638
Sellwood, J. A. (2008), ApJ, 679, 379
Sellwood, J. (2010), in Planets, Stars and Stellar Systems, G. Gilmore, ed., vol. 5 (arXiv:1006.4855)
Sellwood, J. A., Athanassoula, E. (1986), MNRAS, 221, 195
Sellwood, J. A., Debattista, V. P. (2006), ApJ, 639, 868
Sellwood, J. A., Wilkinson, A. (1993), Reports on Progress in Physics, 56, 173
Sérsic, J. L. (1968), Atlas de Galaxias Australes. Cordoba: Observatorio Astronomico
Sheth, K., Elmegreen, D. M., Elmegreen, B. G. et al. (2008), ApJ, 675, 1141
Sheth, K., Melbourne, J., Elmegreen, D. M. et al. (2012), ApJ, in press (arXiv:1208.6304)
Sheth, K., Regan, M. W., Scoville, N. Z., Strubbe, L. E. (2003), ApJL, 592, 13
Shlosman, I., Noguchi, M. (1993), ApJ, 414, 474
J Silk, G A Mamon, Research in Astron. and Astrophys. 12917Silk, J., Mamon, G. A. (2012), Research in Astron. and Astrophys., 12, 917 . J D Simon, A D Bolatto, A Leroy, L Blitz, ApJ. 596957Simon, J. D., Bolatto, A. D., Leroy, A., Blitz, L. (2003), ApJ, 596, 957 . C Skokos, P A Patsis, E Athanassoula, MNRAS. 333847Skokos, C., Patsis, P. A., Athanassoula, E. (2002a), MNRAS, 333, 847 . C Skokos, P A Patsis, E Athanassoula, MNRAS. 333861Skokos, C., Patsis, P. A., Athanassoula, E. (2002b), MNRAS, 333, 861 . V Springel, MNRAS. 3641105Springel, V. (2005), MNRAS, 364, 1105 . V Springel, N Yoshida, S D M White, New Astronomy. 679Springel, V., Yoshida, N., White, S. D. M. (2001), New Astronomy, 6, 79 . G Stinson, C Brook, A V Maccio, arXiv:1208.0002MNRAS. submittedStinson, G., Brook, C., Maccio, A. V. (2012), MNRAS, submitted (arXiv:1208.0002) . S D Tremaine, M D Weinberg, ApJL. 2825Tremaine, S. D, Weinberg, M. D. (1984), ApJL, 282, 5 . A Toomre, ApJ. 1391217Toomre, A. (1964), ApJ, 139, 1217 A Toomre, Structure and Dynamics of Nearby Galaxies. S. M. Fall & D. Lynden-BellCambridgeCambridge University Press111Toomre, A. (1981), in Structure and Dynamics of Nearby Galaxies, S. M. Fall & D. Lynden-Bell, eds., Cambridge: Cambridge University Press, p. 111 . O Valenzuela, A Klypin, MNRAS. 345406Valenzuela, O., Klypin, A. (2003), MNRAS, 345, 406 . M Valluri, V P Debattista, T Quinn, R Roskar, J Wadsley, MNRAS. 4191951Valluri, M., Debattista, V. P., Quinn, T., Roskar, R., Wadsley, J. (2012), MNRAS, 419, 1951 . J Villa-Vargas, I Shlosman, C Heller, ApJ. 707218Villa-Vargas, J., Shlosman, I., Heller, C. (2009), ApJ, 707, 218 . K Wada, A Habe, MNRAS. 25882Wada, K., Habe, A. (1992), MNRAS, 258, 82 . K Wada, A Habe, MNRAS. 277433Wada, K., Habe, A. (1995), MNRAS, 277, 433 . M G Walker, J Peñarrubia, ApJ. 74220Walker, M. G., Peñarrubia, J. (2012), ApJ, 742, 20 . M D Weinberg, MNRAS. 213451Weinberg, M. D. (1985), MNRAS, 213 ,451 . M D Weinberg, ApJ. 420597Weinberg, M. D. 
(1994), ApJ, 420, 597 . M D Weinberg, N Katz, ApJ. 580672Weinberg M. D., Katz, N. (2002), ApJ, 580, 672 . M D Weinberg, N Katz, MNRAS. 375425Weinberg M. D., Katz, N. (2007a), MNRAS, 375, 425 . M D Weinberg, N Katz, MNRAS. 375460Weinberg M. D., Katz, N. (2007b), MNRAS, 375, 460 . L M Widrow, J Dubinski, ApJ. 631838Widrow, L. M., Dubinski, J. (2005), ApJ, 631, 838 . H Wozniak, L Michel-Dansac, A&A. 49411Wozniak, H., Michel-Dansac, L. (2009), A&A, 494, 11 T A Zang, Cambridge, G Ma Zasowski, R A Benjamin, S R Majewski, id.06006Assembling the Puzzle of the Milky Way. C. Reylé, A. Robin & M. Schultheis19Massachussetts Institute of TechnologyPh.D. ThesisZang, T. A. (1976), Ph.D. Thesis, Massachussetts Institute of Technology, Cambridge, MA Zasowski, G., Benjamin, R. A., Majewski, S. R. (2012), in Assembling the Puzzle of the Milky Way, C. Reylé, A. Robin & M. Schultheis, eds., European Physical Journal Web of Conferences, 19, id.06006
On the influence of the seed graph in the preferential attachment model

Sébastien Bubeck, Elchanan Mossel, Miklós Z. Rácz

March 31, 2014 · arXiv:1401.4849 · doi:10.1109/tnse.2015.2397592

Abstract. We study the influence of the seed graph in the preferential attachment model, focusing on the case of trees. We first show that the seed has no effect from a weak local limit point of view. On the other hand, we conjecture that different seeds lead to different distributions of limiting trees from a total variation point of view. We take a first step in proving this conjecture by showing that seeds with different degree profiles lead to different limiting distributions for the (appropriately normalized) maximum degree, implying that such seeds lead to different (in total variation) limiting trees.
1 Introduction

We are interested in the following question: suppose we generate a large graph according to the linear preferential attachment model; can we say anything about the initial (seed) graph? A precise answer to this question could lead to new insights for the diverse applications of the preferential attachment model. In this paper we initiate the theoretical study of the seed's influence. Experimental evidence of the seed's influence already exists in the literature; see, e.g., Schweiger et al. [2011].

For the sake of simplicity we focus on trees grown according to linear preferential attachment. For a tree T, denote by d_T(u) the degree of vertex u in T, by Δ(T) the maximum degree in T, and by d(T) ∈ ℕ^ℕ the vector of degrees of T arranged in decreasing order (completed with zeros). We refer to d(T) as the degree profile of T. For n ≥ k ≥ 2 and a tree T on k vertices we define the random tree PA(n, T) by induction. First, PA(k, T) = T.
Then, given PA(n, T), the tree PA(n + 1, T) is formed from PA(n, T) by adding a new vertex u and a new edge uv, where v is selected at random among the vertices of PA(n, T) according to the probability distribution

    P(v = i) = d_{PA(n,T)}(i) / (2(n − 1)),

where the normalization 2(n − 1) is the total degree of a tree on n vertices. We want to understand whether there is a relation between T and PA(n, T) when n becomes very large. We investigate three ways to make this question more formal; they correspond to three different points of view on the limiting tree obtained by letting n go to infinity.

The least refined point of view is to consider the tree PA(∞, T), defined on a countable set of vertices, that one obtains by continuing the preferential attachment process indefinitely. As observed in Kleinberg and Kleinberg [2005], in this case the seed does not have any influence: indeed, for any tree T, almost surely PA(∞, T) will be the unique isomorphism type of tree with countably many vertices in which every vertex has infinite degree. In fact this statement holds for any model in which the degree of each fixed vertex diverges to infinity as the tree grows. For example, this notion of limit does not allow one to distinguish between linear and non-linear preferential attachment models (as long as the degree of each fixed node diverges to infinity).

Next we consider the much more subtle and fine-grained notion of a weak local limit, introduced in Benjamini and Schramm [2001]. This notion of graph limit is more powerful than the one considered in the previous paragraph, as it can, for example, distinguish between models having different limiting degree distributions. The weak local limit of the preferential attachment graph was first studied in the case of trees in Rudas et al. [2007] using branching process techniques, and then later in general in Berger et al. [2014] using Pólya urn representations.
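As a concrete illustration (our sketch, not part of the paper), the inductive growth rule above can be simulated directly. Picking a uniform edge and then a uniform endpoint of it selects a vertex v with probability d(v)/(2(n − 1)), since v occupies d(v) of the 2(n − 1) edge-endpoint slots, which is exactly the attachment rule:

```python
import random

def grow_pa(seed_edges, n, rng):
    """Grow PA(n, T) from a seed tree T given as an edge list on vertices 0..k-1."""
    deg = {}
    for u, v in seed_edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    edges = list(seed_edges)
    while len(deg) < n:
        u, v = rng.choice(edges)                   # uniform edge...
        target = u if rng.random() < 0.5 else v    # ...then uniform endpoint
        new = len(deg)                             # vertices numbered consecutively
        deg[new] = 1
        deg[target] += 1
        edges.append((target, new))
    return deg, edges

rng = random.Random(0)
deg, edges = grow_pa([(0, 1), (0, 2)], 50, rng)    # seed: the tree on 3 vertices
assert len(edges) == 49                # a tree on 50 vertices has 49 edges
assert sum(deg.values()) == 2 * 49     # total degree is 2(n - 1)
```

This half-edge trick avoids ever computing the degree distribution explicitly; the edge list doubles as a multiset of degree-weighted vertex slots.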
These papers show that PA(n, S_2) tends to the so-called Pólya-point graph in the weak local limit sense (here S_k denotes the star on k vertices, so S_2 is a single edge), and our first theorem uses this result to obtain the same conclusion for an arbitrary seed.

Theorem 1. For any tree T the weak local limit of PA(n, T) is the Pólya-point graph described in Berger et al. [2014] with m = 1.

This result says that "locally" (in the Benjamini-Schramm sense) the seed has no effect. The intuitive reason is that in the preferential attachment model most nodes are far from the seed graph, so it is expected that their neighborhoods will not reveal any information about it.

Finally, we consider the most refined point of view, which we believe to be the most natural one for this problem, as well as the richest one (both mathematically and in terms of insights for potential applications). First we rephrase our main question in the terminology of hypothesis testing. Given two potential seed trees T and S, and an observation R which is a tree on n vertices, one wishes to test whether R ~ PA(n, T) or R ~ PA(n, S). Our original question then boils down to whether one can design a test with asymptotically (in n) non-negligible power. This is equivalent to studying the total variation distance between PA(n, T) and PA(n, S). Thus we naturally define

    δ(S, T) = lim_{n→∞} TV(PA(n, S), PA(n, T)),

where TV denotes the total variation distance; the limit exists since applying the same growth step to both models cannot increase the total variation distance, so TV(PA(n, S), PA(n, T)) is non-increasing in n. One can propose a test with asymptotically non-negligible power (i.e., a non-trivial test) if and only if δ(S, T) > 0. We believe that this is always the case, except in trivial situations; precisely, we make the following conjecture.

Conjecture 1. δ is a metric on isomorphism types of trees with at least 3 vertices. (Trees on 2 vertices must be excluded since PA(n, S_2) and PA(n, P_3) have the same distribution for n ≥ 3, where P_3 denotes the path on 3 vertices.)

While we have not yet been able to prove this conjecture, we are able to distinguish trees with different degree profiles.

Theorem 2. Let S and T be two trees on at least 3 vertices with different degree profiles, i.e., d(S) ≠ d(T). Then δ(S, T) > 0.
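The hypothesis-testing viewpoint can be illustrated numerically (our sketch; the seeds, sample sizes, and the maximum-degree test statistic below are assumptions made for the example, not the paper's procedure). Even a crude statistic separates a star seed from a path seed:

```python
import random

def grow_max_deg(seed_deg, n, rng):
    """Grow a PA tree from a seed with the given degree sequence; return max degree."""
    deg = list(seed_deg)
    half_edges = []
    for i, d in enumerate(deg):
        half_edges += [i] * d
    while len(deg) < n:
        target = rng.choice(half_edges)   # probability proportional to degree
        deg.append(1)
        deg[target] += 1
        half_edges += [target, len(deg) - 1]
    return max(deg)

rng = random.Random(1)
star = [7] + [1] * 7        # degree sequence of the star S_8
path = [1, 1] + [2] * 6     # degree sequence of the path on 8 vertices
n, runs = 400, 100
m_star = sum(grow_max_deg(star, n, rng) for _ in range(runs)) / runs
m_path = sum(grow_max_deg(path, n, rng) for _ in range(runs)) / runs
assert m_star > m_path      # the star's hub keeps a visibly larger maximum degree
```

Thresholding the observed maximum degree then gives a test whose power corresponds to the kind of total variation lower bound used below.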
In fact our proof shows a stronger statement, namely that different degree profiles lead to different limiting distributions for the (appropriately normalized) maximum degree. The smallest pair of trees that we cannot as of yet distinguish is depicted in Figure 1.

In some cases we can say more. For instance, the distance between a fixed tree and a star can be arbitrarily close to 1 if the star is large enough.

Theorem 3. For any fixed tree T one has lim_{k→∞} δ(S_k, T) = 1.

In the next section we derive results on the limiting distribution of the maximum degree Δ(PA(n, T)) that are useful in proving Theorems 2 and 3, which we then prove in Section 3.1. In Section 3.2 we describe a particular way of generalizing the notion of maximum degree which we believe should provide a way to prove Conjecture 1; at present we are missing a technical result, which we state separately as Conjecture 2 in the same section. The proof of Theorem 1 is in Section 4, while the proof of a key lemma described in Section 2 is presented in Section 5. We conclude the paper with open problems in Section 6.

2 Useful results on the maximum degree

We first recall several results that describe the limiting degree distributions of preferential attachment graphs (Section 2.1), and from these we determine the tail behavior of the maximum degree in Section 2.2, which we then use in the proofs of Theorems 2 and 3. Throughout the paper we label the vertices of PA(n, T) by {1, 2, ..., n} in the order in which they are added to the graph, with the vertices of the initial tree labeled in decreasing order of degree, i.e., satisfying d_T(1) ≥ d_T(2) ≥ ... ≥ d_T(|T|) (with ties broken arbitrarily). We also define the constant

    c(a, b) = Γ(2a − 2) / (2^{b−1} Γ(a − 1/2) Γ(b)),      (1)

which will occur multiple times.
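For later numerical use, (1) is straightforward to evaluate (our helper; it assumes the reading of (1) as the fraction Γ(2a − 2) over 2^{b−1} Γ(a − 1/2) Γ(b)):

```python
import math

def c(a, b):
    """The constant c(a, b) of equation (1), assuming the fractional reading above."""
    return math.gamma(2 * a - 2) / (2 ** (b - 1) * math.gamma(a - 0.5) * math.gamma(b))

# sanity value: c(2, 1) = Gamma(2) / Gamma(3/2) = 2 / sqrt(pi)
assert abs(c(2, 1) - 2 / math.sqrt(math.pi)) < 1e-12
```

The value c(2, 1) = 2/√π is the tail constant for the seed S_2, which can also be obtained by a short direct computation with the representation of Section 2.1.2.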
2.1 Previous results

2.1.1 Starting from an edge

Móri [2005] used martingale techniques to study the maximum degree of the preferential attachment tree started from an edge, and showed that Δ(PA(n, S_2))/√n converges almost surely to a random variable which we denote by D_max(S_2). He also showed that for each fixed i ≥ 1, d_{PA(n,S_2)}(i)/√n converges almost surely to a random variable which we denote by D_i(S_2), and furthermore that D_max(S_2) = max_{i≥1} D_i(S_2) almost surely. In light of this, in order to understand D_max(S_2) it is useful to study {D_i(S_2)}_{i≥1}. Móri [2005] computes the joint moments of {D_i(S_2)}_{i≥1}; in particular, we have (see [Móri, 2005, eq. (2.4)]) that for i ≥ 2,

    E D_i(S_2)^r = Γ(i − 1) Γ(1 + r) / Γ(i − 1 + r/2).      (2)

Using different methods and a slightly different normalization, Peköz et al. [2013] also study the limiting distribution of d_{PA(n,S_2)}(i); in particular, they give an explicit expression for the limiting density. Fix s ≥ 1/2 and define

    κ_s(x) = Γ(s) √(2/(sπ)) exp(−x²/(2s)) U(s − 1, 1/2, x²/(2s)) 1_{x>0},

where U(a, b, z) denotes the confluent hypergeometric function of the second kind, also known as the Kummer U function (see [Abramowitz and Stegun, 1964, Chapter 13]); it can be shown that this is a density function. Peköz et al. [2013] show that for i ≥ 2 the distributional limit of d_{PA(n,S_2)}(i) / (E d_{PA(n,S_2)}(i)²)^{1/2} has density κ_{i−1} (they also give rates of convergence to this limit in the Kolmogorov metric). Let W_s denote a random variable with density κ_s. The moments of W_s (see [Peköz et al., 2013, Section 2]) are given by

    E W_s^r = (s/2)^{r/2} Γ(s) Γ(1 + r) / Γ(s + r/2),      (3)

and thus comparing (2) and (3) we see that D_i(S_2) =_d √(2/(i − 1)) W_{i−1} for i ≥ 2.

2.1.2 Starting from an arbitrary seed graph

Since we are interested in the effect of the seed graph, we desire similar results for PA(n, T) for an arbitrary tree T.
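As a quick numerical sanity check (ours, not from the cited works; the sample sizes and random seed are arbitrary), the r = 1 case of (2), namely E D_2(S_2) = Γ(1)Γ(2)/Γ(3/2) = 2/√π, can be compared against a rough Monte Carlo simulation of d_{PA(n,S_2)}(2)/√n:

```python
import math, random

def second_vertex_degree(n, rng):
    """Grow PA(n, S_2) and return the degree of vertex 2 (index 1 here)."""
    deg = [1, 1]            # S_2: a single edge on two vertices
    half_edges = [0, 1]
    while len(deg) < n:
        target = rng.choice(half_edges)   # degree-proportional attachment
        deg.append(1)
        deg[target] += 1
        half_edges += [target, len(deg) - 1]
    return deg[1]

rng = random.Random(42)
n, runs = 2000, 200
est = sum(second_vertex_degree(n, rng) for _ in range(runs)) / (runs * math.sqrt(n))
assert abs(est - 2 / math.sqrt(math.pi)) < 0.3   # 2/sqrt(pi) ≈ 1.128, loose tolerance
```

The tolerance is loose because D_2(S_2) has substantial variance and n is finite; the point is only that the simulation lands near the Γ-ratio.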
One way of viewing PA(n, T) is to start growing a preferential attachment tree from a single edge and condition on it being T after reaching |T| vertices: PA(n, T) has the same distribution as PA(n, S_2) conditioned on PA(|T|, S_2) = T. Because of this, the almost sure convergence results of Móri [2005] carry over to the setting of an arbitrary seed tree. Thus for every fixed i ≥ 1, d_{PA(n,T)}(i)/√n converges almost surely to a random variable which we denote by D_i(T), Δ(PA(n, T))/√n converges almost surely to a random variable which we denote by D_max(T), and furthermore D_max(T) = max_{i≥1} D_i(T) almost surely.

In order to understand these limiting distributions, the basic observation is that for any i with 1 ≤ i ≤ |T|, the pair (2(n − 1) − d_{PA(n,T)}(i), d_{PA(n,T)}(i)) evolves according to a Pólya urn with replacement matrix (2 0; 1 1), starting from (2(|T| − 1) − d_T(i), d_T(i)). Indeed, when a new vertex is added to the tree, either it attaches to vertex i, with probability d_{PA(n,T)}(i)/(2n − 2), in which case both d_{PA(n,T)}(i) and 2(n − 1) − d_{PA(n,T)}(i) increase by one, or otherwise it attaches to some other vertex, in which case d_{PA(n,T)}(i) does not increase but 2(n − 1) − d_{PA(n,T)}(i) increases by two. Janson [2006] gives limit theorems for triangular Pólya urns, and also provides information about the limiting distributions; for instance, [Janson, 2006, Theorem 1.7] gives a formula for the moments of D_i(T), extending (2) to arbitrary trees T: for every i with 1 ≤ i ≤ |T| we have

    E D_i(T)^r = Γ(|T| − 1) Γ(d_T(i) + r) / (Γ(d_T(i)) Γ(|T| − 1 + r/2)),      (4)

and for i > |T| we have E D_i(T)^r = Γ(i − 1) Γ(1 + r) / Γ(i − 1 + r/2), just as in (2).

The joint distribution of the limiting degrees in the seed graph, (D_1(T), ..., D_{|T|}(T)), can be understood by viewing the evolution of (d_{PA(n,T)}(1), ..., d_{PA(n,T)}(|T|)) in the following way.
When adding a new vertex, first decide whether it attaches to one of the initial |T| vertices (with probability Σ_{i=1}^{|T|} d_{PA(n,T)}(i)/(2n − 2)) or not (with the remaining probability); if it does attach to one of them, then independently pick which one, with probability proportional to their degrees. In other words, if viewed at the times when a new vertex attaches to one of the initial |T| vertices, the joint degree counts of the initial vertices evolve like a standard Pólya urn with |T| colors and identity replacement matrix.

Let Beta(a, b) denote the beta distribution with parameters a and b (with density proportional to x^{a−1}(1 − x)^{b−1} 1_{x∈[0,1]}), let Dir(α_1, ..., α_s) denote the Dirichlet distribution (with density proportional to x_1^{α_1−1} ··· x_s^{α_s−1} 1_{x∈[0,1]^s, Σ_i x_i = 1}), and write X ~ GGa(a, b) for a random variable X having the generalized gamma distribution with density proportional to x^{a−1} e^{−x^b} 1_{x>0}.

On the one hand, the pair (2(n − 1) − Σ_{i=1}^{|T|} d_{PA(n,T)}(i), Σ_{i=1}^{|T|} d_{PA(n,T)}(i)) evolves according to a Pólya urn with replacement matrix (2 0; 1 1) starting from (0, 2(|T| − 1)). Janson [2006] gives the limiting distribution of Σ_{i=1}^{|T|} d_{PA(n,T)}(i)/√n (see Theorem 1.8 and Example 3.1 there): Σ_{i=1}^{|T|} D_i(T) =_d 2 Z_{|T|}, where Z_{|T|} ~ GGa(2|T| − 1, 2). On the other hand, it is known that in a standard Pólya urn with identity replacement matrix the vector of proportions of the colors converges almost surely to a random variable with a Dirichlet distribution whose parameters are the initial counts. These facts, together with the observation in the previous paragraph, lead to the following representation: if X and Z_{|T|} are independent, X ~ Dir(d_T(1), ..., d_T(|T|)), and Z_{|T|} ~ GGa(2|T| − 1, 2), then

    (D_1(T), ..., D_{|T|}(T)) =_d 2 Z_{|T|} X.      (5)

Recently, Peköz et al. [2014] gave useful representations of (D_1(T), ..., D_r(T)) for general r, and the representation above appears as a special case (see [Peköz et al., 2014, Remark 1.9]).

2.2 Tail behavior

In order to prove Theorem 2 our main tool is the tail behavior of the limiting degree distributions. In particular, we use the following key lemma.

Lemma 1. Let T be a finite tree.

(a) Let U ⊆ {1, 2, ..., |T|} be a nonempty subset of the vertices of T, and let d = Σ_{i∈U} d_T(i). Then

    P(Σ_{i∈U} D_i(T) > t) ~ c(|T|, d) t^{1−2|T|+2d} exp(−t²/4)      (6)

as t → ∞, where the constant c is as in (1).

(b) For every L > |T| there exists a constant C(L) < ∞ such that for every t ≥ 1 we have

    Σ_{i=L}^{∞} P(D_i(T) > t) ≤ C(L) t^{3−2L} exp(−t²/4).      (7)

We postpone the proof of Lemma 1 to Section 5, as it results from a lengthy computation. As an immediate corollary we get the asymptotic tail behavior of D_max(T).

Corollary 1. Let T be a finite tree and let m := |{i ∈ {1, ..., |T|} : d_T(i) = Δ(T)}|. Then

    P(D_max(T) > t) ~ m · c(|T|, Δ(T)) t^{1−2|T|+2Δ(T)} exp(−t²/4)      (8)

as t → ∞, where the constant c is as in (1).

Proof. Recall that D_max(T) = max_{i≥1} D_i(T) almost surely. First, a union bound gives that

    P(D_max(T) > t) ≤ Σ_{i=1}^{m} P(D_i(T) > t) + Σ_{i=m+1}^{|T|} P(D_i(T) > t) + Σ_{i=|T|+1}^{∞} P(D_i(T) > t).

Then using Lemma 1 we get the upper bound required for (8): the first sum gives the right hand side of (8), while the other two sums are of smaller order. For the lower bound we first have that

    P(D_max(T) > t) ≥ Σ_{i=1}^{m} P(D_i(T) > t) − Σ_{i=1}^{m} Σ_{j=i+1}^{m} P(D_i(T) > t, D_j(T) > t).      (9)

Lemma 1(a) with U = {i, j} implies that for any 1 ≤ i < j ≤ m,

    P(D_i(T) > t, D_j(T) > t) ≤ P(D_i(T) + D_j(T) > 2t) ≤ C_{i,j}(T) t^{1−2|T|+4Δ(T)} exp(−t²)      (10)

for some constant C_{i,j}(T) and all t large enough. The exponent −t², appearing on the right hand side of (10), is smaller by a constant factor than the exponent −t²/4 appearing in the asymptotic expression for P(D_i(T) > t) (see (6)).
Consequently the second sum on the right hand side of (9) is of smaller order than the first sum, and so we have that

    P(D_max(T) > t) ≥ (1 − o(1)) Σ_{i=1}^{m} P(D_i(T) > t)

as t → ∞. We can conclude using Lemma 1. □

3 Distinguishing trees using the maximum degree

In this section we first prove Theorems 2 and 3, both using Corollary 1 (see Section 3.1). Then in Section 3.2 we describe a particular way of generalizing the notion of maximum degree which we believe should provide a way to prove Conjecture 1. At present we are missing a technical result, see Conjecture 2 below, and we prove Conjecture 1 assuming that it holds.

3.1 Proofs

Proof of Theorem 2. We first provide a simple proof for two trees of the same size but with different maximum degree, and then show how to extend the argument to the other cases.

Case 1: |S| = |T| and Δ(S) ≠ Δ(T). Without loss of generality assume Δ(S) > Δ(T). Since the maximum degree is a function of the tree, for any t > 0 and n ≥ |S| we have

    TV(PA(n, S), PA(n, T)) ≥ TV(Δ(PA(n, S)), Δ(PA(n, T))) ≥ P(Δ(PA(n, S)) > t√n) − P(Δ(PA(n, T)) > t√n).

Taking the limit as n → ∞ this implies that

    δ(S, T) ≥ sup_{t>0} [P(D_max(S) > t) − P(D_max(T) > t)].      (11)

By Corollary 1 and the fact that |S| − Δ(S) < |T| − Δ(T) we have that P(D_max(S) > t) > P(D_max(T) > t) for large enough t, which concludes the proof in this case.

Case 2: |S| ≠ |T|. W.l.o.g. suppose that |S| < |T|. If |S| − Δ(S) ≠ |T| − Δ(T) then by the argument of Case 1 we have that δ(S, T) > 0, so we may assume that |S| − Δ(S) = |T| − Δ(T). Just as in the proof of Case 1 we have that

    δ(S, T) ≥ sup_{t>0} [P(D_max(T) > t) − P(D_max(S) > t)].      (12)

Corollary 1 provides the asymptotic behavior of P(D_max(T) > t) in the form of (8). To bound P(D_max(S) > t), grow PA(n, S) until it reaches |T| vertices: since |S| − Δ(S) = |T| − Δ(T), the maximum degree of PA(|T|, S) is at most Δ(S) + (|T| − |S|) = Δ(T), and conditioning on PA(|T|, S) and applying Corollary 1 gives

    P(D_max(S) > t | Δ(PA(|T|, S)) = Δ(T)) ≤ (1 + o(1)) c(|T|, Δ(T)) t^{1−2|T|+2Δ(T)} exp(−t²/4)

as t → ∞.
Consequently we have that

    P(D_max(S) > t) ≤ (1 + o(1)) P(Δ(PA(|T|, S)) = Δ(T)) c(|T|, Δ(T)) t^{1−2|T|+2Δ(T)} exp(−t²/4)

as t → ∞, which combined with the tail behavior of D_max(T) gives that

    P(D_max(T) > t) − P(D_max(S) > t) ≥ (1 − o(1)) P(Δ(PA(|T|, S)) < Δ(T)) c(|T|, Δ(T)) t^{1−2|T|+2Δ(T)} exp(−t²/4)

as t → ∞. To conclude the proof, notice that P(Δ(PA(|T|, S)) < Δ(T)) is at least as great as the probability that vertex |S| + 1 connects to a leaf of S, which is at least 1/(2|S| − 2).

Case 3: |S| = |T|, different degree profiles. Let z ∈ {1, ..., |T|} be the first index such that d_S(z) ≠ d_T(z), and assume w.l.o.g. that d_S(z) < d_T(z). First we have that

    P(D_max(T) > t) ≥ P(∃i ∈ [z−1] : D_i(T) > t) + P(D_z(T) > t) − Σ_{i=1}^{z−1} P(D_z(T) > t, D_i(T) > t)

and

    P(D_max(S) > t) ≤ P(∃i ∈ [z−1] : D_i(S) > t) + Σ_{i=z}^{∞} P(D_i(S) > t).

Now observe that one can couple the evolution of PA(n, T) and PA(n, S) in such a way that the degrees of vertices 1, ..., z − 1 stay the same in both trees. Thus one clearly has

    P(∃i ∈ [z−1] : D_i(T) > t) = P(∃i ∈ [z−1] : D_i(S) > t).

Putting the three displays above together one obtains

    P(D_max(T) > t) − P(D_max(S) > t) ≥ P(D_z(T) > t) − Σ_{i=1}^{z−1} P(D_z(T) > t, D_i(T) > t) − Σ_{i=z}^{∞} P(D_i(S) > t).

Now using Lemma 1 one easily gets (for some constant C > 0) that

    P(D_z(T) > t) ~ c(|T|, d_T(z)) t^{1−2|T|+2d_T(z)} exp(−t²/4),

    Σ_{i=1}^{z−1} P(D_z(T) > t, D_i(T) > t) ≤ Σ_{i=1}^{z−1} P(D_z(T) + D_i(T) > 2t) ≤ Σ_{i=1}^{z−1} (1 + o(1)) c(|T|, d_T(z) + d_T(i)) (2t)^{1−2|T|+2(d_T(z)+d_T(i))} exp(−t²),

    Σ_{i=z}^{∞} P(D_i(S) > t) ≤ C t^{1−2|T|+2d_S(z)} exp(−t²/4).

In particular, since d_S(z) < d_T(z) and t^α exp(−t²) = o(exp(−t²/4)) for any α, this shows that

    P(D_max(T) > t) − P(D_max(S) > t) ≥ (1 − o(1)) c(|T|, d_T(z)) t^{1−2|T|+2d_T(z)} exp(−t²/4),

which, together with (12), concludes the proof. □
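The representation (5) and the moment formula (4), which underlie the tail estimates used above, can be cross-checked numerically (our sketch; the seed tree, random seed, and sample size are arbitrary choices). We use the facts that if Z ~ GGa(2|T| − 1, 2) then Z² ~ Gamma((2|T| − 1)/2, 1), so Z is the square root of a gamma variate, and that a Dirichlet vector is a normalized vector of gamma variates:

```python
import math, random

def sample_limit_degrees(seed_degrees, rng):
    """Sample (D_1(T), ..., D_|T|(T)) via representation (5): 2 * Z * X."""
    g = [rng.gammavariate(d, 1.0) for d in seed_degrees]
    s = sum(g)
    x = [v / s for v in g]                  # X ~ Dir(d_T(1), ..., d_T(|T|))
    z = math.sqrt(rng.gammavariate((2 * len(seed_degrees) - 1) / 2, 1.0))
    return [2 * z * xi for xi in x]

rng = random.Random(7)
deg_T = [2, 1, 1]                           # T = the tree on 3 vertices
runs = 5000
est = sum(sample_limit_degrees(deg_T, rng)[0] for _ in range(runs)) / runs
# (4) with r = 1, |T| = 3, d_T(1) = 2: E D_1(T) = Gamma(2)Gamma(3)/(Gamma(2)Gamma(5/2))
target = math.gamma(2) * math.gamma(3) / (math.gamma(2) * math.gamma(2.5))
assert abs(est - target) < 0.1              # target ≈ 1.504
```

Sampling from the limit directly is far cheaper than growing trees, which is convenient for exploring the tail events appearing in Lemma 1.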
Proof of Theorem 3. As before we have that

    δ(S_k, T) ≥ sup_{t≥0} [P(D_max(S_k) > t) − P(D_max(T) > t)] ≥ P(D_max(S_k) > √k/2) − P(D_max(T) > √k/2).      (13)

By Corollary 1, we know that the second term in (13) goes to zero as k → ∞ for any fixed T. We can lower bound the first term in (13) by

    P(D_1(S_k) > √k/2) = 1 − P(D_1(S_k) ≤ √k/2).

From (4) we have that the first two moments of D_1(S_k) are E D_1(S_k) = Γ(k)/Γ(k − 1/2) and E D_1(S_k)² = Γ(k + 1)/Γ(k) = k. From standard facts about the Γ function and the Stirling series one has that 0 ≤ E D_1(S_k) − √(k − 1) ≤ 6 (√(k − 1))^{−1}, and then also

    Var(D_1(S_k)) = E D_1(S_k)² − (E D_1(S_k))² ≤ k − (k − 1) = 1.

Therefore Chebyshev's inequality implies that lim_{k→∞} P(D_1(S_k) ≤ √k/2) = 0. □

3.2 Towards a proof of Conjecture 1

Our proof of Theorem 2 above relied on the precise asymptotic tail behavior of D_max(T), as described in Corollary 1. In order to distinguish two trees with the same degree profile (such as the pair of trees in Figure 1), it is necessary to incorporate information about the graph structure. Indeed, if S and T have the same degree profiles, then it is possible to couple PA(n, S) and PA(n, T) so that they have the same degree profiles for every n. Thus a possible way to prove Conjecture 1 is to generalize the notion of maximum degree in a way that incorporates information about the graph structure, and then to use arguments similar to those in the proofs above. A candidate is the following.

Definition 1. Given a tree U, define the U-maximum degree of a tree T, denoted by Δ_U(T), as

    Δ_U(T) = max_ϕ Σ_{u∈V(U)} d_T(ϕ(u)),

where V(U) denotes the vertex set of U, and the maximum is taken over all injective graph homomorphisms ϕ from U to T. That is, ϕ ranges over all injective maps from V(U) to V(T) such that {u, v} ∈ E(U) implies {ϕ(u), ϕ(v)} ∈ E(T), where E(U) denotes the edge set of U, and E(T) is defined similarly.
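On small trees, Definition 1 can be computed by brute force (our sketch; it enumerates all injective maps, so it is feasible only for small U and T):

```python
from itertools import permutations

def u_max_degree(U_edges, U_verts, T_edges, T_verts):
    """Maximize the degree sum over injective maps V(U) -> V(T) sending edges to edges."""
    deg = {v: 0 for v in T_verts}
    for a, b in T_edges:
        deg[a] += 1
        deg[b] += 1
    T_edge_set = {frozenset(e) for e in T_edges}
    best = None
    for image in permutations(T_verts, len(U_verts)):
        phi = dict(zip(U_verts, image))
        if all(frozenset((phi[a], phi[b])) in T_edge_set for a, b in U_edges):
            s = sum(deg[phi[u]] for u in U_verts)
            best = s if best is None else max(best, s)
    return best

# U = a single edge, T = the path 1-2-3-4: the best pair is (2, 3), degree sum 4
path = [(1, 2), (2, 3), (3, 4)]
assert u_max_degree([("a", "b")], ["a", "b"], path, [1, 2, 3, 4]) == 4
```

When U is a single vertex the homomorphism constraint is vacuous and the maximum reduces to the ordinary maximum degree, matching the remark following the definition.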
When U is a single vertex we have Δ_U ≡ Δ, so this indeed generalizes the notion of maximum degree. We conjecture the following.

Conjecture 2. Suppose S and T are two non-isomorphic trees of the same size. Then

    lim sup_{n→∞} P(Δ_T(PA(n, S)) > t√n) = o(t^{2|T|−3} exp(−t²/4))

as t → ∞.

If this conjecture were true, then Conjecture 1 would follow, as we now show.

Proof of Conjecture 1 assuming Conjecture 2 holds. Assume |S| = |T|; if |S| ≠ |T| we already know from Theorem 2 that δ(S, T) > 0. As in the proof of Theorem 2, for any t > 0 and n ≥ max{|S|, |T|} we have that

    TV(PA(n, S), PA(n, T)) ≥ TV(Δ_T(PA(n, S)), Δ_T(PA(n, T))) ≥ P(Δ_T(PA(n, T)) > t√n) − P(Δ_T(PA(n, S)) > t√n),

and consequently

    δ(S, T) ≥ sup_{t>0} [lim inf_{n→∞} P(Δ_T(PA(n, T)) > t√n) − lim sup_{n→∞} P(Δ_T(PA(n, S)) > t√n)].      (14)

Since ϕ(i) = i for 1 ≤ i ≤ |T| is an injective graph homomorphism from T to PA(n, T), we have that

    lim inf_{n→∞} P(Δ_T(PA(n, T)) > t√n) ≥ lim inf_{n→∞} P(Σ_{i=1}^{|T|} d_{PA(n,T)}(i) > t√n) = P(Σ_{i=1}^{|T|} D_i(T) > t).

By Lemma 1 we know that

    P(Σ_{i=1}^{|T|} D_i(T) > t) ~ c(|T|, 2|T| − 2) t^{2|T|−3} exp(−t²/4)

as t → ∞, which together with (14) and Conjecture 2 shows that δ(S, T) > 0. □

4 The weak limit of PA(n, T)

In this section we prove Theorem 1. For two graphs G and H we write G = H if G and H are isomorphic, and we use the same notation for rooted graphs. Recalling the definition of the Benjamini-Schramm limit (see [Berger et al., 2014, Definition 2.1]), we want to prove that

    lim_{n→∞} P(B_r(PA(n, T), k_n(T)) = (H, y)) = P(B_r(𝒯, (0)) = (H, y)),

where B_r(G, v) is the rooted ball of radius r around vertex v in the graph G, k_n(T) is a uniformly random vertex of PA(n, T), (H, y) is a finite rooted tree, and (𝒯, (0)) is the Pólya-point graph (with m = 1). We construct a forest F based on T as follows.
To each vertex v in T we associate d_T(v) isolated nodes with self loops; that is, F consists of 2(|T| − 1) isolated vertices with self loops. Our convention here is that a node with k regular edges and one self loop has degree k + 1. The graph evolution process PA(n, F) for forests is defined in the same way as for trees, and we couple the processes PA(n, T) and PA(n + |T| − 2, F) in the natural way: when an edge is added to vertex v of T in PA(n, T), then an edge is also added to one of the d_T(v) corresponding vertices of F in PA(n + |T| − 2, F), and furthermore newly added vertices are always coupled.

We first observe that, clearly, the weak limit of PA(n + |T| − 2, F) is the Pólya-point graph, that is,

    lim_{n→∞} P(B_r(PA(n + |T| − 2, F), k_n(F)) = (H, y)) = P(B_r(𝒯, (0)) = (H, y)),

where k_n(F) is a uniformly random vertex of PA(n + |T| − 2, F). We couple k_n(F) and k_n(T) in the natural way: if k_n(F) is the t-th newly created vertex in PA(n + |T| − 2, F), then k_n(T) is the t-th newly created vertex in PA(n, T). To conclude the proof it is now sufficient to show that

    lim_{n→∞} P(B_r(PA(n + |T| − 2, F), k_n(F)) ≠ B_r(PA(n, T), k_n(T))) = 0.

The following inequalities hold (with a slight, but clear, abuse of notation when we write v ∈ F) for any u > 0:

    P(B_r(PA(n + |T| − 2, F), k_n(F)) ≠ B_r(PA(n, T), k_n(T)))
      ≤ P(∃v ∈ F s.t. v ∈ B_r(PA(n + |T| − 2, F), k_n(F)))
      ≤ P(∃v ∈ F : d_{PA(n+|T|−2,F)}(v) < u) + P(∃v ∈ B_r(PA(n + |T| − 2, F), k_n(F)) s.t. d_{PA(n+|T|−2,F)}(v) ≥ u).

It is easy to verify that for any u > 0,

    lim_{n→∞} P(∃v ∈ F : d_{PA(n+|T|−2,F)}(v) < u) = 0.

Furthermore, since B_r(PA(n + |T| − 2, F), k_n(F)) tends to the Pólya-point graph, we also have

    lim_{n→∞} P(∃v ∈ B_r(PA(n + |T| − 2, F), k_n(F)) s.t. d_{PA(n+|T|−2,F)}(v) ≥ u) = P(∃v ∈ B_r(𝒯, (0)) s.t. d_𝒯(v) ≥ u).

By looking at the definition of (𝒯, (0)) given in Berger et al. [2014] one can easily show that lim_{u→∞} P(∃v ∈ B_r(𝒯, (0)) s.t. d_𝒯(v) ≥ u) = 0, which concludes the proof. □

5 Proof of Lemma 1

In this section we prove Lemma 1. In light of the representation (5) in Section 2.1.2, part (a) of Lemma 1 follows from a lengthy computation, the result of which we state separately.

Lemma 2. Fix positive integers a and b. Let B and Z be independent random variables such that B ~ Beta(a, b) and Z ~ GGa(a + b + 1, 2), and let V = 2BZ. Then

    P(V > t) ~ c((a + b + 2)/2, a) t^{−1+a−b} exp(−t²/4)      (15)

as t → ∞, where the constant c is as in (1).

Proof. By definition we have for t > 0 that

    P(V > t) = P(2BZ > t)
             = ∫_{t/2}^{∞} [ ∫_{t/(2z)}^{1} (Γ(a+b)/(Γ(a)Γ(b))) x^{a−1}(1−x)^{b−1} dx ] (2/Γ((a+b+1)/2)) z^{a+b} e^{−z²} dz
             = ∫_{t/2}^{∞} (1 − I_{t/(2z)}(a, b)) (2/Γ((a+b+1)/2)) z^{a+b} e^{−z²} dz,

where I_x(a, b) = (Γ(a+b)/(Γ(a)Γ(b))) ∫_0^x y^{a−1}(1−y)^{b−1} dy is the regularized incomplete beta function. For positive integers a and b, integration by parts and induction give that

    I_x(a, b) = 1 − Σ_{j=0}^{a−1} C(a+b−1, j) x^j (1−x)^{a+b−1−j},

where C(n, k) denotes the binomial coefficient. Plugging this back into the integral and making the change of variables y = 2z, we get that

    P(V > t) = (2^{−(a+b)}/Γ((a+b+1)/2)) Σ_{j=0}^{a−1} C(a+b−1, j) ∫_t^{∞} t^j (y − t)^{a+b−1−j} y exp(−y²/4) dy.

Expanding (y − t)^{a+b−1−j} and writing A_m := ∫_t^{∞} y^m exp(−y²/4) dy, we arrive at the alternating sum formula

    P(V > t) = (2^{−(a+b)}/Γ((a+b+1)/2)) Σ_{j=0}^{a−1} Σ_{k=0}^{a+b−1−j} C(a+b−1, j) C(a+b−1−j, k) (−1)^{a+b−1−j−k} t^{a+b−1−k} A_{k+1}.      (16)

Thus in order to show (15) it is enough to show that for every j with 0 ≤ j ≤ a − 1 we have

    Σ_{k=0}^{a+b−1−j} C(a+b−1−j, k) (−1)^{a+b−1−j−k} t^{a+b−1−k} A_{k+1} ~ 2^{a+b−j} (a+b−1−j)! t^{1−a−b+2j} exp(−t²/4),      (17)

since then the j = a − 1 term dominates in (16) and yields the exponent and constant in (15). To do this, we need to evaluate the integrals {A_m}_{m≥0}. Recall that the complementary error function is defined as erfc(z) = 1 − erf(z) = (2/√π) ∫_z^{∞} exp(−u²) du, so that A_0 = √π erfc(t/2); also A_1 = 2 exp(−t²/4). Integration by parts gives that for m ≥ 2 we have

    A_m = 2 t^{m−1} exp(−t²/4) + 2(m − 1) A_{m−2}.

Iterating this, and using the values of A_0 and A_1, gives that for m odd we have

    A_m = 2 t^{m−1} exp(−t²/4) Σ_{ℓ=0}^{(m−1)/2} ((m−1)!!/(m−2ℓ−1)!!) (2/t²)^ℓ,      (18)
2 t 2 ℓ ,(18) and for m even we have A m = 2t m−1 exp −t 2 /4 m 2 −1 ℓ=0 (m − 1)!! (m − 2ℓ − 1)!! 2 t 2 ℓ + 2 m 2 × (m − 1)!! × √ π erfc (t/2) .(19) In the following we fix j such that 0 ≤ j ≤ a−1 and a+b−1−j is odd-showing (17) when a+b−1−j is even can be done in the same way. In order to abbreviate notation we let r = (a + b − 2 − j)/2. Plugging in the formulas (18) and (19) into the left hand side of (17) we get that a+b−1−j k=0 a + b − 1 − j k (−1) a+b−1−j−k t a+b−1−k A k+1 = 2r+1 k=0 2r + 1 k (−1) 2r+1−k t 2r+1+j−k A k+1 = − r ℓ=0 2r + 1 2ℓ t 2r+1+j−2ℓ A 2ℓ+1 + r ℓ=0 2r + 1 2ℓ + 1 t 2r+1+j−(2ℓ+1) A 2ℓ+2 = − r ℓ=0 2r + 1 2ℓ t 2r+1+j−2ℓ 2 exp −t 2 /4 ℓ u=0 2 u (2ℓ)!! (2ℓ − 2u)!! t 2ℓ−2u + r ℓ=0 2r + 1 2ℓ + 1 t 2r+1+j−(2ℓ+1) 2 exp −t 2 /4 ℓ u=0 2 u (2ℓ + 1)!! (2ℓ + 1 − 2u)!! t 2ℓ+1−2u + r ℓ=0 2r + 1 2ℓ + 1 t 2r+1+j−(2ℓ+1) 2 ℓ+1 (2ℓ + 1)!! √ π erfc (t/2) = 2 exp −t 2 /4 r u=0 t 2r+1+j−2u 2 u 2r+1 k=2u 2r + 1 k (−1) k+1 k!! (k − 2u)!!(20)+ √ π erfc (t/2) r ℓ=0 2r + 1 2ℓ + 1 t 2r+1+j−(2ℓ+1) 2 ℓ+1 (2ℓ + 1)!!.(21) An important fact that we will use is that for every polynomial P with degree less than n we have n k=0 n k (−1) k P (k) = 0. Consequently, applying this to the polynomial P (k) = k (k − 2) · · · (k − 2 (u − 1)) we get that 2r+1 k=2u 2r + 1 k (−1) k+1 k (k − 2) · · · (k − 2 (u − 1)) = 2u−1 k=0 2r + 1 k (−1) k k (k − 2) · · · (k − 2 (u − 1)) = − u−1 ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1) (2ℓ − 1) · · · (2ℓ + 1 − 2 (u − 1)) = − u−1 ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1)!! (2 (u − 1 − ℓ) − 1)!! (−1) u−1−ℓ .(23) Thus we see that in the sum (20) the cofficient of the term involving t 2r+1+j is zero, while the coefficient of the term involving t 2r+1+j−2u for 1 ≤ u ≤ r is 2 u+1 exp −t 2 /4 times the expression in (23). These are cancelled by terms coming from the sum in (21) as we will see shortly; to see this we need the asymptotic expansion of erfc to high enough order. 
In particular we have (see [Abramowitz and Stegun, 1964, equations 7.1.13 and 7.1.24]) that √ π erfc (t/2) = 2 exp −t 2 /4 2r n=0 (−1) n 2 n (2n − 1)!!t −2n−1 + R (t) ,(24) where the approximation error R (t) satisfies |R (t)| ≤ 2 2r+2 (4r + 1)!!t −(4r+3) exp −t 2 /4 . Plugging (24) back into (21), we first see that the error term satisfies |R (t)| r ℓ=0 2r + 1 2ℓ + 1 t 2r+1+j−(2ℓ+1) 2 ℓ+1 (2ℓ + 1)!! = O t 2j−1−(a+b) exp −t 2 /4(25) as t → ∞. The main term of (21) becomes the sum 2 exp −t 2 /4 r ℓ=0 2r n=0 2r + 1 2ℓ + 1 2 ℓ+n+1 (2ℓ + 1)!! (2n − 1)!! (−1) n t 2r+1+j−2(ℓ+n+1) . For u such that 1 ≤ u ≤ r, the coefficient of the term involving t 2r+1+j−2u is 2 u+1 exp −t 2 /4 times u−1 ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1)!! (2 (u − 1 − ℓ) − 1)!! (−1) u−1−ℓ , which cancels out the coefficient of the same term coming from the other sum (20), see (23). For u such that r < u ≤ 2r, the coefficient of the term involving t 2r+1+j−2u is 2 u+1 exp −t 2 /4 times r ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1)!! (2 (u − 1 − ℓ) − 1)!! (−1) u−1−ℓ = r ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1) (2ℓ − 1) . . . ((2ℓ + 1) − 2 (u − 1)) = − 2r+1 k=0 2r + 1 k (−1) k k (k − 2) . . . (k − 2 (u − 1)) = 0, where we again used (22), together with the fact that u ≤ 2r. Finally, the coefficient of the term involving t 2j+1−(a+b) is 2 2r+2 exp −t 2 /4 times r ℓ=0 2r + 1 2ℓ + 1 (2ℓ + 1)!! (2 (2r − ℓ) − 1)!! (−1) 2r−ℓ = − 2r+1 k=0 2r + 1 k (−1) k k (k − 2) . . . (k − 4r) = − 2r+1 k=0 2r + 1 k (−1) k k 2r+1 = − (−1) 2r+1 (2r + 1)! = (2r + 1)!, where we used (22) in the second equality. Since all other terms are of lower order (see (25)), this concludes the proof. Proof of Lemma 1 (a) If U = T , then d = i∈U d T (i) ∈ {1, . . . , 2 |T | − 3}. Similarly to the third paragraph in Section 2.1.2, we can view the evolution of i∈U d PA(n,T ) (i) in the following way. 
When adding a new vertex, first decide whether it attaches to one of the initial |T | vertices (with probability |T | i=1 d PA(n,T ) (i) / (2n − 2)) or not (with the remaining probability); if it does, then independently pick one of them to attach to with probability proportional to their degree-a vertex in U is chosen with probability i∈U d PA(n,T ) (i) / |T | i=1 d PA(n,T ) (i). This implies the following representation: i∈U D i (T ) d = 2BZ, where B and Z are independent, B ∼ Beta (d, 2 |T | − 2 − d), and Z ∼ GGa (2 |T | − 1, 2). This also follows directly from the representation (5). Thus (6) is a direct consequence of Lemma 2. If U = T , then i∈U D i (T ) d = 2Z where Z ∼ GGa (2 |T | − 1, 2) (see Section 2.1.2), and then (6) follows from a calculation that is contained in the proof of Lemma 2. (b) To show (7) we use the results of Peköz et al. [2013] as described in Section 2.1.1. In addition we use the following tail bound of [Peköz et al., 2013, Lemma 2.6], which says that for x > 0 and s ≥ 1 we have ∞ x κ s (y) dy ≤ s x κ s (x). Consequently, for any i > |T | we have the following tail bound: P (D i (T ) > t) = P W i−1 > i − 1 2 t = ∞ i−1 2 t κ i−1 (y) dy ≤ √ 2i − 2 t κ i−1 i − 1 2 t = 2 √ πt exp −t 2 /4 (i − 2)!U i − 2, 1 2 , t 2 4 . The following integral representation is useful for us [Abramowitz and Stegun, 1964, eq. 13.2.5]: Γ (a) U (a, b, z) = ∞ 0 e −zw w a−1 (1 + w) b−a−1 dw. Consequently, we have ∞ i=3 (i − 2)!U i − 2, 1 2 , t 2 4 = ∞ i=3 (i − 2) ∞ 0 e − t 2 4 w 1 w √ 1 + w w 1 + w i−2 dw = ∞ 0 e − t 2 4 w 1 w √ 1 + w ∞ i=3 (i − 2) w 1 + w i−2 dw = ∞ 0 e − t 2 4 w 1 w √ 1 + w w (1 + w) dw ≤ ∞ 0 e − t 2 4 w (1 + w) dw = 4 t 2 + 16 t 4 , which shows (7) for L = 3. Similarly, for L ≥ 4 we have Figure 1 : 1Two trees with six vertices and d(S) = d(T ). Is δ (S, T ) > 0? Theorem 2 Let S and T be two finite trees on at least 3 vertices. If d(S) = d(T ), then δ (S, T ) > 0. Case 1 : 1|S| − ∆ (S) = |T | − ∆ (T ). W.l.o.g. 
suppose that |S| − ∆(S) < |T| − ∆(T). Clearly for any t > 0 and n ≥ max{|S|, |T|} one has

TV(PA(n, S), PA(n, T)) ≥ TV(∆(PA(n, S)), ∆(PA(n, T))).

To find an upper bound for P(D_max(S) > t), first notice that ∆(PA(|T|, S)) ≤ ∆(T), with equality holding if and only if all of the |T| − |S| vertices of PA(|T|, S) that were added to S connect to the same vertex i ∈ {1, 2, ..., |S|} and d_S(i) = ∆(S). Consequently, if ∆(PA(|T|, S)) = ∆(T), then there is exactly one vertex j ∈ {1, 2, ..., |T|} such that d_{PA(|T|,S)}(j) = ∆(T). This, together with Corollary 1, shows that on the one hand

P(D_max(S) > t | ∆(PA(|T|, S)) < ∆(T)) = o(t^{1−2|T|+2∆(T)} exp(−t²/4))

as t → ∞, and on the other hand, where for m ≥ 0 we let A_m := ∫_t^∞ y^m exp(−y²/4) dy, the first inequality follows from dropping the nonpositive term (3 − L)(w/(1+w))^{L−1}, and the second one follows because L ≥ 4. This shows (7) for L ≥ 4 and thus concludes the proof.

Footnotes: (1) where m ≥ 1. (2) Observe that TV(PA(n, S), PA(n, T)) is non-increasing in n (since one can simulate the future evolution of the process) and always nonnegative, so the limit is well-defined. (3) Clearly δ is a pseudometric on isomorphism types of trees with at least 3 vertices, so the only non-trivial part of the statement is that δ(S, T) = 0 for S and T non-isomorphic. (4) Throughout the paper we use standard asymptotic notation; for instance, f(t) ∼ g(t) as t → ∞ if lim_{t→∞} f(t)/g(t) = 1.

Acknowledgements

The first author thanks Nati Linial for initial discussions on this problem. We also thank Remco van der Hofstad and Nathan Ross for helpful discussions and valuable pointers to the literature. The research described here was carried out at the Simons Institute for the Theory of Computing. We are grateful to the Simons Institute for offering us such a wonderful research environment. This work is supported by NSF grants DMS 1106999 and CCF 1320105 (E.M.), and by DOD ONR grant N000141110140 (E.M., M.Z.R.).
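The preferential-attachment dynamics analyzed above are straightforward to simulate. The sketch below is ours, not the paper's: the 5-vertex star and path seeds, the horizon n, and the trial count are illustrative choices. It grows PA(n, T) from a seed degree sequence and crudely estimates the total variation distance between the maximum-degree distributions obtained from two different seeds, which is the statistic underlying the lower bounds in this section.

```python
import random
from collections import Counter

def grow_pa(seed_degrees, n, rng):
    """Grow PA(n, T): each newly added vertex attaches to a single existing
    vertex chosen with probability proportional to its current degree."""
    deg = list(seed_degrees)
    # Urn trick: vertex v appears deg[v] times, so a uniform draw from the
    # urn is a degree-proportional draw over vertices.
    urn = [v for v, d in enumerate(deg) for _ in range(d)]
    while len(deg) < n:
        target = rng.choice(urn)
        deg[target] += 1
        deg.append(1)
        urn.extend([target, len(deg) - 1])
    return deg

rng = random.Random(0)
star = [4, 1, 1, 1, 1]   # star on 5 vertices
path = [1, 2, 2, 2, 1]   # path on 5 vertices
n, trials = 200, 300
max_star = Counter(max(grow_pa(star, n, rng)) for _ in range(trials))
max_path = Counter(max(grow_pa(path, n, rng)) for _ in range(trials))
support = set(max_star) | set(max_path)
tv = 0.5 * sum(abs(max_star[k] - max_path[k]) for k in support) / trials
print(f"estimated TV between max-degree laws (star vs. path seed): {tv:.2f}")
```

This Monte Carlo estimate only lower-bounds the true TV between the full tree distributions, in line with the inequality TV(PA(n, S), PA(n, T)) ≥ TV(∆(PA(n, S)), ∆(PA(n, T))) used above.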
We described a way to approach Conjecture 1 in general, and showed that it would follow from a technical result which we stated as Conjecture 2.

2. This paper is essentially about the testing version of the problem. Can anything be said about the estimation version? Perhaps a first step would be to understand the multiple hypothesis testing problem where one is interested in testing whether the seed belongs to the family of trees T_1 or to the family T_2.

3. Starting from two seeds S and T with different spectrum, is it always possible to distinguish (with non-trivial probability) between PA(n, S) and PA(n, T) with spectral techniques? More generally, it would be interesting to understand what properties are invariant under modifications of the seed.

4. Is it possible to give a combinatorial description of the (pseudo)metric δ?

5. Under what conditions on two tree sequences (T_k), (R_k) do we have lim_{k→∞} δ(T_k, R_k) = 1? In Theorem 3 we showed that a sufficient condition is to have T_k = T and R_k = S_k. This can easily be extended to the condition that ∆(T_k) remains bounded while ∆(R_k) tends to infinity. If T_k and R_k are independent (uniformly) random trees on k vertices, do we have lim_{k→∞} E δ(T_k, R_k) = 1?

6. What can be said about the general preferential attachment model, when multiple edges or vertices are added at each step?

7. A simple variant on the model studied in this paper is to consider probabilities of connection proportional to the degree of the vertex raised to some power α. For α = 1 we conjectured and in some cases showed that different seeds are distinguishable. On the contrary, it seems reasonable to expect that for α = 0 (uniform attachment) all seeds are indistinguishable from each other asymptotically. What about for α ∈ (0, 1)?

References

Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions, volume 55. Dover, 1964.
Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.

Itai Benjamini and Oded Schramm. Recurrence of distributional limits of finite planar graphs. Electronic Journal of Probability, 6(23):1–13, 2001.

Noam Berger, Christian Borgs, Jennifer T. Chayes, and Amin Saberi. Asymptotic behavior and distributional limits of preferential attachment graphs. The Annals of Probability, 1:1–40, 2014.

Béla Bollobás, Oliver Riordan, Joel Spencer, and Gábor Tusnády. The degree sequence of a scale-free random graph process. Random Structures & Algorithms, 18(3):279–290, 2001.

Svante Janson. Limit theorems for triangular urn schemes. Probability Theory and Related Fields, 134(3):417–452, 2006.

Robert D. Kleinberg and Jon M. Kleinberg. Isomorphism and embedding problems for infinite limits of scale-free graphs. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 277–286, 2005.

Hosam M. Mahmoud. Distances in random plane-oriented recursive trees. Journal of Computational and Applied Mathematics, 41(1-2):237–245, 1992.

Tamás F. Móri. The maximum degree of the Barabási–Albert random tree. Combinatorics, Probability and Computing, 14(3):339–348, 2005.

Erol A. Peköz, Adrian Röllin, and Nathan Ross. Degree asymptotics with rates for preferential attachment random graphs. The Annals of Applied Probability, 23(3):1188–1218, 2013.

Erol A. Peköz, Adrian Röllin, and Nathan Ross. Joint degree distributions of preferential attachment random graphs. arXiv preprint arXiv:1402.4686, 2014.

Anna Rudas, Bálint Tóth, and Benedek Valkó. Random trees and general branching processes. Random Structures & Algorithms, 31(2):186–202, 2007.

Regev Schweiger, Michal Linial, and Nathan Linial. Generative Probabilistic Models for Protein-Protein Interaction Networks: The Biclique Perspective. Bioinformatics, 27(13):i142–i148, 2011.
Capacity Theorems for Quantum Multiple Access Channels

Jon Yard ([email protected], Department of Electrical Engineering, Stanford University, Stanford, California, USA); Igor Devetak ([email protected], Electrical Engineering Department, University of Southern California, Los Angeles, California, USA); Patrick Hayden ([email protected], Computer Science Department, McGill University, Montréal, Quebec, Canada)

DOI: 10.1109/isit.2005.1523464 | arXiv:cs/0508031 | https://arxiv.org/pdf/cs/0508031v1.pdf

Abstract: We consider quantum channels with two senders and one receiver. For an arbitrary such channel, we give multiletter characterizations of two different two-dimensional capacity regions. The first region characterizes the rates at which it is possible for one sender to send classical information while the other sends quantum information. The second region gives the rates at which each sender can send quantum information. We give an example of a channel for which each region has a single-letter description, concluding with a characterization of the rates at which each user can simultaneously send classical and quantum information.
I. INTRODUCTION

Suppose that two independent senders, Alice and Bob, have access to different parts of the input of a quantum channel with a single receiver, Charlie. By preparing physical systems at their respective inputs, Alice and Bob can affect the state of Charlie's received system. We will analyze situations in which such a channel is used many times for the purpose of sending independent information from each of Alice and Bob to Charlie. We allow for the simultaneous transmission of both classical and quantum information.

II. BACKGROUND

A quantum system with the label A is supported on a Hilbert space H_A of dimension |A|. We will use a superscript to indicate that a density matrix ρ^A or a pure state |φ⟩^A belongs to the corresponding set of states of A.
By a channel N^{A→B}, we mean a trace-preserving, completely positive linear map from density matrices on A to density matrices on B. We will often just call this a map. The tensor product AB of two systems has a Hilbert space H_{AB} ≡ H_A ⊗ H_B, and we will write A^n for the system with Hilbert space H_{A^n} ≡ H_A^{⊗n}. The density matrix of a pure state |φ⟩ will be written as φ ≡ |φ⟩⟨φ|. When we speak of a rate-Q maximally entangled state, we mean a bipartite pure state of the form

|Φ⟩ = (1/√(2^{nQ})) Σ_{b ∈ 2^{nQ}} |b⟩|b⟩,

where n will always be apparent from the context. For a distance measure between two states ρ and σ, we use the fidelity

F(ρ, σ) = (Tr √(√ρ σ √ρ))².

A. Classical-quantum states and entropy

Consider a collection of density matrices {σ_x^B}_{x∈X} indexed by a finite set X. If those states occur according to the probability distribution p(x), we may speak of an ensemble {p(x), σ_x^B} of quantum states. Classical and quantum probabilities can be treated in the same framework by considering the cq system XB whose state is specified by a block-diagonal joint density matrix

σ^{XB} = Σ_x p(x) |x⟩⟨x|^X ⊗ σ_x^B.

Such a cq state describes the classical and quantum aspects of the ensemble on the extended Hilbert space H_X ⊗ H_B [1]. Information quantities evaluated on cq states play an important role in characterizing the capacity regions we will introduce in this paper. We write H(B)_σ = H(σ^B) = −Tr σ^B log σ^B for the von Neumann entropy of the density matrix associated with B, where σ^B = Tr_X σ. Note that H(X)_σ is just the Shannon entropy of X. For an arbitrary state ρ^{AB}, we define H(AB)_ρ analogously. The subscripted state will be omitted when it is apparent from the context.

B. Classical capacity and mutual information

The classical capacity C(N) of a quantum channel N^{A′→C} from Alice to Charlie is the logarithm of the number of physical input states Alice can prepare, per use of the channel, so that Charlie can reliably distinguish them with arbitrarily low probability of error.
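The entropy and fidelity just defined are easy to evaluate numerically. The sketch below is our own illustration (helper names are ours, not the paper's); it computes the von Neumann entropy in bits, matching the base-2 logarithms used for rates, and the fidelity F(ρ, σ) via a positive-semidefinite matrix square root.

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 log 0 = 0 convention
    return float(-np.sum(evals * np.log2(evals)))

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

mixed = np.eye(2) / 2                        # maximally mixed qubit: H = 1 bit
pure0 = np.diag([1.0, 0.0]).astype(complex)  # |0><0|: H = 0
print(von_neumann_entropy(mixed), fidelity(pure0, mixed))
```

With this (squared) convention, F(ρ, σ) = 1 exactly when ρ = σ, and F(|0⟩⟨0|, I/2) evaluates to 1/2.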
C(N) can be characterized in terms of the mutual information, which is defined as

I(X; B) ≡ H(X) + H(B) − H(XB)

and is otherwise known as the Holevo information χ({p(x), σ_x^B}) of the underlying ensemble. Mutual information gives a regularized characterization of the classical capacity C(N) of N as [12], [13]

C(N) = lim_{k→∞} (1/k) max_{XA^k} I(X; C^k),

where the maximization is over all states of a cq system XA^k for which |X| ≤ min{|A′|, |C|}^{2k}. The mutual information is evaluated with respect to the induced state N^{⊗k}(σ) on XC^k. Note that throughout this paper, we will adhere to the convention that an arbitrary channel N acts as the identity on any system which is not explicitly part of its domain. The units of C(N) are bits per channel use. It is a pressing open question of quantum Shannon theory whether or not it suffices to consider k = 1 when computing C(N). Equivalently, it is unknown if entangled preparations are required to approach C(N).

The quantum capacity Q(N) of N gives the ultimate capability of N to convey quantum information. Q(N) arises as the logarithm of various quantities divided by the number of channel uses. Some of these include

• the amount of entanglement that can be created by using N (entanglement generation) [2]
• the size of a maximally entangled state that can be transmitted over N (entanglement transmission) [3]
• the size of a perfect quantum channel from Alice to Bob which can be simulated with arbitrary accuracy (strong subspace transmission) [4].

All have units of qubits per channel use. Q(N) can be characterized as a regularized maximization of coherent information. Depending on the context, coherent information [3] will be expressed in one of two ways. For a fixed joint state σ^{AB}, we write

I_c(A⟩B) ≡ H(B) − H(AB) = −H(A|B).
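For a cq state, I(X; B) reduces to the Holevo quantity χ = H(Σ_x p(x) σ_x) − Σ_x p(x) H(σ_x), which the following sketch evaluates directly; the function names and the two-state qubit ensembles are our illustrative choices, not the paper's.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def holevo_chi(probs, states):
    """I(X;B) = H(sum_x p(x) sigma_x) - sum_x p(x) H(sigma_x),
    evaluated on the cq state sum_x p(x)|x><x| (x) sigma_x."""
    avg = sum(p * s for p, s in zip(probs, states))
    return entropy(avg) - sum(p * entropy(s) for p, s in zip(probs, states))

s0 = np.diag([1.0, 0.0]).astype(complex)    # |0><0|
s1 = np.diag([0.0, 1.0]).astype(complex)    # |1><1|
plus = np.full((2, 2), 0.5, dtype=complex)  # |+><+|
print(holevo_chi([0.5, 0.5], [s0, s1]))     # orthogonal signals carry 1 full bit
print(holevo_chi([0.5, 0.5], [s0, plus]))   # non-orthogonal signals carry less
```

The second value is strictly between 0 and 1, illustrating why distinguishability of the signal states limits the classical rate.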
Otherwise, if we are given a density matrix ρ^{A′} and a channel N^{A′→B} which give rise to a joint state (1^A ⊗ N)(Φ_ρ), where |Φ_ρ⟩^{AA′} is any purification of ρ, we will often use the notation

I_c(A⟩B) = I_c(ρ, N) = H(N(ρ)) − H((1 ⊗ N)(Φ_ρ)).

It can be shown that this latter expression is independent of the particular purification |Φ_ρ⟩ that is chosen for ρ. The regularized expression for Q(N) can thus be written in either of two equivalent ways as

Q(N) = lim_{k→∞} (1/k) max_{AA′^k} I_c(A⟩C^k) = lim_{k→∞} (1/k) max_{ρ^{A′^k}} I_c(ρ, N^{⊗k}).

In the first expression, the maximization is over all bipartite pure states |Ψ⟩^{AA′^k}. I_c(A⟩C^k) is then evaluated for the resulting state N^{⊗k}(Ψ) on AC^k. We further remark that when I_c(A⟩BX) is evaluated on the cq state

ω^{XAB} = Σ_x p(x) |x⟩⟨x|^X ⊗ ω_x^{AB},

it can be considered as a conditional, or expected, coherent information, as

I_c(A⟩BX)_ω = Σ_x p(x) I_c(A⟩B)_{ω_x}.

A particular departure of this quantity from its classical analog, the conditional mutual information I(X; Y|Z), is that the latter is only equal to I(X; YZ) when X and Z are independent, whereas the former always allows either interpretation, provided the conditioning variable is classical.

Conditional coherent information arises in another context; suppose that N^{A′→XB} is a quantum instrument [7], meaning that N acts as N: τ ↦ Σ_x |x⟩⟨x|^X ⊗ N_x(τ). The completely positive maps {N_x} are the components of the instrument. While the components are generally trace-reducing maps, their sum N = Σ_x N_x is always trace preserving. It is not difficult to show that I_c(ρ, N) = I_c(A⟩BX), where the latter quantity is evaluated on the state

Σ_x p(x) |x⟩⟨x|^X ⊗ (1^A ⊗ N_x)(Φ_ρ^{AA′}).

For us, a quantum multiple access channel N^{A′B′→C} will have two senders and a single receiver. While many-sender generalizations of our theorems are readily obtainable, we focus on two senders for simplicity.
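The channel form I_c(ρ, N) = H(N(ρ)) − H((1 ⊗ N)(Φ_ρ)) defined above can be evaluated numerically from a Kraus representation of N. The sketch below is our illustration, not the paper's; as a check we use a phase-flip channel on the maximally mixed qubit, for which the exact answer is I_c = 1 − H(p) with H the binary entropy.

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def coherent_info(rho, kraus):
    """I_c(rho, N) = H(N(rho)) - H((1 (x) N)(Phi_rho)), with N given by
    Kraus operators and Phi_rho a purification built from the spectrum of rho."""
    d = rho.shape[0]
    w, v = np.linalg.eigh(rho)
    phi = np.zeros(d * d, dtype=complex)
    for i in range(d):
        # |Phi_rho> = sum_i sqrt(w_i) |i>^A (x) |v_i>^A'
        phi += np.sqrt(max(w[i], 0.0)) * np.kron(np.eye(d)[i], v[:, i])
    joint_in = np.outer(phi, phi.conj())
    out = sum(K @ rho @ K.conj().T for K in kraus)
    joint_out = sum(np.kron(np.eye(d), K) @ joint_in @ np.kron(np.eye(d), K).conj().T
                    for K in kraus)
    return entropy(out) - entropy(joint_out)

# Phase-flip channel with flip probability p, applied to the maximally mixed qubit.
p = 0.1
kraus = [np.sqrt(1 - p) * np.eye(2, dtype=complex),
         np.sqrt(p) * np.diag([1.0, -1.0]).astype(complex)]
ic = coherent_info(np.eye(2, dtype=complex) / 2, kraus)
print(ic)
```

Here the joint output is a mixture of the two Bell states Φ+ and Φ− with weights 1 − p and p, so H of the joint state is the binary entropy H(p) and I_c = 1 − H(p), as the test confirms.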
Winter [8] gave a single-letter characterization of the rates at which classical information can be sent over a multiple access channel with classical inputs and a quantum output as the convex hull of a union of pentagons, a form identical to that found by Ahlswede [9] and Liao [10] for the classical multiple access channel. For an arbitrary quantum multiple access channel, his results can easily be shown to yield a characterization of its classical capacity region in terms of a regularized union of pentagons. Below, we summarize results appearing in [4], [5], regarding the capabilities of quantum multiple access channels for sending classical and quantum information at the same time, while also supplementing that material with a new additive example.

III. STRONG SUBSPACE TRANSMISSION

Assume that Alice and Bob are connected to Charlie by n instances of a multiple access channel N^{A′B′→C}, where Alice and Bob respectively have control over the A′^n and B′^n inputs. We will describe a scenario in which Alice wishes to transmit classical information at a rate of R_a bits per channel use, while simultaneously transmitting quantum information at a rate of Q_a qubits per channel use. At the same time, Bob will be transmitting classical and quantum information at rates of R_b and Q_b respectively. Alice attempts to convey any one of 2^{nR_a} messages to Charlie, while Bob tries to send him one of 2^{nR_b} such messages. We will also assume that the senders are presented with systems Ã and B̃, where |Ã| = 2^{nQ_a} and |B̃| = 2^{nQ_b}. Each will be required to complete the following two-fold task. Firstly, they must individually transfer the quantum information embodied in Ã and B̃ to their respective inputs A′^n and B′^n of the channels, in such a way that it is recoverable by Charlie at the receiver. Secondly, they must simultaneously make Charlie aware of their independent messages M_a and M_b.
Alice and Bob will encode with maps from the cq systems holding their classical and quantum messages to their respective inputs of N^{⊗n}, which we denote E_a^{M_a Ã → A′^n} and E_b^{M_b B̃ → B′^n}. Charlie decodes with a quantum instrument D^{C^n → M̂_a M̂_b Â B̂}. The output systems are assumed to be of the same sizes and dimensions as their respective input systems. For the quantum systems, we assume that there are pre-agreed-upon unitary correspondences id_a^{Ã→Â} and id_b^{B̃→B̂} between the degrees of freedom in the quantum systems presented to Alice and Bob which embody the quantum information they are presented with and the target systems in Charlie's laboratory to which that information should be transferred. The goal for quantum communication will be to, in the strongest sense, simulate the actions of these corresponding identity channels. We similarly demand low error probability for each pair of classical messages.

Formally, (E_a, E_b, D) will be said to comprise an (R_a, R_b, Q_a, Q_b, n, ε) strong subspace transmission code for the channel N if, for all m_a ∈ 2^{nR_a}, m_b ∈ 2^{nR_b}, |Ψ_1⟩^{ÃA}, |Ψ_2⟩^{B̃B}, where A and B are purifying systems of arbitrary dimensions,

F(|m_a⟩^{M̂_a} |m_b⟩^{M̂_b} |Ψ_1⟩^{ÂA} |Ψ_2⟩^{B̂B}, Ω_{m_a m_b}) ≥ 1 − ε,

where

Ω_{m_a m_b}^{M̂_a M̂_b Â A B̂ B} = D ∘ N^{⊗n}( E_a(|m_a⟩⟨m_a|^{M_a} ⊗ Ψ_1^{ÃA}) ⊗ E_b(|m_b⟩⟨m_b|^{M_b} ⊗ Ψ_2^{B̃B}) ).

We will say that a rate vector (R_a, R_b, Q_a, Q_b) is achievable if there exists a sequence of (R_a, R_b, Q_a, Q_b, n, ε_n) strong subspace transmission codes with ε_n → 0. The simultaneous capacity region S(N) is then defined as the closure of the collection of achievable rates. Setting various rate pairs equal to zero uncovers six two-dimensional rate regions. Our first theorem characterizes the two shadows relevant to the situation where one user only sends classical information, while the other only sends quantum information. The next theorem describes the rates at which each sender can send quantum information.

IV.
CLASSICAL-QUANTUM CAPACITY REGION CQ(N)

Suppose that Alice only wishes to send classical information at a rate of R bits per channel use, while Bob will only send quantum mechanically at Q qubits per use of the channel. The rate pairs (R, Q) at which this is possible comprise a classical-quantum (cq) region CQ(N), consisting of rate vectors in S(N) of the form (R, 0, 0, Q). Our first theorem describes CQ(N) as a regularized union of rectangles.

Theorem 1: CQ(N) equals the closure of the union of pairs of nonnegative rates (R, Q) satisfying

R ≤ I(X; C^k)_ω / k
Q ≤ I_c(B⟩C^k X)_ω / k

for some k, some pure state ensemble {p_x, |φ_x⟩^{A′^k}} and some bipartite pure state |Ψ⟩^{BB′^k} giving rise to the state

ω^{XBC^k} = Σ_x p_x |x⟩⟨x|^X ⊗ N^{⊗k}(φ_x ⊗ Ψ).   (1)

Further, it is sufficient to consider ensembles for which the number of elements satisfies |X| ≤ max{|A′|, |C|}^{2k}.

In [4], [5], [6], we showed that this characterization of CQ can be single-letterized for a certain quantum erasure multiple access channel, over which Alice noiselessly sends a bit to Charlie, while Bob sends him a qubit which is erased whenever Alice sends 1. In this case, CQ is given by those cq rate pairs (R, Q) satisfying 0 ≤ R ≤ H(p) and 0 ≤ Q ≤ 1 − 2p for some 0 ≤ p ≤ 1. In Section VI, we demonstrate that a single-letter description of CQ is also obtained for a particular "collective qubit-flip channel."

A. On the proof of Theorem 1

In [4] we prove the coding theorem and converse for the simpler tasks of entanglement transmission (ET) and entanglement generation (EG) respectively, as each task is shown to yield the same achievable rates. For entanglement transmission, Bob is only required to transmit half of a maximally entangled state, while Alice is satisfied with obtaining a low average probability of error for her classical messages. Entanglement generation weakens this further, by only asking that Bob be able to create entanglement with Charlie, rather than preserve preexisting entanglement.
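For the erasure example just described, the single-letter boundary is easy to tabulate. The sketch below is ours (the grid over p is an arbitrary illustrative choice): it lists the corner points (R, Q) = (H(p), 1 − 2p), restricting to p ∈ [0, 1/2], the range on which the quantum rate bound 1 − 2p is nonnegative.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def cq_boundary(num=6):
    """Boundary points (R, Q) = (H(p), 1 - 2p) of CQ for the erasure MAC,
    for p in [0, 1/2] where Q is nonnegative."""
    return [(h2(p), 1.0 - 2.0 * p) for p in np.linspace(0.0, 0.5, num)]

for r, q in cq_boundary():
    print(f"R = {r:.3f} bits/use, Q = {q:.3f} qubits/use")
```

The extremes reproduce the intuition: at p = 0 Alice always sends 0 and Bob's qubit gets through (R = 0, Q = 1), while at p = 1/2 Alice's input carries a full bit but Bob's qubit is erased half the time (R = 1, Q = 0).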
These relaxed conditions for successful quantum communication are directly analogous to the average probability of error requirement in the classical theory. By recycling a negligible amount of shared common randomness which is generated using the channel, it is possible to strengthen these less powerful codes to meet the requirements of strong subspace transmission.

Let us briefly sketch the coding theorem, referring the reader to [4] for further details, as well as for the converse. Fixing a state ω^{XB′′C} of the form (1), we prove the achievability of the corner point

(R, Q) = (I(X; C) − δ, I_c(B′′⟩CX) − δ),

where δ > 0 is arbitrary. For a suitably long blocklength n, Alice can encode her classical information with a random HSW code [12], [13] for the channel N_a(τ^{A′}) = N(τ ⊗ Tr_{AC} ω), obtaining an arbitrarily low average probability of error. Simultaneously, Bob uses a random entanglement transmission code (E, D) [2] capable of transmitting a rate-Q maximally entangled state |Φ⟩^{B̃B} arbitrarily well over the instrument channel N^{B′→XC}, where

N(τ^{B′}) = Σ_x p(x) |x⟩⟨x|^X ⊗ N(φ_x ⊗ τ).

The randomness in the codes ensures that these channels are seen by each sender. To decode, Charlie first measures C^n, learning Alice's classical message and causing a negligible disturbance to the global state. He then uses Alice's message to simulate the instrument channel N so that he can apply the decoder D from Bob's quantum code. These decoding steps constitute the required decoding instrument, which can be shown to perform almost as well as the individual single-user codes, on average. The existence of a deterministic code performing at least as well is then inferred.

V. QUANTUM-QUANTUM CAPACITY REGION Q(N)

The situation in which each sender only attempts to convey quantum information to Charlie is described by the quantum-quantum (qq) rate region Q(N), which consists of rate vectors in S(N) of the form (0, 0, Q_a, Q_b).
Our second theorem gives a characterization of Q(N) as a regularized union of pentagons.

Theorem 2: Q(N) is the closure of the union of pairs of nonnegative rates (Q_a, Q_b) satisfying

    Q_a ≤ I_c(A⟩BC^k)_ω / k
    Q_b ≤ I_c(B⟩AC^k)_ω / k
    Q_a + Q_b ≤ I_c(AB⟩C^k)_ω / k

for some k and some bipartite pure states |Ψ_1⟩^{AA'^k}, |Ψ_2⟩^{BB'^k} giving rise to the state

    ω^{ABC^k} = N^{⊗k}(Ψ_1 ⊗ Ψ_2).    (2)

A. On the proof of Theorem 2

Let us briefly describe the proof of achievability for Theorem 2, referring the reader to [4] for the converse and other details. As with Theorem 1, we focus on the transmission of maximal entanglement. Fixing a joint state ω^{A''B''C} of the form (2) (with k = 1), we achieve the corner point (Q_a, Q_b) = (I_c(A''⟩C)_ω − δ, I_c(B''⟩A''C)_ω − δ) for any δ > 0, by constructing a decoder which uses Alice's quantum information as side information for decoding Bob's quantum information. Setting ρ_1^{A'} = Tr_A Ψ_1 and ρ_2^{B'} = Tr_B Ψ_2, define channels N_a^{A'→C} and N_b^{B'→A''C} by N_a(τ) = N(τ ⊗ ρ_2) and N_b(τ) = N(Ψ_1 ⊗ τ), observing that I_c(ρ_1, N_a) = I_c(A''⟩C) and I_c(ρ_2, N_b) = I_c(B''⟩A''C). For suitably large n, there are arbitrarily good random entanglement transmission codes, (E_a, D_a) for N_a and (E_b, D_b) for N_b, which transmit the respective halves of the rate-Q_a and rate-Q_b maximally entangled states |Φ_a⟩^{ÂA} and |Φ_b⟩^{B̂B}. Alice encodes with some deterministic member E'_a of the random ensemble while Bob randomly encodes with E_b, producing a random global state N^{⊗n}(E'_a(Φ_a) ⊗ E_b(Φ_b)) on ABC^n. Next, Charlie uses D_a and D_b to perform a sequence of operations which ultimately define his decoding operation D^{C^n → ÂB̂}. As the randomness in Bob's code ensures that the joint state on AC^n is approximately N_a^{⊗n} ∘ E'_a(Φ_a), Charlie can decode Alice's information first.
In order to simultaneously protect Bob's information, Charlie utilizes an isometric extension U'^{C^n → ÂF} of the deterministic decoder D'_a. Afterwards, he removes the system Â and replaces it with the Â part of a particular locally prepared state |ϕ⟩^{A''^n Â}. He then inverts the decoding with the inverse U^{-1} of an isometric extension of Alice's fully randomized decoder, leaving the remaining systems on BC^n A''^n approximately in the state N_b^{⊗n} ∘ E_b(Φ_b). Charlie then decodes Bob's state with D_b, yielding high-fidelity versions of the original states |Φ_a⟩ and |Φ_b⟩ on average. The protocol is then derandomized.

B. History of the result

In an earlier draft of [4], we gave a characterization of Q(N) as the closure of a regularized union of rectangles, defined by 0 ≤ Q_a ≤ I_c(A⟩C^k)/k and 0 ≤ Q_b ≤ I_c(B⟩C^k)/k. This solution was conjectured on the basis of a duality between classical Slepian-Wolf distributed source coding and classical multiple access channels (see e.g. [14]), as well as on a no-go theorem for distributed data compression of so-called irreducible pure state ensembles [15]. After the earlier preprint was available, Andreas Winter announced recent progress [16] with Michal Horodecki and Jonathan Oppenheim on the quantum Slepian-Wolf problem, offering a characterization identical in functional form to the classical one, while also supplying an interpretation of negative rates and evading the no-go theorem. Motivated by the earlier mentioned duality, he informed us that he could prove that the qq capacity region could also be characterized in direct analogy to the classical case. Subsequently, we found that we could modify our previous coding theorem to achieve the new region. The newer characterization behaves better under single-letterization, as the following "collective bit-flip channel" example illustrates.

VI. COLLECTIVE QUBIT-FLIP CHANNEL

Consider a channel into which both Alice and Bob input a single qubit.
With probability 1 − p, Charlie receives the qubits without error; otherwise, they are received after undergoing a 180° rotation about the x-axis of the Bloch sphere. The action of the channel can be written as

    N_p(ρ^{A'B'}) = (1 − p)ρ + p(σ_x ⊗ σ_x) ρ (σ_x ⊗ σ_x),

where σ_x is the Pauli spin-flip matrix. We first summarize the argument from [4] which shows that Q(N_p) is given by a single-letter formula. Consider the following state of the form (2):

    ω^{ABC} = (1 − p) Ψ_+^{AC_A} ⊗ Ψ_+^{BC_B} + p Ψ_-^{AC_A} ⊗ Ψ_-^{BC_B}.

Here, we define the Bell states |Ψ_±⟩ = (|00⟩ ± |11⟩)/√2, identifying C ≡ C_A C_B. By evaluating the single-letter rate bounds on ω, one sees that I_c(AB⟩C)_ω = 2 − H(p) and that I_c(A⟩BC)_ω = I_c(B⟩AC)_ω = 1. Since N_p is a generalized dephasing channel [17], its quantum capacity is additive and can be calculated to be 2 − H(p). Clearly this is an upper bound on the maximum sum rate over N_p. Observing that each bound is as large as it can be, we conclude that Q(N_p) is given by a pentagon of nonnegative rates (Q_a, Q_b) satisfying Q_a + Q_b ≤ 2 − H(p) and Q_a, Q_b ≤ 1. Note that since the capacity of a single-qubit bit-flip channel is 1 − H(p), we may interpret this example as illustrating that if Alice codes for such a channel, while Bob performs no coding whatsoever, Charlie can still correct errors to Bob's inputs. It is worth mentioning that computer calculations reveal that the older rectangle description of Q(N_p) is non-additive, indicating that the pentagon characterization is in fact more accurate than the rectangle one for this channel. In fact, CQ(N_p) is also single-letter and is given by the same formula as Q(N_p), replacing Q_a by R and Q_b by Q. There are two ways to see this.
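Before turning to those, the pentagon just derived is easy to tabulate numerically. The sketch below (our illustration, not code from the paper; function names are ours) evaluates the bounds Q_a, Q_b ≤ 1 and Q_a + Q_b ≤ 2 − H(p):

```python
import math

def H(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def qq_bounds(p):
    # Pentagon Q(N_p): Q_a <= 1, Q_b <= 1, Q_a + Q_b <= 2 - H(p).
    return 1.0, 1.0, 2.0 - H(p)

def in_Q(Qa, Qb, p):
    # Membership test for a rate pair in the pentagon.
    a_max, b_max, s_max = qq_bounds(p)
    return 0 <= Qa <= a_max and 0 <= Qb <= b_max and Qa + Qb <= s_max
```

At p = 0 the region is the full unit square (sum bound 2), while at p = 1/2 the sum rate collapses to 1, so (1, 1) is no longer achievable.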
For the first, observe that CQ(N_p) is the convex hull of the two rectangles corresponding to the states

    ω_1^{XBC} = (1/2) ( |0⟩⟨0|^X ⊗ N_p(φ_+^{A'} ⊗ Ψ_+^{BB'}) + |1⟩⟨1|^X ⊗ N_p(φ_-^{A'} ⊗ Ψ_+^{BB'}) )
    ω_2^{XBC} = (1/2) ( |0⟩⟨0|^X ⊗ N_p(φ_0^{A'} ⊗ Ψ_+^{BB'}) + |1⟩⟨1|^X ⊗ N_p(φ_1^{A'} ⊗ Ψ_+^{BB'}) )

for which |φ_0⟩ = |0⟩, |φ_1⟩ = |1⟩, and |φ_±⟩ = (|0⟩ ± |1⟩)/√2. Namely, (I(X; C), I_c(B⟩CX))_{ω_1} = (1, 1 − H(p)), while (I(X; C), I_c(B⟩CX))_{ω_2} = (1 − H(p), 1). That this is the capacity region follows for the same reasons as with Q(N_p); the rates are as large as they can be, and are thus saturated. For codes designed using ω_1, Alice's inputs are unaffected by the channel, while the effective channel from Bob to Charlie is one with quantum capacity equal to 1 − H(p). Codes may be designed for ω_2, in which Alice codes for a binary symmetric channel with parameter p, while Bob performs no coding whatsoever. Charlie first decodes Alice's classical information, thereby learning the error locations so that he can correct Bob's quantum information.

While the coding theorem for the region of Theorem 1 employs decodings which use classical side information for decoding quantum information, it is also possible to use the techniques from the proof of Theorem 2 to prove a coding theorem achieving (I(X; BC) − δ, I_c(B⟩C) − δ), using quantum side information to decode the classical information. This yields a new characterization of CQ as a regularized union of pentagons. Specifically, CQ(N) can be written as those pairs of nonnegative cq rates (R, Q) satisfying

    R ≤ I(X; BC^k)_ω / k
    Q ≤ I_c(B⟩C^k X)_ω / k
    R + Q ≤ [I(X; C^k)_ω + I_c(B⟩C^k X)_ω] / k = [I(X; BC^k)_ω + I_c(B⟩C^k)_ω] / k

for some k ≥ 1 and some state ω^{XBC^k} of the form (1). Evaluating these rate bounds (with k = 1) for the state ω_2 gives the second way of deriving the single-letter characterization of CQ(N_p) argued above, as the new corner point is (I(X; BC), I_c(B⟩C))_{ω_2} = (1, 1 − H(p)).
The corresponding code is one in which Alice performs no coding (even though her inputs are subject to noise), while Bob codes for the single-user qubit-flip channel. Charlie decodes Bob's quantum information first, which is then used to correct the errors in Alice's classical inputs. However, the new coding theorem yields no advantage when applied to ω_1, as (I(X; BC), I_c(B⟩C))_{ω_1} = (1 − H(p), 1) implies that the corresponding pentagon is actually a rectangle.

VII. A CHARACTERIZATION OF S(N)

Finally, we characterize the four-dimensional simultaneous capacity region S(N) which was defined in Section III.

Theorem 3: S(N) is the closure of the union of vectors of nonnegative rates (R_a, R_b, Q_a, Q_b) satisfying

    R_a ≤ I(X; C^k | Y)_ω / k
    R_b ≤ I(Y; C^k | X)_ω / k
    R_a + R_b ≤ I(XY; C^k)_ω / k
    Q_a ≤ I_c(A⟩C^k BXY)_ω / k
    Q_b ≤ I_c(B⟩C^k AXY)_ω / k
    Q_a + Q_b ≤ I_c(AB⟩C^k XY)_ω / k

for some integer k ≥ 1 and some bipartite pure state ensembles {p(x), |ψ_x⟩^{AA'^k}}, {p(y), |φ_y⟩^{BB'^k}} giving rise to

    ω^{XYABC^k} = Σ_{x,y} p(x) p(y) |x⟩⟨x|^X ⊗ |y⟩⟨y|^Y ⊗ ω_{xy},  where  ω_{xy}^{ABC^k} = N^{⊗k}(ψ_x^{AA'^k} ⊗ φ_y^{BB'^k}).

Furthermore, it suffices to consider ensembles for which |X| ≤ min{|A'|, |C|}^{2k} and |Y| ≤ min{|B'|, |C|}^{2k}.

This characterization of S(N) generalizes a number of existing results in the literature. Setting the quantum rates equal to zero yields a region which is the regularized optimization over input ensembles of the region given by Winter in [8] for the classical capacity of a classical-quantum multiple access channel. By setting both of either Alice's or Bob's rates equal to zero, the result of Devetak and Shor [17] on the simultaneous classical-quantum capacity region of a single-user channel follows. Two of the remaining three shadows are instances of our Theorem 1, while the final one gives Theorem 2. We remark that the pentagon characterization of CQ(N) does not follow as a corollary since, as we will see, all of the classical information is decoded first. Briefly, achievability of S(N) is obtained as follows. Using techniques introduced in [17], each sender "shapes" their quantum information into HSW codewords.
Decoding is accomplished by first decoding all of the classical information as with CQ(N), after which techniques from [17], as well as those used to achieve Q(N), are utilized.

REFERENCES

[1] I. Devetak, A. Winter, "Distilling common randomness from bipartite quantum states," quant-ph/0304196, 2003.
[2] I. Devetak, "The private classical information capacity and quantum information capacity of a quantum channel," quant-ph/0304127.
[3] B. Schumacher, M. A. Nielsen, "Quantum data processing and error correction," Phys. Rev. A, vol. 54, no. 4, p. 2629, 1996.
[4] J. Yard, I. Devetak, P. Hayden, "Capacity theorems for quantum multiple access channels - classical-quantum and quantum-quantum capacity regions," quant-ph/0501045.
[5] J. Yard, "Simultaneous classical-quantum capacities of quantum multiple access channels," Ph.D. thesis, Electrical Engineering Dept., Stanford University, quant-ph/0506050, March 2005.
[6] J. Yard, I. Devetak, P. Hayden, "Sending classical and quantum information over quantum multiple access channels," Proc. Ninth Annual Canadian Workshop on Inform. Theory, Montréal, Canada, 2005.
[7] E. B. Davies, J. T. Lewis, "An operational approach to quantum probability," Comm. Math. Phys., vol. 17, pp. 239-260, 1970.
[8] A. Winter, "The capacity of the quantum multiple access channel," IEEE Trans. Inform. Theory, vol. 47, pp. 3059-3065, November 2001.
[9] R. Ahlswede, "Multi-way communication channels," Second Intern. Sympos. on Inf. Theory, Thakadsor, 1971; Publ. House of the Hungarian Acad. of Sciences, pp. 23-52, 1973.
[10] Liao, "Multiple access channels," Ph.D. dissertation, Dept. of Electrical Engineering, University of Hawaii, 1972.
[11] C. Bennett, D. DiVincenzo, J. Smolin, "Capacities of quantum erasure channels," quant-ph/9701015.
[12] B. Schumacher, M. D. Westmoreland, "Sending classical information via noisy quantum channels," Phys. Rev. A, vol. 56, no. 1, p. 131.
[13] A. S. Holevo, "The capacity of the quantum channel with general input states," IEEE Trans. Inform. Theory, vol. 44, no. 1, p. 269.
[14] T. Cover, J. A. Thomas, Elements of Information Theory, John Wiley & Sons, Inc., 1991.
[15] C. Ahn, P. Doherty, P. Hayden, A. Winter, "On the distributed compression of quantum information," quant-ph/0403042.
[16] M. Horodecki, J. Oppenheim, A. Winter, "Partial quantum information," Nature, vol. 436, pp. 673-676, quant-ph/0505062.
[17] I. Devetak, P. Shor, "The capacity of a quantum channel for simultaneous transmission of classical and quantum information," quant-ph/0311131.
An Improved Bound for Security in an Identity Disclosure Problem

Debolina Ghatak, Bimal K. Roy

Abstract: Identity disclosure of an individual from released data is a matter of concern, especially if the individual belongs to a category with low frequency in the data-set. Nayak et al. (2016) discussed this problem vividly in a census report and suggested a method of obfuscation which ensures that the probability of correctly identifying a unit from the released data does not exceed ξ for some 1/3 < ξ < 1. However, we observe that for the above method the level of security can be extended under certain conditions. In this paper, we discuss some conditions under which one can achieve security for any 0 < ξ < 1.

doi: 10.5539/ijsp.v8n3p24
arXiv: 1807.09656
An Improved Bound for Security in an Identity Disclosure Problem

Debolina Ghatak, Bimal K. Roy

arXiv:1807.09656v1 [stat.ME], 13 Jul 2018

Introduction

Many agencies release data to motivate statistical research and industrial work. But often these data-sets carry some information which may be sensitive to the individual bearing it. Erasing the name or some identity number associated with an individual may not always be sufficient to hide the identity of the individual. For example, imagine a situation where a data-set of p variables corresponding to n individuals is released, and among these p variables there is a variable named "pin-code" (sometimes called zip-code). Now "pin-code" is not supposed to be a sensitive variable, but it may happen that the intruder, who is trying to identify some individual in the data-set, has an idea about where the individual lives and thus can guess his "pin-code". In this case, if there is no other individual in the data-set having the same "pin-code", he can directly guess which row in the data-set corresponds to the individual, and thus the identity is revealed. Hence, suppressing identity numbers or names is not always sufficient to prevent identity disclosure.
In case there are a few variables with low-frequency cells, it is usually easy for the intruder to identify the individual. Various articles, including [5], [4], [2], have discussed this problem, and various authors have proposed different risk measures to evaluate the security in the released data. However, here we follow the framework of Nayak et al. [2], where the intruder has knowledge of the variable category X(B) corresponding to his target unit B. If the variable X has k categories c_1, c_2, ..., c_k, then we assume without loss of generality that X(B) = c_1, and that the frequencies of the categories in the data-set are T_1, T_2, ..., T_k respectively. If T_1 = 1, i.e. only X(B) has category c_1, the intruder can guess the row of his target unit with certainty. If T_1 is small, the intruder knows that his target unit is definitely one of the T_1 many units, and then, taking other information into consideration, he may successfully identify the row of his target unit or make a correct guess. Thus, in this case, the variable information must be suppressed before releasing the data. One way to do that is to completely erase the variable, but that is not desirable to the statistician. The usual practice is to perturb the data in such a way that the new data can be treated like the original data in making statistical inferences. If {X_1, X_2, ..., X_n} is the original data-set and {Z_1, Z_2, ..., Z_n} is the perturbed data, then the transition matrix P = ((p_ij)) is given by

    p_ij = P[Z = c_j | X = c_i],  i, j = 1, 2, ..., k.    (1)

This matrix is not released and is unknown to the statistician. This method of obfuscation is known as the post-randomization method (PRAM).
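As a minimal illustration of PRAM (a sketch of ours, not code from the paper; the category labels and matrix values are invented), each record's category is resampled from the row of P indexed by its true category:

```python
import random

def pram(data, P, rng=random.Random(0)):
    """Post-randomize: replace category i by j with probability P[i][j].

    data: list of category indices in {0, ..., k-1}
    P: k x k row-stochastic transition matrix (list of lists)
    """
    k = len(P)
    return [rng.choices(range(k), weights=P[x])[0] for x in data]

# Toy example with k = 3 categories and an arbitrary row-stochastic matrix.
P = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
X = [0, 0, 1, 2, 2, 2]
Z = pram(X, P)
```

Only Z would be released; the agency keeps both X and P private.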
If we assume T = (T_1, T_2, ..., T_k) ~ Multinomial(Π_1, Π_2, ..., Π_k), then, after transformation of X to Z, if S = (S_1, S_2, ..., S_k) are the frequencies of the classes {c_1, c_2, ..., c_k} in the perturbed data, then S ~ Multinomial(Λ_1, Λ_2, ..., Λ_k), where Λ = PΠ (with Λ := (Λ_1, ..., Λ_k), Π := (Π_1, ..., Π_k)). If we want to treat Z as the original data, we must have Π = Λ = PΠ. But Π is generally unknown to the one who is masking the data. However, he can estimate Π from the original data by T/n, where n is the total sample size. If we want S/n to be an unbiased estimator of Π, we must have

    E[S | T] = T,  or equivalently,  T′P = T′.    (2)

Gouweleeuw et al. (1998) [6] defined a post-randomization method to be an invariant PRAM if P satisfies Equation (2). The error due to estimation after post-randomization was studied in the literature by various authors, including Nayak et al. [7]. One of the common techniques to achieve an invariant PRAM is to use an Inverse Frequency Post Randomization (IFPR) block-diagonal matrix, in which the entire set of categories is partitioned into a few groups and, within each group, categories are interchanged. If it is not desirable to change the category of some variable, it can be made to form its own block. Thus, if there are m groups, given by {c_1, c_2, ..., c_{k_1}}, {c_{k_1+1}, ..., c_{k_1+k_2}}, ..., {c_{k_{m-1}+1}, ..., c_{k_{m-1}+k_m}}, where k_1 + k_2 + ... + k_m = k, then p_ij > 0 if c_i and c_j fall into the same group and p_ij = 0 if they fall into different groups. Within each group, p_ij is given by

    p_ij = 1 − θ/T_i          if i = j,
           θ/((k′ − 1) T_i)   if i ≠ j,    (3)

where 0 < θ < 1 and k′ > 1 is the block size of the group that i and j fall into. However, the parameter θ of the model should be carefully chosen to ensure that the perturbed data are secure from the intruder, at least up to a certain extent. To measure the risk of disclosure, Nayak et al.
[2] suggested checking whether the probability of correctly identifying an individual, given any structure of T and any value of S_1, is bounded by some specified quantity 0 < ξ < 1. Moreover, they showed that there exists a θ⋆ with 0 < θ⋆ < 1 which gives the transition matrix P(θ⋆) = ((p⋆_ij))_{1 ≤ i ≤ k, 1 ≤ j ≤ k}, where p⋆_ij is chosen according to Equation (3) with θ = θ⋆ for each i, j = 1, 2, ..., k_1, and k_1 is the block size of the group to which c_1 belongs. Without loss of generality, we assume the block c_1 belongs to is the first block. When this matrix P(θ⋆) is used to post-randomize X,

    P[CM | S_1 = a, T = t] ≤ ξ  for all a ≥ 0 and all t,    (4)

for any 1/3 ≤ ξ < 1, where CM denotes "Correct Match". However, if we extend the search range of θ from 0 < θ < 1 to 0 < θ < T_1 and can find categories for the first block that all satisfy T_j ≥ T_1 for j ≠ 1, then the level of security can be extended to any 0 < ξ < 1. Note that, under this definition, there is no harm in the range of the probabilities, as they certainly lie between 0 and 1. However, the smaller the value of ξ, the larger the required block size. Therefore we can extend the security as far as the frequency distribution permits.

Our Approach

As mentioned earlier, our framework is similar to that of Nayak et al. [2]. From the intruder's point of view, we assume that, on getting access to the released data {Z_1, Z_2, ..., Z_n}, he checks the rows for which Z_i = c_1, i ∈ {1, 2, ..., n}. Let S_1 be the total number of units having class c_1. If S_1 = 0, the intruder stops searching for his target unit B in the data-set. If S_1 = a for some a > 0, he selects one unit randomly among these a individuals and concludes that to be his target unit B. Under this assumption, we discuss how to choose the parameter θ of the IFPR block-diagonal matrix (see Equation (3)), depending on T_1, so that the probability of correctly identifying unit B is less than some specified 0 < ξ < 1.
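The IFPR matrix of Equation (3) and the invariance condition (2) can be checked mechanically. In this sketch (ours; the frequencies and block structure are invented, and exact arithmetic via fractions avoids rounding), Σ_i T_i p_ij = T_j holds for every column j:

```python
from fractions import Fraction

def ifpr_matrix(T, blocks, theta):
    """Build the IFPR block-diagonal transition matrix of Equation (3).

    T: list of category frequencies T_i
    blocks: list of lists of category indices forming the groups
    theta: perturbation parameter (0 < theta < every block frequency)
    """
    k = len(T)
    P = [[Fraction(0)] * k for _ in range(k)]
    th = Fraction(theta)
    for block in blocks:
        kp = len(block)
        for i in block:
            for j in block:
                if i == j:
                    # Singleton blocks are kept unchanged (p_ii = 1).
                    P[i][j] = 1 - th / T[i] if kp > 1 else Fraction(1)
                else:
                    P[i][j] = th / ((kp - 1) * T[i])
    return P

T = [2, 205, 431, 106]
blocks = [[0, 1, 3], [2]]            # category index 2 forms its own block
P = ifpr_matrix(T, blocks, Fraction(1, 2))
# Invariance (2): sum_i T_i p_ij = T_j for every j, so E[S | T] = T.
assert all(sum(T[i] * P[i][j] for i in range(len(T))) == T[j] for j in range(len(T)))
```

Within a block of size k′ the column sum telescopes to T_j − θ + θ = T_j, which is exactly why this construction is an invariant PRAM.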
Our method is described in the following paragraph. Fix a 0 < ξ < 1. Note that if T_1 > 1/ξ, then there is no need for obfuscation: the intruder can choose one unit randomly and conclude it to be his target unit B, but since, in the original data, the probability of correctly identifying B is 1/T_1, for T_1 > 1/ξ this probability is already less than ξ. This is quite intuitive, since identification risk is a problem associated with low-frequency classes. If T_1 ≤ 1/ξ, then we find k_1 = K_1(ξ, T_1) classes (where the function K_1 is discussed in Section 3) such that for each of these classes {c_1, c_2, ..., c_{k_1}}, T_j ≥ T_1 for each j ∈ {1, 2, ..., k_1}. Such an event is usually feasible for moderate values of ξ, as T_1 usually takes small values. If such classes are available, we can achieve any desired level of security: for any fixed 0 < ξ < 1 there exists a corresponding θ⋆ such that, if the data are perturbed with the matrix P(θ⋆), Equation (4) holds. If, however, such classes are not available, we find the integer n⋆ such that 1/n⋆ ≤ ξ < 1/(n⋆ − 1). Since k_1 classes with T_j ≥ T_1 for each j ∈ {1, 2, ..., k_1} are not available, we set ξ_1 = 1/(n⋆ − 1) and try to find k_1^(1) = K_1(T_1, ξ_1) classes such that T_j ≥ T_1 for each j ∈ {1, 2, ..., k_1^(1)}. If we fail, we next try ξ_2 = 1/(n⋆ − 2), and so on, until we succeed for some ξ_l = 1/(n⋆ − l). Since for ξ_l = 1/(n⋆ − l) there exist k_1^(l) = K_1(T_1, ξ_l) classes with T_j ≥ T_1 for each j ∈ {1, 2, ..., k_1^(l)}, there is a θ⋆ such that, if the data are perturbed with P(θ⋆), Equation (4) is satisfied for any 1/(n⋆ − l) < ξ < 1. According to Nayak et al. [2], there is always a solution for ξ ≥ 1/3, which bounds the minimum value n⋆ can take; however, n⋆ can take higher values in many cases.

Model, Assumptions and Results

As discussed earlier, the goal of the paper is to find a method by which data can be perturbed while ensuring as much security as possible. Since security is an abstract term, we limit ourselves to ensuring that the measure given by Equation (4) holds for low values of ξ; the smaller the value of ξ, the better the security of the data. Let us denote by R_1(a, t) the probability of correctly identifying the individual from the released data given S_1 = a and the frequency distribution of X given by t := (t_1, t_2, ..., t_k).
In other words,

    R_1(a, t) = P[CM | S_1 = a, T = t],  a ≥ 0, t ∈ R^k.    (5)

If R_1(a, t) is bounded by ξ for any t, then note that

    R_1(a) = P[CM | S_1 = a]    (6)

is bounded by ξ for any a ≥ 0, which signifies that the probability of correctly identifying an individual is less than ξ no matter how small or large the frequency of category c_1 is in the released data. R_1(a, t) is used instead of R_1(a) because the latter probability is hard to calculate when t is not known. Note that CM stands for "Correct Match" in Equations (5) and (6). Recall that if we use an IFPR block-diagonal matrix to perturb X, the category c_1 may get changed to one of {c_1, c_2, ..., c_{k_1}}, k_1 ≥ 2, with positive probability. Let us denote α_i = p_{i1} and β_i = α_i/(1 − α_i) for i ∈ {1, 2, ..., k_1}. Observe that

    R_1(a, t) = P[CM | S_1 = a, Z(B) = c_1, T = t] P[Z(B) = c_1 | S_1 = a, T = t]
              + P[CM | S_1 = a, Z(B) ≠ c_1, T = t] P[Z(B) ≠ c_1 | S_1 = a, T = t].

By our assumption, since the intruder searches for his target unit B only among the units with released category c_1, P[CM | S_1 = a, Z(B) ≠ c_1, T = t] = 0; and since he chooses one of the a such units at random, P[CM | S_1 = a, Z(B) = c_1, T = t] = 1/a for any t. Thus,

    R_1(a, t) = (1/a) P[Z(B) = c_1 | S_1 = a, T = t].    (7)

Again, we have

    P[Z(B) = c_1, S_1 = a | T = t] = α_1 Σ′ ∏_{i=1}^{k_1} C(T_i*, a_i) α_i^{a_i} (1 − α_i)^{T_i* − a_i}
                                   = α_1 [∏_{i=1}^{k_1} (1 − α_i)^{T_i*}] Σ_{a−1},    (8)

where T_1* = T_1 − 1, T_i* = T_i for i ≥ 2, C(m, r) denotes the binomial coefficient, and the sum Σ′ is over all integer-valued a_1, a_2, ..., a_{k_1} with 0 ≤ a_i ≤ T_i* and Σ_i a_i = a − 1; we write Σ_{a−1} = Σ′ ∏_{i=1}^{k_1} C(T_i*, a_i) β_i^{a_i}. Similarly,

    P[Z(B) ≠ c_1, S_1 = a | T = t] = (1 − α_1) [∏_{i=1}^{k_1} (1 − α_i)^{T_i*}] Σ_a.    (9)

Equations (8) and (9) imply that

    P[S_1 = a | T = t] = [∏_{i=1}^{k_1} (1 − α_i)^{T_i*}] (α_1 Σ_{a−1} + (1 − α_1) Σ_a),

and since P[Z(B) = c_1 | S_1 = a, T = t] = P[Z(B) = c_1, S_1 = a | T = t] / P[S_1 = a | T = t], Equation (7) finally gives

    R_1(a, t) = (1/a) · α_1 Σ_{a−1} / (α_1 Σ_{a−1} + (1 − α_1) Σ_a) = (1/a) [1 + (1/β_1)(Σ_a / Σ_{a−1})]^{−1}.    (10)

Nayak et al. [2] observed that, although it seems intuitive that R_1(1, t) ≥ R_1(a, t) for any t and a > 1, there are certain cases in which this does not hold. However, they proved that if α_1 ≥ α_j, i.e., β_1 ≥ β_j, for all j = 1, 2, ..., k_1, then R_1(1, t) ≥ R_1(2, t) for any t.
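Since Σ_a is the coefficient of x^a in ∏_i (1 + β_i x)^{T_i*}, Equation (10) can be evaluated exactly by polynomial multiplication. The following sketch (ours, not the paper's code) does so for the IFPR entries of Equation (3):

```python
from fractions import Fraction
from math import comb

def sigma_coeffs(T_star, beta, a_max):
    """Sigma_a is the coefficient of x^a in prod_i (1 + beta_i x)^{T_i*};
    multiply the binomial expansions, truncating at degree a_max."""
    poly = [Fraction(1)]
    for Ti, bi in zip(T_star, beta):
        factor = [comb(Ti, r) * bi**r for r in range(min(Ti, a_max) + 1)]
        new = [Fraction(0)] * min(len(poly) + len(factor) - 1, a_max + 1)
        for i, c in enumerate(poly):
            for j, d in enumerate(factor):
                if i + j <= a_max:
                    new[i + j] += c * d
        poly = new
    return poly  # poly[a] = Sigma_a

def R1(a, T, theta, k1):
    """Exact R_1(a, t) of Equation (10) for the IFPR entries of Equation (3).

    T lists the frequencies of the k1 classes in c_1's block, T[0] = T_1.
    """
    th = Fraction(theta)
    alpha = [1 - th / T[0]] + [th / ((k1 - 1) * T[i]) for i in range(1, k1)]
    beta = [x / (1 - x) for x in alpha]
    T_star = [T[0] - 1] + list(T[1:])
    S = sigma_coeffs(T_star, beta, a)
    Sm1, Sa = S[a - 1], (S[a] if a < len(S) else Fraction(0))
    return Fraction(1, a) * beta[0] * Sm1 / (beta[0] * Sm1 + Sa)
```

For example, with block frequencies (2, 5, 7) and θ = 1/2, one can check numerically that R_1(2, t) ≤ R_1(1, t), in line with Theorem 3.1, since β_1 is the largest odds here.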
Intuitively, if β_1 is the highest, i.e., the odds that a unit labelled c_1 in Z truly has category c_1 dominate the corresponding odds for every other category, then the risk of disclosure should be maximal at a = 1. We checked that this is indeed true, which leads us to our first result, stated in the following theorem; the proof is given in the Appendix.

Theorem 3.1. If α_1 ≥ α_j, i.e., β_1 ≥ β_j, for any j = 1, 2, ..., k_1, then R_1(1, t) ≥ R_1(a, t) for any t and a > 1, where R_1(a, t) is given by Equation (10).

Assuming Theorem 3.1 holds, proving Equation (4) is equivalent to proving that R_1(1, t) ≤ ξ for any t. For this condition to hold, we must carefully choose the parameter θ in (3). Due to Nayak et al. [2], we have

    R_1(1, T) = [ T_1 + (θ/(T_1 − θ)) Σ_{i=2}^{k_1} θ T_i / ((k_1 − 1) T_i − θ) ]^{−1}
              = (T_1 − θ) / [ T_1 (T_1 − θ) + θ² Σ_{i=2}^{k_1} T_i / ((k_1 − 1) T_i − θ) ]
              ≤ (T_1 − θ) / [ T_1 (T_1 − θ) + θ² ] =: ψ(T_1, θ).    (11)

To proceed further we also need the following lemma, the proof of which is deferred to the Appendix.

Lemma 3.2. For any fixed 0 < ξ < 1, there exists a θ⋆ ∈ (0, T_1) such that ψ(T_1, θ⋆) ≤ ξ.

For Theorem 3.1 to hold for an IFPR block-diagonal matrix, we must have (T_1 − θ)/θ ≥ θ/((k_1 − 1)T_j − θ) for j = 2, ..., k_1, which leads to the condition θ ≤ T_1 / (1 + T_1/((k_1 − 1)T_j)), i.e., k_1 − 1 ≥ (θ/(T_1 − θ)) (T_1/T_j). Note that if k_1 − 1 ≥ θ/(T_1 − θ) and T_1/T_j ≤ 1, then k_1 − 1 ≥ (θ/(T_1 − θ))(T_1/T_j). Hence, since the chosen classes satisfy T_j ≥ T_1, it is enough to require k_1 ≥ K(θ, T_1) := 1 + θ/(T_1 − θ) = T_1/(T_1 − θ) for Theorem 3.1 to hold. Again, θ is chosen by solving ψ(T_1, θ) = ξ. Thus, for fixed ξ and T_1 we have a θ and a corresponding K_1(ξ, T_1), the smallest integer not less than K(θ, T_1); K_1(ξ, T_1) is the minimum number of categories required to form the block containing c_1. For some possible choices of ξ and some possible values of T_1, the value of K_1(ξ, T_1) is calculated and given in Table 1. While choosing the block size, one must note that the block size k_1 must be at least K_1(ξ, T_1) to ensure Equation (4).
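In practice θ⋆ can be located by bisection on the piecewise function h(θ) used in the proof of Lemma 3.2, and the minimum block size is then the smallest integer not less than T_1/(T_1 − θ⋆). A sketch (ours) that reproduces a few entries of Table 1:

```python
import math

def psi(T1, theta):
    # psi(T_1, theta) from Equation (11).
    return (T1 - theta) / (T1 * (T1 - theta) + theta ** 2)

def h(T1, theta):
    # Piecewise envelope used in the proof of Lemma 3.2; continuous and
    # strictly decreasing on (0, T1), with h -> 1 at 0 and h -> 0 at T1.
    return psi(1, theta) if theta < T1 / (T1 + 1) else psi(T1, theta)

def theta_star(T1, xi, iters=200):
    # Bisection for the unique root of h(theta) = xi on (0, T1).
    lo, hi = 0.0, float(T1)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(T1, mid) > xi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def block_size(T1, xi):
    # Smallest integer k1 with k1 >= K(theta*, T1) = T1 / (T1 - theta*).
    return math.ceil(T1 / (T1 - theta_star(T1, xi)))
```

For T_1 = 2 and ξ = 0.1 this gives θ⋆ = 4√2 − 4 ≈ 1.656854 and a block size of 6, matching Table 1 and the simulation in the next section.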
Simulation Results

To illustrate the process, we simulate a sample of size n = 2000 from k = 8 categories such that the probability of falling into each category is given by the vector Π = (0.001, 0.1, 0.2, 0.05, 0.12, 0.13, 0.301, 0.098). The sample has the frequency distribution given in Table 2. Two units in the data-set have Category 1, one of which is unit B = 780. Since T_1 = 2, the probability of a Correct Match from the true data is 0.5, which is very high. We want this probability to be lower, say below ξ = 0.1. So we transform the data to Z using the invariant PRAM method with a transition matrix P. To choose an ideal P we apply the procedure of this paper. From Table 1, the required block size is 6, so we apply the transition to the k_1 = 6 categories with the lowest frequencies of occurrence and do not alter the remaining 2 categories. Solving h(θ) = ξ gives θ⋆ = 1.656854, and the transition matrix P = ((p_ij)) is then obtained from Equation (3) with θ = θ⋆ and k′ = 6 for the six low-frequency categories. Using this transition matrix we ran 1000 simulations to get 1000 different Z's. The mean squared estimation error for each category is E = (4.9350e−07, 7.6125e−07, 0.0000e+00, 7.4300e−07, 8.8550e−07, 7.8375e−07, 0.0000e+00, 8.5550e−07), which is quite low, and the average probability of a correct match over the 1000 simulations is 0.07639286 < 0.1. The process thus seems to work well for simulated data.

Conclusion

The method works well in most practical cases because, since we want to obfuscate categories with low frequency, there will in general be a sufficient number of categories with frequencies higher than theirs; accordingly, the security level can be increased. However, the greatest drawback of this method of obfuscation is that we have assumed the strategy of the intruder, i.e., that he selects one of the units with the desired categorical value at random after looking at the obfuscated data.
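Returning to the simulation above, it can be reproduced along the following lines (a sketch with our own seed and helper names; as in the paper, the six lowest-frequency categories form the perturbation block and θ⋆ = 1.656854):

```python
import random
from collections import Counter

# Frequencies from Table 2; categories indexed 0..7 here.
freqs = [2, 205, 431, 106, 230, 221, 611, 194]
X = [c for c, f in enumerate(freqs) for _ in range(f)]
T = Counter(X)

# The six lowest-frequency categories form the perturbation block;
# theta solves h(theta) = 0.1 for T_1 = 2.
block = set(sorted(range(8), key=lambda c: T[c])[:6])
theta, k1 = 1.656854, 6

def p(i, j):
    # Entries of Equation (3) inside the block; identity outside it.
    if i in block and j in block:
        return 1 - theta / T[i] if i == j else theta / ((k1 - 1) * T[i])
    return 1.0 if i == j else 0.0

P = [[p(i, j) for j in range(8)] for i in range(8)]

rng = random.Random(7)
Z = [rng.choices(range(8), weights=P[x])[0] for x in X]
S = Counter(Z)
```

The invariance condition Σ_i T_i p_ij = T_j guarantees E[S | T] = T, so S/n remains an unbiased estimator of Π after perturbation.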
But this is not expected to happen, since in most cases there will be many regressive variables associated, and the selection will, in general, not be random. This problem was also discussed in [4]. However, if the model assumptions hold true, the discussed method is successful in giving better security.

Appendix

Proof of Theorem 3.1. We prove this result by a two-dimensional induction procedure. First, we show that the statement is true for k_1 = 2 and all a ∈ N; then we show that if the statement is true for k_1 = k_1^(0), then it is true for k_1 = k_1^(0) + 1 for all a.

Case k_1 = 2: Since Σ̃_1 = Σ_i T_i* β_i and

    Σ̃_a = Σ_{s=0}^{a} C(a, s) Σ_{i_1 ≠ i_2} (T_{i_1}*)_s (T_{i_2}*)_{a−s} β_{i_1}^s β_{i_2}^{a−s},

where (x)_r = x(x − 1)···(x − r + 1) denotes the falling factorial and Σ̃_a = a! Σ_a, writing Σ̃_{a+1} similarly, we note that there are a + 2 terms in the expansion of Σ̃_{a+1} − Σ̃_1 Σ̃_a + a β_1 Σ̃_a.
First term = a+1 0 k 1 i 2 =1 T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a)β a+1 i 2 − a 0 k 1 i 2 =1 T ⋆ i 2 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a + 1)β a+1 i 2 +a a 0 k 1 i 2 =1 T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a + 1)β a i 2 β 1 = a k 1 i 2 =1 T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a + 1)β a i 2 (β 1 − β i 2 ) For s = 1, 2, · · · a, (s + 1) th term = a+1 s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a + 1 − s + 1)β s i 1 β a+1−s i 2 − a s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a+1−s i 2 − a s−1 i 1 =i 2 T ⋆ i 1 2 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 2)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s)β s i 1 β a+1−s i 2 +aβ1 a s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a−s i 2 = a s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)(−a − s)β s i 1 β a+1−s i 2 − a s−1 (s − 1) i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 2)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s)β s i 1 β a+1−s i 2 +aβ1 a s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a−s i 2 [Using Pascal's rule a+1 s = a s + a s−1 ] = a a s i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a−s i 2 (β1 − βi 2 ) + a s sβ1 i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a−s i 2 − a s−1 (s − 1) i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 2)T ⋆ i 2 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s)β s i 1 β a+1−s i 2 ≥ a s sβ1 i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s + 1)T ⋆ i 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − s + 1)β s i 1 β a−s i 2 − a s−1 (s − 1) i 1 =i 2 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − s − 1 + 1)T ⋆ i 2 2 (T ⋆ i 2 − 1) · · · (T ⋆ i 2 − a − (s − 1) + 1)β s i 
1 β a−(s−1) i 2 [Since, β1 ≥ βi∀i] In the last expression, let us denote the first term by T erm(s, β1) and the second term by T erm(s − 1, β). Note that since β 1 ≥ β i ∀i T erm(s, β 1 ) − T erm(s, β) ≥ 0. (a + 2) th term = a + 1 a + 1 k 1 i 1 =1 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − a)β a+1 i 1 − a a k 1 i 1 =1 T ⋆ i 1 2 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − a + 1)β a+1 i 1 Thus, it can be clearly seen that, Σ a+1 − Σ 1Σa + β 1Σa ≥ T erm(1, β 1 ) + T erm(2, β 1 ) − T erm(1, β) + T erm(3, β 1 ) − T erm(2, β) + · · · + T erm(a, β 1 ) − T erm(a − 1, β) + ( a+1 a+1 k 1 i 1 =1 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − a)β a+1 i 1 ) − T erm(a, β) ≥ 0 Hence, (12) is true for k 1 = 2 for any a. Now, let it be true for some k 1 = k 1 0 , k 1 0 ∈ {2, 3, . . . }. We will show then that (12) is true for k 1 = k 1 0 + 1. Case: k 1 = k 1 0 + 1: The general expression forΣ a can be given by the following expression. Σ a = a 1 +a 2 +···+a k 1 =a Σ a+1 − ΣΣ a + β 1Σa = a 1 k 1 i 1 =1 T ⋆ i 1 β i 1Σ(a−1,k1 0 ) (β 1 − β i 1 ) + a 2 k 1 i 1 =1 T ⋆ i 1 (T ⋆ i 1 − 1)β 2 i 1Σ (a−2,k 1 0 ) (β 1 − β i 1 ) + · · · + a a k 1 i 1 =1 T ⋆ i 1 (T ⋆ i 1 − 1) · · · (T ⋆ i 1 − a + 1)β a i 1 (β 1 − β i 1 ) ≥ 0 [ Since β 1 ≥ β i ∀i] Thus the statement is true for k 1 = k 1 0 + 1 if true for k 1 0 for any a ≥ 1. Thus, we see (12) always holds and hence the proof. Proof to Lemma 3.2 For T 1 ≥ 2, ψ(1, θ) = ψ(T 1 , θ) iff 1−θ 1−θ+θ 2 = T 1 −θ T 1 (T 1 −θ)+θ 2 , i.e., θ = T 1 T 1 +1 . Consider, h(θ) =        ψ(1, θ) , if θ < T 1 T 1 +1 ψ(T 1 , θ) , if θ ≥ T 1 T 1 +1 Note that, h(θ) is continuous and strictly decreasing in θ ∈ (0, 1) with h(0) = 1, h(T 1 ) = 0. By Mean Value Theorem, there must exist a θ ⋆ ∈ (0, T 1 ) such that h(θ ⋆ ) = ξ for 0 < ξ < 1. then p ij > 0 if c j and c i fall into the same group and p ij = 0 if c j and c i fall into different groups. 
Within each group, p ij is given by, R 1 1(a, t) = P [CM | S 1 = a, Z (B) = c 1 , T = t]P [Z (B) = c 1 | S 1 = a, T = t] +P [CM | S 1 = a, Z (B) = c 1 , T = t]P [Z (B) = c 1 | S 1 = a, T = t]. By our assumption, since the intruder searches his target unit B among the ones with category c 1 , P [CM | S 1 = a, Z (B) = c 1 , T = t] = 0. Again, since, the intruder is assumed to choose randomly one unit among a units to be B, P [CM | S 1 = a, Z (B) = c 1 , T = t] = 1 a for any t. Thus, Table 1 : 1Showing minimum block size required for some possible choices of security level ξ and some possible values of class frequency T 1❍ ❍ ❍ ❍ ❍ ❍ T 1 ξ 0.1 0.125 0.15 0.175 0.2 0.25 0.3 1 11 9 8 7 6 5 5 2 6 5 5 4 4 3 3 3 5 4 3 3 3 2 2 4 4 3 3 2 2 2 2 5 3 3 2 2 2 2 2 6 3 2 2 2 2 2 2 7 2 2 2 2 2 2 2 8 2 2 2 2 2 2 2 9 2 2 2 2 2 2 2 10 2 2 2 2 2 2 2 Table 2 : 2Table showing frequencies of Categories for True Data from Simulated data-setCategory T 1 2 2 205 3 431 4 106 5 230 6 221 7 611 8 194 = K 1 (T 1 , ξ 1 ) classes such that T j ≥ T 1 for each j ∈ {1, 2, · · · , k 1 1 }. If we fail, we next try for ξ 2 = 1 n ⋆ −2 and so on until we get a success for some ξ l = 1 n ⋆ −l . Since for ξ = 1 n ⋆ −l , there exists k l 1 = K 1 (T 1 , ξ l ) classes such that T j ≥ T 1 for each j ∈ {1, 2, · · · , k l 1 }, and a θ ⋆ , such that if the data is perturbed with P (θ ⋆ ), then Equation(4)is satisfied for AppendixProof to Theorem 3.1To prove the result, we need to show R 1 (a + 1,whereΣ a = a!Σ a (Σ a as defined in Equation(8)and(9)). Thus, we will need to check if Fuller Masking Procedures for Microdata Disclosure Limitation Journal of Official Statistics. W A , W.A. Fuller Masking Procedures for Microdata Disclosure Limitation Journal of Of- ficial Statistics 1993 pp. 383-406 You Measuring Identification Risk in Microdata Release and Its Control by Post-Randomization. T K Nayak, C Zhang, J , U.S. Census Bureau Washington DC 20233Center for Disclosure Avoidance ResearchT. K. Nayak C. Zhang and J. 
You Measuring Identification Risk in Microdata Re- lease and Its Control by Post-Randomization , 2016, Center for Disclosure Avoidance Research U.S. Census Bureau Washington DC 20233 Adeshiyan C. Zhang A Concise Theory of Randomized Response Techniques for Privacy and Confidentiality Protection Handbook of Statistics Volume. T K Nayak, S A , 10.1016/bs.host.2016.01.01534Pages 273-286 DOIT. K. Nayak S. A. Adeshiyan C. Zhang A Concise Theory of Randomized Response Techniques for Privacy and Confidentiality Protection Handbook of Statistics Volume 34, 2016, Pages 273-286 DOI:https://doi.org/10.1016/bs.host.2016.01.015 Montagnon Data Disclosure Risk Evaluation. S Trabelsi, V Salzgeber, G Bezzi, 10.1109/CRISIS.2009.5411979S. Trabelsi V. Salzgeber M Bezzi G. Montagnon Data Disclosure Risk Evaluation, 2009, IEEE Xplore DOI: 10.1109/CRISIS.2009.5411979 Pannekoek Disclosure Control of Microdata. J G Bethlehem, W J Keller, J , Journal of American Statistical Association. J. G. Bethlehem W. J. Keller J. Pannekoek Disclosure Control of Microdata, 1990, Journal of American Statistical Association J M Gouweleeuw, P Kooiman, P P De Wolf, Post Randomisation for Statistical Disclosure Control: Theory and Implementation. J. M. Gouweleeuw P. Kooiman P.P. de Wolf Post Randomisation for Statistical Dis- closure Control: Theory and Implementation, 1998, Journal Of Official Statistics Adeshiyan On invariant Post Randomization for Statistical Dislosure Control. T K Nayak, S A , International Statistical Review. T. K. Nayak S. A. Adeshiyan On invariant Post Randomization for Statistical Dislo- sure Control, 2015, International Statistical Review
[]
[ "ENTROPY OF MEROMORPHIC MAPS ACTING ON ANALYTIC SETS", "ENTROPY OF MEROMORPHIC MAPS ACTING ON ANALYTIC SETS" ]
[ "Henry De Thélin ", "Gabriel Vigny " ]
[]
[]
Let f : X ⇢ X be a dominating meromorphic map on a compact Kähler manifold X of dimension k. We extend the notion of topological entropy h^l_top(f) to the action of f on (local) analytic sets of dimension 0 ≤ l ≤ k. For an ergodic probability measure ν, we similarly extend the notion of measure-theoretic entropy h^l_ν(f). Under mild hypotheses, we compute h^l_top(f) in terms of the dynamical degrees of f. In the particular case of endomorphisms of P^2 of degree d, we show that h^1_top(f) = log d for a large class of maps, but we give examples where h^1_top(f) ≠ log d.
10.1512/iumj.2021.70.8290
[ "https://arxiv.org/pdf/1807.06483v1.pdf" ]
119,137,886
1807.06483
75cd92d4223a4b37878eb49903a21d9db95e38df
ENTROPY OF MEROMORPHIC MAPS ACTING ON ANALYTIC SETS

Henry De Thélin, Gabriel Vigny

17 Jul 2018, arXiv:1807.06483v1 [math.CV]

Keywords: entropy of rational maps, analytic sets, dynamical degrees. Mathematics Subject Classification (2010): 37B40, 37F10, 32Bxx.

Abstract. Let f : X ⇢ X be a dominating meromorphic map on a compact Kähler manifold X of dimension k. We extend the notion of topological entropy h^l_top(f) to the action of f on (local) analytic sets of dimension 0 ≤ l ≤ k. For an ergodic probability measure ν, we similarly extend the notion of measure-theoretic entropy h^l_ν(f). Under mild hypotheses, we compute h^l_top(f) in terms of the dynamical degrees of f. In the particular case of endomorphisms of P^2 of degree d, we show that h^1_top(f) = log d for a large class of maps, but we give examples where h^1_top(f) ≠ log d.

1. Introduction

Consider a dynamical system f : X ⇢ X where f is a dominating meromorphic map on a compact Kähler manifold X of dimension k endowed with a Kähler form ω. A central question in the study of such a dynamical system is to compute the topological entropy h_top(f) of f and to construct a measure of maximal entropy. The quantity h_top(f) is related to the so-called dynamical degrees (d_l(f))_{0≤l≤k} of f. They are defined by ([RS, DS1])

  d_l(f) = lim_{n→∞} ( ∫_X (f^n)^*(ω^l) ∧ ω^{k−l} )^{1/n},

and the l-th degree d_l(f) measures the spectral radius of the action of the pull-back operator f^* on the cohomology group H^{l,l}(X). It can be shown that the sequence of degrees is increasing up to some rank l and then decreasing (see [Gro1]). By [Gro2, DS1, DS3], we always have h_top(f) ≤ max_{0≤s≤k} log d_s. In order to prove the reverse inequality, the strategy is to construct a measure of maximal entropy max_{0≤s≤k} log d_s. This has been done in numerous cases (e.g.
Hénon maps [BS] or holomorphic endomorphisms [FS]), and we gave in [DTV] a very general criterion under which one can construct a measure µ of measure-theoretic entropy h_µ(f) = max_{0≤s≤k} log d_s.

On the other hand, f naturally acts on analytic sets of dimension l ≤ k (at least outside the indeterminacy set), the case l = 0 being the classical action on points z ∈ X. The purpose of this article is to define natural notions of topological entropy h^l_top(f) and measure-theoretic entropy h^l_ν(f) (for an ergodic invariant probability measure ν) that extend the classical ones, and then to compute those entropies. Though such computations will again be in terms of dynamical degrees (see Theorems 1.1 and 1.2), we shall show in the particular case of endomorphisms of P^2 of degree d that h^1_top(f) = log d (= log d_1) for a large class of maps, but we give examples where h^1_top(f) < log d (see Theorem 1.3). This makes the entropies of meromorphic maps acting on analytic sets richer than the classical notion. Observe that, in the general setting of compact Kähler manifolds, there are a priori no global analytic sets of positive dimension. This is why, as we will see right below, the point of view we adopt here to define the entropies is the growth rate, in a very strong sense, of local analytic sets. (The second author's research is partially supported by the ANR grant Fatou ANR-17-CE40-0002-01.)

Denote by I the indeterminacy set of f. For δ > 0 and n ∈ N, we define:

  X^{δ,n}_l := { W ⊂ X | ∃ x ∈ W such that W ∩ B(x, e^{−nδ}) is analytic of exact dimension l in B(x, e^{−nδ}), Vol_l(W ∩ B(x, e^{−nδ})) ≤ 1, and f^k(W) ⊂ X\I for all k ≤ n − 1 }.

In the above, Vol_l denotes the l-dimensional volume. For A, B ⊂ X, we denote dist(A, B) := inf{ d(x, y), x ∈ A, y ∈ B }. Beware that it is not an actual distance: for example, when X = P^k and l = k − 1, two analytic hypersurfaces A and B in P^k necessarily intersect by Bézout's theorem.

Definition.
- For n ∈ N, we say that a set E ⊂ X^{δ,n}_l is (n, δ)-separated if for all W ≠ W′ ∈ E there exists i ∈ {0, . . . , n − 1} such that dist(f^i(W), f^i(W′)) ≥ δ.

Definition. - For l ≤ k, we define h^l_top(f), the l-topological entropy of f, as the quantity:

  h^l_top(f) := lim_{δ→0} lim_{n→∞} (1/n) log ( max{ #E, E ⊂ X^{δ,n}_l is (n, δ)-separated } ).

Remark. -
1. When l = 0, h^0_top(f) = h_top(f) is the classical topological entropy of f, since points whose forward orbit stays in X\I belong to X^{δ,n}_0 for all n. By compactness, h^k_top(f) = 0 and, using a slicing argument, one has that l ↦ h^l_top(f) is decreasing.
2. We could also have defined a notion of entropy using (n, δ)-separated sets in X^{δ′,n}_l for δ ≠ δ′, and then let δ → 0 and δ′ → 0 in an appropriate order; here, we choose to take δ = δ′ in order to simplify the definitions.
3. Finally, observe that our definitions (see also the notion of measure-theoretic entropy below) make sense for a C^r map f on a real C^r manifold M acting on local C^r manifolds.

Our first result is:

Theorem 1.1. - Let f be a dominating meromorphic map of a compact Kähler manifold X. Then for any 0 ≤ l ≤ k, we have h^l_top(f) ≤ log max_{j≤k−l} d_j.

We now define a measure-theoretic entropy for analytic sets associated to an (ergodic) invariant measure ν. Let Λ be a set of positive measure for ν. We say that a set E ⊂ X^{δ,n}_l is (n, δ, Λ)-separated if it is (n, δ)-separated and if furthermore W ∩ Λ ≠ ∅ for all W ∈ E.

Definition. - For l ≤ k, ν an invariant probability measure and κ > 0, we consider the quantity:

  h^l_ν(f, κ) := inf_{Λ, ν(Λ)>1−κ} lim_{δ→0} lim_{n→∞} (1/n) log ( max{ #E, E ⊂ X^{δ,n}_l is (n, δ, Λ)-separated } ).

We define h^l_ν(f), the l-measure-theoretic entropy of f, as the quantity h^l_ν(f) := sup_{κ>0} h^l_ν(f, κ).

We show in Proposition 2.1 that h^0_ν(f) is the usual measure-theoretic entropy when ν is ergodic. As above, l ↦ h^l_ν(f) is decreasing and h^k_ν(f) = 0.
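For the classical case l = 0, the quantity inside the limits can be estimated numerically by greedily extracting (n, δ)-separated sets of orbits. Here is a minimal sketch (not from the paper) for the doubling map x ↦ 2x mod 1 on the circle, whose topological entropy is log 2; the grid of initial points and the greedy extraction are illustrative choices:

```python
import math

def f(x):
    # doubling map on the circle R/Z; its topological entropy is log 2
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def orbit(x, n):
    out = []
    for _ in range(n):
        out.append(x)
        x = f(x)
    return out

def separated_count(n, delta, grid=1600):
    """Greedily extract an (n, delta)-separated set of orbits from a grid of initial points."""
    reps = []
    for i in range(grid):
        o = orbit(i / grid, n)
        if all(max(circle_dist(o[t], r[t]) for t in range(n)) >= delta for r in reps):
            reps.append(o)
    return len(reps)

def entropy_estimate(n, delta, grid=1600):
    # (1/n) log #E; at finite n this overestimates h_top by roughly log(1/delta)/n
    return math.log(separated_count(n, delta, grid)) / n
```

On this grid the count doubles from n = 4 to n = 5, so the growth factor of max #E recovers e^{h_top} = 2 even though the raw quotient (1/n) log #E still carries the finite-n correction of order log(1/δ)/n.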
Our main result in that setting is the following.

Theorem 1.2. - Let f be a dominating meromorphic map of a compact Kähler manifold X. Let µ be an ergodic invariant measure such that h_µ(f) > 0 and log dist(·, I) ∈ L^1(µ). Assume that the (well-defined) Lyapunov exponents satisfy χ_1 ≥ · · · ≥ χ_s > 0 > χ_{s+1} ≥ · · · ≥ χ_k. Then, we have:

  (1) ∀ l ≤ k − s, h^l_µ(f) = h_µ(f).

Finally, in the particular case of endomorphisms of P^2 of degree d, we show ("generic" is meant in the sense of [DT2]):

Theorem 1.3. - Let f be a generic holomorphic endomorphism of P^2(C) of degree d ≥ 2. Assume that supp(µ) = supp(T), where T is the Green current of f and µ := T ∧ T is the measure of maximal entropy. Then h^1_top(f) = log d.

Observe that it is possible to have h^1_top(f) ≠ log d for specific holomorphic endomorphisms of P^2(C). Indeed, we show in Subsection 3.6 that for a Lattès map f of P^2 of degree d, h^1_top(f) = 0 (supp(µ) = supp(T) in that case).

2. Computing h^l_top(f) in terms of the dynamical degrees

2.1. Proof of Theorem 1.1. - Take δ > 0 and let E ⊂ X^{δ,n}_l be (n, δ)-separated with #E = N. Consider a finite atlas of X. We can choose a chart ∆ such that there are cN elements W of E that intersect ∆, where c > 0 does not depend on E. Slightly enlarging ∆ if necessary, we can assume that for each such W ∈ E, W′ := W ∩ ∆ is in X^{δ,n}_l. Write W′_1, . . . , W′_{cN} for the collection of those (n, δ)-separated sets. Let us consider the euclidean metric on ∆; it is comparable with ω in ∆, so we can assume that each W′_i is analytic of exact dimension l in B(x, c′e^{−nδ}), where c′ is a constant that depends only on ∆. By the main result of [ATU], there exists a positive constant C_l such that

  Σ_α λ(π_α(W′_i)) ≥ C_l e^{−2lnδ},

where the sum is over the l-dimensional coordinate planes α through 0 in C^k, π_α is the projection to α, and λ is the Lebesgue measure in C^l (notice that the area is counted without multiplicity).
In particular, shrinking c and C l if necessary, we can assume that they are cN elements W ′ i ⊂ ∆ which are (n, δ)-separated and a l-dimensional coordinate plane P such that the projection of each W ′ on P has volume ≥ C l e −2lnδ . Observe that we can slightly move P and the above still stands. For the rest of the proof, we proceed as in [DT4][p.110-111] so we only sketch the main steps. If π 3 : ∆ → P denotes the canonical projection, we can assume that π 3 (∆) lies in a compact set K of P . For a ∈ K, let F a := π −1 3 ({a}) and da be the Lebesgue measure on P . Let W s := ∪ i≤cN W ′ i and let n(a) be the number of intersection of F (a) with W s counted with multiplicity. That way: n(a)da = volume of the projection of W s on P ≥ C l cN e −2lnδ . Let Γ n (a) be the closure of {(z, f (z), . . . , f n−1 (z)), z ∈ F (a) ∩ ∆} in X n . It is the multigraph in F (a) ∩ ∆ that we endow with the Kähler form ω n := i≤n Π * i (ω) where Π i denotes the canonical projection of X n on its i-th factor. We have [DT4][Lemma 13]: volume(Γ n (a))da ≥ c(δ) n(a)da where c(δ) is a constant that depends only on δ (and not n). On the other, one shows that, for any ε > 0, there exists c ε such that : volume(Γ n (a))da ≤ c ε n k−l max 0≤j≤k−l (d j + ε) n . In particular, c(δ)C l cN e −2lnδ ≤ c ε n k−l max 0≤j≤k−l (d j + ε) n . We take the logarithm, divide by n and let n → ∞. The result then follows by letting ε → 0 and δ → 0. Remark. -Observe that we do not require in the proof of Theorem 1.1 that the ldimensional volume of the W ∈ X δ,n l is ≤ 1. Proof of Theorem 1.2 Before proving Theorem 1.2, we show that h 0 ν (f ) is the usual measure-theoretic entropy when ν is ergodic. Indeed, following [K], we have the following folklore's proposition whose proof will be useful for us (it can be easily extended to the case of meromorphic maps assuming ν(I) = 0). Proposition 2.1. 
-Assume that ν is ergodic, f is holomorphic and l = 0, then h 0 ν (f ) = h ν (f ) is the classical measure-theoretic entropy of ν. Proof. -Observe first that X δ,n 0 = X. Let ε > 0 and Λ ⊂ X. Fix 1 ≫ δ 0 > 0 and consider: X δ 0 ,n := x, ν(B n (x, δ 0 )) ≤ e −hν (f )n+εn . We know by Brin-Katok formula( [BK] that 1 − ν(Λ) 4 ≤ ν x, lim − 1 n log ν(B n (x, δ 0 )) ≥ h ν (f ) − ε 2 ≤ ν   n 0 n≥n 0 X δ 0 ,n   . In particular, we choose n 0 large enough so that: ν   n≥n 0 X δ 0 ,n 0   ≥ 1 − ν(Λ) 2 . Consider the set Λ ′ := Λ ∩ n≥n 0 X δ 0 ,n 0 which satisfies ν(Λ ′ ) > ν(Λ)/2 by construction. Then, for n ≥ n 0 , we start with a point x 0 ∈ Λ ′ and we choose inductively a point x i ∈ Λ ′ \ ∪ 0≤j<i B n (x j , δ 0 ) . This is possible as long as ν(Λ ′ ) > 0≤j<i ν(B n (x j , δ 0 )). So using our hypothesis, we can find at least N such points with N given by: N = e hν (f )n−εn ν(Λ ′ ). In particular: lim δ→0 lim n→∞ 1 n log (max{#E, E ⊂ X is (n, δ, Λ) − separated}) ≥ h ν (f ) − ε which gives the inequality h 0 ν (f ) ≥ h ν (f ) by letting ε → 0, taking the infimum over all Λ with ν(Λ) > 1 − κ and letting κ → 0. For the other inequality, consider: Y k,δ 0 := x, ∀n ≥ k, ν(B n (x, δ 0 /2)) ≥ e −hν (f )n−εn . Let κ > 0, Brin-Katok formula implies that for δ 0 small enough and m large enough ν(∩ k≥m Y k,δ 0 ) ≥ 1 − κ. In particular, we choose Λ := ∩ k≥m Y k,δ 0 . Take n ≥ m. Consider {x 1 , . . . , x N } a (n, δ 0 , Λ)-separated set. Then, the Bowen ball B n (x i , δ 0 /2) are disjoint so: N e −hν (f )n−εn ≤ 1. Taking the logarithm, dividing by n, letting n → ∞ and δ 0 → 0 implies: h 0 ν (f ) ≤ h ν (f ) + ε and the result follows by letting ε → 0. We now prove Theorem 1.2. Let µ be an ergodic invariant measure with h µ (f ) > 0. Assume that log dist(., I) ∈ L 1 (µ). We recall some facts we need on Pesin theory in this setting [DT4]. We shall use the results of that paper keeping the same notations: π,X, X * ,μ, τ x , ε 0 , f x , f n x , f −n x , Df (x) . 
We fix some δ > 0.

- The Lyapunov exponents are well defined (Oseledec's theorem). We assume that they satisfy χ_1 ≥ · · · ≥ χ_s > 0 > χ_{s+1} ≥ · · · ≥ χ_k.
- The set Ŷ of points in the universal extension X̂ of X that satisfy the conclusions of Oseledec's theorem, Pesin's theorem and [DT4][Lemme 10] satisfies µ̂(Ŷ) = 1.
- One can find a set A of µ-measure arbitrarily close to 1 with A ⊂ π(Ŷ), an integer n_0 and a constant α_0 > 0 such that (see [DT4][p.103]):

  ∀ x ∈ A, ∀ n ≥ n_0, µ(B_n(x, 2δ)) ≤ e^{−n h_µ(f) + nδ}.

- For all x in the set A, using iterated pull-backs and graph transforms of a suitable local complex (k − s)-plane, one can define an approximated stable manifold W_n(x) such that:
  • x ∈ W_n(x) and W_n(x) ⊂ B(x, e^{−nδ}) is an analytic set of dimension k − s;
  • W_n(x) is a graph over some (k − s)-plane of an α_0-Lipschitz map;
  • diam(f^k(W_n(x))) ≤ exp(−nδ) for all k < n.

In particular, for any Λ with µ(Λ) > 1 − κ, we can assume that Λ ⊂ A (up to considering Λ ∩ A, where A is the above set of µ-measure arbitrarily close to 1). Using the same volume argument as in the proof of Proposition 2.1, we can thus find an (n, δ, Λ)-separated set of cardinality N ≥ e^{n h_µ(f) − nδ} µ(Λ) of points x_1, . . . , x_N. For n large enough, we have exp(−nδ) ≤ δ/4; in particular, (W_n(x_i))_{i≤N} is a collection of sets of X^{δ/2,n}_{k−s} which are (n, δ/2, Λ)-separated. Finally, as W_n(x) is a graph over some (k − s)-plane of an α_0-Lipschitz map, its volume is bounded by some constant that depends on α_0, so it is ≤ 1 for n large enough. Theorem 1.2 follows.

2.3. Computing h^l_ν(f) in some families of maps

2.3.1. Holomorphic maps of P^k. - Let f : P^k → P^k be a holomorphic map and take 0 ≤ l ≤ k. Assume that one can find an invariant ergodic probability measure µ of entropy log max_{j≤k−l} d_j = (k − l) log d of saddle type for f: χ_1 ≥ · · · ≥ χ_{k−l} > 0 > χ_{k−l+1} ≥ · · · ≥ χ_k, where (χ_i)_{i≤k} are the well-defined Lyapunov exponents of µ.
Then, as a consequence of Theorems 1.1 and 1.2, one obtains directly:

  h^l_top(f) = h^l_µ(f) = (k − l) log d.

In [DT3], the first author constructed such measures for the case k = 2, l = 1 (with χ_2 ≥ 0 in general); see also [FsS] for the initial case of hyperbolic maps.

2.3.2. Generic birational maps of P^k. - Let f : P^k ⇢ P^k be a birational map such that dim(I(f)) = k − s − 1 and dim(I(f^{−1})) = s − 1 for some 1 ≤ s ≤ k − 1. Generalizing a construction of Bedford and Diller ([BD1]), we defined in [DTV] a condition on such maps under which we constructed a measure of maximal entropy s log d that integrates log dist(·, I), with χ_1 ≥ · · · ≥ χ_s > 0 > χ_{s+1} ≥ · · · ≥ χ_k. The condition is generic in the sense that for all A outside a pluripolar set of Aut(P^k) and any f such that dim(I(f)) = k − s − 1 and dim(I(f^{−1})) = s − 1, the map f ∘ A satisfies the condition. Furthermore, the class of such generic birational maps contains the regular automorphisms of C^k and generic birational maps of P^k ([S, DS2]). By the above, for such a generic birational map of P^k, one has:

  ∀ l ≤ k − s, h^l_top(f) = s log d.

As a consequence, observe that one does not necessarily have h^l_top(f) = h^l_top(f^{−1}) for a birational map. Indeed, consider the case of a generic birational map of P^3 whose dynamical degrees are 1 = d_0 < 2 = d_1 < 4 = d_2 > 1 = d_3 (for example, one can take the regular automorphism (z + y², y + x², x)). Then h^1_top(f) = log 2 and h^1_top(f^{−1}) = log 4. Observe also that h^{k−l}_top(f) ≠ h^l_top(f^{−1}) in general (for l = k, h^0_top(f) = h_top(f) ≠ 0 in general, while h^k_top(f^{−1}) = 0).

Lastly, in [V], the second author generalized the construction of generic birational maps to the rational case with no additional hypothesis on the dimension of the indeterminacy sets. In particular, he constructed saddle measures of maximal entropy under mild hypotheses, giving many examples where one can compute h^l_top(f) (for l such that d_l ≤ max_j d_j).

3. Proof of Theorem 1.3

3.1. Strategy of the proof and Yomdin's estimate. - Take f as in Theorem 1.3. Consider a projective line L. By [DT2], the genus of f^{−n}(L) outside supp(µ) is bounded by d^n e^{δn}. Using [DT1], we will then construct approximately d^n e^{δn} disks of size e^{−δn} in f^{−n}(L)\supp(µ). Using a length-area argument, we will show that the size of those disks is still small when pushed forward by f^i, i = 0, . . . , n − 1. Finally, using Yomdin's theorem ([Y]), we will construct an (n, δ)-separated set from those disks. Here is the version of Yomdin's result we will use; it can be deduced from [DTV][Proposition 2.3.2].

Proposition 3.1 (Yomdin). - Let δ′ > 0. Then there exist C_1 > 0 and δ_0 > 0 such that for all 0 < δ < δ_0, we have for any dynamical ball B_n(x, 5δ) and any projective line L:

  ∀ n ∈ N, ((f^n)^* ω ∧ [L])(B_n(x, 5δ)) ≤ C_1 e^{δ′n}.

3.2. Finding a set with uniform estimates. - Let δ′ > 0. Fix 0 < δ < δ_0, where δ_0 is given by the above proposition. We fix x_0 ∈ supp(T)\supp(µ). There exist an open neighborhood U of supp(µ) and 0 < r_0 < 1 such that B(x_0, r_0) ∩ U = ∅. Using [DT2], we have

  lim_{n→∞} (1/n) log max_y #{z, f^n(z) = y, z ∉ U} ≤ log d and lim_{n→∞} (1/n) log max_L Genus(f^{−n}(L)\U) ≤ log d,

where the supremum is taken over the Grassmannian space (P^2)^* of complex projective lines in P^2. Hence, there exists n_2 such that for n ≥ n_2,

  max_y #{z, f^n(z) = y, z ∉ U} ≤ d^n e^{nδ/3} and max_L Genus(f^{−n}(L)\U) ≤ d^n e^{nδ/3}.

In particular, there exists C_2 > 0 such that for all n,

  max_y #{z, f^n(z) = y, z ∉ U} ≤ C_2 d^n e^{nδ/3} and max_L Genus(f^{−n}(L)\U) ≤ C_2 d^n e^{nδ/3}.

As x_0 ∈ supp(T), we have T ∧ ω(B(x_0, r_0/4)) > 0. In particular, we can find a coordinate axis D such that T ∧ π^*(ω_0)(B(x_0, r_0/4)) > 0 (where π denotes the orthogonal projection on D and ω_0 := ω_{|D}). Now, as T = lim_n d^{−n}(f^n)^*(ω), there exist ε_1 > 0 and n_3 ∈ N such that

  (2) ∀ n ≥ n_3, d^{−n}(f^n)^*(ω) ∧ π^*(ω_0)(B(x_0, r_0/2)) ≥ ε_1.

In what follows, we take n ≥ n_3.

3.3. Constructing disks of size e^{−δn} in f^{−n}(L). - We follow [DT1].
We subdivide the square C 0 ⊂ D, centered at x 0 and size 2, into 4k 2 identical squares with k = e δn /4. Such subdivision contains four families of k 2 disjoint squares that we will denote by Q 1 , Q 2 , Q 3 and Q 4 . Let V denote the Fubini-Study measure of (P 2 ) * normalized so that ω = [L]dV (L). Up to moving slightly the subdivision, we assume that V ({L, π |f −n (L) has a critical value in ∂Q i for some i}) = 0 (indeed, for each L there exists a finite number of critical value and we conclude using Fubini). Let r 0 /2 < ρ < r 0 . We fix L such that π |f −n (L) has no critical value in ∂Q i for all i. We follow [DT1] keeping the more precise estimates we need. We start with the geometric simplification of C n : = f −n (L) ([DT1][Paragraph 2.1]). Let ρ ′ ∈ [ρ + r 0 − ρ 4 , r 0 − r 0 − ρ 4 ]. We denote by C n the complex curve obtained by taking the union of C n ∩B(x 0 , ρ) with the connected components of C n ∩ (B(x 0 , ρ ′ )\B(x 0 , ρ)) that meet ∂B(x 0 , ρ). By the maximum principle, the boundary of C n is contained in ∂B(x 0 , ρ ′ ) and if B n denotes the number of its boundary components, we have by [DT1] that ( B n − G n ) r 0 − ρ 4 2 ≤ A n := Area of f −n (L) in B(x 0 , r 0 ) ≤ d n , where G n is the genus of f −n (L) in B(x 0 , r 0 ). Furthermore, by the coarea formula and letting ρ ′ moves in [ρ + r 0 −ρ 4 , r 0 − r 0 −ρ 4 ], we can find ρ ′ such that L n , the length of the boundary of C n , satisfies L n ≤ 2A n r 0 − ρ . Notice that ρ ′ depends on L but not the initial ρ. In addition, C n coincides with C n in B(x 0 , ρ) by construction. Summing up, if G n is the genus of C n , we have G n ≤ G n ≤ C 2 d n e nδ/3 , B n ≤ 4 r 0 − ρ 2 d n + C 2 d n e nδ/3 , L n ≤ d n 2 r 0 − ρ . We now continue with the idea of [DT1][Paragraph 2.2]. We fix a family Q of squares amongst the (Q i ) i≤4 . We can tile C 0 \Q by crosses. Let Σ be a component above a cross (for π). 
If l(Σ) denotes the length of the relative boundary of Σ and a(Σ) the area of π(Σ) counted with multiplicity, then we have, taking ε k = e −2δn/3 : l(Σ) ≥ ε k k = 4e −δn e −2δn/3 or l(Σ) ≤ ε k k = 4e −δn e −2δn/3 . Using the isoperimetric inequality in the latter case implies a(Σ) ≥ (1 − ε k ) × Area of the cross = 3(1 − ε k ) k 2 or a(Σ) ≤ l(Σ) 2 . As in [DT1], we want to remove the components of that latter type. Let us denote them by A 1 , . . . , A m . If we remove them from C n , we change its area for π * (ω) by at most m i=1 l 2 i ≤ 1 k l i ≤L n k . By the triangular inequality, the length of the relative boundary of the curve obtained by removing those components is ≤ 2 L n . Finally, this curve still satisfies: G n ≤ C 2 d n e nδ/3 and B n ≤ 4 r 0 − ρ 2 d n + C 2 d n e nδ/3 . We still denote by C n that curve and we proceed with [DT1][Paragraph 2.3]. Let I denote the set of islands above Q. We construct a graph where each vertex is a connected component above a cross and where we put as many edges between two vertices as the corresponding components share common arcs. Then, by [DT1] [Paragraph 1.2] #I ≥ χ( C n ) − s + a, where s is the number of vertices and a the number of edges of the graph. We now bound s from above and a from below, starting with s. There are at most 2 Lnk ε k vertices Σ satisfying l(Σ) ≥ ǫ k k . Furthermore, the cardinality of vertices such that a(Σ) ≥ (1 − ε k ) 3 k 2 is bounded from above by the area of the projection of C 0 \Q (counted with multiplicity) divided by (1 − ε k ) 3 k 2 hence it is bounded by 3S n (C 0 \Q) (1 − ε k ) 3 k 2 = k 2 S n (C 0 \Q) (1 − ε k ) where S n (C 0 \Q) is the mean number leaves above C 0 \Q. We thus have: s ≤ 2 L n k ε k + k 2 S n (C 0 \Q) (1 − ε k ) . We now bound a from below. Following line by line [DT1][Paragraph 2.3] gives: a = 1 2 vertices v(Σ) ≥ 2k 2 S n (C 0 \Q) − 4hk L n . 
Combining those bounds implies the following lower bound on #I:

  #I ≥ χ(C̃_n) − 2L̃_n k/ε_k − k² S_n(C_0\Q)/(1 − ε_k) + 2k² S_n(C_0\Q) − 4hk L̃_n
     ≥ −2G̃_n − B̃_n + k² S_n(C_0\Q)(2 − 1/(1 − ε_k)) − 8hk L̃_n/ε_k

(we can always assume h ≥ 1). Now, if we denote by I_n(L) the total number of islands above the four families of squares Q_1, Q_2, Q_3 and Q_4, we have:

  I_n(L) ≥ −8G̃_n − 4B̃_n − 32hk L̃_n/ε_k + k²(2 − 1/(1 − ε_k)) Σ_{i=1}^{4} S_n(C_0\Q_i)
        ≥ −8G̃_n − 4B̃_n − 32hk L̃_n/ε_k + 4k²(2 − 1/(1 − ε_k)) S_n,

where S_n is the mean number of leaves above C_0. Let a_n(L) denote the area of C̃_n in B(x_0, ρ) for π^*(ω). We have 4S_n ≥ a_n(L) − L̃_n/k (we removed A_1, . . . , A_m). Hence

  I_n(L) ≥ −12C_2 d^n e^{nδ/3} − 4(4/(r_0 − ρ))² d^n − (32hk/ε_k)·(2d^n/(r_0 − ρ)) + k²(2 − (1 + 2ε_k))(a_n(L) − 2d^n/((r_0 − ρ)k))
        ≥ −h C′_2 d^n e^{5δn/3}/(r_0 − ρ)² + (1 − 2ε_k) k² a_n(L),

where we used that ε_k = exp(−2δn/3) and k = exp(δn)/4, and where C′_2 is a constant independent of n and L. Amongst those islands, few are ramified. Indeed, assume that N_1 of them are ramified; for those islands ∆, the area of π(∆) is ≥ 2/k², so

  a_n(L) ≥ (2/k²) N_1 + (1/k²)(I_n(L) − N_1) = N_1/k² + I_n(L)/k² ≥ N_1/k² + (1 − 2ε_k) a_n(L) − h C′_2 d^n e^{5δn/3}/(k²(r_0 − ρ)²),

so

  N_1 ≤ 2ε_k k² a_n(L) + h C′_2 d^n e^{5δn/3}/(r_0 − ρ)².

Finally, the number of islands of volume ≥ 1 is at most d^n. In particular, if I′_n(L) denotes the number of unramified islands of volume ≤ 1, we have, replacing h by hC′_2 if necessary:

  (3) I′_n(L) ≥ (1 − 4ε_k) k² a_n(L) − 2h d^n e^{5δn/3}/(r_0 − ρ)².

3.4. Control of the size of the image by f^i of the above disks. - We now want to construct, in the above good islands, many disks whose size stays small when we push them forward by f^i for i = 0, . . . , n − 1. Let q be a square in the above family. In q, we consider the annulus A := D(t, 1/(2k))\D(t, 1/(4k)), where t is the center of q (a square of side 1/k, with the annulus centered at t). Let i ∈ {0, . . . , n − 1} and ∆ a good island above q.
We shall use a length-area argument. The form ω |f i (∆) defines a metric β, we take the pull-back of this metric by f i then we push it forward by π (it is a biholomorphism on ∆). This gives a conformal metric h 0 = σ|dz| on q. Then Area h 0 (A) = 1 2k 1 4k 2π 0 σ 2 (re iθ )rdθdr ≥ 1 2k 1 4k 1 2π 2π 0 σ(re iθ )rdθ 2 dr ≥ log 2 2π inf γ∈Γ (l h 0 (γ)) 2 where Γ is the set of circles of center t and radius in [1/(4k), 1/(2k)] (essential curves of A). It implies the existence of an essential curve γ such that l h 0 (γ) 2 ≤ 2π log 2 Area h 0 (A) = 2π log 2 Area ω (f i (∆)) where the area is counted with multiplicity and where l h 0 (γ) is the length of f i (π −1 |∆ (γ)) for the metric ω |f i (∆) i.e. ω |f −n+i (L) . Let∆ be the part of ∆ above the disk D(t, 1/4k). By the Appendix of [BD2], we have: diameter of f i (∆) for ω |f −n+i (L) ≤ diameter of f i ((π |∆ ) −1 (D γ )) for ω |f −n+i (L) ≤ 3 2π log 2 Area of f i (∆) counted with multiplicity where D γ is the disk in q delimited by γ. We denote by ∆ 1 , . . . ∆ M the I ′ n (L) good islands constructed at the first step and by∆ 1 , . . .∆ M the corresponding∆. The ∆ j are in B(x 0 , r 0 ) thus not in U . Since max y #{z, f i (z) = y, z / ∈ U } ≤ C 2 d i e iδ/3 for all i, we have that the f i (∆ j ) may recover themselves at most C 2 d i e iδ/3 . So the area of f i (∪∆ j ) is thus bounded from above by the area of f −n+i (L) (which is d n−i ) times C 2 d i e iδ/3 . In particular, the number of ∆ j such that the area of f i (∆ j ) (counted with multiplicity) is greater than δ 2 18π log 2 is ≤ C 2 d n e δi/3 18π δ 2 log 2 . If we remove those disks for i = 0, . . . , n − 1, we remove at most: d n C 2 18π δ 2 log 2 n−1 i=0 e iδ/3 ≤ d n e nδ/3 e δ − 1 C 2 18π δ 2 log 2 = C 3 d n e δn/3 where C 3 > 1 is another constant that does not depend on n nor L. 
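The first inequality in the length-area step above, Area_{h_0}(A) ≥ (log 2/(2π)) inf_{γ∈Γ} l_{h_0}(γ)², is an instance of Cauchy–Schwarz applied on each circle of the annulus. Here is a small numerical sanity check (not part of the proof); the conformal densities σ below are arbitrary sample choices, and the annulus is rescaled to radii 1/4 and 1/2:

```python
import math

def annulus_length_area(sigma, r1=0.25, r2=0.5, nr=120, ntheta=360):
    """Midpoint-rule discretization of Area_{h0}(A) = integral of sigma^2 r dtheta dr,
    and of the lengths l(r) = integral of sigma r dtheta of the concentric circles."""
    dr = (r2 - r1) / nr
    dth = 2.0 * math.pi / ntheta
    area = 0.0
    min_len = float("inf")
    for i in range(nr):
        r = r1 + (i + 0.5) * dr
        slice_area = 0.0
        length = 0.0
        for j in range(ntheta):
            th = (j + 0.5) * dth
            s = sigma(r * math.cos(th), r * math.sin(th))
            slice_area += s * s * r * dth
            length += s * r * dth
        area += slice_area * dr
        min_len = min(min_len, length)
    return area, min_len
```

For any positive density σ one can then verify numerically that Area ≥ (log 2/(2π)) · (min length)², with the flat density recovering the exact euclidean area of the annulus.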
Using (3), we deduce that the number I ′′ n (L) of good islands constructed at the first step for which the area of f i (∆ j ) (counted with multiplicity) is ≤ δ 2 18π log 2 for all i ≤ n − 1 satisfies (4) I ′′ n (L) ≥ I ′ n (L) − C 3 d n e δn/3 ≥ (1 − 4ε k )k 2 a n (L) − 3hC 3 d n e 5δn/3 (r 0 − ρ) 2 . We denote ∆ 1 , . . . , ∆ M ′ these islands and∆ 1 , . . . ,∆ M ′ the corresponding∆ (with our notations, M ′ = I ′′ n (L)). By construction, for each 1 ≤ j ≤ M ′ , 0 ≤ i ≤ n − 1: diamf i (∆ j ) ≤ 3 2π log 2 δ √ 18π log 2 = δ. 3.5. Using Yomdin's estimate to produce (n, δ)-separated disks Denote f −n (L) the part of f −n (L) made with the∆ 1 , . . . ,∆ M ′ . By construction, the area of π( f −n (L)) is π 1 4k 2 I ′′ n (L) ≥ π 16 (1 − 4ε k )a n (L) − π 16k 2 3hC 3 d n e 5δn/3 (r 0 − ρ) 2 = π 16 (1 − 4e −2δn/3 )a n (L) − π3hC 3 d n e −δn/3 (r 0 − ρ) 2 since 4k = e δn and ε k = e −2δn/3 by definition. Using Proposition 3.1, we have for every dynamical ball B n (x, 5δ) (f n ) * ω ∧ π * ω 0 (B n (x, 5δ)) ≤ C 1 e δ ′ n thus (recall that V denote the Fubini-Study measure of (P 2 ) * ) [ f −n (L)] ∧ π * (ω 0 )(B n (x, 5δ))dV (L) ≤ [f −n (L)] ∧ π * (ω 0 )(B n (x, 5δ))dV (L) ≤ C 1 e δ ′ n . Furthermore, integrating the previous estimate and using (2), we have for n large enough [ f −n (L)] ∧ π * (ω 0 )(P 2 (C))dV (L) ≥ π 32 a n (L)dV (L) − π3hC 3 d n e −δn/3 (r 0 − ρ) 2 ≥ ε 1 π 32 d n − π3hC 3 d n e −δn/3 (r 0 − ρ) 2 ≥ ε 1 π 64 d n . In particular, we can find points x 1 , . . . , x N 1 with N 1 ≥ ε 1 π 64C 1 d n e −δ ′ n which are (n, 5δ)-separated and belong to the support of [ f −n (L)] ∧ π * (ω 0 )dV (L). In particular, for each x j [ f −n (L)] ∧ π * (ω 0 )(B n (x j , δ))dV (L) > 0, so for each x j we can choose a disk ∆ j in f −n ( L j ) (L j might depends on j) which meets B n (x j , δ). Lemma 3.2. -The disks ∆ 1 , . . . , ∆ N 1 are (n, δ)-separated. Proof. -Let i = j, x i and x j are (n, 5δ)-separated so there exists l ≤ n − 1 such that d(f l (x i ), f l (x j )) ≥ 5δ. 
In subsection 3.4, we showed that diam(f^l(∆̃_i)) ≤ δ and diam(f^l(∆̃_j)) ≤ δ. In particular, f^l(∆̃_i) ⊂ B(f^l(x_i), 2δ) (since ∆_i intersects B_n(x_i, δ)) and f^l(∆̃_j) ⊂ B(f^l(x_j), 2δ). Thus

dist(f^l(∆̃_i), f^l(∆̃_j)) ≥ 5δ − 2δ − 2δ = δ.

The ∆_j are (n, δ)-separated disks, they are graphs above a disk of the type D(0, e^{−δn}), and the volume of each ∆_j is ≤ 1 by the above. In conclusion, for n large enough,

(1/n) log max{#E : E ⊂ X₁^{δ,n} is (n, δ)-separated} ≥ (1/n) log(ε₁π/(64C₁)) + log d − δ′,

so

lim_{n→∞} (1/n) log max{#E : E ⊂ X₁^{δ,n} is (n, δ)-separated} ≥ log d − δ′.

We want to bound from above the volume of T^i_{n_k}. Let Ω be the metric preserved by U and ω_FS the Fubini-Study form on P². Then, up to multiplying Ω by a suitable constant, the Green current T⁺ of f is given by σ_*(Ω) (see [BL01, Proposition 4.1]) and it can be written as T⁺ = σ_*(Ω) = ω + dd^c g, where g is a β-Hölder function (the Green function). Then

Vol_Ω(T^i_{n_k}) = ∫_{T^i_{n_k}} Ω ≤ ∫_{∆̃^i_{n_k}} σ_*(Ω).

Let θ_{i,n_k} be a smooth cut-off function equal to 1 in B(Z_{i,n_k}, e^{−δn_k}/2) and 0 outside B(Z_{i,n_k}, e^{−δn_k}) such that 0 ≤ ±dd^c θ_{i,n_k} + Ce^{2δn_k} ω (changing C if necessary). In particular, by Stokes and the definition of X₁^{δ,n_k},

Vol_Ω(T^i_{n_k}) ≤ ∫_{∆̃^i_{n_k}} θ_{i,n_k}(ω + dd^c g) ≤ ∫_{∆̃^i_{n_k}} ω + ∫_{∆̃^i_{n_k}} (g − g(Z_{i,n_k})) dd^c θ_{i,n_k} ≤ 1 + Ce^{−βδn_k} e^{2δn_k}.

In particular, for a suitable p > 0, we have that T^i_{n_k} has volume ≤ Ce^{pδn_k} (again, changing C if necessary). Let ρ_{i,n_k} be a smooth cut-off function equal to 1 in B(x_{i,n_k}, (C/2)e^{−n_kδ}) and equal to 0 outside B(x_{i,n_k}, Ce^{−n_kδ}). We can define it so that it satisfies √−1 ∂ρ_{i,n_k} ∧ ∂̄ρ_{i,n_k} ≤ C′e^{2δn_k} Ω, where C′ is another constant that does not depend on n_k. For each i, we define the positive (1, 1)-current S^i_{n_k}, using the above notation,

S^i_{n_k} := ρ_{i,n_k} a_{i,n_k} T^i_{n_k},

where a_{i,n_k} is a constant chosen so that S^i_{n_k} has mass 1.
By Lelong's inequality, we know that a_{i,n_k} ≥ e^{−2n_kδ}C²π/4. We claim that, up to extracting, d^{−ψ(n_k)}(D^{ψ(n_k)})_*(S^i_{n_k}) → R_i, where R_i is a positive closed (1, 1)-current of mass 1 in T. As Ω is the metric preserved by U, we have d^{−1}D^*(Ω) = Ω, so d^{−ψ(n_k)}(D^{ψ(n_k)})_*(S^i_{n_k}) has mass 1 and we can extract a converging subsequence for both i = 1 and i = 2. To show that the limit R_i of such a subsequence is closed, it is enough to show that it is ∂-closed, so we only have to test against forms of the type χ∂̄Z, where χ is a smooth function and ∂̄Z some (0, 1)-form with constant coefficients and norm 1. As D has linear part √d·U, we have that (D^{ψ(n_k)})^*(∂̄Z) is again a 1-form with constant coefficients and norm d^{ψ(n_k)/2}, so we write it as d^{ψ(n_k)/2} ∂̄Z_{n_k}. Now,

⟨∂(d^{−ψ(n_k)}(D^{ψ(n_k)})_*(S^i_{n_k})), χ∂̄Z⟩ = d^{−ψ(n_k)} a_{i,n_k} ⟨∂ρ_{i,n_k} ∧ T^i_{n_k}, (χ ∘ D^{ψ(n_k)})(D^{ψ(n_k)})^*(∂̄Z)⟩
 = d^{−ψ(n_k)/2} a_{i,n_k} ⟨T^i_{n_k}, (χ ∘ D^{ψ(n_k)}) ∂ρ_{i,n_k} ∧ ∂̄Z_{n_k}⟩.

lim sup_{n→∞} (1/n) log Genus(f^{−n}(L) \ U) ≤ log d.

As this is true for all δ < δ₀, we can make δ → 0 in the above inequality. This is what we want, as δ′ can be taken arbitrarily small and h¹_top(f) ≤ log d by Theorem 1.1. Theorem 1.3 follows.

3.6. h¹_top(f) = 0 for Lattès maps

As mentioned in the introduction, the purpose of this section is to show that h¹_top(f) = 0 for a Lattès map f of P² of degree d. Such a map can be lifted to an affine map D with linear part √d·U, where U is an isometry, on a torus T by a holomorphic branched cover σ ([D]). By contradiction, assume that h¹_top(f) = α > 0. For δ > 0 small enough, we thus have (5).

Claim. - For all n satisfying (5), there exist ψ(n) with αn/(9 log d) ≤ ψ(n) ≤ n and ∆¹_n, ∆²_n ∈ X₁^{δ,n} such that

d(f^{ψ(n)}(∆¹_n), f^{ψ(n)}(∆²_n)) ≥ δ.

Indeed, if the claim does not hold, then we could take arbitrarily large n for which the disks in X₁^{δ,n} are not δ-separated for f^m for any αn/(9 log d) ≤ m ≤ n.
By (5), we have arbitrarily large n satisfying this inequality. Thus, for such n and such E realizing the above maximum, the δ-separation between the elements of E is achieved for some m ≤ αn/(9 log d). For W ∈ E, we pick x_W ∈ W. Then observe that the collection of such (x_W)_{W∈E} defines an (αn/(9 log d), δ)-separated set which has the same cardinality as E. This contradicts the fact that h_top(f) ≤ 2 log d.

By the claim, there exists a sequence (n_k)_k of integers with n_k → ∞ such that for all k we can find two elements ∆¹_{n_k}, ∆²_{n_k} ∈ X₁^{δ,n_k} such that

d(f^{ψ(n_k)}(∆¹_{n_k}), f^{ψ(n_k)}(∆²_{n_k})) ≥ δ.

For each i, there exists Z_{i,n_k} ∈ ∆^i_{n_k} such that ∆^i_{n_k} is analytic in B(Z_{i,n_k}, e^{−δn_k}). Let ∆̃^i_{n_k} be the restriction of ∆^i_{n_k} to the ball B(Z_{i,n_k}, e^{−δn_k}/2). Consider the lifts σ^{−1}(∆̃^i_{n_k}) for i = 1, 2. Since σ is holomorphic, σ^{−1}(∆̃^i_{n_k}) is analytic in some ball B(x_{i,n_k}, Ce^{−n_kδ}) for some constant C > 0 that depends on σ (but not on n_k), where x_{i,n_k} ∈ σ^{−1}(∆̃^i_{n_k}). We denote T^i_{n_k} := σ^{−1}(∆̃^i_{n_k}) ∩ B(x_{i,n_k}, Ce^{−n_kδ}).

By the Cauchy-Schwarz inequality and the properties of ρ_{i,n_k}, we deduce a bound in which C″ is another constant that does not depend on n_k. In particular, for δ small enough that quantity converges to 0, so R₁ and R₂ are closed. By Bézout, we can take Z ∈ supp(σ_*(R₁)) ∩ supp(σ_*(R₂)). Take θ a smooth non-zero form with support in B(Z, δ/4) such that ⟨σ_*(R₁), θ⟩ ≠ 0 and ⟨σ_*(R₂), θ⟩ ≠ 0. Then, for n_k large enough, we have ⟨σ_*(d^{−ψ(n_k)}(D^{ψ(n_k)})_*(S^i_{n_k})), θ⟩ ≠ 0 for i = 1, 2. Since σ_*(d^{−ψ(n_k)}(D^{ψ(n_k)})_*(S^i_{n_k})) has support in f^{ψ(n_k)}(∆^i_{n_k}), this contradicts the fact that d(f^{ψ(n_k)}(∆¹_{n_k}), f^{ψ(n_k)}(∆²_{n_k})) ≥ δ. Hence h¹_top(f) = 0.

References

H. Alexander, B. A. Taylor, and J. L. Ullman. Areas of projections of analytic sets. Invent. Math., 16:335-341, 1972.
E. Bedford and J. Diller. Energy and invariant measures for birational surface maps. Duke Math. J., 128(2):331-368, 2005.
E. Bedford and J. Smillie. Polynomial diffeomorphisms of C². III. Ergodicity, exponents and entropy of the equilibrium measure. Math. Ann., 294(3):395-420, 1992.
F. Berteloot and J.-J. Loeb. Une caractérisation géométrique des exemples de Lattès de P^k. Bull. Soc. Math. France, 129(2):175-188, 2001.
J.-Y. Briend and J. Duval. Deux caractérisations de la mesure d'équilibre d'un endomorphisme de P^k(C). Publ. Math. Inst. Hautes Études Sci., (93):145-159, 2001.
M. Brin and A. Katok. On local entropy. In Geometric dynamics (Rio de Janeiro, 1981), volume 1007 of Lecture Notes in Math., pages 30-38. Springer, Berlin, 1983.
H. De Thélin. Sur la laminarité de certains courants. Ann. Sci. École Norm. Sup. (4), 37(2):304-311, 2004.
H. De Thélin. Un phénomène de concentration de genre. Math. Ann., 332(3):483-498, 2005.
H. De Thélin. Sur la construction de mesures selles. Ann. Inst. Fourier (Grenoble), 56(2):337-372, 2006.
H. De Thélin. Sur les exposants de Lyapounov des applications méromorphes. Invent. Math., 172(1):89-116, 2008.
H. De Thélin and G. Vigny. Entropy of meromorphic maps and dynamics of birational maps. Mém. Soc. Math. Fr. (N.S.), 122:vi+98, 2010.
T.-C. Dinh and N. Sibony. Regularization of currents and entropy. Ann. Sci. École Norm. Sup. (4), 37(6):959-971, 2004.
T.-C. Dinh and N. Sibony. Dynamics of regular birational maps in P^k. J. Funct. Anal., 222(1):202-216, 2005.
T.-C. Dinh and N. Sibony. Une borne supérieure pour l'entropie topologique d'une application rationnelle. Ann. of Math. (2), 161(3):1637-1644, 2005.
C. Dupont. Exemples de Lattès et domaines faiblement sphériques de C^n. Manuscripta Math., 111(3):357-378, 2003.
J. E. Fornaess and N. Sibony. Complex dynamics in higher dimension. II. In Modern methods in complex analysis (Princeton, NJ, 1992), volume 137 of Ann. of Math. Stud., pages 135-182. Princeton Univ. Press, Princeton, NJ, 1995.
J. E. Fornaess and N. Sibony. Hyperbolic maps on P². Math. Ann., 311(2):305-333, 1998.
M. Gromov. Convex sets and Kähler manifolds. In Advances in differential geometry and topology, pages 1-38. World Sci. Publ., Teaneck, NJ, 1990.
M. Gromov. On the entropy of holomorphic maps. Enseign. Math. (2), 49(3-4):217-235, 2003.
A. Katok. Lyapunov exponents, entropy and periodic orbits for diffeomorphisms. Inst. Hautes Études Sci. Publ. Math., (51):137-173, 1980.
A. Russakovskii and B. Shiffman. Value distribution for sequences of rational mappings and complex dynamics. Indiana Univ. Math. J., 46(3):897-932, 1997.
N. Sibony. Dynamique des applications rationnelles de P^k. In Dynamique et géométrie complexes (Lyon, 1997), volume 8 of Panor. Synthèses, pages ix-x, xi-xii, 97-185. Soc. Math. France, Paris, 1999.
G. Vigny. Hyperbolic measure of maximal entropy for generic rational maps of P^k. Ann. Inst. Fourier (Grenoble), 64(2):645-680, 2014.
Y. Yomdin. Volume growth and entropy. Israel J. Math., 57(3):285-300, 1987.

Henry De Thélin, LAGA, UMR 7539, Institut Galilée, Université Paris 13, 99 avenue J.B. Clément, 93430 Villetaneuse, France. E-mail: [email protected]
Gabriel Vigny, LAMFA UMR 7352, Université de Picardie Jules Verne, 33 rue Saint-Leu, 80039 Amiens Cedex 1, France. E-mail: [email protected]
Statistical sparsity

Peter McCullagh (Department of Statistics, University of Chicago) and Nicholas G. Polson (Booth School of Business, University of Chicago)

arXiv:1712.03889; doi:10.1093/biomet/asy051

Abstract. The main contribution of this paper is a mathematical definition of statistical sparsity, which is expressed as a limiting property of a sequence of probability distributions. The limit is characterized by an exceedance measure H and a rate parameter ρ > 0, both of which are unrelated to sample size. The definition is sufficient to encompass all sparsity models that have been suggested in the signal-detection literature. Sparsity implies that ρ is small, and a sparse approximation is asymptotic in the rate parameter, typically with error o(ρ) in the sparse limit ρ → 0. To first order in sparsity, the sparse signal plus Gaussian noise convolution depends on the signal distribution only through its rate parameter and exceedance measure. This is one of several asymptotic approximations implied by the definition, each of which is most conveniently expressed in terms of the zeta-transformation of the exceedance measure. One implication is that two sparse families having the same exceedance measure are inferentially equivalent, and cannot be distinguished to first order. A converse implication for methodological strategy is that it may be more fruitful to focus on the exceedance measure, ignoring aspects of the signal distribution that have negligible effect on observables and on inferences. From this point of view, scale models and inverse-power measures seem particularly attractive.
May 1, 2018; 23 May 2018. Polson is at Chicago Booth, University of Chicago.

Keywords: Convolution; Exceedance measure; False discovery; Infinite divisibility; Lévy measure; Post-selection inference; Signal activity; Tail inflation.
1. Introduction

1.1. The role of a definition

Statistical sparsity is concerned partly with phenomena that are rare, partly with phenomena that are mostly zero, but more broadly with phenomena that are mostly negligible or seldom appreciably large. Progress in mathematics is seldom impeded by inadequacy of definitions, and the same may be said about progress in the development of sparsity as a concept in statistical work. But, sooner or later, definitions are needed in order to clarify ideas and to keep confusion at bay. The challenge is to formulate accurately a definition of sparsity that is faithful to current usage, and to explore its consequences. Our approach uses a probabilistic limit.

Statistical sparsity is defined in § 2 as a limiting property of a sequence of probability distributions that governs both the rate at which probability accumulates near the origin, and the rate at which it decreases elsewhere. Our definition covers all sparsity models that are found in the statistical literature on sparse-signal detection and estimation. It includes all two-group atom-and-slab mixtures (Johnstone and Silverman 2004; Efron 2009), all non-atomic spike-and-slab mixtures (George and McCulloch 1993; Rockova and George 2018), the low-index gamma model (Griffin and Brown 2013), and many Gaussian scale mixtures such as the Cauchy scale family and the horseshoe scale family (Carvalho, Polson and Scott 2010). The sparse limit is characterized by a rate parameter ρ > 0 and a measure H whose product ρH determines the rarity of threshold exceedances. The exceedance measure is also the chief determinant of a certain restricted class of integrals, probabilities, and expected values that arise in probabilistic assessments of signal activity.

In many cases, a definition tells us only what is intuitively well known. But occasionally, a good mathematical definition reveals an aspect of the phenomenon that is unexpected and not readily apparent from a litany of examples.
Sparsity is a case in point. The phenomenon may be intuitively obvious, but the definition in terms of a characteristic pair (ρ, H) is much less so. The first reason for a definition is that it highlights the role of the characteristic pair and provides a definitive answer to the question of whether a particular probability model is or is not statistically sparse, in what way it deviates from sparsity, and so on. The second reason is that the limit enables us to develop distributional approximations for inferential purposes in sparse signal-detection problems, i.e., approximations for the marginal distribution or the conditional distribution given the observation. The sequence is essential because a sparse approximation is asymptotic in the rate parameter ρ → 0, and is unrelated to sample size and sample configuration. The third reason is that, while the sequence of distributions determines the exceedance measure, the exceedance measure does not determine the distributions. Two sequences having the same exceedance measure are first-order equivalent in the sense that all marginal and conditional distributions depend only on the exceedance measure. For example, to certain atom-free spike-and-slab mixtures there corresponds an equivalent atom-and-slab mixture. Likewise, the Cauchy and horseshoe scale families are equivalent, but they are not equivalent to the low-index gamma model.

1.2. Statistical implications

The novelty of this paper lies entirely in the definition of sparsity, which is statistically interesting on account of its implications. We leave it to the reader to decide whether the implications described in §§ 3-6 are useful or relevant or have practical consequences, but utility and practical considerations play no role in their derivation. The over-riding implication is that it is futile to estimate any functional of the signal distribution that is not first-order identifiable from the data that are observed.
Subsidiary implications flowing from the definition are as follows:

• the use of (ρ, H) in place of the signal distribution for model specification;
• the role of the asymptotic likelihood for parameter estimation (§§ 3.4, 7);
• the role of the zeta function for inference about the signal given the data (§ 5);
• the connection between scale models and inverse-power measures (§ 4);
• the interpretation of the Benjamini-Hochberg procedure in terms of conditional exceedance rather than conditional false discovery or null signals (§ 5.4).

The zeta transformation is defined in § 3; it plays a key role for inference in the standard sparse signal detection model. Section 4 focuses on the inverse-power exceedance measures, a class that includes the sparse Cauchy model, the horseshoe model and all other scale families having similar tail behaviour. Within the inverse-power class, there exists a particular family of probability distributions, called the ψ-scale family, that has a highly unusual but extremely useful property. Every Gaussian-ψ convolution that arises in the signal-plus-noise model is expressible exactly as a binary Gaussian-ψ mixture. This is a closure or self-conjugacy property, which means that the observation distribution belongs to the same family as the signal.

Section 5 shows how the zeta function determines the asymptotic conditional distribution of the signal given the observation, and Tweedie's formula for the conditional moment generating function. The conditional activity, or ε-exceedance probability, is shown to be a rational function of the zeta transformation, which is closely related to the Benjamini-Hochberg procedure (Benjamini and Hochberg 1995). The theory is extended in § 6 to a hyperactive random signal for which the asymptotic behaviour is technically more complicated. Section 7 illustrates the application of parametric maximum likelihood to estimate the sparsity rate parameter and the exceedance index for subsequent inferential use.
2. Sparse limit: definitions

2.1. Exceedance measure

The sparse limit involves an exceedance measure, which is defined as follows.

Definition 1. A non-negative measure H on the real line excluding the origin is termed an exceedance measure if ∫_{R\{0}} min(x², 1) H(dx) < ∞. A measure satisfying ∫_{R\{0}} (1 − e^{−x²/2}) H(dx) = 1 is called a unit exceedance measure.

Although the motivation for this definition is unconnected with stochastic processes, every exceedance measure is the Lévy measure of an infinitely divisible distribution on the real line, and vice-versa. No constraint is imposed on the total measure, which may be finite or infinite. To every non-zero exceedance measure H there corresponds a ray {λH(dx) : λ > 0} of proportional measures. Each ray contains as a reference point a unit measure such that (1 − e^{−x²/2})H(dx) is a probability distribution on R\{0}. For example, the unit inverse-power measures are

H(dx) = (d 2^{d/2−1}/Γ(1 − d/2)) dx/|x|^{d+1}   (1)

for 0 < d < 2.

Definition 2. The activity index 0 ≤ AI(H) < 2 gauges the behaviour in a neighbourhood of the origin:

AI(H) = inf{α > 0 : ∫_{−1}^{1} |x|^α H(dx) < ∞}.

Every finite measure has activity index zero; the measure (1) has activity index d.

Comment 1: The activity index AI(H) is strictly less than two because continuity of α → ∫_{−1}^{1} |x|^α H(dx) for α > 0 implies that {α > 0 : ∫_{−1}^{1} |x|^α H(dx) < ∞} is an open set containing 2. In particular, the limit lim_{ε→0} ε^{2−AI(H)} = 0 arises in § 3.5.

Definition 3. The space W of Lévy-integrable functions consists of bounded continuous functions w(x) on the real line such that x^{−2}w(x) is also bounded and continuous. Lévy integrability implies ∫_{R\{0}} w(x) H(dx) < ∞ for every w ∈ W and every exceedance measure H. The functions min(x², 1), x²e^{−x²} and 1 − e^{−x²/2} belong to W.

2.2. Sparse limit

Let {P_ν} be a sequence of probability distributions indexed by ν > 0, and converging weakly to the Dirac measure δ_0 as ν → 0.
Sparsity is a rate condition governing the approach to the weak limit.

Definition 4. A sequence of probability distributions {P_ν} is said to have a sparse limit with rate ρ_ν if there exists a unit exceedance measure H such that

lim_{ν→0} ρ_ν^{−1} ∫_R w(x) P_ν(dx) = ∫_{R\{0}} w(x) H(dx)   (2)

for every w ∈ W. Otherwise, if the limit is zero for every w, the sequence is said to be sparse with rate o(ρ_ν).

The motivation for this definition comes from extreme-value theory, which focuses on exceedances over high thresholds (Davison and Smith 1990). Each sparse-signal threshold ε > 0 is fixed as ν → 0, but is automatically high relative to the bulk of the distribution: in this respect, the parallel with extreme-value theory is close. Unlike extreme-value theory, sparsity places no emphasis on limit distributions for the excesses over any threshold. Formally setting w(·) equal to the indicator function for the event ε⁺ = (ε, ∞) or [ε, ∞) in the integrals (2) gives the motivating condition, that the sparsity rate is the rarity of exceedances:

lim_{ν→0} ρ_ν^{−1} P_ν(ε⁺) = H(ε⁺) < ∞.   (3)

There is a similar limit for negative exceedances, and any other subset whose closure does not include zero.

Comment 2: The integral definition implies (3), but the converse fails if the limit in (3) is not a Lévy measure. For example, the Dirac-Gaussian mixture (1 − ν)δ_0 + νN(0, ν^{−1}) does not have a sparse limit, but (3) is satisfied by ρ_ν = ν with H(ε⁺) = 1/2. Section 6 discusses another example where the limit measure is non-trivial but not in the Lévy class.

Since the definition involves only the limit ν → 0, it is always possible to re-parameterize by the rate function, so that ν = ρ. This standard parameterization is assumed where it is convenient.

Definition 5. Sparse-limit equivalence: Regardless of their parameterization, two sparse families having the same exceedance measure are said to be equivalent in the sparse limit.
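Both the unit normalization of Definition 1 and the rate condition (3) lend themselves to direct numerical checks. The short script below is our own illustration, not part of the paper, and assumes numpy and scipy are available; it verifies that the inverse-power measure (1) is a unit exceedance measure for several indices d, and that for the Cauchy scale family C(σ) of Example 4, with rate ρ(σ) = σ√(2/π), the ratio P_σ(ε⁺)/ρ(σ) approaches H(ε⁺) = 1/(√(2π) ε) as σ → 0:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.stats import cauchy

def inv_power_density(x, d):
    # Density of the unit inverse-power exceedance measure (1), 0 < d < 2.
    c_d = d * 2 ** (d / 2 - 1) / gamma(1 - d / 2)
    return c_d / abs(x) ** (d + 1)

# Definition 1: the integral of (1 - exp(-x^2/2)) against H should equal one.
unit_mass = {}
for d in (0.5, 1.0, 1.5):
    f = lambda x: (1 - np.exp(-x**2 / 2)) * inv_power_density(x, d)
    # split at 1 to help quad with the integrable singularity at the origin
    half = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    unit_mass[d] = 2 * half          # the measure is symmetric

# Rate condition (3) for the Cauchy family C(sigma): P_sigma(eps+)/rho(sigma)
# should approach H(eps+) = 1/(sqrt(2*pi)*eps) as sigma -> 0.
eps = 1.0
H_plus = 1 / (np.sqrt(2 * np.pi) * eps)
ratios = [cauchy.sf(eps, scale=s) / (s * np.sqrt(2 / np.pi))
          for s in (0.1, 0.01, 0.001)]
```

For d = 1 the normalizing constant in (1) is 1/√(2π), matching the inverse-square exceedance of Example 4, and the computed ratios settle quickly on H(ε⁺) ≈ 0.399.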
Let {P_ν} and {Q_ν} be two families having the same unit exceedance measure H, both taken in the standard parameterization with rate parameter ρ = ν. In effect, the rate parameterization matches each distribution in one family with a sparsity-matching distribution in the other, so the families are in 1-1 correspondence, at least in the approach to the limit. For any function w ∈ W, the limit integrals are finite and equal:

lim_{ν→0} ν^{−1} ∫ w(x) P_ν(dx) = lim_{ν→0} ν^{−1} ∫ w(x) Q_ν(dx) = ∫ w(x) H(dx).

Consequently, near the sparse limit, both integrals may be approximated by

∫ w(x) P_ν(dx) ≈ ∫ w(x) Q_ν(dx) = ν ∫ w(x) H(dx) + o(ν).

This analysis implies that every W-integral using P_ν as the signal distribution is effectively the same as the integral using Q_ν at the corresponding sparsity level.

Definition 6. Sparse scale family: A scale family of distributions with density σ^{−1}p(x/σ) is called a sparse scale family if it is sparse in the small-scale limit σ → 0. The rate function ρ(σ) need not coincide with the scale parameter.

The Student-t scale family on d < 2 degrees of freedom is a sparse scale family with rate function ρ(σ) ∝ σ^d and exceedance density proportional to dx/|x|^{d+1}. If the scale family {P_σ} is sparse with rate parameter ρ(σ), then, for small σ,

P_σ(dx) = p(x/σ) dx/σ ≈ ρ(σ) h(x) dx.

Setting x = 1 and u = 1/σ gives p(u) ≈ h(1)ρ(u^{−1})/u as u → ∞. Conversely, h(x) ∝ x^{−1}ρ(σ/x)/ρ(σ) for x > 0, implying that ρ(σ) = σ^d for some power d > 0. If p is not symmetric, the power index for x < 0 may be a different number. It follows that the exceedance density of a sparse scale family is an inverse power function h(x) ∝ 1/x^{d+1}, the same as the tail behaviour of p(x) as x → ∞.

2.3. Infinite divisibility

Let F be an infinitely divisible probability distribution on the real line, and let {F_ν} be the Lévy family indexed by the convolution parameter, i.e., F_ν ⋆ F_{ν′} = F_{ν+ν′} with F_1 = F.
The Lévy process is sparse with rate ν, and the exceedance measure is the Lévy measure (Barndorff-Nielsen and Hubalek 2008). To each exceedance measure there corresponds an infinite equivalence class of sparse sequences, most of which are not closed under convolution. This result tells us that each equivalence class contains exactly one Lévy process; the zero equivalence class contains the Gaussian family, i.e., the Brownian motion process. Despite this characterization, exceedance measures and Lévy processes have not played a prominent role in either frequentist or non-frequentist work on sparsity.

Comment 3: A typical spike-and-slab distribution is not infinitely divisible. However, there are exceptions. For each positive pair (λ, τ), the atom-and-slab lasso distribution

e^{−λ}δ_0(x) + (1 − e^{−λ}) τe^{−τ|x|}/2

is infinitely divisible with finite Lévy measure

H_{λ,τ}(dx) ∝ (e^{−τ|x|} − e^{−τe^{λ/2}|x|}) |x|^{−1} dx.   (4)

For each (λ, τ), the family {F_ν} exists, but the distributions are not easily exhibited. The atom-and-slab lasso family with ρ = 1 − e^{−λ} as the sparsity rate parameter is not to be confused with the Lévy family {F_ν} in which λ > 0 is held fixed. The exceedance measures are H_{λ,τ} and τe^{−τ|x|}/2 = lim_{λ→0} λ^{−1}H_{λ,τ}.

2.4. Sparsity expansion

Weak convergence of the sequence {P_ν} to the Dirac measure δ_0 is concerned with the behaviour of P_ν-integrals for a suitable class of functions. For any bounded continuous function w having one continuous derivative at zero, the symmetrized function 2w̄(x) = w(x) + w(−x) − 2w(0) is O(x²) near the origin. Thus w̄ ∈ W is Lévy integrable and, if P_ν is symmetric, sparseness implies a linear expansion for small ν:

∫_R w(x) P_ν(dx) = w(0) + ∫_R (w(x) − w(0)) P_ν(dx)
                 = w(0) + ∫_R w̄(x) P_ν(dx)
                 = w(0) + ρ ∫_{R\{0}} w̄(x) H(dx) + o(ρ).

The exceedance measure is the directional derivative or linear operator governing the approach to the weak limit: P_ν(w) − δ_0(w) = ρH(w̄) + o(ρ) in the sense of integrals.
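The linear expansion above can be watched converging on a concrete family. For the Cauchy scale family C(σ), with ρ(σ) = σ√(2/π) and inverse-square unit measure H(dx) = dx/(√(2π)x²), take w(x) = cos x, so that w̄(x) = cos x − 1 belongs to W; then ∫w dP_σ = e^{−σ} exactly, while the expansion predicts w(0) + ρH(w̄) = 1 − σ, because ∫_R (1 − cos u)u^{−2} du = π. The brief check below is our own illustration (numpy assumed available):

```python
import numpy as np

# Expansion of section 2.4 for P_sigma = C(sigma) and w(x) = cos(x):
#   P_sigma(w) = exp(-sigma)      (Cauchy characteristic function at t = 1)
#   w(0) + rho*H(wbar) = 1 - sigma, since H(wbar) = -pi/sqrt(2*pi) = -sqrt(pi/2).
H_wbar = -np.sqrt(np.pi / 2)           # integral of (cos u - 1) against H
rho = lambda s: s * np.sqrt(2 / np.pi)

ratios = [(np.exp(-s) - 1.0) / rho(s) for s in (0.1, 0.01, 0.001)]
# (P_sigma(w) - w(0))/rho approaches H(wbar) ~ -1.2533, and the discrepancy
# between exp(-sigma) and 1 - sigma is O(sigma^2) = O(rho^2), i.e. o(rho).
```

The three ratios are about −1.193, −1.247 and −1.2527, converging to −√(π/2) ≈ −1.2533 at rate O(σ), in line with the o(ρ) error claimed for the expansion.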
Sparsity determines the difference P_ν(w) − δ_0(w), but only to first order in ρ.

2.5. Examples

For ε > 0, the exceedance event A_ε ⊂ R, or activity event, is the complement of the closed interval Ā_ε = [−ε, ε].

Example 1: The ε-exceedance probability for the Laplace distribution with density σ^{−1}e^{−|x|/σ}/2 is e^{−ε/σ}, implying lim_{σ→0} σ^{−p}e^{−ε/σ} = 0 for every ε > 0 and p > 0. The scale family is sparse with rate o(σ^p) for every p > 0. There is no definite sparsity rate parameter satisfying (2) with a finite non-zero limit, so we say that the sequence belongs to the zero-activity class. The Gaussian scale family, and all other scale families having exponential tails, have the same property. So far as this paper is concerned, all sparse families in this class are trivial and equivalent.

Example 2: Let F be a probability distribution on the real line. The atom-and-slab family P_ν = (1 − ν)δ_0 + νF indexed by 0 < ν ≤ 1 is sparse with exceedance measure proportional to F. The unit exceedance measure is F/K, where K = ∫(1 − e^{−x²/2}) F(dx), and the exceedance rate is ρ = Kν, so the product satisfies ρH = νF. The activity-reduction factors ρ/ν for the standard Laplace, Cauchy and Gaussian distributions are 0.34, 0.48 and 0.29 respectively. The mixture-indexed spike and F-slab family with a fixed scale parameter belongs to the finite class; it is not to be confused with the atom-free F-scale family or intermediate combinations.

Example 3a: The family of double gamma distributions with density

p_ν(x) = |x|^{ν−1} e^{−|x|} / (2Γ(ν))

is sparse with rate parameter ρ ∝ ν and exceedance density h(x) ∝ |x|^{−1}e^{−|x|}/2. The total mass lim_{ε→0} H(A_ε) is infinite, but there is no atom at zero.

Example 3b: For fixed σ, the re-scaled double gamma family with density σ^{−1}p_ν(x/σ) is sparse with rate parameter ρ ∝ ν and exceedance density h(x) ∝ |x|^{−1}e^{−|x|/σ}, which depends on σ. Each of the sub-families for different σ > 0 has its own exceedance measure.
These are not equivalent because they are not proportional. For fixed index ν, the double gamma scale family is also sparse with rate o(σ^p) for every p > 0, so the scale family belongs to the zero-activity class.

Example 4: The Cauchy family C(σ) with probable error σ > 0 and density

P_σ(dx) = σ dx / (π(σ² + x²))

is sparse with inverse-square exceedance dx/(√(2π) x²) and rate ρ = σ√(2/π). The scale families

C(σ),   (δ_0 + C(2σ))/2,   0.75 N(0, σ²) + 0.25 C(4σ)

have the same rate parameter and exceedance measure. Similar remarks apply to a large number of families that have been proposed as prior distributions in the literature, including the scale family generated by various Gaussian mixtures such as the horseshoe distribution with density log(1 + 1/x²)/(2π).

Example 5a: Consider the distribution with density

p(x) = (1 − e^{−x²/2}) / (x² √(2π)),

and let p_σ(x) = σ^{−1}p(x/σ) be the density of the re-scaled distribution. Then the family {P_σ} is sparse with rate parameter ρ = σ and inverse-square exceedance.

Example 5b: For d > 1/2, let w(x) = x^{2d}/(1 + x²)^d, and let

p_σ(x) = (Γ(d)/(Γ(d − 1/2)√π)) σ w(x/σ)/x².

This scale family is sparse with the same rate function and inverse-square exceedance measure as the previous two. The weight function has no role in the limit provided that the integral is finite, w(x) ∼ x² for small x, and w(x) → 1 as x → ±∞.

Example 6: The Dirac-Gaussian mixture P_ν = (1 − ν)δ_0 + νN(0, 1/ν) converges to a point mass, as does (1 − ν)δ_0 + νN(0, ν), but neither mixture has a sparse limit according to the definition. The Dirac-Cauchy mixture (1 − ν)δ_0 + νC(ν) has a sparse inverse-square limit with rate parameter ρ = ν², but (1 − ν)δ_0 + νC(1/ν) does not.

3. Zeta function and zeta measure

3.1. Definitions

We assume henceforth that every sparse family is symmetric, and the exceedance measure is expressed in unit form, so that ∫(1 − e^{−x²/2})H(dx) = 1.
To each exceedance measure H there corresponds a zeta function

ζ(t) = ∫_{ℝ\{0}} (cosh(tu) − 1) e^{−u²/2} H(du),   (5)

which is positive and finite, symmetric and convex, satisfying ζ(0) = 0. By construction, the zeta function is the cumulant function of the infinitely divisible distribution with down-weighted Lévy measure e^{−u²/2}H(du). The zeta function is analytic at the origin, so this Lévy process has finite moments of all orders. Numerical values are shown in Table 1 for three inverse-power measures.

The zeta measure is the integrand in (5):

ζ(du; θ) = (cosh(θu) − 1) e^{−u²/2} H(du),   (6)

which is a weighted linear combination of symmetric measures |u|^{2r} e^{−u²/2} H(du) with coefficients θ^{2r}/(2r)! for r ≥ 1, and finite total mass ζ(θ). The zeta measure occurs as one of two components of the conditional distribution in §5.3.

Tail inflation factor

The zeta function is an integral transformation much like a Laplace transform, i.e., formally H is a measure on the observation space and ζ is a function on the dual space. However, if φ(x) is the standard normal density, the product ψ(x) = φ(x)ζ(x) is also a probability density with characteristic function

∫ e^{itx} ψ(x) dx = ∫_{ℝ\{0}} e^{−u²/2} H(du) ∫_ℝ φ(x) ((e^{x(u+it)} + e^{−x(u−it)})/2 − e^{itx}) dx
                 = e^{−t²/2} ∫_{ℝ\{0}} (cos(tu) − e^{−u²/2}) H(du)
                 = e^{−t²/2} − e^{−t²/2} ∫_{ℝ\{0}} (1 − cos(tu)) H(du).   (7)

Provided that H is a unit exceedance measure, the value at t = 0 is one. Section 3.4 shows that ψ is the tail-inflation component of the marginal distribution of the observations. The left panel of Figure 1 shows the density function for the inverse-power exceedance measures, all of which have similar bimodal distributions differing chiefly in modal height and tail behaviour. There is a certain qualitative similarity with a pair of distributions depicted in Figure 1 of Johnson and Rossell (2010), and recommended there as priors for testing a Gaussian mean.
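As a numerical sanity check on the transform (5) and on Table 1, the Python sketch below evaluates ζ_d(t) by midpoint quadrature. It assumes only that the inverse-power exceedance of (1) has density proportional to |u|^{−d−1}; the unit constant is fixed numerically by the condition ∫(1 − e^{−u²/2})H(du) = 1 rather than taken from a formula, since (1) is stated in an earlier section.

```python
import math

def invpower_zeta(d, t, lo=1e-7, hi=10.0, n=120000):
    """Midpoint-rule version of the zeta transform (5) for an inverse-power
    exceedance with density c_d/|u|^(d+1), 0 < d < 2.  The constant c_d is
    fixed by the unit condition int (1 - exp(-u^2/2)) H(du) = 1; the slowly
    decaying |u|^(-d-1) tail of that integrand beyond `hi` is added
    analytically.  Returns (c_d, zeta_d(t))."""
    h = (hi - lo) / n
    us = [lo + (k + 0.5) * h for k in range(n)]
    # everything is symmetric, so integrate over u > 0 and double
    norm = 2.0 * h * sum((1.0 - math.exp(-u * u / 2)) / u ** (d + 1) for u in us)
    norm += 2.0 * hi ** (-d) / d        # analytic tail of the unit integral
    c = 1.0 / norm
    z = 2.0 * c * h * sum((math.cosh(t * u) - 1.0) * math.exp(-u * u / 2)
                          / u ** (d + 1) for u in us)
    return c, z
```

For d = 1 the computed unit constant agrees with the inverse-square form dx/(√(2π)x²) of Example 4, and ζ₁(2) and ζ₁(3) agree with the Table 1 entries 3.1 and 17.2 to the table's rounding.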
For typical exceedance measures having regularly-varying tails, the characteristic function of ψ is not analytic at the origin, in which case the distribution does not have finite moments. For example, if H is the inverse-power exceedance (1) for some 0 < d < 2, the characteristic function (7) reduces to

∫_ℝ e^{itx} ψ(x) dx = e^{−t²/2} (1 − |t|^d/K_d),   (8)

where K_d = 2^{d/2} Γ(1/2 + d/2)/√π. By contrast, if H(du) = K e^{−α|u|} du/|u| has exponential tails, the integral in (7) is

∫_{ℝ\{0}} (1 − cos(tu)) H(du) = K log(1 + t²/α²),

which is analytic at the origin. A derivation is sketched near the end of §4.2.

Sparsity integrals

To see how the zeta function arises in sparsity calculations, consider first the integral of the weighted distribution

∫ e^{−x²/2} P_ν(dx) = 1 − ∫ (1 − e^{−x²/2}) P_ν(dx)
                    = 1 − ρ ∫ (1 − e^{−x²/2}) H(dx) + o(ρ)
                    = 1 − ρ + o(ρ),

where ρ ≡ ρ(ν) is the rate function. Second, for any sparse family, the Laplace transform is

∫_ℝ e^{tx} e^{−x²/2} P_ν(dx) = 1 − ρ + ∫_ℝ (e^{tx} − 1) e^{−x²/2} P_ν(dx)
                             = 1 − ρ + ∫_ℝ (cosh(tx) − 1) e^{−x²/2} P_ν(dx)
                             = 1 − ρ + ρ ∫_{ℝ\{0}} (cosh(tx) − 1) e^{−x²/2} H(dx) + o(ρ)
                             = 1 − ρ + ρζ(t) + o(ρ).

The normalized family (1 − ρ)^{−1} e^{−x²/2} P_ν is sparse with rate ρ′ = ρ/(1 − ρ), exceedance measure e^{−x²/2}H(dx), and Laplace transform 1 + ρ′ζ′(t) + o(ρ), where ζ′ is the zeta function of the new exceedance measure.

Signal plus noise convolution

Suppose that the observation Y is a sum of two independent unobserved random variables Y = µ + ε, where the signal µ ∼ P_ν is sparse, and ε ∼ N(0, 1) is a standard Gaussian variable. Then the joint density of (Y, µ) at (y, u) is φ(y − u) p_ν(u), while the marginal density of the observation is a Gaussian-ψ mixture:

m_ν(y) = ∫_ℝ φ(y − u) P_ν(du) = φ(y) ∫_ℝ e^{yu − u²/2} P_ν(du)
       = φ(y) (1 − ρ + ρζ(y) + o(ρ))
       = (1 − ρ)φ(y) + ρψ(y) + o(ρ).   (9)

For every sparse family, the sparsity parameter satisfies 1 − ρ = m_ν(0)/φ(0), which provides a recipe for consistent estimation.
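That recipe can be sketched in a few lines of Python. The signal model (Cauchy scale family of Example 4, for which ρ = σ√(2/π)), the sample size, the bin width, and the seed are all arbitrary choices of this sketch, not values from the source; the estimate of m_ν(0) is a simple central-bin count.

```python
import math, random

random.seed(7)
n = 200000
sigma = 0.1                                    # Cauchy scale; Example 4 rate
rho_true = sigma * math.sqrt(2.0 / math.pi)    # about 0.0798

# signal-plus-noise sample: Y = mu + eps, mu ~ Cauchy(sigma), eps ~ N(0, 1)
ys = [sigma * math.tan(math.pi * (random.random() - 0.5)) + random.gauss(0.0, 1.0)
      for _ in range(n)]

# marginal density of Y at the origin, estimated from a narrow central bin
delta = 0.1
m0 = sum(1 for y in ys if abs(y) < delta) / (n * 2.0 * delta)

# 1 - rho = m(0)/phi(0), so:
rho_hat = 1.0 - m0 * math.sqrt(2.0 * math.pi)
```

The estimate 1 − m̂(0)/φ(0) is consistent only up to the o(ρ) error in (9) and the sampling error of the central bin, so it recovers ρ ≈ 0.08 roughly rather than exactly.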
Alternatively, if H, ζ or ψ are given, the mixture (9) can be fitted directly by maximum likelihood: see §7.

Comment 4: The Gaussian-ψ mixture is a consequence of sparsity alone. It is compatible with a sparse atom-and-slab mixture (1 − ν)δ₀ + νF for signals, but the sparsity rate is strictly less than the slab weight: ρ/ν = ∫(1 − e^{−u²/2})F(du) < 1. The identity ψ(0) = 0 implies that ψ cannot be the response distribution for any signal subset, so the sparsity rate in the marginal mixture (9) is not interpretable as the prior probability of a non-null signal in the standard two-group model (Johnstone and Silverman, 2004; Efron, 2008, 2010). Although (ρ, H) determines the marginal density to first order, Example 4 in §2.5 shows that (ρ, H) does not determine the null fraction 0 ≤ P_ν(µ = 0) < 1 − ρ, which may be zero.

Two inequalities

For fixed t, the zeta function is linear in H, so the maximum necessarily occurs on the boundary at an atomic measure H({±u}) = 1/(1 − e^{−u²/2}) for some u ≠ 0, or, if the maximum does not exist, the supremum occurs in the limit u → 0. For t² ≤ 3, the limit point prevails, and the supremum is

ζ(t) ≤ lim_{u→0} (cosh(tu) − 1)/(e^{u²/2} − 1) = t²,

which is attained in the limit d → 2 for the inverse-power class. Accordingly, the leading Taylor coefficient ζ₂ = ∫ u² e^{−u²/2} H(du) satisfies ζ₂ < 2.

The second inequality is concerned with the behaviour of the zeta measure in a neighbourhood of the origin. If H has an atom at u ≠ 0, then ζ also has an atom at u. But AI(H) < 2 implies that there is no atom at the origin, i.e., for each θ,

lim_{ε→0} ζ((−ε, ε); θ) = (Kθ²/(2 − AI(H))) lim_{ε→0} ε^{2−AI(H)} = 0.   (10)

It follows that the density limit lim_{ε→0} ε^{−1} ζ((−ε, ε); θ) is zero for low-activity measures AI(H) < 1 and infinite for AI(H) > 1: see Fig. 3. For |u| ≤ |u′|, the series expansion with positive coefficients implies

(cosh(u) − 1)/u² ≤ (cosh(u′) − 1)/u′².
For every θ and ε > 0, it follows that

ζ((−ε, ε); θ) = ∫_{−ε}^{ε} ((cosh(θu) − 1)/(θ²u²)) θ²u² e^{−u²/2} H(du)
             ≤ ((cosh(θε) − 1)/ε²) ∫_{−ε}^{ε} u² e^{−u²/2} H(du)
             ≤ ζ₂ (cosh(θε) − 1)/ε².   (11)

For example, ζ₂ < 2 implies ζ((−1, 1); θ) < 2 cosh(θ) − 2.

Inverse power exceedances

Summary

This section summarizes, without proofs, the role of the zeta function (5) for sparse scale families whose unit exceedance densities for 0 < d < 2 are shown in (1). It has the following properties.

1. The zeta function is expressible as a power series

ζ(x) = (d(2 − d)/Γ(2 − d/2)) Σ_{r=1}^∞ 2^{r−2} Γ(r − d/2) x^{2r}/(2r)!,   (12)

which is symmetric with infinite radius of convergence.

2. The characteristic function of ψ is e^{−t²/2}(1 − |t|^d/K_d), where K_d is given in (8).

3. The scale family σ^{−1}ψ(y/σ) is sparse with rate parameter ρ = σ^d and the same exceedance density (1).

4. Let η ∼ ψ and ε ∼ φ be independent. To each pair of coefficients with a² + b² = 1 there corresponds a number 0 ≤ α ≤ 1 such that the linear combination aε + bη is distributed as the mixture (1 − α)φ + αψ.

5. The marginal distribution of Y is a mixture m_ν(y) = (1 − ρ)φ(y) + ρψ(y).

The statement in 5 is a little imprecise because the signal distribution is not mentioned. Modulo the scale factor √(1 + σ²) that occurs implicitly in 4, the statement is exact if the signal is distributed as σ^{−1}ψ(·/σ) for arbitrary σ, which determines the mixture weight ρ. It is also correct, modulo a similar scale factor, if the signal is distributed according to the mixture itself. More importantly, it is correct for every signal distribution in this inverse-power equivalence class, but with error o(ρ) in the approach to the sparse limit.

The convolution-mixture theorem

With ζ defined by its series expansion (12), the probability distribution with density ψ(x) = φ(x)ζ(x) has a remarkable property that makes it singularly well adapted for statistical applications related to sparse signals contaminated with additive Gaussian error.
Not only is the ψ-scale family sparse with inverse-power exceedance measure, but each Gaussian-ψ convolution is also expressible as a binary Gaussian-ψ mixture as follows.

Theorem 7 (Convolution-mixture). For arbitrary scalars a, b, let Y = aε + bη, where ε ∼ N(0, 1) and η ∼ ψ are independent random variables. Also, let Y′ be the mixture

Y′ = η√(a² + b²) with probability α = |b|^d/(a² + b²)^{d/2},
Y′ = ε√(a² + b²) otherwise.

Then Y and Y′ have the same distribution, denoted by CM_d(α, a² + b²).

The theorem states that every linear combination with norm σ = √(a² + b²) is equal in distribution to an equi-norm Gaussian-ψ mixture. An arbitrary binary mixture is not expressible as a convolution unless the two scale parameters are equal.

Proof. The result is a direct consequence of the characteristic function (8). One derivation proceeds from (7) by analytic continuation of the gamma integral:

∫_0^∞ u^{α−1} e^{−su} (2 − e^{−itu} − e^{itu}) du = Γ(α) (2/s^α − 1/(s + it)^α − 1/(s − it)^α),

which is convergent for s > 0 and α > −2. For −2 < α < 0, the limit as s → 0 is

∫_{ℝ\{0}} (1 − cos(tu)) du/|u|^{d+1} = (2 cos(dπ/2)Γ(2 − d)/(d(1 − d))) |t|^d,

where d = −α. The limit as d → 1 is π|t|. There is an extension to any exceedance measure that is an inverse-power mixture.

Identifiability

The convolution-mixture theorem asserts that

CM_d(α, σ₀²) ∗ N(0, σ₁²) = CM_d(ρ, σ₀² + σ₁²),   (13)

where ρ = ασ₀^d/(σ₀² + σ₁²)^{d/2}. Of the four parameters (σ₀², σ₁², α, d), only three are identifiable in the marginal distribution. In principle, lack of identifiability is a serious inferential obstacle because the conditional distribution of the signal given Y = y depends on all four parameters. The difficulty is resolved in this paper, as it is elsewhere in the literature, by fixing σ₁² = 1. Without such an assumption, there are severe limitations to what can be learned from the data about the signal.
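The convolution-mixture identity can be verified directly at the level of characteristic functions, using nothing beyond the closed form (8). The quick Python check below compares the two sides on a grid of t values for a few illustrative parameter choices.

```python
import math

def psi_cf(t, d):
    """Characteristic function of psi for the inverse-power class:
    exp(-t^2/2) * (1 - |t|^d / K_d), with K_d from (8)."""
    K = 2.0 ** (d / 2) * math.gamma(0.5 + d / 2) / math.sqrt(math.pi)
    return math.exp(-t * t / 2) * (1.0 - abs(t) ** d / K)

def conv_cf(t, a, b, d):
    # a*eps + b*eta: product of the N(0, a^2) and rescaled-psi characteristic functions
    return math.exp(-a * a * t * t / 2) * psi_cf(b * t, d)

def mix_cf(t, a, b, d):
    # equi-norm Gaussian-psi mixture with alpha = |b|^d / (a^2 + b^2)^(d/2)
    s2 = a * a + b * b
    alpha = abs(b) ** d / s2 ** (d / 2)
    s = math.sqrt(s2)
    return (1.0 - alpha) * math.exp(-s2 * t * t / 2) + alpha * psi_cf(s * t, d)
```

Algebraically, both sides reduce to e^{−(a²+b²)t²/2}(1 − |bt|^d/K_d), so the numerical agreement is exact up to rounding.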
Conditional distribution

Tweedie's formula

Provided that the error distribution is standard Gaussian, the argument used in §3.4 shows that the moment generating function of the conditional distribution of the signal given Y = y is

(φ(y)/m_ν(y)) ∫_ℝ e^{(y+t)u − u²/2} P_ν(du) = (1 − ρ + ρζ(y + t))/(1 − ρ + ρζ(y)),   (14)

depending, to first order in ρ, only on the zeta function of the exceedance measure. In particular, the rth conditional moment is ρζ^{(r)}(y)/(1 − ρ + ρζ(y)), which is the first-order asymptotic version of Tweedie's formula (Efron, 2011). For the inverse-square exceedance, the conditional mean is depicted in the middle panel of Figure 1 for a range of sparsity levels.

Mixture interpretation

Let G(du; 0) be the probability distribution whose moment generating function is 1 + ζ(t), i.e., (14) with y = 0 and ρ = 1/2, and let G(du; y) be the exponentially tilted distribution whose moment generating function is (1 + ζ(y + t))/(1 + ζ(y)), i.e., (14) with ρ = 1/2. Then, the two-component mixture

((1 − 2ρ)/(1 − ρ + ρζ(y))) δ₀(du) + ((ρ + ρζ(y))/(1 − ρ + ρζ(y))) G(du; y),   (15)

has moment generating function (14). Holding ρζ(y) = λ fixed as ν → 0, this heuristic argument suggests that the asymptotic conditional distribution of the signal is a specific two-component mixture in which the rth moment of G is ζ^{(r)}(y)/(1 + ζ(y)). The argument is non-rigorous because there is no limit distribution (ν → 0 for fixed λ > 0 implies |y| → ∞ and |µ| → ∞), but the moment-matching intuition is essentially correct. However, the Dirac atom in (15) could be replaced by N(0, ρ²) with no first-order effect on the generating function, so (14) does not imply that the conditional distribution has an atom at zero. For the inverse-power family, the Laplace integral approximation for large |y| is

log(1 + ζ(y)) = y²/2 − (d + 1) log|y| + const + O(|y|^{−1}),

implying that G is approximately Gaussian with mean m = y − (d + 1)/m and unit variance.
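Tweedie's first-order formula above can be evaluated directly from the series (12). In the Python sketch below, the parameter values are illustrative rather than from the source, and the series terms are built recursively so that Γ(r − d/2) and x^{2r} never have to be formed explicitly (which would overflow for large arguments).

```python
import math

def zeta_and_deriv(d, x, tol=1e-12, rmax=400):
    """Zeta function (12) of the unit inverse-power exceedance and its
    derivative, with terms built by the ratio recursion
    t_{r+1}/t_r = 2 (r - d/2) x^2 / ((2r+1)(2r+2))."""
    if x == 0.0:
        return 0.0, 0.0
    coef = d * (2 - d) / math.gamma(2 - d / 2)
    term = 0.5 * math.gamma(1 - d / 2) * x * x / 2.0   # r = 1 term
    z, dz, r = term, 2.0 * term / x, 1
    while r < rmax:
        term *= 2.0 * (r - d / 2) * x * x / ((2 * r + 1) * (2 * r + 2))
        r += 1
        z += term
        dz += 2.0 * r * term / x   # derivative of the x^{2r} term
        if term < tol * z:
            break
    return coef * z, coef * dz

def tweedie_mean(y, rho, d):
    # first-order Tweedie formula: E(mu | Y = y) = rho*zeta'(y) / (1 - rho + rho*zeta(y))
    z, dz = zeta_and_deriv(d, y)
    return rho * dz / (1.0 - rho + rho * z)
```

For ρ = 0.05 and the inverse-square class (d = 1), the posterior mean is strongly shrunk near the origin (roughly 0.06 at y = 1) while in the tail it approaches the Laplace-approximation value y − (d + 1)/y, which is the behaviour shown in the middle panel of Figure 1.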
Asymptotically, ν → 0 for fixed λ > 0 implies |y| ≈ √(2 log(λ/ρ)), so the two components in (15) are asymptotically well separated with negligible probability assigned to bounded intervals (a, b) for which a > 0. This behaviour is typical for exceedance measures whose log density is slowly varying at infinity, but it is not universal. Bounded and discrete exceedance measures exhibit very different behaviours.

Symmetrization

The conditional density of the signal given Y = y is proportional to the joint density, φ(y) e^{yu − u²/2} P_ν(du), and the symmetrized conditional distribution is proportional to

cosh(yu) e^{−u²/2} P_ν(du) = e^{−u²/2} P_ν(du) + (cosh(yu) − 1) e^{−u²/2} P_ν(du)
                           = e^{−u²/2} P_ν(du) + ρζ(du; y) + o(ρ).

The latter approximation is understood in the usual sense of integrals. The ratio of the conditional density at u to that at −u is e^{2yu}, so the conditional density may be recovered from the symmetrized version by multiplication by the bias function e^{yu}/cosh(yu). Without loss of generality, therefore, we focus on the sparse limit of the symmetrized distribution shown above, ignoring terms of order o(ρ).

To first order, the symmetrized conditional distribution is a mixture consisting of two components:

1. A central spike distribution e^{−u²/2} P_ν(du)/(1 − ρ) with weight proportional to 1 − ρ;
2. The zeta distribution ζ(du; y)/ζ(y) with weight proportional to ρζ(y).

The moment-generating function of the central spike is (1 − ρ + ρζ(t))/(1 − ρ), and the normalization constant for the mixture is the total weight 1 − ρ + ρζ(y). For y² ≤ 3, the inequality ζ(y) ≤ y² (see §3.5) implies that the net weight on the central spike is at least 1 − ρy². In other words, for typical ρ-values less than 5%, and y² ≤ 3, the central spike is the dominant feature of the conditional distribution. If AI(H) > 1, the zeta density has an integrable singularity at the origin.
Ordinarily, this spike is not visible because it overlaps the central spike and its relative weight is small. But there are exceptions in which the central spike has zero density at the origin. See Fig. 3 in §7 for one illustration.

Signal activity probability

The conditional distribution given Y = 0 is symmetric with sparse-limit distribution

P_ν(du | Y = 0) = (1 − ρ)^{−1} e^{−u²/2} P_ν(du) + o(ρ),

which is negligibly different from P_ν. For any symmetric event such as A_ε whose closure does not include zero, the first-order conditional probability given Y = y is the weighted linear combination

P_ν(|µ| > ε | y) = ((1 − ρ)P_ν(A_ε | 0) + ρζ(A_ε; y))/(1 − ρ + ρζ(y)) ≥ ρζ(A_ε; y)/(1 − ρ + ρζ(y)).

Since ζ(A_ε; y) = ζ(y) − ζ(Ā_ε; y), and (10) implies that ζ(Ā_ε; y) tends to zero for low thresholds, the asymptotic low-threshold activity bound for fixed y is

P_ν(|µ| > ε | y) ≥ (ρζ(y) + o(ρζ(y)))/(1 − ρ + ρζ(y)).   (16)

The argument for (16) or its complement as an asymptotic approximation with negligible asymptotic error is more delicate than that for the inequality. First, the approximation requires P_ν(A_ε | 0) → 0 as ν → 0, which implies a lower bound of order ρ on the activity threshold ε. Second, for y → ∞ at a rate such that ρζ(y) is bounded below, the approximation also requires ζ(Ā_ε; y)/ζ(y) → 0. Ordinarily, if H has unbounded support, log ζ(y) increases super-linearly, i.e., lim_{y→∞} y^{−1} log ζ(y) = ∞, in which case the condition that ρζ(y) be bounded below implies e^{αy}/ζ(y) → 0 for every α. Consequently, for every fixed threshold ε > 0, the inequality (11) implies

ζ(Ā_ε; y)/ζ(y) ≤ 2(cosh(yε) − 1)/(ε²ζ(y)).

Super-linearity implies that the ratio tends to zero as ν → 0. The zero-order conditional non-exceedance probability is then

P_ν(|µ| ≤ ε | y) = ((1 − ρ)P_ν(Ā_ε | 0) + ρζ(Ā_ε; y))/(1 − ρ + ρζ(y)) + o(ρ)
                = (1 − ρ − o(1))/(1 − ρ + ρζ(y)),

which is independent of ε.
The non-exceedance probability serves as an upper bound for the local false-positive rate:

P_ν(µ = 0 | y) ≤ P_ν(|µ| ≤ ε | y) = (1 − ρ − o(1))/(1 − ρ + ρζ(y)).   (17)

This derivation rests on super-linearity of log ζ(y), which is satisfied by all inverse-power measures and all of the typical examples discussed in §2.5. Super-linearity fails if H has bounded support, and, in that case, (17) is true only for thresholds ε such that H(A_ε) > 0.

Tail average activity

Multiplication of (17) by the marginal density, and integration over y ≥ t, gives the tail-average ε-inactivity probability:

m_ν(y) P_ν(|µ| ≤ ε | Y = y) = φ(y) + o(1),
P_ν(|µ| ≤ ε | Y > t) = (1 − Φ(t) + o(1))/m_ν(Y > t).   (18)

For bounded ρζ(t) and every fixed threshold ε > 0, this ratio of tail integrals is a reinterpretation of the Benjamini-Hochberg procedure, which determines the data threshold t corresponding to any specified tail-average inactivity rate. The B-H procedure controls the tail-average false-positive rate P_ν(µ = 0 | Y > t) in the sense that the threshold t satisfying (18) serves as an upper bound. It does not approximate the tail-average false-positive rate or the local false-positive rate, both of which depend crucially on the null atom P_ν(µ = 0), which could be zero.

Hyperactivity

The Student t scale family

Student's t₃ scale family of signal distributions satisfies

lim_{ν→0} ν^{−3} P_ν(dx) = lim_{ν→0} ν^{−3} 2ν³ dx/(π(ν² + x²)²) = 2 dx/(π|x|⁴) = H(dx),

for x ≠ 0, so the limit measure exists with rate ρ_ν = ν³. Unfortunately this limit is not a Lévy measure, so definition (2) is not satisfied, and the approximations developed in the preceding sections do not apply. For example, if w(x) = min(x², 1) or 1 − e^{−x²}, the integral P_ν(w) behaves as ν²w″(0)/2, i.e., P_ν(w) = O(ρ^{2/3}), not O(ρ). The t₃-scale family is an instance of a first-order hyperactive model for which the exceedance measure exists and x² min(x², 1) is H-integrable.
First-order hyperactivity typically implies that the exceedance density near the origin is O(|x|^{−d−1}) for some 2 < d < 4. The next section provides a sketch of the modifications needed to accommodate such behaviour.

Hyperactivity integrals

Let H(dx) be a first-order hyperactive exceedance measure, i.e., x²H(dx) is a non-zero symmetric Lévy measure. Among the positive multiples of H, the natural reference point satisfies

∫_{ℝ\{0}} (1 − e^{−x²/2}(1 + x²/2)) H(dx) = 1.   (19)

For the t₃ model, the unit inverse quartic exceedance density is 3√(2/π)/|x|⁴. The first-order asymptotic theory for hyperactive sparse models is determined by the exceedance measure plus two rate parameters γ_ν, ρ_ν as follows:

γ_ν = (1/2) ∫_ℝ x² e^{−x²/2} P_ν(dx);
lim_{ν→0} ρ_ν^{−1} ∫_ℝ x² w(x) P_ν(dx) = ∫_{ℝ\{0}} x² w(x) H(dx)

for bounded continuous Lévy-integrable functions w ∈ W. The rate parameters for the t₃-scale model are γ_ν = ν²/2 and ρ_ν = √(2/π) ν³/3, both tending to zero as ν → 0, but not at the same rate. To first order in sparsity,

∫_ℝ (1 + x²/2) e^{−x²/2} P_ν(dx) = 1 − ∫_ℝ (1 − e^{−x²/2}(1 + x²/2)) P_ν(dx)
                                 = 1 − ρ ∫_{ℝ\{0}} (1 − e^{−x²/2}(1 + x²/2)) H(dx)
                                 = 1 − ρ,

implying that ∫_ℝ e^{−x²/2} P_ν(dx) = 1 − ρ − γ. To ensure integrability at the origin, the definition of the zeta function is modified to

ζ(t) = ∫_{ℝ\{0}} (cosh(tx) − 1 − t²x²/2) e^{−x²/2} H(dx),

implying that ζ(t) = O(t⁴) near the origin. Provided that H satisfies (19), the product ψ(x) = φ(x)ζ(x) is a probability density function. Its characteristic function is

∫_ℝ e^{itx} ψ(x) dx = e^{−t²/2} + e^{−t²/2} ∫_{ℝ\{0}} (cos(tx) − 1 + t²x²/2) H(dx),

which simplifies to e^{−t²/2}(1 + |t|³√(π/2)) for the inverse quartic. The marginal distribution of Y for a first-order hyperactive sparse model is a three-component mixture of the density functions φ(y), y²φ(y) and ψ(y):

m_ν(y) = (1 − γ − ρ)φ(y) + γy²φ(y) + ρψ(y) + o(ρ).

Note that the basis distributions are fixed, while the coefficients γ_ν, ρ_ν are sparsity-dependent.
The chief consequence of signal hyperactivity is that the non-Gaussian perturbations are not of equal order in sparsity: asymptotically, γ_ν > ρ_ν. If it is convenient, the first two components may be combined so that m_ν = (1 − ρ)N(0, 1 + 2γ) + ρψ + o(ρ), at the cost of a small increase in the variance of the Gaussian component. All of the results in §§3 and 4 may be extended to hyperactive models with ρζ(y) replaced by γy² + ρζ(y). Tweedie's formula for the conditional mean of the signal involves both rate parameters:

E(µ | Y = y) = (2γy + ρζ′(y))/(1 − γ − ρ + γy² + ρζ(y)).

Illustration

The left panel of Figure 2 shows a histogram of the absolute values of 5000 independent responses generated by Efron's (2011) version of the sparse signal plus Gaussian noise model Y_i = µ_i + ε_i, where the εs are independent N(0, 1) random variables, and the signals are µ_i = ±log((i − 1/2)/500) for i ≤ 500, and µ_i = 0 otherwise. The absolute non-zero signals are approximately exponentially distributed, and the mixture fraction is 10%. But a substantial fraction of the signals are small, and consequently undetectable in the presence of additive standard Gaussian noise. Example 2 in §2.5 implies that the effective mixture fraction is

ρ = 0.1 × ∫ ½(1 − e^{−x²/2}) e^{−|x|} dx ≈ 0.0344.

The developments in this paper suggest two ways to proceed, both using the inverse-power family of exceedance measures for illustration. The first is to estimate the parameter (ρ, d) by maximizing the asymptotic log likelihood

l(ρ, d; y) = Σ_i log(1 − ρ + ρζ_d(y_i)).

Maximization with no constraints on ρ gives d̂ = 1.49, ρ̂ = 0.056, and l(ρ̂, d̂) = 123.32 relative to the value at ρ = 0. The solid line in Fig. 2b shows the fitted conditional activity probability ρ̂ζ_{d̂}(y)/(1 − ρ̂ + ρ̂ζ_{d̂}(y)) as a function of y. The preferred option is to include a free scale parameter, σ² = 1 + σ₀², and to estimate subject to the condition ρ ≤ (σ₀/σ)^d as implied by the convolution-mixture (13).
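The unconstrained fit can be sketched with a crude grid search in Python. The alternating-sign reading of the "±" in the signal definition, the parameter grid, and the seed are assumptions of this sketch, not choices made in the source; ζ_d is the series (12), with terms built recursively.

```python
import math, random

def zeta_series(d, x, rmax=400):
    # series (12) for the unit inverse-power zeta function, recursive terms
    if x == 0.0:
        return 0.0
    coef = d * (2 - d) / math.gamma(2 - d / 2)
    term = 0.5 * math.gamma(1 - d / 2) * x * x / 2.0
    z = term
    for r in range(1, rmax):
        term *= 2.0 * (r - d / 2) * x * x / ((2 * r + 1) * (2 * r + 2))
        z += term
        if term < 1e-12 * z:
            break
    return coef * z

# Efron-style data: 500 signals +/- log((i-1/2)/500), 4500 nulls, N(0,1) noise
random.seed(1)
n = 5000
mu = [0.0] * n
for i in range(1, 501):
    mu[i - 1] = (-1) ** i * math.log((i - 0.5) / 500.0)
y = [m + random.gauss(0.0, 1.0) for m in mu]

def loglik(rho, d):
    # asymptotic log likelihood l(rho, d; y) = sum_i log(1 - rho + rho*zeta_d(y_i))
    return sum(math.log(1.0 - rho + rho * zeta_series(d, abs(v))) for v in y)

best = max(((loglik(r, d), r, d)
            for r in (0.01, 0.02, 0.03, 0.05, 0.08, 0.12)
            for d in (0.5, 1.0, 1.5)), key=lambda t: t[0])
improvement = best[0] - loglik(0.0, 1.0)   # relative to rho = 0
```

With a grid this coarse the fit lands near, not at, the continuous optimum (ρ̂ = 0.056, d̂ = 1.49) reported in the text, but the log-likelihood improvement over ρ = 0 is of a comparable magnitude.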
The log likelihood function for the CM_d(ρ, σ²) model is

l(ρ, d, σ; y) = −Σ_i y_i²/(2σ²) − n log σ + Σ_i log(1 − ρ + ρζ_d(y_i/σ)).

Constrained maximization gives d̂ = 1.48, ρ̂ = 0.051, σ̂₀ = 0.135, on the boundary at ρ = (σ₀/σ)^d, for a maximum of 122.95 relative to ρ = 0. For the marginal distribution of absolute values, the table below compares five quantiles of the Laplace-Gaussian mixture with the corresponding quantiles of the fitted CM_d distribution:

        97%    98%    99%    99.5%   99.75%
L-G     2.39   2.62   3.04   3.56    4.20
CM_d    2.40   2.61   3.01   3.61    5.00

The match is reasonably satisfactory at least up to the 99.5 percentile. Only at the most extreme quantiles does the difference between the exponential tail of the L-G mixture and the inverse-power tail of the CM_d mixture become apparent. This discrepancy could be viewed as a deficiency of the class of inverse-power measures, but we are more inclined to view it as a deficiency of the Laplacian model for signals. The dashed line in Fig. 2b shows the fitted conditional activity probability as a function of y. The difference between the two activity curves is small, and is due partly to the re-scaling (σ̂ = 1.01) that occurs in the CM_d fit, and partly to the small difference in fitted rates. For example, the fitted conditional activity probabilities given Y = 3 are 46.3% and 42.9% respectively.

For this example, we know that the signals were effectively generated using the short-tailed atom-and-slab Laplace model, which is associated with the two-parameter Lévy measure H_{λ,τ} in (4) with τ = 1 and e^{−λ} = 0.9. Using the associated zeta function, the asymptotic log likelihood achieves a maximum of 135.3 at τ̂ = 1.00, ρ̂ = 0.043. In this setting, ρ is the Lévy convolution parameter, and the likelihood function is essentially constant in λ over the range 0 ≤ λ ≤ 0.5. For λ → 0, the marginal density (9) covers both atomic and non-atomic spike-and-slab Laplace models such as (1 − ν)e^{−|x|/ν}/(2ν) + ντe^{−τ|x|}/2, and the log likelihood is the same for all models in this equivalence class. The Laplace-activity curve ρζ(y)/(1 − ρ + ρζ(y)) shown in Fig. 2b also applies in the non-atomic setting, with the threshold-exceedance interpretation (17).

In the CM_d model, AI(Ĥ) = d̂ > 1 implies that the fitted conditional density given Y = y has a |u|^{1−d̂}-singularity at the origin, which is clearly visible for y = 4 in Fig. 3a. Figure 3b shows the additive decomposition of the conditional density in which the central double spike has net weight 12%, and the zeta component has weight 88%. The zeta measure is decomposed further as a two-part mixture along the lines of §6. The intermediate spike has density u²e^{−u²/2}H(du)/ζ₂ and weight proportional to ρζ₂y²/2, which is asymptotically negligible compared with ρζ(y), but not numerically negligible for ρ = 0.051. The remaining major component is unimodal with density

(ζ(du; y) − y²u²e^{−u²/2}H(du)/2) / (ζ(y) − ζ₂y²/2),

and weight proportional to ρζ(y) − ρζ₂y²/2. The latter can be approximated with reasonable accuracy by a Gaussian distribution.

The asymptotic theory in this paper tells us that, if ρ is small, every model in the inverse-power class, such as the sparse Student t, must produce essentially the same fit, with similar estimates for (ρ, d). In all cases, the conditional distribution given Y = y has a two-part decomposition as described in §5.3. The asymptotic theory says little about the appearance of the central spike other than its total mass and the fact that it is concentrated near the origin. The central spike is bimodal for the CM_d model but unimodal for the sparse Student t and most atom-free spike-and-slab mixtures. In neither case could the conditional distribution be said to have an atom at zero. But the asymptotic theory also tells us that the zeta component depends only on the exceedance measure, so the zeta measure, and its two-part decomposition in Figure 3, are the same for all models in the same sparsity class. To that extent at least, only the exceedance measure matters. There are indications that a second-order analysis might be capable of offering a more detailed description of the behaviour of the conditional distribution in the neighbourhood of the origin, but this paper stops at first order.

Acknowledgements

We are grateful to Jiyi Liu for pointing out that the convolution-mixture property in section 4.2 implies a certain form for the characteristic function, and ultimately for supplying a proof that the characteristic function of ψ has this form.

Figure 1: Left panel: tail inflation densities ψ(x) for the inverse-power exceedance measures. Middle panel: Bayes estimate of the signal using the inverse-square exceedance measure. Right panel: conditional activity probability (16) as a function of y for a range of sparsity values 4^{−k}, 0 ≤ k ≤ 6.

Figure 2: Fitted marginal density and fitted signal activity probability as a function of y.

Figure 3: Fitted conditional distribution showing the singularity at the origin in the CM_d model. The lower panel shows the decomposition as a three-part mixture.

Table 1: Zeta function ζ_d(x) for three inverse-power exceedance measures.

d \ x   2.0   2.2   2.4   2.6   2.8   3.0   3.2   3.4    3.6    3.8    4.0    4.2    4.4
0.5     1.9   2.7   3.9   5.8   8.8  13.9  22.9  39.4   70.9  133.7  264.3  547.6  1188.4
1.0     3.1   4.2   5.8   8.1  11.6  17.2  26.5  42.7   72.5  129.6  244.2  485.0  1013.9
1.5     3.8   4.8   6.2   8.1  10.7  14.4  20.2  29.5   45.5   74.4  129.8  241.6   478.7

References

Barndorff-Nielsen, O.E. and Hubalek, F. (2008). Probability measures, Lévy measures and analyticity in time. Bernoulli 14, 764-790.

Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Roy. Statist. Soc. B 57, 289-300.

Carvalho, C.M., Polson, N.G. and Scott, J.G. (2010). The horseshoe estimator for sparse signals. Biometrika 97, 465-480.

Davison, A.C. and Smith, R.L. (1990). Models for exceedances over high thresholds. J. Roy. Statist. Soc. B 52, 393-442.

Efron, B. (2008). Microarrays, empirical Bayes and the two-groups model. Statist. Sci. 23, 1-22.

Efron, B. (2009). Empirical Bayes estimates for large-scale prediction problems. Journal of the American Statistical Association 104, 1015-1028.

Efron, B. (2010). Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing and Prediction. Cambridge University Press.

Efron, B. (2011). Tweedie's formula and selection bias. Journal of the American Statistical Association 106, 1602-1614.

George, E.I. and McCulloch, R.E. (1993). Variable selection via Gibbs sampling. J. Amer. Statist. Assoc. 88, 881-889.

Griffin, J.E. and Brown, P.E. (2013). Some priors for sparse regression modelling. Bayesian Analysis 8, 691-702.

Johnstone, I. and Silverman, B.W. (2004). Needles and straw in haystacks: empirical Bayes estimates of possibly sparse sequences. The Annals of Statistics 32, 1594-1649.

Johnson, V.E. and Rossell, D. (2010). On the use of non-local prior densities in Bayesian hypothesis tests. J. Roy. Statist. Soc. B 72, 143-170.

Rockova, V. and George, E.I. (2018). The spike and slab LASSO. Journal of the American Statistical Association (to appear). https://doi.org/10.1080/01621459.2016.1260469
A Theory for Backtrack-Downweighted Walks

Francesca Arrigo (Department of Mathematics and Statistics, University of Strathclyde, Glasgow G1 1XH, UK)
Desmond J. Higham (School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, Edinburgh EH9 3FD, UK)
Vanni Noferini (Department of Mathematics and Systems Analysis, Aalto University, P.O. Box 11100, FI-00076 Aalto, Finland)

Abstract. We develop a complete theory for the combinatorics of walk-counting on a directed graph in the case where each backtracking step is downweighted by a given factor. By deriving expressions for the associated generating functions, we also obtain linear systems for computing centrality measures in this setting. In particular, we show that backtrack-downweighted Katz-style network centrality can be computed at the same cost as standard Katz. Studying the limit of this centrality measure at its radius of convergence also leads to a new expression for backtrack-downweighted eigenvector centrality that generalizes previous work to the case where directed edges are present. The new theory allows us to combine advantages of standard and nonbacktracking cases, avoiding localization while accounting for tree-like structures. We illustrate the behaviour of the backtrack-downweighted centrality measure on both synthetic and real networks.

doi: 10.1007/s11182-017-0986-x is not applicable here; this paper carries doi 10.1137/20M1384725 and arXiv identifier 2012.02999.
eliminating backtracking forces the walker to explore the network more widely. More concretely, it has been shown to offer benefits in centrality measurement [2,3,9,14,22,29,38], community detection [19,21,28,30,32], network comparison and alignment [23,39] and in the study of related issues concerning optimal percolation [25,26] and the spread of epidemics [24]. Nonbacktracking also plays an important role in a number of seemingly unrelated scientific fields, including spectral graph theory [1,16], number theory [37], discrete mathematics [7,17,35], quantum chaos [33], random matrix theory [34], and computer science [31,40].
Hence, our results also have potential for impact outside network science.

Downweighting rather than Eliminating

Our work can be motivated by two issues:

(a) Treating all walks equally leads to centrality measures that suffer from localization, placing most emphasis on a small subset of nodes and struggling to distinguish between the remainder. This issue can be associated with an accumulation of backtracking walks [2,22].

(b) Completely eliminating backtracking walks, however, may overlook some features, notably the existence of trees [28].

For this reason, we will consider a more general regime where any backtracking step during a walk is downweighted by some factor, $0 \le \theta \le 1$. So the extremes of $\theta = 1$ and $\theta = 0$ correspond to standard and nonbacktracking walk counts, respectively. We are concerned with the combinatorics of such backtrack-downweighted walks; we seek a formula for the number of distinct walks of each length in a given graph. We then study the associated generating functions in order to produce Katz-style network centrality measures. Moreover, by considering how the resolvent-based generating function behaves at its radius of convergence, we also arrive at a corresponding eigenvector centrality.

We note that in [9] the related concept of alpha-nonbacktracking centrality was introduced. That work focused on the eigenvector setting, adapting the Hashimoto matrix construction, and applied only to undirected networks. Our work allows for directed networks and is built on a combinatoric walk-counting approach that generalizes Katz centrality. Further, by working at the node level rather than the edge level, we are able to derive more computationally efficient measures, based on linear systems with the same dimension and sparsity as the original network.

We finish this section by describing how the manuscript is organized, and pointing out how previous results are generalized. In section 2 we introduce the required background concepts.
Section 3 defines backtrack-downweighted walks, and in Theorem 3.1 we derive a general four-term recurrence that allows us to count them. This result extends the fully nonbacktracking version from [7]; see also [36]. Section 4 concerns the standard generating function. Corollary 4.2 gives a linear system for the associated Katz-style centrality measure, generalizing [2, equation (3.3)]. In section 5 we give results on a small graph that illustrate the two motivational issues (a) and (b) above. (This graph has an interesting spectral property that is described in Remark 5.1.) We also give comparative results on the star graph and on regular graphs. In section 6 we consider the limit as the Katz parameter approaches its upper value, and thereby, in Theorem 6.3, derive an eigenvector centrality measure. This result extends the measure in [9] to the case of directed graphs. Section 7 shows how the recurrence from Theorem 3.1 can be used to compute generating functions based on general power series. Here we find it necessary to work with block matrices of three times the dimension of the original adjacency matrix; see Theorem 7.1, which extends [3, Theorem 5.2]. In section 8 we give results on data from the London Underground train network, and we argue that the backtrack-downweighting parameter provides a useful means to mitigate localization while maintaining correlation with passenger usage. We finish with a brief discussion in section 9.

2 Background and Notation

We consider an unweighted, directed network with $n$ nodes. We let $A \in \mathbb{R}^{n \times n}$ denote the adjacency matrix, so $a_{ij} = 1$ if there is an edge from $i$ to $j$ and $a_{ij} = 0$ otherwise. There are no self-loops, so $a_{ii} = 0$. A walk of length $k$ from node $i$ to node $j$ is a sequence of nodes $i = i_0, i_1, i_2, \ldots, i_k = j$ such that each edge from $i_s$ to $i_{s+1}$ exists. Note that the nodes in the sequence are not required to be distinct; the walk may revisit nodes and edges.
It follows directly from the definition of matrix multiplication that $(A^k)_{ij}$ counts the number of distinct walks of length $k$ from $i$ to $j$ [20]. Katz [18] used this walk-counting expression as the basis for a centrality measure. Here, we compute a value $x_i > 0$ that quantifies the importance of node $i$, with a larger value indicating greater importance. Katz centrality uses

  x_i = 1 + \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha^k (A^k)_{ij}.    (1)

Here, (up to a convenient constant unit shift) the centrality of node $i$ is given by the total number of walks from node $i$ to every node, with a walk of length $k$ weighted by a factor $\alpha^k$, where $\alpha > 0$ is a real parameter. This series converges for $\alpha < 1/\rho(A)$, where $\rho(\cdot)$ denotes the spectral radius, and we may use the matrix-vector notation

  (I - \alpha A)\, x = \mathbf{1},    (2)

where $\mathbf{1} \in \mathbb{R}^n$ has all elements equal to one.

A nonbacktracking walk of length $k$ from node $i$ to node $j$ is a sequence of nodes $i = i_0, i_1, i_2, \ldots, i_k = j$ such that each edge from $i_s$ to $i_{s+1}$ exists and we never have $i_s = i_{s+2}$. In words, after leaving a node we must not return to it immediately. Now, let $p_k(A) \in \mathbb{R}^{n \times n}$ be such that $(p_k(A))_{ij}$ records the number of distinct nonbacktracking walks of length $k$ from $i$ to $j$. It is straightforward to show that $p_1(A) = A$ and $p_2(A) = A^2 - D$, where $D \in \mathbb{R}^{n \times n}$ is the diagonal matrix whose entries are $d_{ii} = (A^2)_{ii}$. Setting $p_0(A) = I$ for convenience, it was shown in [7] that the matrices $p_k(A)$ satisfy the following four-term recurrence

  p_{k+1}(A) = p_k(A) A + p_{k-1}(A)(I - D) - p_{k-2}(A)(A - S), \quad k = 2, 3, \ldots,    (3)

where $S \in \mathbb{R}^{n \times n}$ is such that $s_{ij} = a_{ij} a_{ji}$. See [36] for an alternative, linear algebraic, proof. We note that in the extreme case of a directed network for which no edges are reciprocated, that is $a_{ij} = 1 \Rightarrow a_{ji} = 0$, there is no opportunity for any walk to backtrack; all walks are nonbacktracking. In this case, we have $D = 0$ and $S = 0$, and $p_k(A)$ recovers the classical walk count $A^k$.
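Since the recurrence (3) uses only matrix products, it is easy to experiment with numerically. The following NumPy sketch (our own illustration; the function name and the toy graph are not from the paper) builds the matrices $p_k(A)$ and checks the degenerate case described above, where a digraph with no reciprocated edges satisfies $p_k(A) = A^k$.

```python
import numpy as np

def nbt_walk_counts(A, kmax):
    """Nonbacktracking walk-count matrices p_0, ..., p_kmax via the recurrence (3)."""
    n = A.shape[0]
    I = np.eye(n)
    D = np.diag(np.diag(A @ A))   # d_ii = (A^2)_ii
    S = A * A.T                   # s_ij = a_ij * a_ji (elementwise product)
    p = [I, A.copy(), A @ A - D]
    for _ in range(3, kmax + 1):
        p.append(p[-1] @ A + p[-2] @ (I - D) - p[-3] @ (A - S))
    return p[:kmax + 1]

# Toy digraph: 1->2, 2->3, 3->2 (so the pair 2<->3 is reciprocated), 3->4, 4->1.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)
p = nbt_walk_counts(A, 4)

# A directed cycle has no reciprocated edges, so there p_k(A) = A^k.
C = np.roll(np.eye(4), 1, axis=1)
pc = nbt_walk_counts(C, 4)
assert all(np.allclose(pc[k], np.linalg.matrix_power(C, k)) for k in range(5))
```

For the toy graph the reciprocated pair $2 \leftrightarrow 3$ is exactly where $p_k$ and $A^k$ start to differ, since only that pair contributes to $D$ and $S$.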
Motivated by (1), the nonbacktracking Katz analogue

  x_i = 1 + \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha^k (p_k(A))_{ij}    (4)

was introduced in [2]. Here, centrality is computed via weighted combinations of nonbacktracking (rather than classical) walks. By first deriving an expression for the generating function $\sum_{k=0}^{\infty} \alpha^k p_k(A)$, it was shown that (4) solves the linear system

  \left( I - \alpha A + \alpha^2 (D - I) + \alpha^3 (A - S) \right) x = (1 - \alpha^2)\, \mathbf{1}.    (5)

(Note that (5) is the $\mu = 1$ case of the system (17) derived below.) In general, the radius of convergence for the series in (4), and hence the range of valid $\alpha$ values in (4), is governed by the spectrum of a three-by-three block matrix; see [2, Theorem 5.1] and section 6. The nonbacktracking version of Katz centrality, (4), was first defined and analyzed in [14] for undirected networks. In this undirected case, as $\alpha$ approaches its upper limit the ranking induced by the centrality measure in (5) generically tends to that induced by the nonbacktracking eigenvector centrality measure introduced in [22]; see [14, Theorem 10.2]. Taking the corresponding limit in (5) defines a computable nonbacktracking eigenvector centrality for the more general case of a directed network [2, Theorem 6.1].

3 Backtrack-Downweighted Walk Counts

We now consider an intermediate regime where backtracking is not completely eliminated, but rather the count for each walk is downweighted by $\theta^m$, where $0 \le \theta \le 1$ is a parameter and $m$ is the number of backtracking steps incurred during the walk. Hence, $\theta = 1$ corresponds to the classical walk count $A^k$ and $\theta = 0$ corresponds to the nonbacktracking walk count from $p_k(A)$ in (3). We will let $q_k(A)$ denote the resulting backtrack-downweighted walk (BTDW) count matrix, where, for brevity, the dependence of $q_k(A)$ on $\theta$ is not explicitly indicated. More precisely, $(q_k(A))_{ij}$ counts the number of distinct walks of length $k$ from node $i$ to node $j$ with the following proviso: for each walk, $i = i_0, i_1, i_2, \ldots, i_k = j$, every occurrence of a backtracking step ($i_s = i_{s+2}$) incurs a downweighting by a factor $\theta$.
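Before developing the theory for general $\theta$, it is worth sanity checking the $\theta = 0$ machinery numerically. The sketch below (our own check, not one of the paper's experiments) compares the solution of the linear system (5) against a truncation of the series (4); for $\alpha$ well inside the radius of convergence the two agree to machine-level accuracy.

```python
import numpy as np

def nbt_walk_counts(A, kmax):
    """p_0, ..., p_kmax from the recurrence (3)."""
    n = A.shape[0]
    I = np.eye(n)
    D = np.diag(np.diag(A @ A))
    S = A * A.T
    p = [I, A.copy(), A @ A - D]
    for _ in range(3, kmax + 1):
        p.append(p[-1] @ A + p[-2] @ (I - D) - p[-3] @ (A - S))
    return p[:kmax + 1]

# Small digraph with one reciprocated edge pair (2 <-> 3).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)
n = A.shape[0]
I = np.eye(n)
D = np.diag(np.diag(A @ A))
S = A * A.T
alpha = 0.2
one = np.ones(n)

# Solve the linear system (5) ...
M = I - alpha * A + alpha**2 * (D - I) + alpha**3 * (A - S)
x_sys = np.linalg.solve(M, (1 - alpha**2) * one)

# ... and compare with a truncation of the series (4).
p = nbt_walk_counts(A, 60)
x_series = sum(alpha**k * pk @ one for k, pk in enumerate(p))
assert np.allclose(x_sys, x_series)
```

The truncation level 60 is overkill for this tiny graph; the point is only that the resolvent system and the walk series define the same vector.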
To illustrate backtrack-downweighting, consider the directed graph in Figure 1. Looking at walks of length four from node 1 to node 5, we have

- a walk 1 → 2 → 3 → 4 → 5 with no backtracking,
- a walk 1 → 2 → 3 → 2 → 5 with one instance of backtracking,
- a walk 1 → 2 → 5 → 2 → 5 with two instances of backtracking.

Hence, $(q_4(A))_{15} = 1 + \theta + \theta^2$. Continuing with these arguments, we find that

  q_4(A) =
  \begin{bmatrix}
  0 & 0 & \theta + \theta^2 & 0 & 1 + \theta + \theta^2 \\
  0 & 1 + 2\theta^2 + 2\theta^3 & 0 & \theta + \theta^2 & 0 \\
  0 & 0 & 1 + \theta + \theta^3 & 0 & 2\theta + 2\theta^2 \\
  0 & \theta + \theta^2 & 0 & 1 & 0 \\
  0 & 0 & 2\theta^2 & 0 & 1 + \theta + \theta^3
  \end{bmatrix}.

The following theorem generalizes the recurrence (3). For the statement of this theorem, and later results, we find it convenient to let $\mu = 1 - \theta$.

Theorem 3.1. Letting $q_0(A) = I$, we have $q_1(A) = A$, $q_2(A) = A^2 - \mu D$, and for $k \ge 2$

  q_{k+1}(A) = q_k(A) A + \mu\, q_{k-1}(A)(\mu I - D) - \mu^2\, q_{k-2}(A)(A - S),    (6)

where $\mu = 1 - \theta$.

Proof. The identity $q_1(A) = A$ follows immediately since no walk of length one can backtrack. Backtracking walks of length two are precisely closed walks of length two. Hence, the off-diagonal elements of $q_2(A)$ match those of $A^2$, and the diagonal elements of $q_2(A)$ correspond to those of $A^2$ scaled by $\theta$. This gives $q_2(A) = A^2 - D + \theta D$.

We proceed by induction. Assume that $q_s(A)$ correctly counts BTDWs of length $s$ for $s \le k$. We start with the expression

  q_k(A) A.    (7)

Postmultiplying by $A$ in this way corresponds to adding an edge to the end of the walk: the $(i,j)$ entry of $q_k(A)A$ deals with walks from $i$ to $j$ of length $k+1$ where any backtracking arising along the first $k$ edges has been correctly downweighted. We may associate $(q_k(A)A)_{ij}$ with the schematic

  i \to \ast \to \cdots \to \ast \to j.    (8)

Here $i$ and $j$ are the first and last nodes in the walk of length $k+1$, and the star symbols denote arbitrary nodes. In every walk under consideration, the first $k$ edges along that walk have been correctly downweighted (since they are accounted for by $q_k(A)$).
However, any backtracking caused by the final edge has not been correctly downweighted. The walks whose weights we must adjust have the form

  i \to \ast \to \cdots \to j \to \ell \to j,    (9)

where the first $k$ edges are downweighted as in $q_k(A)$ and the final edge closes a backtrack. To deal with such walks we note that the term $(q_{k-1}(A)D)_{ij}$ is associated with walks of the form

  i \to \ast \to \cdots \to j \to \ell \to j,    (10)

where now only the first $k-1$ edges (those of the leading walk into $j$) are downweighted, as in $q_{k-1}(A)$. These walks appeared in our original expression, $q_k(A)A$ in (7), without any downweighting of the final backtracking step. We may therefore remove the contribution from all such walks to this expression, to give $q_k(A)A - q_{k-1}(A)D$, and then add this contribution back in with the required extra downweighting factor $\theta$, leading to the expression

  q_k(A)A + (\theta - 1)\, q_{k-1}(A) D.    (11)

However, the walks represented by (9) and (10) are not treated identically, because we have not yet properly dealt with walks of the form

  i \to \ast \to \cdots \to \ell \to j \to \ell \to j.    (12)

To proceed, we note that $(q_{k-1}(A))_{ij}$ contains the correct BTDW count for walks of length $k-1$ from $i$ to $j$; that is,

  i \to \ast \to \cdots \to \ell \to j \quad \text{(of length } k-1\text{)}.    (13)

To see how such a walk, ending with the edge $\ell \to j$, may extend to a walk of the form (12), we consider two cases:

1. if the reciprocated edge from $j$ to $\ell$ exists, then there is exactly one such walk,
2. if the reciprocated edge from $j$ to $\ell$ does not exist, then there is no such walk.

The quantity $(q_{k-1}(A) - q_{k-2}(A)(A - S))_{ij}$ therefore accounts for these walks, but with a scaling that does not allow for the final two backtracking steps. Hence the correctly weighted contribution to $q_{k+1}(A)$ from walks of the form (12) is

  \theta^2 \left[ q_{k-1}(A) - q_{k-2}(A)(A - S) \right]_{ij}.

The factor $\theta^2$ is required because of the two final backtracking steps, which are not downweighted in $q_{k-1}(A) - q_{k-2}(A)(A - S)$.
In order to make (11) correct we therefore need to

- subtract the amount $(\theta - 1)[q_{k-1}(A) - q_{k-2}(A)(A - S)]$ in order to compensate for the fact that these walks were incorrectly scaled by a factor $1 - \theta$, rather than $\theta^2$, in $q_{k-1}(A)D$,
- subtract the amount $\theta [q_{k-1}(A) - q_{k-2}(A)(A - S)]$ in order to compensate for the fact that these walks were incorrectly scaled by a factor $\theta$ rather than $\theta^2$ in $q_k(A)A$,
- (having now removed the contribution from these walks) add in $\theta^2 [q_{k-1}(A) - q_{k-2}(A)(A - S)]$ so that the two final backtracking steps are accounted for properly.

This leads us to the relation

  q_{k+1}(A) = q_k(A)A + (\theta - 1)\, q_{k-1}(A)D + (1 - \theta)^2 \left[ q_{k-1}(A) - q_{k-2}(A)(A - S) \right],

giving the required result.

The following corollary gives an alternative version of the recurrence.

Corollary 3.2. For $k \ge 2$, the BTDW count matrices $q_k(A)$ also satisfy the recurrence

  q_{k+1}(A) = A\, q_k(A) + \mu (\mu I - D)\, q_{k-1}(A) - \mu^2 (A - S)\, q_{k-2}(A).    (14)

Proof. The recurrence (14) can be established by adapting the proof of Theorem 3.1. The main difference is to add the $(k+1)$st edge at the beginning of the walk, rather than the end; so, instead of (7), we begin with $A\, q_k(A)$. However, a higher-level argument can also be used. Reversing the direction of every edge in a graph, the BTDW count from $i$ to $j$ becomes the BTDW count from $j$ to $i$. Hence, $q_k(A^T) = (q_k(A))^T$. Writing the recurrence (6) for $A^T$ and taking the transpose, we then arrive at (14).

Remark 3.3. It is straightforward to confirm that $\theta = 0$ in (6), that is, elimination of all backtracking walks, leads to the relation (3). Also, $\theta = 1$, treating all walks equally, gives the classical count $A^k$ for walks of length $k$.

4 Resolvent

To study Katz-style centrality, we define the generating function

  \Psi(A) = \sum_{k=0}^{\infty} \alpha^k q_k(A).    (15)

Theorem 4.1.
If $\alpha \ge 0$ is within the radius of convergence of the generating function $\Psi(A)$ in (15), then

  \Psi(A) \left( I - \alpha A - \mu \alpha^2 (\mu I - D) + \mu^2 \alpha^3 (A - S) \right) = (1 - \mu^2 \alpha^2)\, I,

where we recall that $\mu = 1 - \theta$.

Proof. Multiplying by $\alpha^{k+1}$ in (6) and summing from $k = 2$ to $\infty$ gives

  \sum_{k=2}^{\infty} \alpha^{k+1} q_{k+1}(A) = \alpha \sum_{k=2}^{\infty} \alpha^k q_k(A) A + \mu \alpha^2 \sum_{k=2}^{\infty} \alpha^{k-1} q_{k-1}(A)(\mu I - D) - \mu^2 \alpha^3 \sum_{k=2}^{\infty} \alpha^{k-2} q_{k-2}(A)(A - S).

Hence,

  \Psi(A) - \alpha^2 q_2(A) - \alpha q_1(A) - q_0(A) = \alpha \left( \Psi(A) - \alpha q_1(A) - q_0(A) \right) A + \mu \alpha^2 \left( \Psi(A) - q_0(A) \right)(\mu I - D) - \mu^2 \alpha^3 \Psi(A)(A - S).

Using the expressions for $q_0(A)$, $q_1(A)$ and $q_2(A)$ in Theorem 3.1, the result follows after rearrangement and simplification.

Following (1) and (4), we define the BTDW Katz centrality for node $i$ as

  x_i = 1 + \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha^k (q_k(A))_{ij}.    (16)

We may then generalize (5) as follows.

Corollary 4.2. The BTDW Katz centrality measure (16) solves the linear system

  \left( I - \alpha A - \mu \alpha^2 (\mu I - D) + \mu^2 \alpha^3 (A - S) \right) x = (1 - \mu^2 \alpha^2)\, \mathbf{1}.    (17)

Proof. From (16) we have $\Psi(A)^{-1} x = \mathbf{1}$, and Theorem 4.1 gives the required expression for $\Psi(A)^{-1}$.

We note that the coefficient matrix in (17) has the same sparsity as the coefficient matrix in standard Katz (2). This shows that (a) downweighting of backtracking walks can be incorporated at no extra computational cost, and (b) it is therefore feasible to apply the measure to large, sparse networks. Also, by construction, the radius of convergence in (15) for a general $0 < \theta < 1$ must be bounded above and below by the corresponding radius of convergence when $\theta = 1$ and $\theta = 0$, respectively.

5 Squid, Star and Regular Graphs

We now analyze specific simple examples that shed light on how the BTDW Katz centrality measure in (17) can perform differently to the extreme cases of standard and fully nonbacktracking Katz. We first consider the undirected graph with 11 nodes shown in Figure 2. Due to its shape, we will refer to this as the squid graph. Here, node 1 has the highest degree, but it could be argued that nodes 6 and 8, of lower degree, possess better quality connections.
In particular, node 1 is connected to four leaves, and the subgraph consisting of nodes 1, 2, 3, 4 and 5 represents a tree hanging off the remainder of the graph. In the context of community detection, it has been argued that nonbacktracking measures will completely "ignore" the presence of such trees, with undesirable consequences [28]. Hence, in this centrality measurement setting it is of interest to see whether a similar effect arises. Intuitively, if we count only nonbacktracking walks, then the connections 1-2, 1-3, 1-4 and 1-5 possessed by node 1 should be less valuable than the connections enjoyed by the other nodes in the graph. Hence, the small $\theta$ regime should not be favourable for node 1.

Figure 3 shows the BTDW Katz centrality measure in (17), normalized to have $x_1 = 1$, for nodes 1 (solid), 6 (dashed) and 8 (dotted). Here, we used a fixed value of $\alpha = 0.99/\rho(A)$, which corresponds to $\alpha \approx 0.39$, and we show the centrality values as $\theta$ ranges between 0 and 1. (Of course, by symmetry node 10 will always have the same centrality value as node 8.) The plot reveals a crossover effect, where a sufficiently small value of $\theta$, that is, sufficiently stringent downweighting of backtracking walks, causes node 1 to be ranked below nodes 6 and 8. It is also of note that node 8, which is able to take part in a three-cycle, and is therefore involved in a relatively large number of short nonbacktracking walks, is rated more highly than node 6 for small $\theta$. However, as $\theta$ increases, and hence the downweighting of backtracking becomes less severe, node 8 is overtaken by node 6, which arguably occupies a more central position in the graph.

Remark 5.1. We mention in passing that the squid graph in Figure 2 has an unusual property: it possesses nodes that share exactly the same eigenvector centrality (so that they could be described as spectrally iso-central) whilst being topologically distinct.
More precisely, let $\lambda$ and $v$ denote the Perron-Frobenius eigenvalue and eigenvector associated with the adjacency matrix of this graph, respectively. Then straightforward algebra confirms that $\lambda = (1 + \sqrt{17})/2$ and, after normalizing so that $v_1 = 1$,

  v_1 = v_6 = v_8 = v_{10} = 1

are the largest components,

  v_7 = v_9 = v_{11} = \frac{\lambda - 1}{2}

are the next-largest components, and

  v_2 = v_3 = v_4 = v_5 = \frac{\lambda - 1}{4}

are the smallest components.

Next, we study the star graph with $n = m + 1$ nodes, $S_{1,m}$. We label the nodes so that node 1 is the hub, that is, the only node of degree $m$. The remaining nodes, which have degree one, are connected only to the hub. All edges are undirected. We note that the star graph is a widely used test case for centrality measures [6,12], and we also point out that the eigenvector version of full nonbacktracking centrality, [22], breaks down on a star graph [14]. The following theorem characterizes the matrices $q_k(A)$ that arise.

Theorem 5.2. Let $A$ be the adjacency matrix of the star graph with $n = m + 1$ nodes, $S_{1,m}$. Then $q_0(A) = I$, $q_1(A) = A$, $q_2(A) = A^2 - (1 - \theta)D$ and, more generally,

(i) for all $k = 0, 1, 2, \ldots$,

  q_{2k+1}(A) = \eta^k A, \quad \text{where } \eta = \theta(\theta + m - 1);

(ii) for all $k = 2, 3, \ldots$ and for $\theta \ne 0$,

  q_{2k}(A) = \eta^{k-1} A^2 \left[ I - \left( \frac{1-\theta}{\eta} B \right)^{k} \right] \left[ I - \frac{1-\theta}{\eta} B \right]^{-1} - (1 - \theta)^k D B^{k-1},

where $B = (1 - \theta)I - D$.

Proof. See Appendix A.

We then have the following expression for BTDW Katz centrality.

Corollary 5.3. Consider the star graph with $n = m + 1$ nodes, $S_{1,m}$, and let $\eta = \theta(\theta + m - 1)$. On this graph, the BTDW Katz centrality measure (16) exists for $\alpha^2 \eta < 1$ and has the form

  \left( \sum_{k=0}^{\infty} \alpha^k q_k(A)\, \mathbf{1} \right)_i = 1 + \frac{\alpha}{1 - \alpha^2 \eta} \times \begin{cases} m(1 + \alpha\theta) & \text{for } i = 1, \\ 1 + \alpha(\theta + m - 1) & \text{for } i = 2, \ldots, m + 1. \end{cases}

Proof. See Appendix A.

It can in fact be formally proved that the radius of convergence of the generating function of BTDW Katz centrality is indeed $\eta^{-1/2}$; this will be done in a manuscript in preparation. It follows from Corollary 5.3 that for large $n$

  \frac{x_1}{x_2} \approx \frac{1}{\alpha} + \theta.

So, for a fixed $\alpha$, the extent to which the hub node is prioritized over a leaf node decreases as we penalize backtracking.
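The recurrence of Theorem 3.1 gives a direct way to test these formulas numerically. The sketch below (our own code; the Figure 1 adjacency matrix is reconstructed from the walks listed in section 3) checks two of the $q_4(A)$ entries quoted earlier and the closed form of Corollary 5.3 on a small star.

```python
import numpy as np

def btdw_counts(A, theta, kmax):
    """q_0, ..., q_kmax via the recurrence (6) of Theorem 3.1."""
    n = A.shape[0]
    mu = 1.0 - theta
    I = np.eye(n)
    D = np.diag(np.diag(A @ A))
    S = A * A.T
    q = [I, A.copy(), A @ A - mu * D]
    for _ in range(3, kmax + 1):
        q.append(q[-1] @ A + mu * q[-2] @ (mu * I - D) - mu**2 * q[-3] @ (A - S))
    return q[:kmax + 1]

# Graph of Figure 1 (edges read off the walks listed in the text).
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 0]], dtype=float)
theta = 0.5
q4 = btdw_counts(A, theta, 4)[4]
assert np.isclose(q4[0, 4], 1 + theta + theta**2)            # entry (1,5)
assert np.isclose(q4[1, 1], 1 + 2*theta**2 + 2*theta**3)     # entry (2,2)

# Corollary 5.3 on the star S_{1,m}: truncated series vs closed form.
m = 4
Astar = np.zeros((m + 1, m + 1))
Astar[0, 1:] = 1
Astar[1:, 0] = 1
eta = theta * (theta + m - 1)
alpha = 0.9 / np.sqrt(eta)                                   # inside alpha^2 * eta < 1
x = sum(alpha**k * qk @ np.ones(m + 1)
        for k, qk in enumerate(btdw_counts(Astar, theta, 400)))
x1 = 1 + alpha / (1 - alpha**2 * eta) * m * (1 + alpha * theta)
x2 = 1 + alpha / (1 - alpha**2 * eta) * (1 + alpha * (theta + m - 1))
assert np.allclose(x[0], x1) and np.allclose(x[1:], x2)
```

The truncation level 400 is chosen so that, with $\alpha^2\eta = 0.81$, the series tail is negligible at double precision.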
Also of interest is the regime of large $n$ and fixed $\theta$, where the range of allowable $\alpha$ values in Corollary 5.3 is $(0, 1/\sqrt{\theta^2 + \theta m - \theta})$, i.e., approximately $(0, 1/\sqrt{\theta n})$; this shows how the $\alpha$ range may increase substantially as we downweight the backtracking.

At the other extreme, we may consider an undirected $d$-regular graph where $d$ is large. (Here, $x \propto \mathbf{1}$ solves (17) for all values of $\theta \in [0, 1]$, so all nodes are ranked equally, but it is informative to study the singularity of the coefficient matrix.) For a $d$-regular graph any walk of length $k$ may be extended to a walk of length $k + 1$ using $d - 1$ nonbacktracking edges and only one backtracking edge. So we would expect the allowable range of $\alpha$ values to increase much less dramatically as we decrease $\theta$. Indeed, since $\rho(A) = d$, as $\alpha$ increases from zero in (17) it may be shown that the system becomes singular when $\alpha = (d - \mu)^{-1} = (d - 1 + \theta)^{-1}$. Hence, in this case, the use of $\theta$ makes very little difference to the range of allowable $\alpha$ values.

In future work, it would be of interest to study how the choice of $\theta$ impacts the allowable range of $\alpha$ values for other types of graph and also for networks arising in practice. For the transport network in section 8, using the spectral bound in Theorem 6.3, we found computationally that the upper limit, $\alpha^*$, varied approximately linearly between $\alpha^* = 0.264$ at $\theta = 1$ and $\alpha^* = 0.415$ at $\theta = 0$.

6 Spectral Limit

In this section we briefly relate the Katz-style centrality measure to an eigenvalue version. We begin by noting that the recurrences (6) and (14) are closely connected with the block matrix $Z \in \mathbb{R}^{3n \times 3n}$ of the form

  Z = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ -\mu^2 (A - S) & \mu (\mu I - D) & A \end{bmatrix},    (18)

as made clear by the following theorem.

Theorem 6.1. The power series defining the generating function $\Psi(A)$ in (15) is convergent when $\alpha < 1/\rho(Z)$.

Proof. Suppose $\alpha \rho(Z) < 1$. Let $v_k \in \mathbb{R}^{3n \times n}$ be defined by

  v_k = \alpha^k \begin{bmatrix} q_{k-1}(A) \\ q_k(A) \\ q_{k+1}(A) \end{bmatrix}, \quad \text{for } k \ge 1.
Then we see from (14) and (18) that $v_{k+1} = \alpha Z v_k = (\alpha Z)^k v_1$. By Gelfand's formula [15, Corollary 5.6.14] it follows that, for any matrix norm $\|\cdot\|$ and for any $\epsilon > 0$, there exists $k_0 \in \mathbb{N}$ such that if $k \ge k_0$ then $\|(\alpha Z)^k\| < (\rho(\alpha Z) + \epsilon)^k$. Taking $\epsilon = (1 - \rho(\alpha Z))/2$ and specializing to any submultiplicative matrix norm, we conclude that there exists $k_0$ such that

  k \ge k_0 \ge 1 \;\Rightarrow\; \alpha^k \|q_k(A)\| \le \|v_{k+1}\| \le \left( \frac{1 + \rho(\alpha Z)}{2} \right)^k \|v_1\|.

The result follows because

  \left\| \sum_{k=0}^{\infty} \alpha^k q_k(A) \right\| \le \sum_{k=0}^{k_0 - 1} \alpha^k \|q_k(A)\| + \|v_1\| \sum_{k=k_0}^{\infty} \left( \frac{1 + \rho(\alpha Z)}{2} \right)^k,

and since $\rho(\alpha Z) < 1$ the right-hand side is convergent.

Remark 6.2. We note that although the bound $1/\rho(Z)$ in Theorem 6.1 will generally be sharp, there exist cases where this is not so. For example, in the star graph example of Corollary 5.3, it may be shown that the radius of convergence is always $1/\sqrt{\eta}$, but this quantity coincides with $1/\rho(Z)$ if and only if $\theta \ge 1/(m+1)$. This statement, along with further results that may be derived using matrix polynomial theory, will be proved in forthcoming work.

The next theorem characterizes the node ranking that arises generically when the radius of convergence is approached.

Theorem 6.3. Suppose that the radius of convergence is $1/\rho(Z)$, and hence the bound in Theorem 6.1 is sharp. Also, for fixed $\theta$, suppose that $I - \rho(Z)^{-1} Z$ has rank $n - 1$. Then, as $\alpha \to 1/\rho(Z)$ from below the ranking produced by (17) converges to that given by the last $n$ components of the unique eigenvector of $Z$ associated with the eigenvalue $\rho(Z)$.

Proof. The proof of [2, Theorem 6.1] can be extended directly to this case.

Remark 6.4. Theorem 6.3 shows that the last $n$ components of the dominant eigenvector of $Z$ in (18) form an appropriate backtrack-downweighted eigenvector centrality measure. Indeed, for $\theta = 0$ it reduces to the fully nonbacktracking measure given in [2] for general graphs and in [22] for undirected graphs. For general $\theta$, in the undirected case it reduces to the measure in [9, Theorem 3.8].
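The block structure of $Z$ is straightforward to set up numerically. The sketch below (our own, reusing the recurrence helper from earlier) checks the shift relation used in the proof of Theorem 6.1, namely that $Z$ advances a stack of three consecutive BTDW count matrices by one step, and then extracts the last $n$ components of the dominant eigenvector as in Theorem 6.3.

```python
import numpy as np

def btdw_counts(A, theta, kmax):
    """q_0, ..., q_kmax via the recurrence (6) of Theorem 3.1."""
    n = A.shape[0]
    mu = 1.0 - theta
    I = np.eye(n)
    D = np.diag(np.diag(A @ A))
    S = A * A.T
    q = [I, A.copy(), A @ A - mu * D]
    for _ in range(3, kmax + 1):
        q.append(q[-1] @ A + mu * q[-2] @ (mu * I - D) - mu**2 * q[-3] @ (A - S))
    return q[:kmax + 1]

def block_Z(A, theta):
    """The 3n-by-3n block matrix Z of equation (18)."""
    n = A.shape[0]
    mu = 1.0 - theta
    I, O = np.eye(n), np.zeros((n, n))
    D = np.diag(np.diag(A @ A))
    S = A * A.T
    return np.block([[O, I, O],
                     [O, O, I],
                     [-mu**2 * (A - S), mu * (mu * I - D), A]])

# Graph of Figure 1 again, with theta = 0.5.
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 0]], dtype=float)
theta = 0.5
Z = block_Z(A, theta)
q = btdw_counts(A, theta, 6)

# Z maps [q_{k-1}; q_k; q_{k+1}] to [q_k; q_{k+1}; q_{k+2}] by the recurrence (14).
for k in range(1, 5):
    lhs = Z @ np.vstack([q[k - 1], q[k], q[k + 1]])
    assert np.allclose(lhs, np.vstack([q[k], q[k + 1], q[k + 2]]))

# Eigenvector measure of Theorem 6.3: last n entries of the dominant eigenvector.
w, V = np.linalg.eig(Z)
v = V[:, np.argmax(np.abs(w))]
centrality = np.abs(v[2 * A.shape[0]:])
```

Note that at $\theta = 1$ (so $\mu = 0$) the matrix $Z$ is block triangular, and its spectrum reduces to that of $A$ together with zeros, consistent with standard Katz.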
7 Exponential and Other Generating Functions

Katz centrality in (1) and (2) is associated with the matrix resolvent function $(I - \alpha A)^{-1}$. Several authors have argued that other matrix functions, defined via different power series, may also be useful; see, for example, [4,5,11]. Hence, in this section, given coefficients $\{c_k\}_{k=0}^{\infty}$, where $c_k$ is the downweighting factor associated with a walk of length $k$, we are interested in characterizing and computing the corresponding quantity $\sum_{k=0}^{\infty} c_k q_k(A)$, and the action of this matrix on $\mathbf{1}$. We define

  f_0(y) = \sum_{k=0}^{\infty} c_k y^k,    (19)

and, more generally, for any integer $s \ge 0$,

  f_s(y) = \sum_{k=0}^{\infty} c_{s+k} y^k.    (20)

The following theorem shows how $\sum_{k=0}^{\infty} c_k q_k(A)$ can be expressed in terms of $f_0(Z)$ and $f_2(Z)$. Consequently the backtrack-downweighted version of any matrix-function based centrality measure can be computed whenever the underlying matrix function is computable. We note that the series defining $f_0(Z)$ converges whenever the series defining $f_2(Z)$ converges, and vice versa.

Theorem 7.1. When the series represented below converge, we have

  \sum_{k=0}^{\infty} c_k q_k(A) = \begin{bmatrix} I & 0 & 0 \end{bmatrix} f_0(Z) \begin{bmatrix} I \\ A \\ A^2 - \mu D \end{bmatrix}.    (21)

Moreover, this quantity may be computed as the $(3,3)$ block of $f_0(Z) - \mu^2 f_2(Z)$.

Proof. It follows directly from the recurrence (14) and the definition of $Z$ in (18) that, for all $k \ge 0$,

  Z^k \begin{bmatrix} I \\ A \\ A^2 - \mu D \end{bmatrix} = \begin{bmatrix} q_k(A) \\ q_{k+1}(A) \\ q_{k+2}(A) \end{bmatrix}.    (22)

The identity (21) is then immediate. We next define the block matrix $\tilde{Z} \in \mathbb{R}^{3n \times 3n}$ by

  \tilde{Z} = Z^2 - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \mu^2 I \end{bmatrix},

and note that

  \tilde{Z} \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix} = \begin{bmatrix} I \\ A \\ A^2 - \mu D \end{bmatrix}.    (23)

Using (22) and (23), we may write

  \sum_{k=0}^{\infty} c_k q_k(A) = c_0 I + c_1 A + \sum_{k=2}^{\infty} c_k q_k(A)
  = c_0 I + c_1 A + \begin{bmatrix} 0 & 0 & I \end{bmatrix} \left( \sum_{k=2}^{\infty} c_k Z^{k-2} \right) \tilde{Z} \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}
  = c_0 I + c_1 A + \begin{bmatrix} 0 & 0 & I \end{bmatrix} \left( \sum_{k=2}^{\infty} c_k Z^{k} \right) \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix} - \begin{bmatrix} 0 & 0 & I \end{bmatrix} \left( \sum_{k=2}^{\infty} c_k Z^{k-2} \right) \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \mu^2 I \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}
  = \begin{bmatrix} 0 & 0 & I \end{bmatrix} f_0(Z) \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix} - \mu^2 \begin{bmatrix} 0 & 0 & I \end{bmatrix} f_2(Z) \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix},

which completes the proof.

Remark 7.2. We note that in the exponential case, where $c_k = \alpha^k/(k!)$, we may recover (21) through a more direct route.
Regarding the power series $\sum_{k=0}^{\infty} \alpha^k q_k(A)/(k!)$ as a function of the parameter $\alpha$, say $F(\alpha)$, we have

  F(\alpha) = \sum_{k=0}^{\infty} \frac{\alpha^k q_k(A)}{k!}, \quad F'(\alpha) = \sum_{k=0}^{\infty} \frac{\alpha^k q_{k+1}(A)}{k!}, \quad F''(\alpha) = \sum_{k=0}^{\infty} \frac{\alpha^k q_{k+2}(A)}{k!}.

Then multiplying by $\alpha^k/(k!)$ in (14) and summing from $k = 0$ to $\infty$ gives the matrix-valued linear third order ODE

  F'''(\alpha) = A F''(\alpha) + \mu (\mu I - D) F'(\alpha) - \mu^2 (A - S) F(\alpha).

This may be written in block first order form

  \frac{d}{d\alpha} \begin{bmatrix} F(\alpha) \\ F'(\alpha) \\ F''(\alpha) \end{bmatrix} = Z \begin{bmatrix} F(\alpha) \\ F'(\alpha) \\ F''(\alpha) \end{bmatrix},

and hence, using $F(0) = q_0(A) = I$, $F'(0) = q_1(A) = A$ and $F''(0) = q_2(A) = A^2 - \mu D$, we have

  \begin{bmatrix} F(\alpha) \\ F'(\alpha) \\ F''(\alpha) \end{bmatrix} = \exp(\alpha Z) \begin{bmatrix} I \\ A \\ A^2 - \mu D \end{bmatrix},

which is consistent with (21).

8 Computational Example

In this section we illustrate the BTDW Katz centrality measure (17) on a real transport network from [10], with further data supplied in [8]. Here, $n = 271$ nodes represent stations in the London Underground system, with 312 (undirected) edges denoting rail links. We note that such a network has many "hanging trees" where underground lines head away from the city centre. Hence, we may expect a fully nonbacktracking centrality measure to penalize geographically outlying stations too severely. On the other hand, since there are some well-connected city centre stations that intersect many underground lines, we may expect traditional eigenvector centrality to display localization, where most of the centrality mass is placed on a small subset of the nodes.

To be concrete, we will quantify localization in terms of the inverse participation ratio, $S(n)$. Here, for a family of unit-norm vectors of increasing dimension, that is, $v \in \mathbb{R}^n$ with $\|v\|_2 = 1$, letting

  S(n) = \sum_{i=1}^{n} v_i^4,

we say the sequence is localized if $S(n) = O(1)$ as $n \to \infty$ [13]. In the tests below, where $n$ is fixed, we use the size of $S(n)$ as a measure of localization when comparing results, with a larger inverse participation ratio indicating greater localization.
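For concreteness, the inverse participation ratio is a one-line computation; the helper below (our own, with illustrative names) also shows its two extremes: a perfectly delocalized vector gives $S(n) = 1/n$, while a vector supported on a single node gives $S(n) = 1$.

```python
import numpy as np

def inverse_participation_ratio(v):
    """S(n) = sum_i v_i^4 for a vector v, normalized to unit 2-norm first."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)        # enforce ||v||_2 = 1
    return np.sum(v**4)

n = 271
uniform = np.ones(n) / np.sqrt(n)    # fully delocalized: S(n) = 1/n
spike = np.zeros(n)
spike[0] = 1.0                       # fully localized: S(n) = 1
```

Between these extremes, a larger value of $S(n)$ for a centrality vector indicates that more of the centrality mass sits on a few nodes.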
In Figure 4 we show how the inverse participation ratio for the BTDW Katz centrality measure (17) varies as a function of $\theta$ and $\alpha$. For this network, $\rho(A) \approx 3.78$, so standard Katz, that is $\theta = 1$, requires $\alpha < 1/\rho(A) =: \alpha^* \approx 0.264$. In the figure, we use $\alpha$ values of $0.05, 0.06, \ldots, 0.26$. We see from the figure that the measure dramatically increases in localization in the regime where backtracking is not suppressed ($\theta \approx 1$) and we are close to an eigenvector measure ($\alpha \approx \alpha^*$).

For Figure 5 we made use of independent data from [8] that records the annual passenger usage at each station. We took data for the most recent year, 2017. The idea now is to regard passenger usage (the total number exiting or entering a station) as an indication of importance, and to check whether this correlates with the centrality measure, which is computed only from information about the network structure. We used Kendall's tau coefficient to quantify the correlation between passenger usage and centrality. (Spearman's rank correlation coefficient, which is also widely used for assessing rankings, gave similar results.) We see that the most localized regime according to Figure 4 is also the regime with the best correlation coefficient. However, by varying the backtrack-downweighting parameter, $\theta$, it is possible to achieve a reduction in localization. Varying the Katz parameter, $\alpha$, instead, gives a more rapid loss of correlation.

9 Discussion

Our aim in this work was to define and study a framework that interpolates between traditional and nonbacktracking walk-counting combinatorics. From a network science perspective, this has the potential to combine the benefits of both worlds, notably avoiding localization while accounting for tree-like structures.
Our results also extend theoretical developments in nonbacktracking walks and zeta functions on graphs from a range of related fields [1,16,17,33,35,40]. We developed a general four term recurrence in Theorem 3.1 and an expression for the associated generating function in Theorem 4.1. In particular, we showed in Corollary 4.2 that the corresponding Katzlike centrality measure may be computed at the same cost as standard Katz. By considering the relevant limit, Theorem 6.3 then produced an eigenvector centrality measure that interpolates between the traditional and nonbacktracking extremes, There is considerable scope for further work on backtrack-downweighted walks. In Remark 6.2 we quoted a counterintuitive result on the radius of convergence for the associated generating function on a star graph. This raises questions such as how best to characterize the radius of convergence in general, and under what circumstances the lower bound in Theorem 6.1 is sharp. We are currently studying these issues with the tools of matrix polynomial theory. From the perspective of algorithm design, development of further insights and guidelines concerning the behaviour of backtrack-downweighted Katz in terms of the parameters α and θ is also of interest. and terminate in a node in V j , for i, j = 1, 2 and i = j. This immediately implies that, for all k = 0, 1, . . ., the matrices q 2k+1 (A) have the same sparsity pattern as the matrix A: q 2k+1 (A) = 0 β1 T β1 T for some β > 0. We now proceed by induction. When k = 0 we have that q 2k+1 (A) = A = η k A. Suppose now that the result holds for all ≤ k. We want to show that q 2k+3 (A) = η k+1 A = [θ(θ + m − 1)] k+1 A. We proceed entrywise by considering the walks of length 2k + 3 from node 1 to node i = 1; These are of two types: -First type: 1 · · · 1 i (2k+1) 1 i, of which there are η k θθ = η k θ 2 . The multiplication by θ 2 is used to account for the two backtracking steps introduced when moving from node i to node i in two steps. 
-Second type: 1 · · · 1 j Overall, by summing these two contributions, it follows that (q 2k+3 (A)) 1i = η k (θ 2 + θ(m − 1)) = η k+1 , which concludes this part of the proof. (ii) Let k ≥ 2, then from Theorem 3.1 q 2k (A) = q 2k−1 (A)A + (1 − θ)q 2(k−1) (A)B = η k−1 A 2 + (1 − θ)[q 2k−3 (A)A + (1 − θ)q 2(k−2) (A)B]B = η k−1 A 2 + (1 − θ)η k−2 A 2 B + (1 − θ) 2 q 2(k−2) (A)B 2 = A 2 k−3 j=0 η k−1−j ((1 − θ)B) j + (1 − θ) k−2 [q 3 (A)A + (1 − θ)q 2 (A)B]B k−2 = A 2 k−1 j=0 η k−1−j ((1 − θ)B) j − (1 − θ) k DB k−1 = η k−1 A 2 I − 1 − θ η B k I − 1 − θ η B −1 − (1 − θ) k DB k−1 , where we have used the fact that B = (1 − θ)I − D = − θ + m − 1 θI and hence the matrix I − 1−θ η B is invertible. Proof of Corollary 5.3 Corollary 5.3 now follows after inserting the expressions for q k (A) in Theorem 5.2 into (16) and simplifying, for all θ ∈ (0, 1]. When θ = 0, i.e., in the fully nonbacktracking setting, the result follows from a direct computation, since the length of the longest possible nonbacktracking walk in S 1,m is two: x 1 = 1 + αm, and x i = 1 + α + α 2 (m − 1) for all i = 2, . . . , m + 1. Figure 1 : 1Illustrative directed graph. Theorem 3 . 1 . 31Letting q 0 (A) = I, we have we have Ψ(A) −1 x = 1. Theorem 7.1 gives the required expression for Ψ(A) −1 . Figure 2 : 2The squid graph. Figure 3 : 3Normalized BTDW Katz centrality for nodes nodes 1 (solid), 6 (dashed) and 8 (dotted), as a function of θ, corresponding to the squid graph inFigure 3. Here α is fixed at 0.99/ρ(A). Theorem 5 . 2 . 52Let A be adjacency matrix of the star graph with n = m + 1 nodes, S 1,m . Then q 0 (A) = I, q 1 (A) = A, q 2 (A) = A 2 − (1 − θ)D and generally (i) for all k = 0, 1, 2, . . . q 2k+1 (A) = η k A where η = θ(θ + m − 1); and Theorem 7 . 1 . 71When the series represented below converge, we have ∞ k=0 c k q k (A) = 0 0 I f 0 (Z) Figure 4 : 4Inverse participation ratio for BTDW Katz centrality on the London Underground transport data. 
Figure 5: Kendall's tau coefficient for BTDW Katz centrality and annual passenger usage on the London Underground transport data.

For the second type of walk, $j \neq i$, and of these there are $\eta^k \theta (m-1)$: the multiplication by $m-1$ accounts for all the possible choices of $j$, while the multiplication by $\theta$ accounts for the added backtracking step.

Acknowledgements. The work of FA was supported by fellowship ECF-2018-453 from the Leverhulme Trust. The work of DJH was supported by EPSRC Programme Grant EP/P020720/1. The work of VN was supported by an Academy of Finland grant (Suomen Akatemian päätös 331240).

A Star Graph Results

Proof of Theorem 5.2. The expressions for $q_k(A)$ for $k = 0, 1, 2$ are independent of the structure of the network; they follow immediately from Theorem 3.1. The adjacency matrix has the form …

[1] O. Angel, J. Friedman, and S. Hoory, The non-backtracking spectrum of the universal cover of a graph, Transactions of the American Mathematical Society, 326 (2015), pp. 4287-4318.
[2] F. Arrigo, P. Grindrod, D. J. Higham, and V. Noferini, Non-backtracking walk centrality for directed networks, Journal of Complex Networks, 6 (2018), pp. 54-78.
[3] F. Arrigo, P. Grindrod, D. J. Higham, and V. Noferini, On the exponential generating function for non-backtracking walks, Linear Algebra and its Applications, 79 (2018), pp. 781-801.
[4] M. Benzi and C. Klymko, Total communicability as a centrality measure, Journal of Complex Networks, 1 (2013), pp. 124-149.
[5] M. Benzi and C. Klymko, On the limiting behavior of parameter-dependent network centrality measures, SIAM Journal on Matrix Analysis and Applications, 36 (2015), pp. 686-706.
[6] P. Boldi and S. Vigna, Axioms for centrality, Internet Mathematics, 10 (2014), pp. 222-262.
[7] R. Bowen and O. E. Lanford, Zeta functions of restrictions of the shift transformation, in Global Analysis: Proceedings of the Symposium in Pure Mathematics of the American Mathematical Society, University of California, Berkeley, 1968, S.-S. Chern and S. Smale, eds., American Mathematical Society, 1970, pp. 43-49.
[8] S. Cipolla, F. Durastante, and F. Tudisco, Nonlocal PageRank, ESAIM, to appear (2020).
[9] R. Criado, J. Flores, E. García, A. J. G. del Amo, Á. Pérez, and M. Romance, On the alpha-nonbacktracking centrality for complex networks: Existence and limit cases, Journal of Computational and Applied Mathematics, 350 (2019), pp. 35-45.
[10] M. De Domenico, A. Solé-Ribalta, S. Gómez, and A. Arenas, Navigability of interconnected networks under random failures, Proceedings of the National Academy of Sciences, 111 (2014), pp. 8351-8356.
[11] E. Estrada and D. J. Higham, Network properties revealed through matrix functions, SIAM Review, 52 (2010), pp. 696-714.
[12] L. C. Freeman, Centrality in social networks: conceptual clarification, Social Networks, 1 (1978), pp. 215-239.
[13] A. V. Goltsev, S. N. Dorogovtsev, J. G. Oliveira, and J. F. F. Mendes, Localization and spreading of diseases in complex networks, Phys. Rev. Lett., 109 (2012), p. 128702.
[14] P. Grindrod, D. J. Higham, and V. Noferini, The deformed graph Laplacian and its applications to network centrality analysis, SIAM Journal on Matrix Analysis and Applications, 39 (2018), pp. 310-341.
[15] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[16] M. D. Horton, Ihara zeta functions on digraphs, Linear Algebra and its Applications, 425 (2007), pp. 130-142.
[17] M. D. Horton, H. M. Stark, and A. A. Terras, What are zeta functions of graphs and what are they good for?, in Quantum Graphs and Their Applications, G. Berkolaiko, R. Carlson, S. A. Fulling, and P. Kuchment, eds., vol. 415 of Contemp. Math., 2006, pp. 173-190.
[18] L. Katz, A new index derived from sociometric data analysis, Psychometrika, 18 (1953), pp. 39-43.
[19] T. Kawamoto, Localized eigenvectors of the non-backtracking matrix, Journal of Statistical Mechanics: Theory and Experiment, 2016 (2016), p. 023404.
[20] P. A. Knight and E. Estrada, A First Course in Network Theory, Oxford University Press, Oxford, 2015.
[21] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang, Spectral redemption: clustering sparse networks, Proceedings of the National Academy of Sciences, 110 (2013), pp. 20935-20940.
[22] T. Martin, X. Zhang, and M. E. J. Newman, Localization and centrality in networks, Phys. Rev. E, 90 (2014), p. 052808.
[23] P.-A. G. Maugis, S. C. Olhede, and P. J. Wolfe, Topology reveals universal features for network comparison, arXiv:1705.05677 [stat.ME], 2017.
[24] S. Moore and T. Rogers, Predicting the speed of epidemics spreading in networks, Phys. Rev. Lett., 124 (2020), p. 068301.
[25] F. Morone and H. A. Makse, Influence maximization in complex networks through optimal percolation, Nature, 524 (2015), pp. 65-68.
[26] F. Morone, B. Min, L. Bo, R. Mari, and H. A. Makse, Collective influence algorithm to find influencers via optimal percolation in massively large social media, Scientific Reports, 6 (2016), p. 30062.
[27] M. E. J. Newman, Networks: An Introduction, Oxford University Press, Oxford, 2010.
[28] M. E. J. Newman, Spectral community detection in sparse networks, arXiv:1308.6494, 2013.
[29] R. Pastor-Satorras and C. Castellano, Distinct types of eigenvector localization in networks, Scientific Reports, 6 (2016), p. 18847.
[30] K. Polovnikov, A. Gorsky, S. Nechaev, S. V. Razin, and S. V. Ulianov, Non-backtracking walks reveal compartments in sparse chromatin interaction networks, Scientific Reports, 10 (2020).
[31] A. Saade, F. Krzakala, and L. Zdeborová, Spectral clustering of graphs with the Bethe Hessian, in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds., 2014, pp. 406-414.
[32] A. Singh and M. D. Humphries, Finding communities in sparse networks, Scientific Reports, 5 (2015).
[33] U. Smilansky, Quantum chaos on discrete graphs, Journal of Physics A: Mathematical and Theoretical, 40 (2007), p. F621.
[34] S. Sodin, Random matrices, non-backtracking walks, and the orthogonal polynomials, J. Math. Phys., 48 (2007), p. 123503.
[35] H. Stark and A. Terras, Zeta functions of finite graphs and coverings, Advances in Mathematics, 121 (1996), pp. 124-165.
[36] A. Tarfulea and R. Perlis, An Ihara formula for partially directed graphs, Linear Algebra and its Applications, 431 (2009), pp. 73-85.
[37] A. Terras, Harmonic Analysis on Symmetric Spaces - Euclidean Space, the Sphere, and the Poincaré Upper Half-Plane, Springer, New York, 2nd ed., 2013.
[38] L. Torres, K. S. Chan, H. Tong, and T. Eliassi-Rad, Node immunization with non-backtracking eigenvalues, tech. rep., arXiv:2002.12309, 2020.
[39] L. Torres, P. Suárez-Serrato, and T. Eliassi-Rad, Non-backtracking cycles: Length spectrum theory and graph mining applications, Journal of Applied Network Science, 4 (2019).
[40] Y. Watanabe and K. Fukumizu, Graph zeta function in the Bethe free energy and loopy belief propagation, in Advances in Neural Information Processing Systems 22, Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, eds., 2009, pp. 2017-2025.
[ "The structures of embedded clusters in the Perseus, Serpens and Ophiuchus molecular clouds", "The structures of embedded clusters in the Perseus, Serpens and Ophiuchus molecular clouds" ]
[ "S Schmeja \nCentro de Astrofísica da Universidade do Porto\nRua das Estrelas4150-762PortoPortugal\n\nZentrum für Astronomie\nInstitut für Theoretische Astrophysik\nUniversität Heidelberg\nAlbert-Ueberle-Str. 269120HeidelbergGermany\n", "2⋆ , M. SN Kumar \nCentro de Astrofísica da Universidade do Porto\nRua das Estrelas4150-762PortoPortugal\n", "B Ferreira \nDepartment of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFLUSA\n" ]
[ "Centro de Astrofísica da Universidade do Porto\nRua das Estrelas4150-762PortoPortugal", "Zentrum für Astronomie\nInstitut für Theoretische Astrophysik\nUniversität Heidelberg\nAlbert-Ueberle-Str. 269120HeidelbergGermany", "Centro de Astrofísica da Universidade do Porto\nRua das Estrelas4150-762PortoPortugal", "Department of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFLUSA" ]
[ "Mon. Not. R. Astron. Soc" ]
The young stellar population data of the Perseus, Ophiuchus and Serpens molecular clouds are obtained from the Spitzer c2d legacy survey in order to investigate the spatial structure of embedded clusters using the nearest neighbour and minimum spanning tree methods. We identify the embedded clusters in these clouds as density enhancements and analyse the clustering parameter Q with respect to source luminosity and evolutionary stage. This analysis shows that the older Class 2/3 objects are more centrally condensed than the younger Class 0/1 protostars, indicating that clusters evolve from an initial hierarchical configuration to a centrally condensed one. Only IC 348 and the Serpens core, the older clusters in the sample, show signs of mass segregation (indicated by the dependence of Q on the source magnitude), pointing to a significant effect of dynamical interactions after a few Myr. The structure of a cluster may also be linked to the turbulent energy in the natal cloud, as the most centrally condensed cluster is found in the cloud with the lowest Mach number and vice versa. In general these results agree well with theoretical scenarios of star cluster formation by gravoturbulent fragmentation.
10.1111/j.1365-2966.2008.13442.x
[ "https://arxiv.org/pdf/0805.2049v1.pdf" ]
11,789,149
0805.2049
f30236dd72083900884de031a11c0107a7329ff7
The structures of embedded clusters in the Perseus, Serpens and Ophiuchus molecular clouds

S. Schmeja (1,2)⋆, M. S. N. Kumar (1), B. Ferreira (3)

(1) Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal
(2) Zentrum für Astronomie, Institut für Theoretische Astrophysik, Universität Heidelberg, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
(3) Department of Astronomy, University of Florida, Gainesville, FL 32611-2055, USA

Mon. Not. R. Astron. Soc. 000, 2007 - May 2008 (MN LaTeX style file v2.2)

Key words: stars: formation - ISM: clouds - ISM: kinematics and dynamics - open clusters and associations: general - infrared: stars - methods: statistical

ABSTRACT
The young stellar population data of the Perseus, Ophiuchus and Serpens molecular clouds are obtained from the Spitzer c2d legacy survey in order to investigate the spatial structure of embedded clusters using the nearest neighbour and minimum spanning tree methods. We identify the embedded clusters in these clouds as density enhancements and analyse the clustering parameter Q with respect to source luminosity and evolutionary stage. This analysis shows that the older Class 2/3 objects are more centrally condensed than the younger Class 0/1 protostars, indicating that clusters evolve from an initial hierarchical configuration to a centrally condensed one. Only IC 348 and the Serpens core, the older clusters in the sample, show signs of mass segregation (indicated by the dependence of Q on the source magnitude), pointing to a significant effect of dynamical interactions after a few Myr. The structure of a cluster may also be linked to the turbulent energy in the natal cloud, as the most centrally condensed cluster is found in the cloud with the lowest Mach number and vice versa.
In general these results agree well with theoretical scenarios of star cluster formation by gravoturbulent fragmentation.

1 INTRODUCTION

It is believed that stars form from high-density regions in turbulent molecular clouds that become gravitationally unstable, fragment and collapse. That way, a star cluster is built up in a complex interplay between gravity and supersonic turbulence (Mac Low & Klessen 2004; Ballesteros-Paredes et al. 2007; McKee & Ostriker 2007). Therefore the structure of an embedded cluster, i.e. the spatial distribution of its members, may hold important clues about the formation mechanism and initial conditions. Several characteristics of star-forming clusters seem to be correlated with the turbulent flow velocity, e.g. the star formation efficiency (Klessen, Heitsch & Mac Low 2000; Clark & Bonnell 2004; Schmeja, Klessen & Froebrich 2005), mass accretion rates (Schmeja & Klessen 2004), or the core mass distribution (Ballesteros-Paredes et al. 2006) and the initial mass function (Padoan & Nordlund 2005). Giant molecular cloud (GMC) complexes usually contain multiple embedded clusters as well as more distributed, rather isolated young stellar objects (YSOs). The interstellar medium shows a hierarchical structure (sometimes described as fractal) from the largest GMC scales down to individual cores and clusters, which are sometimes hierarchical themselves. There is no obvious change in morphology at the cluster boundaries, so clusters can be seen as the bottom parts of the hierarchy, where stars have had the chance to mix (Efremov & Elmegreen 1998; Elmegreen et al. 2000; Elmegreen 2006). Since embedded stars are best visible in the infrared part of the spectrum, and since nearby GMCs can span several degrees on the sky, wide-field near-infrared (NIR) and mid-infrared (MIR) surveys are useful tools for identifying and mapping the distribution of young stars in GMCs.

⋆ E-mail: [email protected]
The advent of the Spitzer Space Telescope has in particular led to large advances in such studies. The Spitzer Cores to Disks (c2d) Legacy programme survey (Evans et al. 2003) aimed at mapping five large nearby star-forming regions, including Perseus, Serpens and Ophiuchus, using Spitzer's IRAC (3.6 to 8 µm) and MIPS (24 to 160 µm) instruments (Jørgensen et al. 2006; Harvey et al. 2006, 2007a; Rebull et al. 2007; Padgett et al. 2008).

The Perseus molecular cloud complex is an extended region of low-mass star formation containing the well-studied embedded clusters IC 348 and NGC 1333. The latter is considered to be significantly younger (< 1 Myr; Lada, Alves & Lada 1996; Wilking et al. 2004) than IC 348 (∼ 3 Myr; Luhman et al. 2003; Muench et al. 2007). The distance to the Perseus cloud is uncertain and may lie somewhere between 200 and 320 pc; presumably there is a distance gradient along the cloud (see e.g. the discussion in Enoch et al. 2006). Areas of 3.86 deg² and 10.6 deg² have been mapped by IRAC and MIPS, respectively (Jørgensen et al. 2006; Rebull et al. 2007). Jørgensen et al. (2006) identified a total number of 400 YSOs, thereof 158 in IC 348 and 98 in NGC 1333, which means that a significant fraction of young stars is found outside the main clusters (see also Hatchell et al. 2005). Muench et al. (2007), also based on Spitzer MIR data, report a known membership of IC 348 of 363 sources and estimate a total population of more than 400.

The Serpens molecular cloud is a very active star-forming region covering more than 10 deg² on the sky at a distance of about 260 pc (Straižys, Černis & Bartašiūtė 1996). Areas of 0.89 deg² and 1.5 deg² have been mapped by IRAC and MIPS, respectively (Harvey et al. 2006, 2007a). In this region, 235 YSOs have been identified by Harvey et al. (2007b). The age of the main cluster, called the Serpens cloud core, is estimated to be ∼ 2 Myr (Kaas et al. 2004).
The Ophiuchus (or ρ Ophiuchi) molecular cloud is one of the closest star-forming regions at a distance of about 135 pc (Mamajek 2008). Its main cluster, L1688, is the richest known nearby embedded cluster. It is very young, with an age of probably < 1 Myr (Greene & Meyer 1995; Luhman & Rieke 1999). 14.4 deg² have been observed with MIPS, leading to the identification of 323 YSO candidates (Padgett et al. 2008).

In this paper we analyse the structure and distribution of the young stellar population unveiled by the c2d programme in these nearby molecular clouds and compare the results to current theoretical scenarios of star formation. For this purpose, we retrieved the c2d point source catalogues of the Perseus, Serpens and Ophiuchus molecular clouds and applied two complementary statistical methods to identify embedded clusters and analyse their structures. The selection of the data set is described in Section 2, the statistical methods and their application to the data are explained in Section 3, and the results and discussion are presented in Sections 4 and 5.

2 DATA SELECTION

The third delivery of data from the c2d legacy project (Evans et al. 2005), which combines observations from the IRAC and MIPS cameras, was obtained from the c2d website. The combined photometric catalogues for the Perseus, Ophiuchus and Serpens molecular clouds were retrieved. These catalogues are products resulting from extensive analysis of the photometric observations in the four IRAC bands (central wavelengths 3.6, 4.5, 5.8 and 8.0 µm) and the three MIPS bands covering the 24 to 160 µm wavelength range. The sensitivity of the survey is expected to sample objects down to 0.1-0.5 M⊙ in all three clouds. The delivery catalogues provide source identifications and classifications based on a consistent method of analysis for all the targets covered by the legacy survey, as described by Evans et al. (2005).
The catalogues contain positions, photometry and associated quality flags, source type such as point/extended source, and source classification such as star/galaxy/YSO. These catalogues focus on identifying and classifying YSOs and distinguishing them from stars and galaxies. Extensive analysis of the point sources using colour-colour and colour-magnitude diagrams, spectral indices in the observed bands, and comparison with SWIRE (Spitzer Wide-area Infrared Extragalactic survey) data to remove galaxy and PAH emission contaminants is used for this purpose. In the present work we use these catalogues to retrieve the stellar and YSO population in the three molecular clouds based on the source classification flags. Although YSOs are classified based on their spectral energy distributions (SEDs) and MIR colours, a certain amount of confusion is always present when the complete SEDs are not available and only a part of the SED is sampled. The IRAC and MIPS observations cover the wavelength range where the YSOs lend themselves to an effective classification; however, confusion arising due to partial sampling of the SED cannot be removed. Further, an edge-on Class 2 source can imitate a Class 1 source etc. when classification is made using partial SEDs. Based on the nature of the point source SED, which is represented by 19 different flags in the c2d catalogues, we group the point sources into three categories, namely stars (foreground/background and cloud members), YSOs in the Class 0/1 phase and YSOs in the Class 2/3 phase. While the c2d catalogue provides a distinct classification for stars, the YSOs are represented by 15 different object types. We use the rule of thumb of grouping sources that are visible only at wavelengths longer than 8 µm into the Class 0/1 group and the remaining ones into the Class 2/3 group. This leads to the classification given in Table 1, where we relate the object types from Evans et al. (2005) and their characteristics to the common evolutionary classes.
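As a concrete illustration, the rule of thumb above can be written as a one-line test on the shortest wavelength at which a source is detected. The function and field layout below are hypothetical (they are not the actual c2d catalogue columns or flags), and the real c2d classification is considerably more elaborate.

```python
def classify_yso(detections):
    """Apply the rule of thumb from the text: a YSO seen only at
    wavelengths longer than 8 um goes into the Class 0/1 group,
    anything detected at 8 um or shortward into Class 2/3.

    `detections` maps wavelength in microns -> flux (None = no detection);
    this layout is illustrative, not the c2d catalogue format."""
    detected = [wl for wl, flux in detections.items() if flux is not None]
    if not detected:
        raise ValueError("source has no detections")
    return "Class 0/1" if min(detected) > 8.0 else "Class 2/3"

# A deeply embedded protostar seen only by MIPS:
print(classify_yso({3.6: None, 4.5: None, 8.0: None, 24.0: 0.5, 70.0: 1.2}))  # -> Class 0/1
# A disk source detected in all IRAC bands:
print(classify_yso({3.6: 1.0, 4.5: 0.9, 8.0: 0.7, 24.0: 0.4}))               # -> Class 2/3
```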
Our classification does not necessarily represent the 'true' nature of the objects, which would in some cases require data from wavelengths longer than provided by Spitzer and a more sophisticated SED modelling.

3 STATISTICAL METHODS

3.1 Nearest neighbour method

While clusters are readily identifiable as peaks in stellar density maps, the exact delimitation of a cluster is difficult and will always be somewhat arbitrary, since all the embedded clusters are surrounded by an extended population of YSOs distributed throughout the entire clouds. Several methods to detect clusters in a field have been developed. Usually they are based on stellar densities (derived e.g. from star counts or Nyquist binning) and consider as clusters all regions above a certain deviation (∼ 2-4σ) from the background level (e.g. Lada et al. 1991; Lada & Lada 1995; Kumar, Keto & Clerkin 2006; Froebrich, Scholz & Raftery 2007). The nearest neighbour (NN) method (von Hoerner 1963; Casertano & Hut 1985) estimates the local source density $\rho_j$ by measuring the distance from each object to its jth nearest neighbour, where the value of j is chosen depending on the sample size. There is a connection between the chosen j value and the sensitivity to those density fluctuations being mapped. A small j value increases the locality of the density measurements at the same time as increasing the sensitivity to random density fluctuations, whereas a large j value will reduce that sensitivity at the cost of losing some locality. Through the use of Monte Carlo simulations, Ferreira & Lada (in preparation) find that a value of j = 20 is adequate to detect clusters with 10 to 1500 members. The positions of the cluster centres are defined as the density-weighted enhancement centres (Casertano & Hut 1985)
$$\mathbf{x}_{\mathrm{d},j} = \frac{\sum_i \mathbf{x}_i\, \rho_j^i}{\sum_i \rho_j^i},$$
where $\mathbf{x}_i$ is the position vector of the ith cluster member and $\rho_j^i$ the jth NN density around this object. These centres do not necessarily correspond to the density peaks.
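A minimal sketch of the jth-NN density estimate and the density-weighted centre follows. The surface-density convention $\rho_j = (j-1)/(\pi r_j^2)$, with $r_j$ the distance to the jth nearest neighbour, follows Casertano & Hut (1985); treat the exact prefactor as an assumption, since only relative densities matter for mapping enhancements. The neighbour search is brute-force O(n²); a real application at c2d catalogue scale would use a k-d tree.

```python
import math

def knn_density(points, j):
    """Local surface density at each point from the distance r_j to its
    jth nearest neighbour: rho_j = (j - 1) / (pi * r_j**2)."""
    dens = []
    for i, (xi, yi) in enumerate(points):
        d = sorted(math.hypot(xi - x, yi - y)
                   for k, (x, y) in enumerate(points) if k != i)
        r_j = d[j - 1]                       # distance to the jth neighbour
        dens.append((j - 1) / (math.pi * r_j ** 2))
    return dens

def weighted_centre(points, dens):
    """Density-weighted enhancement centre: x_d = sum_i x_i rho_i / sum_i rho_i."""
    s = sum(dens)
    return (sum(x * r for (x, _), r in zip(points, dens)) / s,
            sum(y * r for (_, y), r in zip(points, dens)) / s)

# A tight clump near the origin plus one distant outlier: the weighted
# centre stays with the clump rather than the unweighted mean position.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
rho = knn_density(pts, j=3)
cx, cy = weighted_centre(pts, rho)
print(round(cx, 2), round(cy, 2))   # close to the clump centre (0.05, 0.05)
```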
3.2 Minimum spanning tree and Q

The second method makes use of a minimum spanning tree (MST), a construct from graph theory, which is defined as the unique set of straight lines ("edges") connecting a given set of points without closed loops, such that the sum of the edge lengths is a minimum (Kruskal 1956; Prim 1957). From the MST we derive the mean edge length $\bar{\ell}_{\mathrm{MST}}$. Cartwright & Whitworth (2004) introduced the parameter $Q = \bar{\ell}_{\mathrm{MST}}/\bar{s}$, which combines the normalized correlation length $\bar{s}$, i.e. the mean distance between all stars, and the normalized mean edge length $\bar{\ell}_{\mathrm{MST}}$. The Q parameter permits us to quantify the structure of a cluster and to distinguish between clusters with a central density concentration and hierarchical clusters with possible fractal substructure. Large Q values (Q > 0.8) describe centrally condensed clusters having a volume density $n(r) \propto r^{-\alpha}$, while small Q values (Q < 0.8) indicate clusters with fractal substructure. Q is correlated with the radial density exponent α for Q > 0.8 and anticorrelated with the fractal dimension D for Q < 0.8. The dimensionless measure Q is independent of the number of objects and of the cluster area. The method has been successfully applied to both observed clusters and results of numerical simulations (Cartwright & Whitworth 2004; Kumar & Schmeja 2007). For details of the method, in particular its implementation and the normalization used for this study, see …

3.3 Application to the data

The NN and MST statistical methods described above were applied to the c2d data of the Perseus, Serpens and Ophiuchus molecular clouds. We first used the NN analysis (described in detail by Ferreira et al., in prep.) on the stellar and YSO populations in the entire clouds to detect density enhancements and therefore embedded clusters. For this purpose, 20th NN density maps were generated for the full extent of the available c2d data.
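A self-contained sketch of the Q measure: following Cartwright & Whitworth (2004), the mean MST edge length is normalized by sqrt(N·A)/(N − 1) and the mean pairwise separation by the cluster radius R, with A = πR² and R taken here as the largest distance from the centroid. These normalization details, and the illustrative point sets, are assumptions of this sketch rather than the exact implementation used in the paper.

```python
import math, random

def mst_mean_edge(points):
    """Mean edge length of the Euclidean minimum spanning tree (Prim's
    algorithm, O(n^2))."""
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    in_tree = [False] * n
    best = [float("inf")] * n            # cheapest connection to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total / (n - 1)

def q_parameter(points):
    """Q = (normalized mean MST edge length) / (normalized mean separation);
    Q > 0.8 suggests central condensation, Q < 0.8 substructure."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    R = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    m_bar = mst_mean_edge(points) / (math.sqrt(n * math.pi * R ** 2) / (n - 1))
    seps = [math.hypot(a[0] - b[0], a[1] - b[1])
            for i, a in enumerate(points) for b in points[i + 1:]]
    s_bar = (sum(seps) / len(seps)) / R
    return m_bar / s_bar

random.seed(1)
blob = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]       # condensed
field = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]  # unstructured
qb, qf = q_parameter(blob), q_parameter(field)
print(round(qb, 2), round(qf, 2))   # the condensed blob yields the larger Q
```

Note that R cancels in the ratio, so Q is insensitive to the exact choice of cluster radius, which is one reason the measure is robust to the cluster-boundary definition discussed in the text.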
The 20th NN density is thought to be the optimum density to identify primary clusters and not pick up very loose groups (Ferreira & Lada, in preparation). We found that using only the point sources that were classified as YSOs to produce the NN maps yields the best density contrast and boundary identifications. We determined the average 20th NN density in the clouds away from the clusters and define as cluster members all YSOs whose density exceeds 3σ of the average value. The cluster boundaries are shown as thick lines in Fig. 1; the density thresholds $\rho_{\mathrm{thresh}}$ for the clusters are given in Table 2. All YSOs enclosed within the respective cluster-defining contour are treated as cluster members. For those we constructed the MST and determined Q. Note that the exact definition of the cluster boundaries is only relevant for the quantitative structure analysis using the Q parameter. Changing the cluster definition criteria may change the absolute Q values, but it does not affect the general trends and correlations discussed below.

4 STRUCTURE ANALYSIS

4.1 Cluster morphology

The YSOs in the three investigated molecular clouds are found in the embedded clusters and in a distributed population that extends through the entire cloud (Jørgensen et al. 2006; Harvey et al. 2007b). Figure 1 shows IRAC Channel 4 (8 µm) images of the four largest embedded clusters IC 348, NGC 1333, Serpens cloud core and L1688, overlaid with the 20th NN density contours. In the Perseus molecular cloud, apart from the embedded clusters NGC 1333 and IC 348, the region around Barnard 3 (B3) emerges as a moderately sized cluster, but no significant density enhancements are found associated with L1455 and L1448. Despite the large number of member stars, the Perseus clusters show relatively small peak densities compared to the other clouds. Likewise, the Perseus cloud shows the lowest "background" YSO density of all regions.
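The 3σ membership criterion above can be sketched as a simple sigma-clip on the 20th-NN densities. The reading of "exceeds 3σ of the average value" as mean + 3 standard deviations is an assumption of this sketch, and the density values below are made up for illustration.

```python
import math

def cluster_members(densities, background, nsigma=3.0):
    """Return the indices whose density exceeds the background mean by
    nsigma standard deviations, plus the threshold used."""
    mu = sum(background) / len(background)
    sigma = math.sqrt(sum((b - mu) ** 2 for b in background) / len(background))
    thresh = mu + nsigma * sigma
    return [i for i, d in enumerate(densities) if d > thresh], thresh

# Hypothetical 20th-NN densities: off-cluster background vs a mixed field.
background = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.0, 1.1, 0.9, 1.0]
densities = [1.0, 1.1, 25.0, 30.0, 0.9, 28.0]
members, thresh = cluster_members(densities, background)
print(members)   # -> [2, 3, 5]
```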
The density peak in IC 348 coincides with the B5 star HD 281159, the most massive star in this cluster, whereas the two B stars in NGC 1333 (BD +30 549 and SVS 3) are not associated with the densest region. The B0 star HD 278942 is found close to the centre of the B3 cluster; however, it may be located behind the main Perseus cloud complex (Ridge et al. 2006). In the Serpens cloud two major clusters were detected: the main cluster in this cloud, usually called the Serpens cloud core and referred to as Cluster A by Harvey et al. (2006), and a recently discovered cluster called Cluster B by Harvey et al. (2006) and Serpens/G3-G6 by Djupvik et al. (2006). The Serpens core has a very steep density gradient and a prominent peak with a density of 1045 pc^-2, the highest of all investigated clusters and twice the value of the second densest cluster. In the Ophiuchus cloud one highly structured, major embedded cluster was detected, roughly centred on L1688; several smaller density enhancements clearly trace four filamentary structures that seem to converge on this prominent cluster. These density enhancements are in good correlation with the dust continuum emission maps. Two of these density enhancements are picked up as small clusters by the NN method; we designate them Ophiuchus north and centre. The Ophiuchus north cluster is centred around a group of bright B stars (ρ Oph D, ρ Oph AB, ρ Oph C)². Furthermore we detect the known cluster associated with L1689S (Bontemps et al. 2001), which shows a single peak and no distinct substructure. The shapes of IC 348, NGC 1333, Serpens A and L1689S can be roughly approximated by ellipses, whereas the hierarchical L1688 cluster shows an irregular, nevertheless rather circular, perimeter. It is interesting to note that the elliptical clusters appear to have two peaks roughly coinciding with the foci of the ellipses. However, these peaks seem to consist of YSO populations of different evolutionary stages.
This is seen in NGC 1333, with the density peak in the south and two luminous, massive B stars, which illuminate the prominent reflection nebula, in the north. In Serpens we note a group of bright stars north of the density peak, marked by a dotted circle in Fig. 1. The second peak in IC 348, seen to the south-east, seems to correspond to a new star formation episode close to HH 211 (Tafalla, Kumar & Bachiller 2006). Table 2 lists all the detected embedded clusters along with their position, size, peak density, the Q parameter and the number of YSOs n*. The size corresponds to the length of the major axis of an ellipse fitted to the cluster areas. Its only purpose is to provide the reader with a rough estimate of the diameter of the cluster. The linear sizes are computed assuming distances of 315, 260 and 135 pc to the Perseus, Serpens and Ophiuchus cloud, respectively.

² These designations are not to be confused with the ones used for the subclusters discussed below and marked in Fig. 1.

4.2 Q values

Extensive analysis of the distribution of the MST edge lengths as a function of YSO class, and of the Q parameter as a function of YSO brightness, was carried out only for the four embedded clusters where the number statistics are significant and suitable for this kind of analysis. These four major embedded clusters are NGC 1333, IC 348, the Serpens core and L1688. It can be seen from Figure 1 that the Serpens core, NGC 1333 and IC 348 clusters are centrally concentrated, although at very different surface densities, whereas L1688 shows at least four peaks enclosed within the cluster-defining contour. This is reflected in the Q values listed in Table 2. L1688 shows the lowest Q value (0.72), indicating fractal substructure, whereas the other clusters show values of Q ≳ 0.8, denoting central condensation. All the small clusters have values of Q < 0.8.
In Table 3 we list the absolute and relative numbers of Class 0/1 and Class 2/3 objects and their Q values in the four large embedded clusters. Note that the classifications (see Sect. 2) do not necessarily correspond to the standard definitions. Assuming that the ratio of younger to older sources represents the relative ages of the embedded clusters, the data in Table 3 indicate that the L1688 cluster is the youngest of all, NGC 1333 and Serpens are slightly older, and IC 348 is the oldest of the four. This is not identical, but quite similar, to the age estimates based on more sophisticated methods mentioned in Sect. 1. In all the clusters but Serpens, the Q values are significantly higher for the Class 2/3 objects than for the Class 0/1 protostars, indicating a more centrally condensed configuration for the older sources. In the centrally condensed Serpens A cluster the two groups show the same Q values. The three major subclusters in L1688 were investigated further; these are (following the nomenclature of Bontemps et al. 2001) Oph A, Oph B and Oph EF (marked in Fig. 1). They are defined as having NN densities > 180 pc⁻²; their clustering properties are listed in Table 4. Oph A contains 41 Class 0/1 protostars and 26 Class 2/3 objects and has a value of Q = 0.74, in the hierarchical range. Oph B (18 Class 0/1, 34 Class 2/3 sources) and Oph EF (16 Class 0/1, 27 Class 2/3 sources) show higher values of Q = 0.79 and Q = 0.77, respectively. Oph A is associated with a larger amount of gas and has a significantly higher Class 0/1 to Class 2/3 ratio than the other two subclusters, making it most likely the youngest of the three; it also shows the lowest Q value of the subclusters. Again we see an evolutionary sequence from the youngest, hierarchical cluster to the more evolved clusters closer to central condensation.
When looking at Class 0/1 and Class 2/3 sources separately, we again find significantly higher Q values for the more evolved objects: in Oph A the value rises from Q = 0.74 for Class 0/1 to Q = 0.90 for Class 2/3, in Oph B from 0.77 to 0.81, and in Oph EF from 0.73 to 0.79.

Distribution of MST edge lengths

The length of an MST edge can be seen as the separation from one object to its nearest neighbour. Histograms of the MST edge lengths therefore indicate characteristic separations for the YSOs in each of the clusters. Figure 2 shows the normalized histograms of the MST edge lengths in the four main clusters in bins of 0.02 pc, using the assumed distances given above (Section 4.2). The histograms show a distinct peak at 0.02 pc (Serpens, L1688) and 0.04 pc (NGC 1333), respectively, and the distributions spread up to separations of 0.2 to 0.3 pc. In IC 348, however, two peaks appear at larger separations of 0.04 and 0.08 pc. Also shown in Fig. 2 are the same histograms for Class 0/1 and Class 2/3 sources separately. In the clusters NGC 1333, Serpens and L1688 we do not see a significant difference between the separations of Class 0/1 and Class 2/3 objects; objects of all evolutionary states are distributed more or less similarly in the cluster. In IC 348, however, it is only the Class 2/3 objects that roughly follow the distribution of the overall cluster members, whereas the Class 0/1 objects seem to be homogeneously distributed. This is because the central cluster in IC 348 is evolved and has blown out a cavity, with the current star formation occurring in a shell around the cluster (Tafalla et al. 2006; Muench et al. 2007). The characteristic edge lengths found from Fig. 2 do reflect the true separations and are not a result of poor spatial sampling, because these values are almost an order of magnitude higher than the tolerance used for matching IRAC and MIPS data (4 arcsec, corresponding to 0.003–0.006 pc).
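Histograms like those of Fig. 2 follow from extracting the MST edge lengths and binning them. A sketch with the 0.02 pc bin width used in the text; positions are assumed to be given in pc, and the function name is ours:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_histogram(xy, bin_width=0.02, max_sep=0.3):
    """Normalised histogram of MST edge lengths, i.e. of the separations
    between neighbouring cluster members along the tree."""
    mst = minimum_spanning_tree(squareform(pdist(xy))).toarray()
    edges = mst[mst > 0]                       # the n - 1 tree edges
    bins = np.arange(0.0, max_sep + bin_width, bin_width)
    counts, _ = np.histogram(edges, bins=bins)
    return bins, counts / counts.sum()         # bars sum to one
```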
We also investigated the MSTs and their edge lengths for the stars in the cluster field, but due to their large spatial density they would generally yield very short edges and a narrow peak of the distribution at very small projected separations.

Q versus magnitude

Assuming that the IR source magnitudes are representative of the source masses and evolutionary stages, we investigate the variation of Q as a function of magnitude. Q was calculated for sources brighter than a certain magnitude, in steps of 0.5 mag, as long as the sample contained at least 40 objects. This calculation was made for YSOs and stars separately. The resulting Q versus Channel 4 magnitude curves for YSOs and stars are shown in Figs 3a and b, respectively. (Similar plots were made using Channel 1, 2 and 3 magnitudes as well; they look similar in all bands.) The results for different clusters are shown using different line styles. Fig. 3a shows that the YSO curves stay flat or rise with magnitude for the clusters NGC 1333 and L1688, while they decrease with magnitude for IC 348 and, less markedly, for Serpens. This implies that in NGC 1333 and L1688 YSOs of all luminosities display a roughly similar configuration, whereas in Serpens and IC 348 the low-luminosity YSOs are less centrally concentrated than the more luminous YSOs. In contrast, the stellar population shows a declining Q versus magnitude curve for all the clusters, indicating that the fainter stars are relatively homogeneously distributed in space while the brighter stars are centrally concentrated. Since this curve is computed for a region encompassing only the dense cluster, the chance of picking up fainter background sources through the large column of extinction is low. For the same reason, stars that are cluster members outnumber foreground stars in the computed area. Therefore the observed effect is representative of the true variations among the cluster members.
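The magnitude-cut procedure just described can be sketched as follows. The block is self-contained, so the Cartwright & Whitworth Q recipe is repeated inline; as before, taking the cluster radius as the centroid-to-farthest-member distance is an assumed convention:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def q_parameter(xy):
    # Cartwright & Whitworth (2004): normalised mean MST edge length
    # divided by normalised mean pairwise separation
    n = len(xy)
    sep = pdist(xy)
    mst = minimum_spanning_tree(squareform(sep)).toarray()
    radius = np.linalg.norm(xy - xy.mean(axis=0), axis=1).max()
    m_bar = mst[mst > 0].mean() / (radius * np.sqrt(n * np.pi) / (n - 1))
    return m_bar / (sep.mean() / radius)

def q_vs_magnitude(xy, mag, step=0.5, min_n=40):
    """Q for all sources brighter than successive magnitude cuts, advancing
    in 0.5 mag steps while the subsample holds at least min_n objects."""
    cuts, qs = [], []
    for cut in np.arange(np.floor(mag.min()) + step, mag.max() + step, step):
        bright = mag <= cut
        if bright.sum() >= min_n:
            cuts.append(cut)
            qs.append(q_parameter(xy[bright]))
    return np.array(cuts), np.array(qs)
```

Plotting qs against cuts for YSOs and stars separately gives curves analogous to Figs 3a and b.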
Furthermore, a random distribution expected from foreground or background objects would produce a Q value constant over the whole magnitude range at Q ≈ 0.73 and cannot reproduce the declining curve seen in Fig. 3b. Assuming the age of IC 348 as 3 Myr, that of Serpens as 2 Myr and that of NGC 1333 and L1688 as around 1 Myr (see Sect. 1), the YSO curves in Fig. 3a show that with time the brighter objects appear to evolve towards centrally condensed configurations and the fainter objects towards homogeneous distributions. The transition of the distribution of the lowest-mass/substellar objects from a centrally condensed configuration to a more homogeneous distribution is thought to be an effect of dynamical evolution. A detailed discussion of this effect in IC 348 and the Orion Trapezium cluster and its implications for the formation of brown dwarfs can be found in Kumar & Schmeja (2007).

DISCUSSION

Dynamical evolution and mass segregation

The structure analysis of YSOs and stars presented in the previous section has several implications. The significantly higher Q values for the Class 2/3 objects compared to the Class 0/1 protostars show that the older sources are more centrally condensed than the younger objects, which in most cases show a hierarchical configuration. Also in other embedded clusters the Class 0/1 protostars are found in subclusters, e.g. in NGC 2264 (Teixeira et al. 2006) or the Orion Trapezium cluster (Lada et al. 2004). This is in agreement with the predictions of numerical simulations (Bonnell, Bate & Vine 2003) suggesting that clusters build up from several subclusters that eventually form a single centrally condensed cluster. However, it is not possible to distinguish between the structure inherent to the cloud and that arising from dynamical evolution. It is tempting to assume that the Class 0/1 objects trace the structure inherent to the cloud, while the older objects represent the structure resulting from dynamical evolution.
More than half of the mass of a Class 0 protostar is found in its envelope, so a Class 0 source can be seen as still part of the cloud, while the older YSOs are star-like objects with a disc rather than an envelope and are therefore susceptible to dynamical interaction. While Q rises as a cluster evolves, its absolute value does not depend on the age of the cluster. NGC 1333 and IC 348, with estimated ages of 1 and 3 Myr, respectively, have similar Q values of 0.81 and 0.80, whereas the two extreme Q values in our sample (0.72 and 0.84) are associated with the clusters L1688 and Serpens, thought to have ages of about 1 and 2 Myr, respectively. This shows that the cluster structure in general is not directly correlated with the evolutionary stage and rather depends on the existing large-scale structure of the cloud; but the observation that Class 2/3 objects (which had more time and are also more susceptible to dynamical interactions) are more centrally condensed than the younger protostars is a clue that the cluster evolves from a hierarchical to a more centrally condensed configuration, as predicted by numerical simulations. We interpret the decrease of Q towards fainter (i.e. less massive) sources as an effect of mass segregation. However, the relation of luminosity to mass may be different for the different evolutionary classes, which may cause some interference of evolutionary and mass segregation effects. The trend shown in Fig. 3a, based on Channel 4 (8 µm) magnitudes, is mainly representative of Class 2/3 objects, but it is similar in the other IRAC bands, and also in MIPS Channel 1 (24 µm), which includes Class 0/1 objects. A decrease of Q towards fainter magnitudes is seen in IC 348 (∼ 3 Myr) and, to a somewhat lesser extent, in Serpens (∼ 2 Myr), whereas there are no signs of mass segregation in the YSO population of the youngest clusters. The stellar (i.e.
older) population, on the other hand, shows significant mass segregation in all clusters. Mass segregation therefore seems to be a function of age. According to simulations of turbulent molecular clouds (Bonnell et al. 2003), YSOs form at different locations in the cloud and condense into a single cluster within a few free-fall times, or less than 1 Myr. Dynamical interactions become important only after this initial assembly of YSOs is made. In particular, dynamical mass segregation occurs on a timescale of the order of the relaxation time of the cluster (Bonnell & Davies 1998). This is consistent with the observed effects. Our findings do not necessarily contradict claims that mass segregation is primordial, i.e. that massive stars form near or in the centre of a cluster (Bonnell & Davies 1998). In the youngest clusters, contrary to the older ones, the massive YSOs show a more hierarchical distribution (smaller Q, see Fig. 3a) than the lower-mass objects, suggesting that the massive stars are still at the bottom of the potential wells of the former subclusters. This is consistent with theoretical scenarios in which mergers of small clumps that are initially mass segregated (or in which mass segregation is generated quickly by two-body relaxation) lead to larger clusters that are also mass segregated (McMillan, Vesperini & Portegies Zwart 2007). The NN maps (Fig. 1) indicate that IC 348, NGC 1333 and Serpens A have formed from a single dense core, while L1688 appears to lie at the interface of merging filaments (Ferreira et al., in preparation). L1688 has a highly hierarchical structure and consists of at least three readily identifiable subclusters; nevertheless, it is detected as a single cluster by the 20th NN maps, whose centre does not coincide with any of the subclusters.
Provided this geometrical centre corresponds approximately to the centre of mass, and provided the density is high enough, the subclusters may merge to form a single centrally condensed cluster in the future. The structure of the cluster thus seems to be correlated with the cloud structure, in the sense that the collapse of a single dense core more easily leads to a centrally condensed cluster, while interacting filaments are reflected in a longer-lasting hierarchical structure of the cluster.

Turbulence and cluster structure

As turbulence seems to play an important rôle in the build-up of a star cluster (Elmegreen et al. 2000; Mac Low & Klessen 2004; Ballesteros-Paredes et al. 2007), we investigate the relation of the cluster structure with the turbulent environment. In Table 5 we list the velocity dispersion Δv and temperature T of the dense gas and the derived Mach numbers in Serpens, NGC 1333 and L1688, as traced by NH₃ line observations (Myers et al. 1978; Jijina, Myers & Adams 1999). The NH₃ line is a tracer of dense gas, which is representative of the cores from which star clusters are born (Bergin & Tafalla 2007), and is also the tracer that is least affected by outflows. Therefore, the NH₃ line widths represent the turbulent properties of the cluster-forming cores better than other tracers of the molecular cloud such as CO or CS, which may be seriously affected by the presence of outflows and effects of molecular depletion (Tafalla et al. 2002; Bergin & Tafalla 2007). Similarly, the best measure of the core temperatures is also obtained from the same emission lines. Hence we use the observed values of Δv and T from this tracer to compute the Mach number M = Δv/c_s, where the isothermal sound speed c_s = (RT/μ)^(1/2), with the specific gas constant R and the mean molecular weight μ = 2.33. Myers et al. (1991) argue that core linewidths are dominated by non-thermal motions (turbulence) only for core masses above ∼ 22 M⊙ and that linewidths are thermal below ∼ 7 M⊙.
Since the core masses in all three clouds are above ∼ 30 M⊙ at an A_V ≈ 20 mag contour (Enoch et al. 2007), the assumption that the linewidths are caused by turbulence is justified. The findings of Myers et al. (1991) also lead to the conclusion that cluster-forming cores are turbulent, while quiescent cores form isolated stars (see also Myers 2001). Comparing the Mach numbers and Q values in Table 5 shows that Serpens, the most centrally condensed cluster, is associated with the lowest Mach number (turbulent energy), while L1688, the most hierarchically structured cluster, is associated with the highest Mach number (see also Fig. 1). Note that we compare Q and M only for the three clusters that have a significant population of YSOs, are embedded in dense gas, and have similar ages of 1–2 Myr. For this reason we excluded IC 348, which is significantly older and largely depleted of cold gas, from this comparison. Since Q seems to depend on the temporal evolution, a comparison only makes sense for clusters of roughly the same age. These results are in accordance with the scenario of turbulent fragmentation and hierarchical formation of clusters. Numerical simulations demonstrate that high Mach number flows lead to a highly fragmented density structure (Ballesteros-Paredes et al. 2006). Enoch et al. (2007) study the same clusters and obtain a contradictory result; however, the Mach numbers they compute are based on velocity dispersions obtained from observations of the CO molecule, which rather traces the outer envelope of the molecular cloud and may be contaminated by the effects of outflows. Since this correlation is based on small-number statistics, has not been found in numerical simulations of gravoturbulent fragmentation, and carries the uncertainties associated with Δv and T, it has to remain rather speculative. Nevertheless it is a hint that the structure of an embedded cluster may be somehow connected to the turbulent environment.
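The Mach numbers of Table 5 follow directly from the quoted linewidths and temperatures. A quick numerical check, writing the sound speed as c_s = (k_B T / μ m_H)^(1/2); the physical constants (Boltzmann constant, hydrogen mass) are standard values not stated in the text:

```python
import math

def sound_speed_and_mach(delta_v, T, mu=2.33):
    """Isothermal sound speed c_s = sqrt(k_B * T / (mu * m_H)) in km/s and
    Mach number M = delta_v / c_s, with delta_v in km/s and T in K."""
    k_B = 1.380649e-23   # Boltzmann constant [J/K]
    m_H = 1.6726e-27     # hydrogen mass [kg]
    c_s = math.sqrt(k_B * T / (mu * m_H)) / 1.0e3
    return c_s, delta_v / c_s
```

For Serpens A (Δv = 0.77 km s⁻¹, T = 12 K) this gives c_s ≈ 0.21 km s⁻¹ and M ≈ 3.7, reproducing Table 5 to within rounding.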
Even though the turbulent velocity is probably not the only agent responsible for shaping the cluster, it may play a significant rôle. While a high degree of turbulence in the cloud may keep the young stars in a more hierarchical distribution for a longer time, the absence of strong turbulent motions may help to reach a centrally condensed configuration more quickly. Assuming that strong turbulence in the cloud also leads to a high velocity dispersion among the embedded stars, this is also supported by the results of Goodwin & Whitworth (2004), who performed N-body simulations of the dynamics of fractal star clusters in order to investigate the evolution of substructure in recently formed clusters. They show that if the velocity dispersion in the cluster is low, much of the substructure will be erased; however, if the velocity dispersion is high, significant levels of substructure can survive for several crossing times.

SUMMARY

We used the Spitzer c2d survey data of the nearby molecular clouds Perseus, Serpens and Ophiuchus to identify embedded clusters and analyse their structures using the nearest neighbour and MST methods. Apart from the large known embedded clusters IC 348, NGC 1333, the Serpens core and L1688, we only found a few relatively small clusters. The Q parameter was determined for all embedded clusters in these clouds, and the MST edge lengths and Q values were analysed as a function of YSO class and magnitude. Our main results can be summarised as follows: (i) Among the four large clusters, IC 348, NGC 1333 and Serpens are centrally condensed (Q ≳ 0.8), while L1688 shows a hierarchical structure with several density peaks. The three main subclusters in L1688 again show a hierarchical structure, with Oph A, presumably the youngest one, having the lowest Q value. The peak densities vary strongly between the clusters in different clouds, as does the density of the dispersed YSO population.
(ii) In all clusters the Q values for the younger Class 0/1 objects are substantially lower than for the more evolved Class 2/3 objects. This indicates that embedded clusters are assembled from an initial hierarchical configuration, arising from the filamentary parental cloud, and eventually end up as a single centrally condensed cluster, as predicted by numerical simulations of turbulent fragmentation. While the youngest, deeply embedded objects may represent the structure inherent to the cloud, the distribution of the older objects probably results from dynamical interactions. (iii) We find no signs of mass segregation for the YSOs in the youngest clusters, only for the YSO population in the presumably older clusters Serpens and IC 348. In contrast, the stellar population displays clear mass segregation in all clusters. This suggests that the effect of dynamical interaction becomes visible at a cluster age between about 2 and 3 Myr. (iv) The structure of a cluster may be related to the turbulent velocity in the cloud, in the sense that clusters in regions with low Mach numbers reach a centrally condensed configuration much faster than those in highly turbulent clouds, in agreement with N-body simulations (Goodwin & Whitworth 2004). The results from the Perseus, Serpens and Ophiuchus star-forming regions are in good agreement with the predictions of theoretical scenarios claiming that embedded clusters form from gravoturbulent fragmentation of molecular clouds in a hierarchical process. Our results may not be applicable to the YSO populations in other regions, in particular the Taurus star-forming region, which shows a level of clustering and velocity dispersions very different from the regions studied in this work and which is generally harder to reconcile with the gravoturbulent scenario (Froebrich et al. 2006).

Figure 1. Spitzer IRAC Channel 4 (8 µm) images of the four main clusters IC 348, NGC 1333, Serpens A and L1688, overlaid with the 20th nearest neighbour density contours. The thin contours are plotted in intervals of 40 pc⁻²; the outermost thin contours correspond to a nearest neighbour density of 40 pc⁻². The thick contours indicate the cluster boundaries as defined in the text. The horizontal bars showing a projected length of 0.5 pc are based on assumed distances of 315, 260 and 135 pc to the Perseus, Serpens and Ophiuchus cloud, respectively. Crosses indicate the cluster centres; the known B stars are highlighted by diamond symbols.

Figure 2. Normalized histograms of MST edge lengths for Class 0/1 (dotted line), Class 2/3 (dashed line) and all YSOs (solid line) in the four main clusters.

Figure 3. Q as a function of source magnitude for YSOs (left) and stars (right) in Channel 4, for IC 348 (solid line), NGC 1333 (dotted line), Serpens A (dashed line) and L1688 (dash-dotted line).

¹ http://ssc.spitzer.caltech.edu/legacy/c2dhistory.html

Table 1. Classification scheme

Class 0/1 (main criteria: objects only visible at wavelengths ≥ 8 µm).
  Classifications (Evans et al. 2005): red; red1; red2; YSOc MP1 red; YSOc IR4+MP1 red; YSOc IR4 red.
Class 2/3 (main criteria: objects visible at wavelengths < 8 µm, SEDs with slopes typical for Class 2/3).
  Classifications: YSOc MP1; YSOc IR4+MP1; YSOc IR4; YSOc IR4 PAH-em; YSOc IR4 star+dust(BAND); YSOc MP1 PAH-em; YSOc MP1 star+dust(BAND); YSOc IR4+MP1 PAH-em; YSOc IR4+MP1 star+dust(BAND).
star (main criteria: SEDs fitted with photospheres).
  Classifications: stars; star+dust(BAND).

Table 2.
Clustering parameters of the identified clusters in the three regions.

Region/Cluster      RA (J2000)  Dec (J2000)  size   ρ_thresh  ρ_peak  n∗    Q
                    [deg]       [deg]        [pc]   [pc⁻²]    [pc⁻²]
Perseus
  NGC 1333          52.2563     +31.3107     2.09   20        261     158   0.81
  IC 348            56.1151     +32.1310     2.53   20        140     204   0.80
  Barnard 3         54.9612     +31.9227     1.67   20        45      51    0.69
Serpens
  Serpens core (A)  277.485     +1.2274      1.00   70        1045    84    0.84
  Serpens B         277.256     +0.5059      0.77   70        378     43    0.77
Ophiuchus
  Ophiuchus north   246.348     −23.4660     0.78   95        126     34    0.68
  Ophiuchus centre  246.374     −24.1197     0.52   95        213     29    0.73
  L1688             246.683     −24.5112     1.85   95        509     337   0.72
  L1689S            248.045     −24.9294     1.41   50        240     78    0.78

Table 3. Numbers of objects and Q values for the different evolutionary classes

Cluster     Class 0/1            Class 2/3
            n∗         Q         n∗         Q
NGC 1333    62 (39%)   0.78      96 (61%)   0.83
IC 348      72 (35%)   0.71      132 (65%)  0.78
Serpens A   33 (38%)   0.85      51 (62%)   0.85
L1688       194 (58%)  0.69      143 (42%)  0.78

Table 4. Clustering parameters of the subclusters in L1688

Subcluster  RA [deg]  Dec [deg]  ρ_peak [pc⁻²]  n∗   Q
Oph A       246.553   −24.416    509            67   0.74
Oph B       246.836   −24.465    373            52   0.79
Oph EF      246.854   −24.679    367            43   0.77

Table 5. Gas properties and Mach numbers

Cluster     Δv [km s⁻¹]  T [K]  c_s [km s⁻¹]  M     Q     Ref.ᵃ
Serpens A   0.77         12     0.21          3.8   0.84  1
NGC 1333    0.95         13     0.21          4.5   0.81  1
L1688       1.33         15     0.23          5.8   0.72  2

ᵃ References for Δv and T — 1: Jijina et al. (1999); 2: Myers et al. (1978).

ACKNOWLEDGMENTS

This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. SS and MSNK were supported by a research grant POCTI/CFE-AST/55691/2004 approved by FCT and POCTI, with funds from the European community programme FEDER. SS also acknowledges funding by the Deutsche Forschungsgemeinschaft (grant SCHM 2490/1-1) during part of this work. MSNK is also supported by the project PTDC/CTE-AST/65971/2006 approved by FCT.
BF wants to acknowledge support from his PhD advisor Elizabeth Lada as well as financial support from NSF grant AST 02-02976 and National Aeronautics and Space Administration grant NNG 05D66G issued to the University of Florida.

REFERENCES

Ballesteros-Paredes J., Gazol A., Kim J., Klessen R. S., Jappsen A.-K., Tejero E., 2006, ApJ, 637, 384
Ballesteros-Paredes J., Klessen R. S., Mac Low M.-M., Vázquez-Semadeni E., 2007, in Reipurth B., Jewitt D., Keil K., eds, Protostars and Planets V, Univ. of Arizona Press, Tucson, p. 63
Bergin E. A., Tafalla M., 2007, ARA&A, 45, 339
Bonnell I. A., Davies M. B., 1998, MNRAS, 295, 691
Bonnell I. A., Bate M. R., Vine S. G., 2003, MNRAS, 343, 413
Bontemps S., et al., 2001, A&A, 372, 173
Cartwright A., Whitworth A. P., 2004, MNRAS, 348, 589
Casertano S., Hut P., 1985, ApJ, 298, 80
Clark P. C., Bonnell I. A., 2004, MNRAS, 347, L36
Djupvik A. A., André P., Bontemps S., Motte F., Olofsson G., Gålfalk M., Florén H.-G., 2006, A&A, 458, 789
Efremov Y. N., Elmegreen B. G., 1998, MNRAS, 299, 588
Elmegreen B. G., 2006, in Richtler T., Larsen S., eds, Globular Clusters -- Guides to Galaxies, ESO/Springer, in press (arXiv:astro-ph/0605519)
Elmegreen B. G., Efremov Y., Pudritz R. E., Zinnecker H., 2000, in Mannings V., Boss A. P., Russell S. S., eds, Protostars and Planets IV, Univ. of Arizona Press, Tucson, p. 179
Enoch M. L., et al., 2006, ApJ, 638, 293
Enoch M. L., Glenn J., Evans N. J. II, Sargent A. I., Young K. E., Huard T. L., 2007, ApJ, 666, 982
Evans N. J. II, et al., 2003, PASP, 115, 965
Evans N. J. II, et al., 2005, Third Delivery of Data from the c2d Legacy Project: IRAC and MIPS, http://data.spitzer.caltech.edu/popular/c2d/20051220_enhanced_v1/Documents/C2D_Explan_Supp.pdf
Froebrich D., Schmeja S., Smith M. D., Klessen R. S., 2006, MNRAS, 368, 435
Froebrich D., Scholz A., Raftery C. L., 2007, MNRAS, 374, 399
Goodwin S. P., Whitworth A. P., 2004, A&A, 413, 929
Greene T. P., Meyer M. R., 1995, ApJ, 450, 233
Harvey P. M., et al., 2006, ApJ, 644, 307
Harvey P. M., et al., 2007a, ApJ, 663, 1139
Harvey P., Merín B., Huard T. L., Rebull L. M., Chapman N., Evans N. J. II, Myers P. C., 2007b, ApJ, 663, 1149
Hatchell J., Richer J. S., Fuller G. A., Qualtrough C. J., Ladd E. F., Chandler C. J., 2005, A&A, 440, 151
Jijina J., Myers P. C., Adams F. C., 1999, ApJS, 125, 161
Jørgensen J. K., et al., 2006, ApJ, 645, 1246
Kaas A. A., et al., 2004, A&A, 421, 623
Klessen R. S., Heitsch F., Mac Low M.-M., 2000, ApJ, 535, 887
Kruskal J. B. Jr., 1956, Proc. Amer. Math. Soc., 7, 48
Kumar M. S. N., Schmeja S., 2007, A&A, 471, L33
Kumar M. S. N., Keto E., Clerkin E., 2006, A&A, 449, 1033
Lada E. A., Lada C. J., 1995, AJ, 109, 1682
Lada E. A., Evans N. J. II, Depoy D. L., Gatley I., 1991, ApJ, 371, 171
Lada C. J., Alves J., Lada E. A., 1996, AJ, 111, 1964
Lada C. J., Muench A. A., Lada E. A., Alves J. F., 2004, AJ, 128, 1254
Luhman K. L., Rieke G. H., 1999, ApJ, 525, 440
Luhman K. L., Stauffer J. R., Muench A. A., Rieke G. H., Lada E. A., Bouvier J., Lada C. J., 2003, ApJ, 593, 1093
Mac Low M.-M., Klessen R. S., 2004, Rev. Mod. Phys., 76, 125
Mamajek E. E., 2008, Astron. Nachr., 329, 10
McKee C. F., Ostriker E. C., 2007, ARA&A, 45, 565
McMillan S. L. W., Vesperini E., Portegies Zwart S. F., 2007, ApJ, 655, L45
Muench A. A., Lada C. J., Luhman K. L., Muzerolle J., Young E., 2007, AJ, 134, 411
Myers P. C., 2001, in Montmerle T., André P., eds, From Darkness to Light: Origin and Evolution of Young Stellar Clusters, ASP Conf. Proc., 243, 131
Myers P. C., Ho P. T. P., Schneps M. H., Chin G., Pankonin V., Winnberg A., 1978, ApJ, 220, 864
Myers P. C., Ladd E. F., Fuller G. A., 1991, ApJ, 372, L95
Padgett D. L., et al., 2008, ApJ, 672, 1013
Padoan P., Nordlund Å., 2005, in Corbelli E., Palla F., Zinnecker H., eds, ASSL Vol. 327, The Initial Mass Function 50 Years Later, Springer, Dordrecht, p. 357
Prim R. C., 1957, Bell Syst. Tech. J., 36, 1389
Rebull L. M., et al., 2007, ApJS, 171, 447
Ridge N. A., Schnee S. L., Goodman A. A., Foster J. B., 2006, ApJ, 643, 932
Schmeja S., Klessen R. S., 2004, A&A, 419, 405
Schmeja S., Klessen R. S., 2006, A&A, 449, 151
Schmeja S., Klessen R. S., Froebrich D., 2005, A&A, 437, 911
Straižys V., Černis K., Bartašiūtė S., 1996, Baltic Astron., 5, 125
Tafalla M., Myers P. C., Caselli P., Walmsley C. M., Comito C., 2002, ApJ, 569, 815
Tafalla M., Kumar M. S. N., Bachiller R., 2006, A&A, 456, 179
Teixeira P. S., et al., 2006, ApJ, 636, L45
von Hoerner S., 1963, Z. Astrophys., 57, 47
Wilking B. A., Meyer M. R., Greene T. P., Mikhail A., Carlson G., 2004, AJ, 127, 1131
Mathematische Annalen

Translates of rational points along expanding closed horocycles on the modular surface

Claire Burrin · Uri Shapira · Shucheng Yu

Abstract. We study the limiting distribution of the rational points under a horizontal translation along a sequence of expanding closed horocycles on the modular surface. Using spectral methods we confirm equidistribution of these sample points for any translate when the sequence of horocycles expands within a certain polynomial range. We show that the equidistribution fails for generic translates and a slightly faster expanding rate. We also prove both equidistribution and non-equidistribution results by obtaining explicit limiting measures while allowing the sequence of horocycles to expand arbitrarily fast. Similar results are also obtained for translates of primitive rational points.

DOI: 10.1007/s00208-021-02267-7 · arXiv: 2009.13608
Mathematische Annalen 382 (2022)
Received: 9 October 2020 / Revised: 31 July 2021 / Accepted: 24 August 2021

Introduction

Let {S_n}_{n∈N} be a sequence of "nice" subsets that become equidistributed in their ambient space. Given a sequence of discrete subsets {R_n}_{n∈N} with R_n ⊂ S_n, an interesting question is to study to what extent the distribution behavior of {R_n}_{n∈N} mimics that of {S_n}_{n∈N}. One naturally expects that when the size of R_n is relatively large, it is more likely that {R_n}_{n∈N} inherits some distribution property from {S_n}_{n∈N}; on the other hand, if R_n lies sparsely on S_n, then it is more likely that the points of {R_n}_{n∈N} become decorrelated and distribute like random points in the ambient space. In the setting of unipotent dynamics, the most typical example of a sequence {S_n}_{n∈N} is a sequence of expanding closed horocycles on a non-compact finite-area hyperbolic surface M.
More precisely, we can realize $M$ as a quotient $\Gamma\backslash\mathbb{H}$, where $\Gamma$ is a cofinite Fuchsian group and $\mathbb{H} = \{z = x+iy \in \mathbb{C} : y > 0\}$ is the Poincaré upper half-plane, equipped with the hyperbolic metric $ds = |dz|/y$, where $dz = dx + i\,dy$ is the complex line element. Up to conjugating by an appropriate isometry, we may assume that $M = \Gamma\backslash\mathbb{H}$ has a width-one cusp at infinity, that is, that the isotropy group $\Gamma_\infty < \Gamma$ is generated by the translation sending $z \in \mathbb{H}$ to $z+1$. A closed horocycle of height $y > 0$ is a closed set of the form
$$H_y := \{\Gamma(x+iy) : x \in \mathbb{R}/\mathbb{Z}\} \subset M,$$
and its period, i.e., its hyperbolic length, is $y^{-1}$. As $H_y$ gets longer, that is, as $y \to 0^+$, it becomes equidistributed on $M$ with respect to the hyperbolic area $d\mu(z) = y^{-2}\,dx\,dy$. The first effective version of this result is due to Sarnak [28] who, using spectral arguments, proved that for every $\Psi \in C_c^\infty(\Gamma\backslash\mathbb{H})$ and any $y > 0$,
$$\int_0^1 \Psi(x+iy)\,dx = \frac{1}{\mu(M)}\int_M \Psi(z)\,d\mu(z) + O\big(S(\Psi)\,y^{\alpha}\big), \qquad (1.1)$$
where $S$ is some Sobolev norm, and $0 < \alpha < 1$ is a constant depending on the first non-trivial residual hyperbolic Laplacian eigenvalue of $\Gamma$. In the case of the modular surface $\mathrm{SL}_2(\mathbb{Z})\backslash\mathbb{H}$, $\alpha = \tfrac12$, while Zagier [32] observed that the Riemann hypothesis is equivalent to the equidistribution rate $O(y^{3/4-\epsilon})$.

In this setting, this problem was first investigated by Hejhal in [12] with a heuristic and numerical study of the value distribution of the sample points
$$\Big\{x + \tfrac{j}{n} + iy : 0 \le j \le n-1\Big\} \qquad (1.2)$$
for some Hecke triangle groups $\Gamma = G_q$ under the assumption that $ny$ is small. Set
$$S_{n,y,\Psi}(x) := \sum_{j=0}^{n-1} \Psi\big(x + \tfrac{j}{n} + iy\big),$$
where $\Psi$ is some mean-zero step function on a fixed fundamental domain for $\Gamma\backslash\mathbb{H}$ (automorphically extended to $\mathbb{H}$). The numerics show that the value distribution of $n^{-1/2}S_{n,y,\Psi}(x)$ with respect to $x \in [0,1)$ approaches a Gaussian curve for the non-arithmetic Hecke triangle groups $G_5$ and $G_7$, while this phenomenon breaks down for $G_3 = \mathrm{PSL}_2(\mathbb{Z})$. Hejhal gave an explanation of this difference based on the existence of Hecke operators on $G_3$.
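As a quick numerical illustration of the equidistribution statement (1.1) for the modular surface, one can sample a long closed horocycle, reduce each sample point to the standard fundamental domain of $\mathrm{SL}_2(\mathbb{Z})$, and compare the visit frequency of the cusp neighborhood $\{\mathrm{Im}\,z > 2\}$ with its normalized hyperbolic area $\mu_M(\{\mathrm{Im}\,z > 2\}) = 3/(2\pi)$. The following Python sketch (the height $y$ and the sample size $N$ are arbitrary choices, not taken from the paper) illustrates this:

```python
import math

def reduce_to_fundamental_domain(x, y):
    """Reduce z = x + iy to the standard SL(2,Z) fundamental domain
    {|Re z| <= 1/2, |z| >= 1} by alternating translations and inversions."""
    while True:
        x -= round(x)              # translate so that |x| <= 1/2
        r2 = x * x + y * y
        if r2 >= 1.0:
            return x, y
        x, y = -x / r2, y / r2     # invert: z -> -1/z (raises Im z when |z| < 1)

# Sample the closed horocycle H_y at height y with N equally spaced points
# (N chosen so that N*y is large, i.e. the sample is dense on H_y).
y, N = 1e-4, 10**5
hits = sum(1 for j in range(N)
           if reduce_to_fundamental_domain(j / N, y)[1] > 2.0)
frac = hits / N

# mu_M({Im z > 2}) = (3/pi) * integral over |x|<1/2, y>2 of dx dy / y^2 = 3/(2*pi)
print(frac, 3 / (2 * math.pi))
```

The observed frequency approaches $3/(2\pi) \approx 0.477$ as $y \to 0^+$, consistent with the $O(y^{1/2})$ rate in (1.1).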
The convergence to a Gaussian distribution for general non-arithmetic Fuchsian groups was later confirmed by Strömbergsson [30, Corollary 6.5], under the assumption that the sequence $\{y_n\}_{n\in\mathbb{N}}$ decays sufficiently rapidly. Other such problems have since been investigated. Marklof and Strömbergsson [27] proved the equidistribution of generic Kronecker sequences
$$\{\Gamma(j\beta + iy_n) \in M : 1 \le j \le n\} \subset M \qquad (1.3)$$
along a sequence of closed horocycles expanding at a certain rate $y_n$ on $T^1M$, the unit tangent bundle of $M$. The equidistribution of Hecke points proved by Clozel–Ullmo [4] (see also [3,10]) implies the equidistribution of the primitive rational points $\{\tfrac{j}{n} + \tfrac{i}{n} : 1 \le j \le n-1,\ \gcd(j,n) = 1\}$ at prime steps on the modular surface, see [10, Remark on p. 171]. More recently, the equidistribution of the above sequence along the full sequence of positive integers was proved by Einsiedler–Luethi–Shah [8] in a slightly more general setting, namely on the product of the unit tangent bundle of the modular surface and a torus. Various sparse equidistribution results have also been obtained for expanding horospheres in the space of lattices $\mathrm{SL}_n(\mathbb{R})/\mathrm{SL}_n(\mathbb{Z})$ for $n \ge 3$ [7,9,22,23,26] and in Hilbert modular surfaces [24]. For each of these equidistribution results, assumptions on the expanding rate of the sequence $\{S_n\}_{n\in\mathbb{N}}$ are crucial; the discrete subsets $\{R_n\}_{n\in\mathbb{N}}$ lying on $\{S_n\}_{n\in\mathbb{N}}$ cannot be too sparse.

This paper emerged from an attempt to prove a result which turned out to be false. We consider the sparse equidistribution problem for the subset of rational points (with denominator $n$) under a horizontal translation $x \in \mathbb{R}/\mathbb{Z}$ on a horocycle $H_y$ on the modular surface; we denote this subset by $R_n(x, y_n)$ (cf. (1.4)).
We thought that since the closed horocycles $H_y$ equidistribute as $y \to 0^+$, if we fix a sequence $\{y_n\}_{n\in\mathbb{N}}$ approaching zero, then the normalized counting measures on $R_n(x, y_n)$ (and its primitive counterpart) should equidistribute for Lebesgue-almost every $x$ as $n \to \infty$. See the recent paper of Bersudsky [1, Theorem 1.5] for an analogous situation where such a result is true. Note the order of quantifiers: we first fix the sequence $\{y_n\}_{n\in\mathbb{N}}$ and only then choose the horizontal translation $x$. It is not hard to see that if one flips the quantifiers, for any fixed horizontal translation $x$ there are sequences $\{y_n\}_{n\in\mathbb{N}}$ (approaching zero rapidly) such that equidistribution fails. We were very surprised to learn, though, that in stark contrast to our initial expectation, equidistribution fails. The main novel result of this paper (Theorem 1.5) says that there are sequences $\{y_n\}_{n\in\mathbb{N}}$ approaching zero arbitrarily fast such that for almost every horizontal translation $x$ the normalized counting measures on $R_n(x, y_n)$ and its primitive counterpart do not equidistribute. In fact, we show the collection of limit measures contains the uniform measure $\mu_M$, the zero measure and certain singular measures. Although these should be considered as the main contribution of this paper, we also complement our analysis by answering natural questions concerning sequences $\{y_n\}_{n\in\mathbb{N}}$ approaching zero at a polynomial rate. The next subsections describe more precisely the setting and the results obtained.

Context of the present paper

Let $\Gamma = \mathrm{SL}_2(\mathbb{Z})$ and let $M = \Gamma\backslash\mathbb{H}$ be the modular surface. In this paper, generalizing the setting of [8], we study the equidistribution problem for the sets of rational and primitive rational points under an arbitrary horizontal translation $x \in \mathbb{R}/\mathbb{Z}$ along a given sequence of expanding closed horocycles on $M$.
The set of rational points is the obvious choice of a sparse set with identical spacings, while primitive rational points constitute the simplest pseudorandom sequence (via the linear congruential generator). For any $n \in \mathbb{N}$, $x \in \mathbb{R}/\mathbb{Z}$ and $y > 0$ we denote by
$$R_n(x,y) := \Big\{x + \tfrac{j}{n} + iy \in H_y : 0 \le j \le n-1\Big\} \qquad (1.4)$$
and respectively
$$R_n^{\mathrm{pr}}(x,y) := \Big\{x + \tfrac{j}{n} + iy \in H_y : j \in (\mathbb{Z}/n\mathbb{Z})^\times\Big\}, \qquad (1.5)$$
the set of rational and respectively primitive rational points with denominator $n$ on the closed horocycle $H_y$ translated to the right by $x$. As usual, $(\mathbb{Z}/n\mathbb{Z})^\times$ denotes here the multiplicative group of integers modulo $n$. Let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence of positive numbers such that $y_n \to 0$ as $n \to \infty$. We investigate the limiting distribution of the sequences of sample points $\{R_n(x, y_n)\}_{n\in\mathbb{N}}$ and $\{R_n^{\mathrm{pr}}(x, y_n)\}_{n\in\mathbb{N}}$ under various assumptions on the expanding rate of the sequence of horocycles $\{H_{y_n}\}_{n\in\mathbb{N}}$, or equivalently, the decay rate of $\{y_n\}_{n\in\mathbb{N}}$. This problem is naturally easier when the sequence $\{y_n\}_{n\in\mathbb{N}}$ decays slowly, since then at each step we have relatively more sample points on the underlying horocycle. For instance, if $ny_n \to \infty$ as $n \to \infty$, the hyperbolic distance between two adjacent points in $R_n(x, y_n)$ decays to zero as $n \to \infty$. Since the points in $R_n(x, y_n)$ distribute evenly on $H_{y_n}$, the distribution behavior of $R_n(x, y_n)$ then mimics that of $H_{y_n}$. In particular, for any $x \in \mathbb{R}/\mathbb{Z}$ the sequence $\{R_n(x, y_n)\}_{n\in\mathbb{N}}$ becomes equidistributed on $M$ with respect to the hyperbolic area $\mu$ as $n \to \infty$, following from the equidistribution of the sequence $\{H_{y_n}\}_{n\in\mathbb{N}}$. Regarding $\{R_n^{\mathrm{pr}}(x, y_n)\}_{n\in\mathbb{N}}$, its distribution behavior is well understood when $x = 0$. Indeed, it was shown by Luethi [24] that if $y_n = c/n^\alpha$ for some $c > 0$ and some $\alpha \in (0,1)$, then $R_n^{\mathrm{pr}}(0, y_n)$ becomes equidistributed on $M$ with respect to $\mu$ as $n \to \infty$.
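The claim that adjacent sample points become hyperbolically close exactly when $ny_n \to \infty$ follows from the standard distance formula $\cosh d(z,w) = 1 + |z-w|^2/(2\,\mathrm{Im}\,z\,\mathrm{Im}\,w)$ on $\mathbb{H}$: two adjacent points differ by $1/n$ at height $y$, so their distance is $\mathrm{arccosh}\big(1 + \tfrac{1}{2(ny)^2}\big)$. A minimal numerical check (Python; the sample heights below are arbitrary choices):

```python
import math

def hyp_dist(z, w):
    """Hyperbolic distance on the upper half-plane:
    cosh d(z, w) = 1 + |z - w|^2 / (2 Im z Im w)."""
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

def adjacent_spacing(n, y):
    # distance between consecutive sample points x + j/n + iy and x + (j+1)/n + iy
    # (independent of x and j, by the formula above)
    return hyp_dist(complex(0.0, y), complex(1.0 / n, y))

# If n*y_n -> infinity (e.g. y_n = 1/sqrt(n)), the spacing shrinks to 0;
# if n*y_n stays bounded (e.g. y_n = 1/n), the spacing is the constant arccosh(3/2).
for n in (10**2, 10**4, 10**6):
    print(n, adjacent_spacing(n, n**-0.5), adjacent_spacing(n, 1.0 / n))
```

In the second column the spacing decays like $(ny_n)^{-1} = n^{-1/2}$, while in the third it is frozen at $\mathrm{arccosh}(3/2) \approx 0.962$.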
Moreover, using the simple symmetry relation that for $\gcd(j,n) = 1$ and $y > 0$
$$\Gamma\Big(\tfrac{j}{n} + iy\Big) = \Gamma\Big(-\tfrac{\bar j}{n} + \tfrac{i}{n^2y}\Big), \qquad (1.6)$$
one can extend this equidistribution result to the range $\alpha \in (1,2)$; this improves the previous work of Demirci Akarsu [5, Theorem 2], which confirms equidistribution of $\{R_n^{\mathrm{pr}}(0, c/n^\alpha)\}_{n\in\mathbb{N}}$ for $\alpha \in (\tfrac32, 2)$. Here $\bar j \in (\mathbb{Z}/n\mathbb{Z})^\times$ denotes the multiplicative inverse of $j \in (\mathbb{Z}/n\mathbb{Z})^\times$. The equidistribution for the case $\alpha = 1$ was later proved by Einsiedler–Luethi–Shah [8]; Jana [16, Theorem 1] recently gave an alternative spectral proof of this equidistribution result. We also mention that both [5, Theorem 2] and [16, Theorem 1] are valid in the same setting as [8], namely, on the product of the unit tangent bundle of the modular surface and a torus. When $\alpha = 2$ the equidistribution fails, as the aforementioned symmetry implies that $R_n^{\mathrm{pr}}(0, c/n^2) = R_n^{\mathrm{pr}}(0, 1/c)$ is always trapped in the closed horocycle $H_{1/c}$. For the same reason, when $\alpha > 2$ (or more generally for any sequence satisfying $n^2y_n \to 0$), one has with $R_n^{\mathrm{pr}}(0, c/n^\alpha) = R_n^{\mathrm{pr}}(0, n^{\alpha-2}/c) \subset H_{n^{\alpha-2}/c}$ a full escape to the cusp of $M$ as $n \to \infty$. It is worth noting that while the symmetry (1.6) still holds for rational translates (cf. Lemma 3.6), it breaks down for irrational translates.

Statements of the results

We will state here the main results of this paper, and postpone the discussion of their proofs to the next subsection. Let $\mu_M := \mu(M)^{-1}\mu$ be the normalized hyperbolic area on $M$. For any $n \in \mathbb{N}$, $x \in \mathbb{R}/\mathbb{Z}$ and $y > 0$ let $\delta_{n,x,y}$ and $\delta_{n,x,y}^{\mathrm{pr}}$ denote the normalized probability counting measures supported on $R_n(x,y)$ and $R_n^{\mathrm{pr}}(x,y)$ respectively. That is, for any $\Psi \in C_c^\infty(M)$,
$$\delta_{n,x,y}(\Psi) = \frac{1}{n}\sum_{j=0}^{n-1}\Psi\Big(x + \tfrac{j}{n} + iy\Big), \quad\text{and}\quad \delta_{n,x,y}^{\mathrm{pr}}(\Psi) = \frac{1}{\varphi(n)}\sum_{j\in(\mathbb{Z}/n\mathbb{Z})^\times}\Psi\Big(x + \tfrac{j}{n} + iy\Big),$$
where $\varphi$ is Euler's totient function. Here and throughout, for any measure $\nu$ on $M$, we set $\nu(\Psi) := \int_M \Psi(z)\,d\nu(z)$.
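The symmetry (1.6) can be verified directly: the matrix $\gamma = \begin{pmatrix}-\bar j & (j\bar j - 1)/n\\ n & -j\end{pmatrix}$ lies in $\mathrm{SL}_2(\mathbb{Z})$ (its determinant is $j\bar j - (j\bar j - 1) = 1$, and $(j\bar j - 1)/n$ is an integer since $j\bar j \equiv 1 \bmod n$), and its Möbius action sends $\tfrac{j}{n} + iy$ to $-\tfrac{\bar j}{n} + \tfrac{i}{n^2y}$. A small numerical sanity check (Python; the values $n = 7$, $j = 3$, $y = 0.01$ are arbitrary):

```python
n, j, y = 7, 3, 0.01

def mobius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

jbar = pow(j, -1, n)                        # multiplicative inverse of j mod n
g = ((-jbar, (j * jbar - 1) // n), (n, -j))
assert g[0][0] * g[1][1] - g[0][1] * g[1][0] == 1   # g is in SL(2, Z)

z = j / n + 1j * y
w = -jbar / n + 1j / (n**2 * y)             # right-hand side of (1.6)
print(mobius(g, z), w)                      # the two agree up to rounding
```

Applying $\gamma$, the denominator $nz - j = iny$ has modulus $ny$, which is exactly why the height $y$ is replaced by $1/(n^2y)$.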
Using spectral expansion and collecting estimates on the Fourier coefficients of Hecke–Maass forms and Eisenstein series, we obtain the following effective result, which yields equidistribution when the sequence is within a certain polynomial range.

Theorem 1.1. Let $M$ be the modular surface. For any $\Psi \in C_c^\infty(M)$, for any $n \in \mathbb{N}$, $x \in \mathbb{R}/\mathbb{Z}$ and $y > 0$ we have
$$\big|\delta_{n,x,y}(\Psi) - \mu_M(\Psi)\big| \ll_\epsilon S_{2,2}(\Psi)\big(y^{1/2} + n^{-1}y^{-(1/2+\theta+\epsilon)}\big),$$
and
$$\big|\delta_{n,x,y}^{\mathrm{pr}}(\Psi) - \mu_M(\Psi)\big| \ll_\epsilon S_{2,2}(\Psi)\big(y^{1/2} + n^{-1+\epsilon}y^{-(1/2+\theta+\epsilon)}\big),$$
where $\theta = 7/64$ is the current best known bound towards the Ramanujan conjecture (which implies $\theta = 0$) and $S_{2,2}$ is an "$L^2$, order-2" Sobolev norm on $C_c^\infty(M)$, see Sect. 2.1.

If $\{y_n\}_{n\in\mathbb{N}}$ is a sequence of positive numbers satisfying $\lim_{n\to\infty} y_n = 0$ and $y_n \gg 1/n^\alpha$ for some fixed $\alpha \in \big(0, \tfrac{2}{1+2\theta}\big) = (0, \tfrac{64}{39})$, then Theorem 1.1 implies that for any translate $x \in \mathbb{R}/\mathbb{Z}$, both $\{R_n(x, y_n)\}_{n\in\mathbb{N}}$ and $\{R_n^{\mathrm{pr}}(x, y_n)\}_{n\in\mathbb{N}}$ become equidistributed on $M$ with respect to $\mu_M$ as $n \to \infty$. In particular, it gives an alternative (spectral) proof of the aforementioned results of Luethi [24] and Einsiedler–Luethi–Shah [8]. The upper bound $\tfrac{2}{1+2\theta}$ is the natural barrier for our spectral methods. Nevertheless, when $x$ is a rational translate, a generalization of the symmetry (1.6) allows us to go beyond this barrier, and to prove unconditionally the remaining range $\alpha \in [\tfrac{2}{1+2\theta}, 2)$, as holds in the case of $\{R_n^{\mathrm{pr}}(0, y_n)\}_{n\in\mathbb{N}}$.

Theorem 1.2. Let $x = p/q$ be a primitive rational number, i.e. $\gcd(p,q) = 1$. Let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence of positive numbers satisfying $y_n \asymp 1/n^\alpha$ for some fixed $\alpha \in [\tfrac{2}{1+2\theta}, 2)$. Then both $\{\delta_{n,x,y_n}\}_{n\in N_q}$ and $\{\delta_{n,x,y_n}^{\mathrm{pr}}\}_{n\in N_q^{\mathrm{pr}}}$ weakly converge to $\mu_M$ as $n$ goes to infinity, where $N_q := \{n \in \mathbb{N} : \gcd(n^2, q) \mid n\}$ and $N_q^{\mathrm{pr}} := \{n \in \mathbb{N} : \gcd(n, q) = 1\}$.

Remark 1.7. If $q$ is squarefree, then the condition $\gcd(n^2, q) \mid n$ is void.
Thus for such $q$, Theorem 1.2 (together with Theorem 1.1) confirms the equidistribution of the sample points $R_n(p/q, y_n)$ (with $y_n \asymp 1/n^\alpha$) along the full set of positive integers for any $0 < \alpha < 2$. As a byproduct of our analysis, we also have the following non-equidistribution result for rational translates, giving infinitely many explicit limiting measures. Let us first fix some notation. For each $m \in \mathbb{N}$, let
$$P_m := \{n = \ell m \in \mathbb{N} : \ell \text{ is a prime number and } \ell \nmid m\}. \qquad (1.8)$$
For each $Y > 0$, we denote by $\mu_Y$ the uniform probability measure supported on the closed horocycle $H_Y$. For each $m \in \mathbb{N}$ and $Y > 0$, we define the probability measure $\nu_{m,Y}$ on $M$ by
$$\nu_{m,Y} := \frac{1}{m}\sum_{d\mid m}\varphi\Big(\frac{m}{d}\Big)\mu_{d^2Y}. \qquad (1.9)$$

Theorem 1.3. Keep the notation as above. Let $x = p/q$ be a primitive rational number and let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence of positive numbers.
(1) If $y_n = c/n^2$ for some constant $c > 0$, then for any $m \in N_q$ and any $\Psi \in C_c^\infty(M)$, the limit of $\delta_{n,x,y_n}(\Psi)$ as $n \to \infty$ along $n \in P_m$ with $\gcd(n,q) = 1$ exists and is given explicitly in terms of the measures $\nu_{m,Y}$ of (1.9).
(2) If $\lim_{n\to\infty} n^2y_n = 0$, then both sequences $\{R_n(x, y_n)\}_{n\in\mathbb{N}}$ and $\{R_n^{\mathrm{pr}}(x, y_n)\}_{n\in\mathbb{N}}$ fully escape to the cusp of $M$.

Our next result shows that, similar to the rational translate case, equidistribution fails for generic translates as soon as $\{y_n\}_{n\in\mathbb{N}}$ decays logarithmically faster than $1/n^2$.

Theorem 1.4. Let $d_M(\cdot,\cdot)$ be the distance function on $M$ induced from the hyperbolic distance function on $\mathbb{H}$. Fix $z_0 \in M$. Let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence of positive numbers satisfying $y_n \asymp 1/(n^2\log^\beta n)$ for some fixed $0 < \beta < 2$. Then for almost every $x \in \mathbb{R}/\mathbb{Z}$
$$\limsup_{n\to\infty}\ \frac{\inf_{z\in R_n(x,y_n)} d_M(z_0, z)}{\log\log n} \ge \min\{\beta, 2-\beta\}. \qquad (1.10)$$
This implies that for almost every $x \in \mathbb{R}/\mathbb{Z}$ and any $\epsilon > 0$, there exists an unbounded subsequence of $\mathbb{N}$ such that along this subsequence $\inf_{z\in R_n(x,y_n)} d_M(z_0, z) \ge (\alpha - \epsilon)\log\log n$, where $\alpha = \min\{\beta, 2-\beta\}$.
That is, for almost every $x \in \mathbb{R}/\mathbb{Z}$, all the sample points $R_n(x, y_n)$ (and hence also $R_n^{\mathrm{pr}}(x, y_n)$) are moving towards the cusp of $M$ along this subsequence, and eventually escape to the cusp as $n$ in this subsequence goes to infinity. Our proof of Theorem 1.4 relies on connections to Diophantine approximation theory. This viewpoint comes with inherent limitations: in the specific setting $y_n \asymp 1/(n^2\log^\beta n)$, Khintchine's approximation theorem guarantees full escape to the cusp almost surely, but this argument does not extend to any sequence $\{y_n\}_{n\in\mathbb{N}}$ that decays polynomially faster than $1/n^2$, see Sect. 1.3 for a more detailed discussion. It is thus interesting to study the cases when $\{y_n\}_{n\in\mathbb{N}}$ is beyond the ranges in Theorems 1.1 and 1.4. Indeed, the rest of our results deal with sequences $\{y_n\}_{n\in\mathbb{N}}$ that can decay arbitrarily fast, and give both positive and negative results. This is the main novelty of this paper: the handling of cases in which the sample points can be arbitrarily sparse on the closed horocycles they lie on. We now state the main novel aspect of this paper:

Theorem 1.5. For any sequence of positive numbers $\{c_n\}_{n\in\mathbb{N}}$, there exists a sequence $\{y_n\}_{n\in\mathbb{N}}$ satisfying $0 < y_n < c_n$ for each $n \in \mathbb{N}$ and such that for almost every $x \in \mathbb{R}/\mathbb{Z}$ the sets of limiting measures of $\{\delta_{n,x,y_n}\}_{n\in\mathbb{N}}$ and $\{\delta_{n,x,y_n}^{\mathrm{pr}}\}_{n\in\mathbb{N}}$ both contain the uniform measure $\mu_M$, the zero measure, and singular probability measures.

Theorem 1.5 is a sum of three more precise theorems, which each handle a specific limiting measure, and which we discuss in the next subsection.

Discussion of the results

Our proofs of Theorems 1.1 and 1.2 rely on spectral estimates collected in the recent paper of Kelmer and Kontorovich [18], with a necessary refinement of [18, (3.6)] in the form of Proposition 3.3, which comes at the cost of a higher-degree Sobolev norm. This strategy is standard and is also found in [4,16,27,31], to name just a few recent papers on related problems.
The analysis in [18] was carried out in a more general setting, namely for the congruence covers $\Gamma_0(p)\backslash\mathbb{H}$ with $p$ a prime number. Theorem 1.1 can be extended to that more general setting, see Remark 3.11. With these spectral estimates in hand, we further prove an effective non-equidistribution result for rational translates from which part (1) of Theorem 1.3 follows, see Theorem 3.10. Part (2) of Theorem 1.3 is an easy application of the symmetry (1.6).

Remark 1.11. As was pointed out to us by Asaf Katz, we could also have used the estimates from [31, Proposition 3.1] in place of [18, Proposition 3.4], which in our specific setting give the same equidistribution range (with a higher-degree Sobolev norm). We also mention that the estimates in [31, Proposition 3.1] are valid in the setting of $\Gamma_0(q)\backslash\mathrm{SL}_2(\mathbb{R})$ with $q \in \mathbb{N}$, and thus imply an effective equidistribution result analogous to Theorem 1.1 in this generality.

As mentioned earlier, a generalization of the symmetry (1.6) is available for rational translates but breaks down for irrational translates. To handle irrational translates, we approximate them by rational ones in order to apply the symmetry relation, see Lemma 4.2. This is where Diophantine approximation kicks in. Similar ideas were also used in [27, Section 7] to construct counterexamples in their setting. In fact, we prove Theorem 1.4 by proving a more general result that captures the cusp excursion rates of the sample points $R_n(x, y_n)$ in terms of the Diophantine properties of the translate $x$, see Theorem 4.3. Theorem 1.4 will then follow from Theorem 4.3 by imposing a Diophantine condition which ensures cusp excursion, while also holding for almost every translate thanks to Khintchine's approximation theorem. This Diophantine condition accounts for the tight restrictions on $\{y_n\}_{n\in\mathbb{N}}$ in Theorem 1.4.
On the other hand, assuming an even stronger Diophantine condition (which holds for a null set of translates), we can handle sequences decaying polynomially faster than $1/n^2$, with a much faster excursion rate towards the cusp, see Theorem 4.4. We also prove a non-equidistribution result (which, this time, holds for every $x$) when $y_n = c/n^2$ and the constant $c$ is restricted to some range, see Theorem 4.5. The trade-off of this upgrade from Theorem 1.4 to the everywhere non-equidistribution result is that we can no longer prove the full escape to the cusp along subsequences as in Theorem 1.4. As mentioned before, Theorem 1.5 follows from three more precise theorems which each handle a specific limiting measure. Our first result confirms equidistribution almost surely along a fixed subsequence of $\mathbb{N}$ for any sequence $\{y_n\}_{n\in\mathbb{N}}$ decaying at least polynomially.

Theorem 1.6. Fix $\alpha > 0$. Then there exists a fixed unbounded subsequence $N \subset \mathbb{N}$ such that for any sequence of positive numbers $\{y_n\}_{n\in\mathbb{N}}$ satisfying $y_n \ll n^{-\alpha}$ and for almost every $x \in \mathbb{R}/\mathbb{Z}$, both $\delta_{n,x,y_n}$ and $\delta_{n,x,y_n}^{\mathrm{pr}}$ weakly converge to $\mu_M$ as $n \in N$ goes to infinity.

Remark 1.12. It will be clear from our proof that one can take $N \subset \mathbb{N}$ to be any subsequence satisfying $\sum_{n\in N} n^{-c} < \infty$ for some positive $c < \min\{\tfrac{\alpha}{2}, 1-2\theta\}$; e.g. we may take $N = \{\lceil n^\kappa\rceil\}_{n\in\mathbb{N}}$ for any $\kappa > 1/\min\{\tfrac{\alpha}{2}, 1-2\theta\}$.

Theorem 1.6 follows from a second moment estimate for the discrepancies $|\delta_{n,x,y} - \mu_M|$ and $|\delta_{n,x,y}^{\mathrm{pr}} - \mu_M|$ along the closed horocycle $H_y$ (Theorem 5.2), together with a standard Borel–Cantelli type argument. This was also the strategy used in [27] when studying the Kronecker sequences in (1.3). Along these lines, they deduce from spectral estimates the equidistribution for almost every $\beta \in \mathbb{R}$ along a fixed subsequence $\{n^k\}_{n\in\mathbb{N}}$ when $y_n \ll n^{-\alpha}$, with $k \in \mathbb{N}$ depending on $\alpha > 0$. Then, using a continuity argument, this result is upgraded to equidistribution along the full sequence of positive integers, see [27, Section 4].
This continuity argument fails in our situation. Instead of applying spectral estimates directly to the second moment formulas, we express the latter in terms of certain Hecke operators (Proposition 5.1), and rely on available (spectral) bounds for their operator norm, see [10]. Contrary to spectral estimates, the recourse to Hecke operators allows us to have a uniform subsequence $N$ which is valid for all $\{y_n\}_{n\in\mathbb{N}}$ decaying at least polynomially. Next, we show that there exists a sequence $\{y_n\}_{n\in\mathbb{N}}$ decaying arbitrarily rapidly such that for almost every $x$, $R_n(x, y_n)$ (and thus also $R_n^{\mathrm{pr}}(x, y_n)$) escapes to the cusp with a certain rate along subsequences.

Theorem 1.7. Fix $z_0 \in M$. For any sequence of positive numbers $\{c_n\}_{n\in\mathbb{N}}$, there exists a sequence $\{y_n\}_{n\in\mathbb{N}}$ satisfying $0 < y_n < c_n$ for each $n \in \mathbb{N}$ and such that for almost every $x \in \mathbb{R}/\mathbb{Z}$
$$\limsup_{n\to\infty}\ \frac{\inf_{z\in R_n(x,y_n)} d_M(z_0, z)}{\log\log n} \ge 1. \qquad (1.13)$$

Finally, we show that escape to the cusp is not the only obstacle to equidistribution.

Theorem 1.8. Let $m \in \mathbb{N}$ and $Y > 0$ satisfy $m^2Y > 1$. Let $P_m \subset \mathbb{N}$ and $\nu_{m,Y}$ be as defined in (1.8) and (1.9) respectively. For any sequence of positive numbers $\{c_n\}_{n\in P_m}$, there exists a sequence $\{y_n\}_{n\in P_m}$ satisfying $0 < y_n < c_n$ for all $n \in P_m$ such that for almost every $x \in \mathbb{R}/\mathbb{Z}$, the set of limiting measures of $\{\delta_{n,x,y_n}\}_{n\in P_m}$ contains $\nu_{m,Y}$.

Remark 1.14. We note that $P_1$ is the set of prime numbers and $\nu_{1,Y} = \mu_Y$. Since $\delta_{p,x,y}^{\mathrm{pr}}(\Psi) = \tfrac{p}{p-1}\,\delta_{p,x,y}(\Psi) + O\big(p^{-1}\|\Psi\|_\infty\big)$ whenever $p$ is a prime number, when $m = 1$ the conclusion of Theorem 1.8 also holds for the sequence $\{\delta_{n,x,y_n}^{\mathrm{pr}}\}_{n\in P_1}$. We also note that it will be clear from our proof that Theorems 1.7 and 1.8 can be combined.
In fact, our argument shows that there always exists a sequence $\{y_n\}_{n\in\mathbb{N}}$ decaying faster than any prescribed sequence such that for almost every $x \in \mathbb{R}/\mathbb{Z}$ the set of limiting measures of $\{\delta_{n,x,y_n}\}_{n\in\mathbb{N}}$ contains the trivial measure and $\nu_{m,Y}$ for any finitely many pairs $(m, Y) \in \mathbb{N}\times\mathbb{R}_{>0}$ with $m^2Y > 1$, see Remark 7.26. Moreover, in view of Theorem 1.6, if $y_n \ll n^{-\alpha}$ for some $\alpha > 0$, then it also contains the hyperbolic area $\mu_M$ almost surely. For the rest of this introduction we describe the strategy of our proof of Theorem 1.7 (Theorem 1.8 follows from similar ideas). To detect cusp excursions, we study for each $n \in \mathbb{N}$ the occurrence of the events
$$x + \tfrac{j}{n} + iy_n \in C \quad\text{for all } 0 \le j \le n-1, \qquad (1.15)$$
where $C \subset M$ is some fixed cusp neighborhood of $M$. More precisely, we determine when the limsup set $I_\infty = \limsup_{n\to\infty} I_n$ is of full measure, where for each $n \in \mathbb{N}$,
$$I_n := \{x \in \mathbb{R}/\mathbb{Z} : R_n(x, y_n) \subset C\}$$
consists of the translates $x \in \mathbb{R}/\mathbb{Z}$ for which the events in (1.15) occur. This requires studying the left regular $u_{1/n}$-action on $C \subset M$ and thus calls for the underlying lattice to be normalized by $u_{1/n}$. Therefore, we construct an explicit tower of coverings $\{\Gamma_n\backslash\mathbb{H}\}_{n\in\mathbb{N}}$ in which each $\Gamma_n$ is a congruence subgroup normalized by $u_{1/n}$. We note that the existence of such $\Gamma_n < \Gamma$ is the starting point of our proof and it relies on the assumption that $\Gamma = \mathrm{SL}_2(\mathbb{Z})$; this construction would fail for $\Gamma$ replaced by a non-arithmetic lattice. The key ingredient of the proof will be a sufficient condition which states that if a point $\Gamma_n(x + iy_n) \in \Gamma_n\backslash\mathbb{H}$ visits a certain cusp neighborhood $C_n$ on $\Gamma_n\backslash\mathbb{H}$, then the events in (1.15) will be realized for $x \in \mathbb{R}/\mathbb{Z}$, that is, $x \in I_n$, see Lemma 7.6. Using this sufficient condition, we can then relate the measure of $I_n$ to the proportion of certain closed horocycles on $\Gamma_n\backslash\mathbb{H}$ visiting the cusp neighborhood $C_n \subset \Gamma_n\backslash\mathbb{H}$, which in turn, using the equidistribution of expanding closed horocycles on $\Gamma_n\backslash\mathbb{H}$, can be estimated for $y_n$ sufficiently small.
Since the sets $I_n$ also need to satisfy certain quasi-independence conditions for $I_\infty$ to have full measure (Lemma 2.5), we need to apply the equidistribution of certain subsegments of the expanding closed horocycles on $\Gamma_n\backslash\mathbb{H}$. More precisely, at the $n$-th step these subsegments will be taken to be the sets $I_m$ for all $m < n$. These subsegments are finite disjoint unions of subintervals whose number and size depend sensitively on the height parameters $\{y_m\}_{m<n}$, see Remark 6.3. If there existed an effective equidistribution result insensitive to the geometry of these subsegments, that is, with an error term depending only on the measure of the subsegments, then we would have effective control on the sequence $\{y_n\}_{n\in\mathbb{N}}$ in Theorem 1.7 (and similarly also in Theorem 1.8). However, it is not clear to us whether one should expect such an effective equidistribution result. Finally, we note that it was communicated to us by Strömbergsson that, using a number-theoretic interpretation of the aforementioned sufficient condition and some elementary estimates, one can alternatively prove Theorem 1.7 without going into these congruence covers, see Remark 7.17.

Structure of the paper

In Sect. 2, we collect some preliminary results that will be needed in the rest of the paper. In Sect. 3, we prove a key spectral estimate (Proposition 3.3) and proceed to prove Theorems 1.1 and 1.2. In Sect. 4, we prove Theorems 4.3 and 4.5 by examining the connections between Diophantine approximation and cusp excursions on the modular surface. In Sect. 5, we prove Theorem 1.6 by proving a second moment bound using Hecke operators. In Sect. 6, we study the left regular action of a normalizing element on the set of cusp neighborhoods of a congruence cover of the modular surface. Building on these results, we prove Theorems 1.7 and 1.8 in Sect. 7.
Notation

For two positive quantities $A$ and $B$, we will use the notation $A \ll B$ or $A = O(B)$ to mean that there is a constant $c > 0$ such that $A \le cB$, and we will use subscripts to indicate the dependence of the constant on parameters. We will write $A \asymp B$ for $A \ll B \ll A$. For any $z \in \mathbb{H}$ we denote by $e(z) := e^{2\pi iz}$. For any $n \in \mathbb{N}$, we denote by $\prod_{d\mid n}$ the product over all positive divisors of $n$, and by $\prod_{p\mid n\ \mathrm{prime}}$ the product over all prime divisors of $n$. For any $x \ge 0$ and $n \in \mathbb{N}$, $\sigma_x(n) := \sum_{d\mid n} d^x$ is the power-$x$ divisor function, which satisfies the estimate $\sigma_x(n) \ll_\epsilon n^{x+\epsilon}$ for any small $\epsilon > 0$.

Preliminaries

Let $G = \mathrm{SL}_2(\mathbb{R})$. We consider the Iwasawa decomposition $G = NAK$ with
$$N = \{u_x : x \in \mathbb{R}\}, \quad A = \{a_y : y > 0\}, \quad K = \{k_\theta : 0 \le \theta < 2\pi\},$$
where
$$u_x = \begin{pmatrix}1 & x\\ 0 & 1\end{pmatrix}, \quad a_y = \begin{pmatrix}y^{1/2} & 0\\ 0 & y^{-1/2}\end{pmatrix} \quad\text{and}\quad k_\theta = \begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}$$
respectively. In the coordinates $g = u_xa_yk_\theta$ on $G$, the Haar measure is given (up to scalars) by $dg = y^{-2}\,dx\,dy\,d\theta$. The group $G$ acts on the upper half-plane $\mathbb{H} = \{z = x+iy \in \mathbb{C} : y > 0\}$ via the Möbius transformation: $gz = \frac{az+b}{cz+d}$ for any $g = \begin{pmatrix}a & b\\ c & d\end{pmatrix} \in G$ and $z \in \mathbb{H}$. This action preserves the hyperbolic area $d\mu(z) = y^{-2}\,dx\,dy$ and induces an identification between $G/K$ and $\mathbb{H}$. Let $\Gamma < G$ be a lattice, that is, $\Gamma$ is a discrete subgroup of $G$ such that the corresponding hyperbolic surface $\Gamma\backslash\mathbb{H}$ has finite area (with respect to $\mu$). We denote by $\mu_\Gamma := \mu(\Gamma\backslash\mathbb{H})^{-1}\mu$ the normalized hyperbolic area on $\Gamma\backslash\mathbb{H}$, so that $\mu_\Gamma(\Gamma\backslash\mathbb{H}) = 1$. We note that when $\Gamma = \mathrm{SL}_2(\mathbb{Z})$, then $\mu_\Gamma = \mu_M$, with $\mu_M$ the normalized hyperbolic area on the modular surface $M$ given as in the introduction. It is well known that in this case $\mu(M) = \pi/3$, and hence
$$d\mu_M(z) = \frac{3}{\pi}\,\frac{dx\,dy}{y^2}. \qquad (2.1)$$
Using the above identification between $\mathbb{H}$ and $G/K$ we can identify the hyperbolic surface $\Gamma\backslash\mathbb{H}$ with the locally symmetric space $\Gamma\backslash G/K$. We can thus view subsets of $\Gamma\backslash\mathbb{H}$ as right $K$-invariant subsets of $\Gamma\backslash G$. Similarly, we can view functions on $\Gamma\backslash\mathbb{H}$ as right $K$-invariant functions on $\Gamma\backslash G$.
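The Iwasawa coordinates above are exactly the coordinates of the orbit point $g\cdot i$: $k_\theta$ fixes $i$, $a_y$ sends $i$ to $iy$, and $u_x$ sends $iy$ to $x+iy$, so $u_xa_yk_\theta\cdot i = x+iy$. A minimal numerical check (Python; the parameter values are arbitrary):

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product, matrices as nested tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mobius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

def u(x): return ((1.0, x), (0.0, 1.0))
def a_(y): return ((math.sqrt(y), 0.0), (0.0, 1.0 / math.sqrt(y)))
def k(t): return ((math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t)))

x, y, theta = 0.3, 1.7, 0.9
g = mat_mul(mat_mul(u(x), a_(y)), k(theta))
z = mobius(g, 1j)
print(z)   # close to 0.3 + 1.7j, independently of theta
```

In particular the map $g \mapsto g\cdot i$ descends to the identification $G/K \cong \mathbb{H}$ mentioned in the text.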
We note that, using the above description of the Haar measure, the probability Haar measure on $\Gamma\backslash G$ (when restricted to the subfamily of right $K$-invariant subsets) coincides with the normalized hyperbolic area $\mu_\Gamma$ on $\Gamma\backslash\mathbb{H}$.

Sobolev norms

In this subsection we record some useful properties of Sobolev norms. Let $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{R})$ be the Lie algebra of $G$. Fix a basis $\mathcal{B} = \{X_1, X_2, X_3\}$ for $\mathfrak{g}$; given a smooth test function $\Psi \in C^\infty(\Gamma\backslash G)$, we define the "$L^p$, order-$d$" Sobolev norm $S_{p,d}^\Gamma(\Psi)$ as
$$S_{p,d}^\Gamma(\Psi) := \sum_{\mathrm{ord}(\mathcal{D})\le d}\|\mathcal{D}\Psi\|_{L^p(\Gamma\backslash G)},$$
where $\mathcal{D}$ runs over all monomials in $\mathcal{B}$ of order at most $d$, and the $L^p$-norm is with respect to the normalized Haar measure on $\Gamma\backslash G$. For any $\Psi \in C^\infty(\Gamma\backslash G)$ (which we think of as a smooth left $\Gamma$-invariant function on $G$) and for any $h \in G$ we denote by $L_h\Psi(g) := \Psi(h^{-1}g)$ the left regular $h$-action on $\Psi$. It is easy to check that $L_h\Psi \in C^\infty(h\Gamma h^{-1}\backslash G)$, and since taking Lie derivatives commutes with the left regular action, we have
$$S_{p,d}^\Gamma(\Psi) = S_{p,d}^{h\Gamma h^{-1}}(L_h\Psi). \qquad (2.2)$$
Next we note that, using the product rule for Lie derivatives (see e.g. [21, p. 90]), the triangle inequality and the Cauchy–Schwarz inequality, for any monomial $\mathcal{D}$ of order $k \le d$ we have, for any smooth functions $\Psi_1, \Psi_2 \in C^\infty(\Gamma\backslash G)$,
$$\|\mathcal{D}(\Psi_1\Psi_2)\|_{L^p(\Gamma\backslash G)} \ll_k S_{2p,k}^\Gamma(\Psi_1)\,S_{2p,k}^\Gamma(\Psi_2) \le S_{2p,d}^\Gamma(\Psi_1)\,S_{2p,d}^\Gamma(\Psi_2).$$
In particular this implies that
$$S_{p,d}^\Gamma(\Psi_1\Psi_2) \ll_d S_{2p,d}^\Gamma(\Psi_1)\,S_{2p,d}^\Gamma(\Psi_2). \qquad (2.3)$$
Finally, we note that if $\Gamma' < \Gamma$ is a finite-index subgroup of $\Gamma$, then there is a natural embedding $C^\infty(\Gamma\backslash G) \hookrightarrow C^\infty(\Gamma'\backslash G)$, since each $\Psi \in C^\infty(\Gamma\backslash G)$ can be viewed as a smooth left $\Gamma'$-invariant function on $G$. Since the Sobolev norms are defined with respect to the normalized Haar measure on the corresponding homogeneous space, we have for $\Gamma' < \Gamma$ of finite index and $\Psi \in C^\infty(\Gamma\backslash G)$
$$S_{p,d}^{\Gamma'}(\Psi) = S_{p,d}^{\Gamma}(\Psi). \qquad (2.4)$$

Spectral decomposition

Let $\Gamma < G$ be a non-uniform lattice, that is, $\Gamma$ is a lattice and $\Gamma\backslash\mathbb{H}$ is not compact. Let $\Delta = -y^2\big(\tfrac{\partial^2}{\partial x^2} + \tfrac{\partial^2}{\partial y^2}\big)$ be the hyperbolic Laplace operator.
It is a second-order differential operator acting on $C^\infty(\Gamma\backslash\mathbb{H})$ and extends uniquely to a self-adjoint and positive semi-definite operator on $L^2(\Gamma\backslash\mathbb{H})$. Since $\Gamma$ is non-uniform, the spectrum of $\Delta$ is composed of a continuous part (spanned by Eisenstein series) and a discrete part (spanned by Maass forms), which further decomposes into the cuspidal spectrum and the residual spectrum. The residual spectrum always contains the constant functions (coming from the trivial pole of the Eisenstein series). If $\Gamma$ is a congruence subgroup, that is, $\Gamma$ contains a principal congruence subgroup $\Gamma(n) := \{\gamma \in \mathrm{SL}_2(\mathbb{Z}) : \gamma \equiv I_2 \pmod n\}$ for some $n \in \mathbb{N}$, then the residual spectrum consists only of the constant functions, see e.g. [15, Theorem 11.3]. Let $\{\phi_k\}$ be an orthonormal basis of the space of cusp forms that are eigenfunctions of the Laplace operator $\Delta$. Explicitly, for each $\phi_k$ there exists $\lambda_k \ge 0$ such that
$$\Delta\phi_k = \lambda_k\phi_k = s_k(1-s_k)\phi_k = \Big(\tfrac14 + r_k^2\Big)\phi_k.$$
Selberg's eigenvalue conjecture states that for congruence subgroups, $\lambda_k \ge 1/4$, or equivalently, there is no $r_k \in i(0, 1/2)$. Selberg's conjecture is known to be true for the modular surface $M$, and more generally, the best known bound towards this conjecture is currently $\lambda_k \ge \tfrac14 - \theta^2$, with $\theta = 7/64$, which follows from the bound of Kim and Sarnak towards the Ramanujan conjecture, see [19, p. 176]. Let now $\Gamma = \mathrm{SL}_2(\mathbb{Z})$. In the notation introduced at the beginning of this section, the Eisenstein series for the modular group at the cusp $\infty$ is defined for $\mathrm{Re}(s) > 1$ by
$$E(z, s) = \sum_{\gamma\in(\Gamma\cap\pm N)\backslash\Gamma}\mathrm{Im}(\gamma z)^s \qquad (2.5)$$
with a meromorphic continuation to $s \in \mathbb{C}$. Moreover, for any $s \in \mathbb{C}$, $E(\cdot, s)$ is an eigenfunction of the Laplace operator with eigenvalue $s(1-s)$. Let $\Psi \in L^2(M)$; we have the following spectral decomposition (see [15, Theorems 4.7 and 7.3])
$$\Psi(z) = \mu_M(\Psi) + \sum_{r_k\ge 0}\langle\Psi, \phi_k\rangle\,\phi_k(z) + \frac{1}{4\pi}\int_{-\infty}^{\infty}\big\langle\Psi, E\big(\cdot, \tfrac12+ir\big)\big\rangle\,E\big(z, \tfrac12+ir\big)\,dr, \qquad (2.6)$$
where the convergence holds in the $L^2$-norm topology, and is pointwise if $\Psi \in C_c^\infty(M)$.
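The eigenvalue statement behind (2.5) reduces to the fact that each building block $y^s = \mathrm{Im}(z)^s$ already satisfies $\Delta y^s = -y^2(y^s)'' = s(1-s)y^s$; note that for $s = \tfrac12 + ir$ this eigenvalue is $\tfrac14 + r^2$, matching the parametrization of $\lambda_k$ above. A quick finite-difference sanity check (Python; the choices of $s$, the base point $y$ and the grid step are arbitrary):

```python
def laplace_on_power(s, y, h=1e-4):
    """Approximate Delta f = -y^2 (f_xx + f_yy) for f(x, y) = y**s.
    f is x-independent, so f_xx = 0; approximate f_yy by a central
    second difference in y (works for complex s, since y > 0)."""
    f = lambda t: t**s
    fyy = (f(y + h) - 2 * f(y) + f(y - h)) / h**2
    return -y**2 * fyy

s, y = 0.5 + 3.2j, 1.3     # on the critical line: s(1-s) = 1/4 + r^2 with r = 3.2
lhs = laplace_on_power(s, y)
rhs = s * (1 - s) * y**s
print(lhs, rhs)            # agree up to the O(h^2) discretization error
```

Here $s(1-s) = 0.25 + 3.2^2 = 10.49$ is real, as expected for tempered spectral parameters.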
As a direct consequence we have, for $\Psi \in L^2(M)$,
$$\|\Psi\|_2^2 = \mu_M(\Psi)^2 + \sum_{r_k\ge 0}\big|\langle\Psi,\phi_k\rangle\big|^2 + \frac{1}{4\pi}\int_{-\infty}^{\infty}\Big|\big\langle\Psi, E\big(\cdot,\tfrac12+ir\big)\big\rangle\Big|^2\,dr. \qquad (2.7)$$

Hecke operators

The spectral theory of $M$ has extra structure due to the existence of Hecke operators. The main goal of this subsection is to prove an operator norm bound for Hecke operators; the main reference is [15, Section 8.5]. For any $n \in \mathbb{N}$ define the set
$$L_n := \big\{n^{-1/2}g : g \in M_2(\mathbb{Z}),\ \det(g) = n\big\} \subset G, \qquad (2.8)$$
where $M_2(\mathbb{Z})$ is the space of two-by-two integral matrices. The $n$-th Hecke operator $T_n$ is defined by, for any $\Psi \in L^2(M)$,
$$T_n(\Psi)(z) = \frac{1}{n^{1/2}}\sum_{\gamma\in\Gamma\backslash L_n}\Psi(\gamma z).$$
The Hecke operator $T_n$ is a self-adjoint operator on $L^2(M)$, and since $T_n$ commutes with the Laplace operator (since $\Delta$ is defined via right multiplication and $T_n$ is defined via left multiplication), the orthonormal basis of the space of cusp forms $\{\phi_k\}$ can be chosen to consist of joint eigenfunctions of all $T_n$, that is, $T_n\phi_k = \lambda_{\phi_k}(n)\phi_k$. On the other hand, for any $r \in \mathbb{R}$ the Eisenstein series $E(z, 1/2+ir)$ is an eigenfunction of $T_n$ with eigenvalue
$$\lambda_r(n) := \sum_{d\mid n}\Big(\frac{n}{d^2}\Big)^{ir},$$
see [15, Equation (8.33)]. It is clear that $|\lambda_r(n)| \le \sigma_0(n)$, with $\sigma_0(n)$ the divisor function. For the eigenvalues of cusp forms it is conjectured (Ramanujan–Petersson) that for any $\phi_k$ as above and for any $n \in \mathbb{N}$, $|\lambda_{\phi_k}(n)| \le \sigma_0(n)$. The aforementioned bound of Sarnak and Kim [19] implies that $|\lambda_{\phi_k}(n)| \le \sigma_0(n)\,n^{7/64}$. Using these bounds on the eigenvalues together with the spectral decompositions (2.6) and (2.7), we have the following bound on the operator norm of the Hecke operator, see also [10, pp. 172–173].

Proposition 2.1. For any $\Psi \in L^2(M)$ and for any $n \in \mathbb{N}$ we have
$$\big\langle\Psi_0, T_n(\Psi_0)\big\rangle_{L^2(M)} \ll_\epsilon n^{\theta+\epsilon}\,\|\Psi\|_2^2,$$
where $\Psi_0 := \Psi - \mu_M(\Psi)$ and $\theta = 7/64$ as before.

Hecke operators attached to a group element

Let $\Gamma = \mathrm{SL}_2(\mathbb{Z})$ and let $M = \Gamma\backslash\mathbb{H}$ be the modular surface as above. There is another type of Hecke operator on $L^2(M)$, defined via a group element in $\mathrm{SL}_2(\mathbb{Q})$.
Namely, for each h ∈ SL 2 (Q) the Hecke operator attached to h, denoted by T h , is defined by that for any ∈ L 2 (M) T h ( )(z) = 1 #( \ h ) g∈ \ h (gz), (2.9) where h = {γ 1 hγ 2 : γ 1 , γ 2 ∈ } is the double coset attached to h. We note that T h is well-defined since is left -invariant. For our purpose, we will need another expression for T h . For any h ∈ SL 2 (Q) we denote by h := ∩h −1 h. We note that the map from to \ h sending γ ∈ to hγ induces an identification between h \ and \ h . This identification induces the following alternative expression for T h : T h ( )(g) = 1 [ : h ] γ ∈ h \ (hγ g). (2.10) It is clear from the definition that T h is defined only up to representatives for the double coset h , that is, T h = T h whenever h = h . For a fixed h ∈ SL 2 (Q), we call n ∈ N the degree of h if n is the smallest positive integer such that nh ∈ M 2 (Z). Using elementary column and row operations one can see that for h ∈ SL 2 (Q) with degree n h = diag(1/n, n) = n −1 g : g ∈ M 2 (Z), det(g) = n 2 , gcd(g) = 1 ⊂ G, (2.11) where gcd(g) is the greatest common divisor of the entries of g. Thus we can parameterize the Hecke operators by their degrees, that is, we will denote by T n := T h for any h ∈ SL 2 (Q) with degree n. We also note that by direct computation when h = diag(1/n, n) we have h = 0 (n 2 ), implying that for any h ∈ SL 2 (Q) with degree n (see e.g. [6, Section 1.2]) ν n := #( \ h ) = [ : h ] = [ : 0 (n 2 )] = n 2 p|n prime 1 + p −1 . (2.12) Now using the description (2.11) we have the double coset decomposition L n 2 = d|n d −1 0 0 d . This decomposition together with the definitions (2.8), (2.9) and (2.12) implies the relation nT n 2 = d|n ν d T d . Thus by the Möbius inversion formula we have T n = n ν n d|n μ(d) d T n 2 /d 2 . (2.13) Using this relation and Proposition 2.1 we can prove the following operator norm bounds for T n which we will later use, see also [3, Theorem 1.1] for such bounds in a much greater generality. 
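The Möbius inversion step leading to (2.13) can also be checked symbolically: treating the values T_d as unknowns and *defining* T_{n²} through the double-coset relation n T_{n²} = Σ_{d|n} ν_d T_d, the inversion formula must return T_n exactly. A small exact-arithmetic sketch (random rational stand-ins for the T_d; illustrative only):

```python
from fractions import Fraction
from random import Random

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factors(n):
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def mobius(n):
    ps = prime_factors(n)
    if any(n % (p * p) == 0 for p in ps):
        return 0
    return (-1) ** len(ps)

def nu(n):
    # nu_n = [Gamma : Gamma_0(n^2)] = n^2 prod_{p|n} (1 + 1/p), cf. (2.12)
    v = Fraction(n * n)
    for p in prime_factors(n):
        v *= Fraction(p + 1, p)
    return v

rng = Random(0)
N = 60
T = {d: Fraction(rng.randint(-50, 50), rng.randint(1, 9)) for d in range(1, N + 1)}

def T_sq(n):
    # T_{n^2} defined via the relation n * T_{n^2} = sum_{d|n} nu_d * T_d
    return sum(nu(d) * T[d] for d in divisors(n)) / n

for n in range(1, N + 1):
    # Moebius inversion (2.13): T_n = (n/nu_n) sum_{d|n} (mu(d)/d) T_{n^2/d^2}
    lhs = Fraction(n) / nu(n) * sum(Fraction(mobius(d), d) * T_sq(n // d) for d in divisors(n))
    assert lhs == T[n]
```

Exact rational arithmetic makes the check an identity rather than a floating-point approximation.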
Proposition 2.2 Keep the notation as in Proposition 2.1. For any ∈ L 2 (M) and for any n ∈ N we have 0 , T n ( 0 ) L 2 (M) n −1+2θ+ 2 2 . Proof. By Proposition 2.1 and using the relation (2.13), the trivial estimates |μ(d)| ≤ 1 and ν n ≥ n 2 and the triangle inequality we have 0 , T n ( 0 ) ≤ n −2 d|n (n/d) 0 , T n 2 /d 2 ( 0 ) n −2 d|n (n/d) 1+2θ+2 2 2 = n −1+2θ+2 σ −1+2θ+2 (n) 2 2 n −1+2θ+ 2 2 . Equidistribution of subsegments of expanding closed horocycles We record a special case of Sarnak's result [28, Theorem 1] on effective equidistribution of expanding closed horocycles, namely: Proposition 2.3 Let < SL 2 (Z) be a congruence subgroup and assume that has a cusp at ∞ with width one. Then for any ∈ C ∞ ( \H) ∩ L 2 ( \H) satisfying 2 < ∞ and for any 0 < y < 1 we have 1 0 (x + iy)dx − μ ( ) 3/4 2 1/4 2 y 1/2 , (2.14) where the implied constant is absolute, independent of , and y, and the L 2 -norm is with respect to the normalized hyperbolic area μ . Remark 2.15 We omit the proof here and refer the reader to [18, (3.5)]. We note that while [18] only deals with the case when = 0 ( p) with p a prime number, the proof there works for general congruence subgroups, given that they have trivial residual spectrum; see [15,Theorem 11.3]. We will also need the following (non-effective) equidistribution result replacing the whole closed horocycle by a fixed subsegment: Proposition 2.4 Let < SL 2 (Z) be as in Proposition 2.3. Let I ⊂ (0, 1) be an open interval, then for any ∈ C c ( \H) we have lim y→0 + 1 |I | I (x + iy)dx = μ ( ). (2.16) The proof of Proposition 2.4 uses Margulis' thickening trick [25] and mixing property of the geodesic flow on the unit tangent bundle of \H; this approach is also effective, see e.g. [17,Proposition 2.3]. A proof of (2.16) using spectral methods was also sketched in [12,Theorem 1 ]. 
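Proposition 2.3 can be illustrated numerically for the modular surface: sample a low closed horocycle, reduce each point to the standard fundamental domain, and compare the visit frequency of the cuspidal region {Im z > Y} with its normalized measure, which for Y ≥ 1 equals 3/(πY). The script below is a rough sanity check under floating-point reduction (the tolerance 0.03 is chosen loosely to absorb the finite-y and discretization errors):

```python
import math

def reduce_z(x, y):
    # Gauss reduction of x + iy to the standard fundamental domain of SL(2,Z)
    z = complex(x, y)
    for _ in range(10000):  # guard against floating-point edge cases
        z = complex(z.real - round(z.real), z.imag)
        if abs(z) < 1 - 1e-15:
            z = -1 / z  # inversion strictly increases the imaginary part
        else:
            return z
    return z

N, y, Y = 100000, 1e-4, 2.0
hits = sum(1 for j in range(N) if reduce_z((j + 0.5) / N, y).imag > Y)
frac = hits / N
target = 3 / (math.pi * Y)  # normalized measure of {Im z > Y}, valid for Y >= 1
assert abs(frac - target) < 0.03
```

The same reduction routine is reused below when we probe the cusp-excursion lemmas of Section 4.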
We also note that both equidistribution results in Propositions 2.3 and 2.4 can be lifted to the unit tangent bundle of Γ\H (with necessary modifications to the error term in (2.14)); since we will only be working at the level of the hyperbolic surface, we state these two results in the current format for convenience of our discussion. We further refer the reader to [13,30] for some much stronger effective equidistribution results regarding long enough (varying) subsegments of expanding closed horocycles.

Remark 2.17 Proposition 2.4 can be equivalently stated as follows: for any fixed open interval I ⊂ (0, 1), the measures μ_{I,y} converge weakly to μ_Γ as y → 0⁺, where for any y ∈ (0, 1) and Φ ∈ C_c(Γ\H),

μ_{I,y}(Φ) := (1/|I|) ∫_I Φ(x + iy) dx.

Thus by the Portmanteau theorem, (2.16) extends to Φ = χ_B with B ⊂ Γ\H a Borel subset whose boundary has measure zero. More generally, let ρ : [0, 1) → R be a Riemann integrable function. Since ρ can be weakly approximated from both above and below by step functions, we have

lim_{y→0⁺} ∫_0^1 ρ(x) χ_B(x + iy) dx = μ_Γ(B) ∫_0^1 ρ(x) dx

with B ⊂ Γ\H a Borel set whose boundary has measure zero.

A quantitative Borel-Cantelli lemma

Finally, we record here a quantitative Borel-Cantelli lemma which ensures that the limsup set of a sequence of events has full measure under certain quasi-independence conditions.

Lemma 2.5 Let (X, B, ν) be a probability space and let {A_n}_{n∈N} be a sequence of measurable subsets in B. For any n, m ∈ N set

R_{n,m} := ν(A_n ∩ A_m) − ν(A_n)ν(A_m).

Suppose that ∃ C > 0 such that for all k_2 > k_1 ≥ 1,

Σ_{n,m=k_1}^{k_2} R_{n,m} ≤ C Σ_{n=k_1}^{k_2} ν(A_n), (2.18)

then Σ_{n∈N} ν(A_n) = ∞ implies that ν(limsup_{n→∞} A_n) = 1.

Remark 2.19 Keep the notation as in Lemma 2.5. It was shown in [20, Proposition 5.4] that if ∃ C > 0 and η > 1 such that for any n ≠ m,

R_{n,m} ≤ C √(ν(A_n)ν(A_m)) / |n − m|^η,

then the sequence {A_i}_{i∈N} satisfies the condition (2.18). We will use the following slightly modified version of the quantitative Borel-Cantelli lemma which has the flexibility to consider a sequence of measurable sets {A_n}_{n∈S} indexed by a general unbounded subset S ⊂ N.
Corollary 2.6 Let (X , B, ν) be as in Lemma 2.5. Let S ⊂ N be an unbounded subset and let {A n } n∈S be a sequence of measurable subsets in B. Suppose that ∃C > 0 and η > 1 such that ∀n, m ∈ S with m < n, R n,m ≤ C ν(A n )ν(A m ) n η , (2.20) then n∈S ν(A n ) = ∞ implies that ν lim n∈S n→∞ A n = 1. Proof For any i ∈ N let a i ∈ S be the i-th integer in S and let B i := A a i . For any i, j ∈ N let R i, j := ν(B i ∩ B j ) − ν(B i )ν(B j ) so that R i, j = R a i ,a j . Then by for any i < j we have R i, j = R a i ,a j ≤ C ν(A a i )ν(A a j ) a η j = C ν(B i )ν(B j ) a η j < C ν(B i )ν(B j ) |i − j| η , where for the first inequality we used the assumption (2.20) and for the second inequality we used the estimates a j ≥ j > j − i and ν(B i )ν(B j ) ≤ 1. Thus in view of Remark 2.19 and Lemma 2.5 we have i∈N ν(B i ) = ∞ implies that ν lim i→∞ B i = 1 which is equivalent to the conclusion of this corollary in view of the relation B i = A a i . Equidistribution range Let M = SL 2 (Z)\H. Since we fix = SL 2 (Z) throughout this section, we abbreviate the Sobolev norm S p,d by S p,d . In this section, we prove Theorems 1.1 and 1.2. The main ingredient of our proof is an explicit bound of Fourier coefficients which follows from a slight modification of the estimates obtained in [18]. Bounds on Fourier coefficients Let ∈ C ∞ c (M). Since is left -invariant, it is invariant under the transformation determined by u 1 : z → z +1, and it thus has a Fourier expansion for in the variable x = Re(z): (x + iy) = m∈Z a (m, y)e(mx), (3.1) where a (m, y) = 1 0 (x + iy)e(−mx)dx. Similarly we denote by a φ k (m, y) and a(s; m, y) the mth Fourier coefficients of the Hecke-Maass form φ k and the Eisenstein series E(·, s) respectively. Estimates on these Fourier coefficients yield, via the spectral expansion (2.6), estimates on the Fourier coefficients of . Namely, a (m, y) = r k ≥0 , φ k a φ k (m, y) + 1 4π ∞ −∞ , E(·, 1 2 + ir) a( 1 2 + ir; m, y)dr. 
We record the following bounds for a_{φ_k}(m, y) and a(s; m, y):

Lemma 3.1 For any m ≠ 0 and any ε > 0 we have

|a_{φ_k}(m, y)| ≪_ε |m|^θ y^{1/2−ε} (r_k + 1)^{−1/3+ε} min{1, e^{πr_k/2 − 2π|m|y}}, (3.2)

and

|a(1/2 + ir; m, y)| ≪_ε y^{1/2−ε} (1 + |r|)^{−1/3+ε} min{1, e^{π|r|/2 − 2π|m|y}}, (3.3)

where θ = 7/64 is the best known bound towards the Ramanujan conjecture as before.

Remark 3.4 Contrary to [18], which uses the trivial bound min{1, e^{πr/2 − 2π|m|y}} ≤ 1, we keep this term.

Proposition 3.2 [18, Proposition 3.4] For any Φ ∈ C_c^∞(M), we have that

a_Φ(0, y) = μ_M(Φ) + O(‖ΔΦ‖_2^{3/4} ‖Φ‖_2^{1/4} y^{1/2}). (3.5)

Moreover, for any m ≠ 0, any ε > 0 and any α_0 > 5/3, we have

a_Φ(m, y) ≪_{α_0, ε, p} S_{α_0}(Φ) y^{1/2−ε} |m|^θ, (3.6)

where S_{α_0} is a Sobolev norm of degree α_0.

Remark 3.7 The Sobolev norm S_{α_0} is explicit from the proof of [18, Proposition 3.4]: writing α_0 = 5/3 + ε with ε > 0, then S_{α_0}(Φ) = S_{2,0}(Φ)^{2/3−ε/2} S_{2,2}(Φ)^{1/3+ε/2} for any Φ ∈ C_c^∞(M). In particular, using the estimate S_{2,0}(Φ) ≤ S_{2,2}(Φ) we have S_{α_0}(Φ) ≤ S_{2,2}(Φ).

The following refinement of this last estimate allows one to estimate the Fourier coefficients when |m| > y^{−1} is large. This refinement is crucial for our later results, and the price we pay is a Sobolev norm of higher degree.

Proposition 3.3 Let Φ ∈ C_c^∞(M). Whenever |m|y > 1 and for any ε > 0, we have

|a_Φ(m, y)| ≪_ε S_{2,2}(Φ) |m|^{−4/3+θ+ε} y^{−5/6}.

Proof For the contribution from the cusp forms we apply the bound (3.2) to the Fourier coefficients, the bound

min{1, e^{πr/2 − 2π|m|y}} ≤ e^{−π|m|y} if 0 ≤ r ≤ 2|m|y, and ≤ 1 if r > 2|m|y, (3.8)

and the relation ⟨ΔΦ, φ_k⟩ = ⟨Φ, Δφ_k⟩ = (1/4 + r_k²)⟨Φ, φ_k⟩ to get that

Σ_{r_k≥0} ⟨Φ, φ_k⟩ a_{φ_k}(m, y) ≪ Σ_{0≤r_k≤2|m|y} |⟨Φ, φ_k⟩| |m|^θ y^{1/2−ε} (r_k + 1)^{−1/3+ε} e^{−π|m|y} + Σ_{r_k>2|m|y} |⟨ΔΦ, φ_k⟩| |m|^θ y^{1/2−ε} r_k^{−7/3+ε}. (3.9)

Now using Cauchy-Schwarz followed by summation by parts (together with Weyl's law stating that #{r_k : r_k ≤ M} ≪ M², see e.g. [15, Corollary 11.2]) we can bound

Σ_{0≤r_k≤2|m|y} |⟨Φ, φ_k⟩| (r_k + 1)^{−1/3+ε} ≤ ‖Φ‖_2 (Σ_{0≤r_k≤2|m|y} (r_k + 1)^{−2/3+2ε})^{1/2} ≪ ‖Φ‖_2 (|m|y)^{2/3+ε}.
Similarly, for the second sum we can bound r k >2|m|y | , φ k | r −7/3+ k ≤ 2 ⎛ ⎝ r k >2|m|y r −14/3+2 k ⎞ ⎠ 1/2 2 (|m|y) −4/3+ . To summarize, the left-hand side of (3.9) is bounded by 2 |m| 2/3+θ+ y 7/6 e −π |m|y + 2 |m| −4/3+θ+ y −5/6 . (3.10) For the contribution from the continuous spectrum using the estimates (3.3), (3.8), the relation , E(·, 1 2 + ir) = ( 1 4 + r 2 ) , E(·, 1 2 + ir) and Cauchy-Schwarz we can similarly bound ∞ −∞ , E(·, 1 2 + ir) a( 1 2 + ir; m, y)dr by e −π |m|y y 1/2− |r |≤2|m|y , E ·, 1 2 + ir (|r | + 1) −1/3+ dr + y 1/2− |r |>2|m|y , E ·, 1 2 + ir |r | −7/3+ dr y 1/2− 2 (|m|y) 1/6+ e −π |m|y + 2 (|m|y) −11/6+ , which is subsumed by the right-hand side of (3.10) (since |m|y > 1). Finally, we conclude the proof by applying the bounds max{ 2 , 2 } ≤ S 2,2 ( ) and e −π |m|y (|m|y) −2 (again since |m|y > 1) to the right hand side of (3.10). The following corollary of Proposition 3.3 is the key estimate that we will use to prove Theorem 1.1. Corollary 3.4 Let q be a positive integer. For any ∈ C ∞ c (M), y > 0, and any > 0, we have m =0 |a (qm, y)| S 2,2 ( )q −1 y −(1/2+θ+ ) . Proof If qy ≤ 1 we can separate the above sum into two parts to get m =0 |a (qm, y)| = 1≤|m|≤(qy) −1 |a (qm, y)| + |m|>(qy) −1 |a (qm, y)| . Applying ( where for the second estimate we used that 4/3 − θ − > 1. If qy > 1 then we have |qm|y > 1 for all m = 0. We can apply Proposition 3.3 to a (qm, y) for all integers m = 0 to get m =0 |a (qm, y)| S 2,2 ( ) |m| =0 |qm| −4/3+θ+ y −5/6 S 2,2 ( )q −4/3+θ+ y −5/6 S 2,2 ( )q −1 y −(1/2+θ+ ) , where for the last estimate we used that θ < 1/3 − . Remark 3.11 The estimates in [18] hold more generally for any conjugate to some 0 ( p). In this generality, there might be (finitely many) exceptional cusp forms with r k ∈ i(0, θ]. For such forms, it was shown in [18,Lemma 3.7] that for any m = 0 a φ k (m, y) , p 2 |m| θ y 1/2− (|m|y) −|r k |+ e −2π |m|y . 
Using the estimates (|m|y) −|r k |+ e −2π |m|y < (|m|y) −θ when |m|y ≤ 1 and (|m|y) −|r k |+ e −2π |m|y (|m|y) −2 when |m|y > 1 one can easily recover Corollary 3.4 for φ k , and hence for a general ∈ C ∞ c ( 0 ( p)\H). Then one can easily deduce analogous estimates as in Theorem 1.1 for , see the arguments in the next subsection. Proof of Theorem 1.1 In this subsection we prove Theorem 1.1. In view of (3.5) it suffices to prove the following proposition. |W J (m)| = 1 ϕ(n) j∈(Z/nZ) × e m j n = |μ(n m )| ϕ (n m ) ≤ 1 ϕ(n m ) . Hence we have m =0 a (m, y)W J (m) ≤ m =0 |a (m, y)| ϕ(n m ) = d|n 1 ϕ(d) m =0 gcd(m,n)=n/d |a (m, y)| ≤ d|n 1 ϕ(d) m =0 (n/d)|m |a (m, y)| d|n 1 ϕ(d) n d −1 y −(1/2+θ+ ) n −1 σ /2 (n)y −(1/2+θ+ ) n −1+ y −(1/2+θ+ ) , where for the second inequality we used the fact that gcd(m, n) = n/d implies that (n/d) | m, for the third inequality we applied Corollary 3.4 and for the second last inequality we applied the estimate ϕ(d) d 1− /2 . Full range equidistribution for rational translates In this subsection we prove Theorem 1.2. We fix x = p/q a primitive rational number and let N q = n ∈ N : gcd(n 2 , q) | n be as in Theorem 1.2. As mentioned in the introduction, the key ingredient is a symmetry lemma for rational translates which generalizes the symmetry (1.6). Before stating the lemma, let us briefly explain why we need to restrict to the subsequence N q . Let n ∈ N and let y > 0. We need to study the distribution of the points (x + j n + iy) = ( p q + j n + iy) for 0 ≤ j ≤ n − 1. Let p j q j be the reduced form of p q + j n and in view of the symmetry (1.6) we have x + j n + iy = p j q j + iy = − p j q j + i q 2 j y , where p j is the multiplicative inverse of p j modulo q j . To further analyze the distribution of these points, we thus need to solve the congruence equation x p j ≡ 1 (mod q j ) in x. Write k = gcd(n, q) and q = q/k and n = n/k. 
Then p q + j n = p kq + j kn = pn + jq kq n , implying that q j = kq n gcd( pn + jq ,kq n ) = kn q gcd( pn + jq ,kn ) = q n gcd( pn + jq ,n) can be written canonically as a product of two integers. Here for the second equality we used that gcd( pn + jq , q ) = gcd( pn , q ) = 1. In view of the Chinese remainder theorem, the above congruence equation modulo q j is relatively easy to solve when the two factors q and n/ gcd( pn + jq , n) are coprime (see the proof of Lemma 3.6 for more details). This condition can be guaranteed for any j if gcd(q , n) = gcd(q/ gcd(q, n), n) = 1 which is equivalent to the condition n ∈ N q . Finally, we also note that by writing n and q in prime decomposition forms, it is not hard to check that n ∈ N q is equivalent to q = kl with l = gcd(n, q) | n and gcd(k, n) = 1. We now state the symmetry lemma. Lemma 3.6 Let m kl be a primitive rational number and let n ∈ N such that l | n and gcd(k, n) = 1. Then for any 0 ≤ j ≤ n − 1 and for any y > 0 we have m kl + j n + iy = − dlmna k − m n l + jk /d * b n/d + i d 2 k 2 n 2 y ,(3. 14) where d = d j := gcd(m n l + jk, n) and a = a d , b = b d ∈ Z are some fixed integers such that a n d + bk = 1. Here, for any integer x, x denotes the multiplicative inverse of x modulo k, x * denotes the multiplicative inverse of x modulo n/d. If we further assume gcd( j, n) = l = 1, then d j = gcd(mn + jk, n) = 1 and m k + j n + iy = − mna k − ( jk) * b n + i k 2 n 2 y .(3. 15) Proof Since l | n, by direct computation we have m kl + j n = mn/l+ jk kn . Note that since gcd(k, mn) = 1 we have gcd(m n l + jk, k) = gcd(m n l , k) = 1. This implies that gcd(m n l + jk, kn) = gcd(m n l + jk, n) = d. Hence let p q be the reduced form of m kl + j n , then we have ( p, q) = ((m n l + jk)/d, kn/d). Now since gcd( p, q) = 1, there exist some integers v, w ∈ Z such that γ = w v −q p ∈ . By direct computation we have γ m kl + j n + iy = γ p q + iy = − w q + i q 2 y . 
implying that m kl + j n + iy = − w q + i q 2 y = − w kn/d + i d 2 k 2 n 2 y ,(3.16) where for the second equality we used the relation q = kn/d. Moreover, since γ ∈ we have wp + vq = 1, implying that (again using the relation ( p, q) = ((m n l + jk)/d, kn/d)) w (m n l + jk)/d ≡ 1 (mod k n d ). We claim that w ≡ dlmn n d a + m n l + jk /d * kb (mod k n d ). (3.17) In view of the Chinese Remainder Theorem, since gcd(k, n/d) = 1, it suffices to check dlmn n d a + m n l + jk /d * kb (m n l + jk)/d ≡ 1 (mod k) and dlmn n d a + m n l + jk /d * kb (m n l + jk)/d ≡ 1 (mod n d ). For the first equation we have dlmn n d a + m n l + jk /d * kb (m n l + jk)/d ≡ dlmn n d amnld ≡ a n d = 1 − bk ≡ 1 (mod k), where for the first equality we used the fact that gcd(dl, k) = 1 (since d | n, l | n and gcd(k, n) = 1). The second equation follows similarly. Now plugging relation (3.17) into (3.16) we get (3.14). For the second half we note that d j = gcd(mn + jk, n) = gcd( jk, n) = 1. The first equality is true since l = 1, and the second equality is true since by assumption gcd(k, n) = gcd( j, n) = 1. Thus in view of (3.14), to prove (3.15) it suffices to note that (mn + jk) * ≡ ( jk) * (mod n), or equivalently, mn + jk ≡ jk (mod n). Remark 3.18 When k = 1 we can take (a, b) = (0, 1), then (3.15) recovers the symmetry (1.6). We also note that for the point (x + j/n + iy) with x irrational, the above symmetry clearly breaks. Proposition 3.7 Let p/q be a primitive rational number and let n ∈ N q . Then for any y > 0 we have R n p q , y = d|n R pr n/d x d , d 2 k 2 n 2 y ,(3. 19) where x d ∈ R/Z is some number depending on d (and also on p, q, n) and k := q/ gcd(n, q). If we further assume gcd(n, q) = 1, then R pr n p q , y = R pr n − pna q , 1 q 2 n 2 y ,(3. 20) where x denotes the multiplicative inverse of x modulo q and a ∈ Z is as in Lemma 3.6. 
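Lemma 3.6 is purely arithmetic and easy to stress-test by machine before using it: for admissible tuples (m, k, l, n, j) we recompute the reduced fraction p_j/q_j, build a, b with a(n/d) + bk = 1 via the extended Euclidean algorithm, and confirm that the explicit expression (3.17) really is an inverse of p_j modulo q_j. A hedged sketch (the triples (m, k, l) below are arbitrary test values satisfying gcd(m, kl) = 1):

```python
from fractions import Fraction
from math import gcd

def eea(x, y):
    # extended Euclid: returns (g, s, t) with s*x + t*y == g == gcd(x, y)
    if y == 0:
        return (x, 1, 0)
    g, s, t = eea(y, x % y)
    return (g, t, s - (x // y) * t)

def inv(x, mod):
    g, s, _ = eea(x % mod, mod)
    assert g == 1
    return s % mod

for m, k, l in [(3, 5, 2), (7, 4, 3), (1, 9, 1)]:  # arbitrary admissible choices
    for n in range(l, 40):
        if n % l or gcd(k, n) != 1:
            continue
        for j in range(n):
            num = m * (n // l) + j * k          # numerator of m/(kl) + j/n over kn
            d = gcd(num, n)
            P, Q = num // d, k * (n // d)       # the reduced fraction p_j / q_j
            assert Fraction(m, k * l) + Fraction(j, n) == Fraction(P, Q)
            g, a, b = eea(n // d, k)            # a*(n/d) + b*k == 1
            assert g == 1
            # the explicit inverse from (3.17)
            w = (d * l * inv(m * n, k) * (n // d) * a + inv(P, n // d) * k * b) % Q
            assert (w * P) % Q == 1
```

The two asserts at the end check precisely the two CRT congruences verified in the proof, mod k and mod n/d.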
Proof Relation (3.20) follows immediately from (3.15) by taking (m, k) = ( p, q) and noting that {(−[(q j) * b] ∈ (Z/nZ) × : j ∈ (Z/nZ) × } = (Z/nZ) × , which follows from the fact that gcd(bq, n) = 1 (since gcd(bq, n) = gcd(1−an, n) = 1). Here (q j) * denotes the multiplicative inverse of q j modulo n and b ∈ Z is as in Lemma 3.6. For (3.19), we set m = p, l = gcd(n, q) (so that k = q/l). As mentioned above, the condition gcd(n 2 , q) | n implies that gcd(k, n) = 1. Thus the pair ( m kl , n) satisfies the assumptions in Lemma 3.6 and we can apply (3.14) for the points p q + j n + iy = m kl + j n + iy , 0 ≤ j ≤ n − 1. Now for any d | n define D d := 0 ≤ j ≤ n − 1 : d j = gcd(m n l + jk, n) = d so that R n p q , y = d|n p q + j n + iy ∈ M : j ∈ D d . (3.21) Moreover, we note that since gcd(k, n) = 1, we have On the other hand, by (3.14) we have p q + j n + iy ∈ M : j ∈ D d = − dlmna d k − m n l + jk /d * b d n/d + i d 2 k 2 n 2 y ∈ M : j ∈ D d , where for any integer x, x denotes the multiplicative inverse of x modulo k, x * denotes the multiplicative inverse of x modulo n/d, and a d , b d ∈ Z are some fixed integers such that a d We can thus conclude the proof by noting that the above relation follows immediately from (3.22) together with the fact gcd( b d , n d ) = 1 (since gcd(b d , n d ) = gcd(b d k, n d ) = gcd(1 − a d n d , n d ) = 1). Using these two relations and the estimate (3.13) one gets the following effective estimates. where for the second estimate we applied (3.13) and for the third estimate we used the trivial estimate ϕ(n/d) < n/d. Now plugging y d = d 2 /(k 2 n 2 y) into the above equation we get δ n,x,y ( ) = 1 n d|n ϕ n d a 0, d 2 k 2 n 2 y + O ,q S 2,2 ( )n −1 σ 1+2θ+3 (n)y 1/2+θ+ = 1 n d|n ϕ n d a 0, d 2 k 2 n 2 y + O ,q S 2,2 ( )n 2θ+4 y 1/2+θ+ , where the dependence on k in the first estimate is absorbed into the dependence on q (since k := q/ gcd(n, q) ≤ q). 
The second estimate follows from similar (but easier) analysis with the relation (3.20) in place of (3.19). We are now in the position to prove Theorem 1.2. We will prove the following proposition from which Theorem 1.2 follows, see also Remark 3.23. Theorem 3.9 Let x = p/q be a primitive rational number and let n ∈ N q . Let y n = c/n α for some 1 < α < 2 and c > 0. Then for any ∈ C ∞ c (M) we have δ n,x,y n ( ) − μ M ( ) ,q,c, n α/2−1+ + n 2θ+4 −α(1/2+θ+ ) . If we further assume gcd(n, q) = 1, then we have δ pr n,x,y n ( ) − μ M ( ) ,q,c S 2,2 ( ) n α/2−1 + n 2θ+3 −α(1/2+θ+ ) . Remark 3.23 The dependence on in the first estimate can also be made explicit. In fact, we can remove this dependence by adding a factor of S 2,2 ( ) + ∞ to the right hand side of this estimate. We also note that since we may take θ = 7/64, the right hand side of these two estimates decays to zero as n → ∞ for any 1 < α < 2. The second estimate follows immediately from (3.5) and the trivial estimate |q| ≥ 1. For the first estimate we separate the sum into two parts to get 1 n d|n ϕ n d a 0, d 2 k 2 n 2 y n = 1 n ⎛ ⎜ ⎜ ⎝ d|n d<n 1−α/2 + d|n d≥n 1−α/2 ⎞ ⎟ ⎟ ⎠ ϕ n d a 0, d 2 k 2 n 2 y n . Applying ⎛ ⎜ ⎜ ⎝ d|n d<n 1−α/2 ϕ n d μ M ( ) + O c, n d −1 n α/2 + O ⎛ ⎜ ⎜ ⎝ d|n d≥n 1−α/2 ϕ n d ⎞ ⎟ ⎟ ⎠ ⎞ ⎟ ⎟ ⎠ = μ M ( ) + 1 n O c, ⎛ ⎜ ⎜ ⎝ n α/2 d|n d<n 1−α/2 1 + d|n d≥n 1−α/2 n d ⎞ ⎟ ⎟ ⎠ = μ M ( ) + O c, n α/2−1 σ 0 (n) = μ M ( ) + O ,c, n α/2−1+ , finishing the proof, where for the first estimate we used the identity that d|n ϕ(n/d) = n and the estimate that ϕ (n/d) < n/d, and for the second estimate we used the estimates d|n d<n 1−α/2 1 ≤ σ 0 (n) and d|n d≥n 1−α/2 n d = d|n d≤n α/2 d ≤ n α/2 d|n d≤n α/2 1 ≤ n α/2 σ 0 (n). 
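Two elementary facts used in the proofs above are easy to confirm numerically: the map j ↦ −((qj)* b) permutes (Z/nZ)^× whenever gcd(qb, n) = 1 (used for relation (3.20)), and the identity Σ_{d|n} φ(n/d) = n together with the tail estimate Σ_{d|n, d ≥ n^{1−α/2}} n/d ≤ n^{α/2}σ_0(n) (used in the proof of Theorem 3.9). A quick illustrative check:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def units(n):
    return {j for j in range(1, n + 1) if gcd(j, n) == 1}

def phi(n):
    return len(units(n))

alpha = 1.5  # any exponent in (1, 2)
for n in range(1, 120):
    # sum_{d|n} phi(n/d) = n
    assert sum(phi(n // d) for d in divisors(n)) == n
    # tail estimate: each term n/d <= n^{alpha/2} once d >= n^{1-alpha/2}
    tail = sum(n // d for d in divisors(n) if d >= n ** (1 - alpha / 2))
    assert tail <= n ** (alpha / 2) * len(divisors(n)) + 1e-6

for n in range(2, 60):
    for q in range(1, 15):
        if gcd(q, n) != 1:
            continue
        b = pow(q, -1, n)  # any b with a*n + b*q = 1 satisfies b = q^{-1} mod n
        image = {(-pow(q * j, -1, n) * b) % n for j in units(n)}
        assert image == units(n)
```

Since the map is a composition of group operations on (Z/nZ)^×, surjectivity is immediate; the loop simply witnesses it for small moduli.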
Quantitative non-equidistribution for rational translates As a direct consequence of the analysis in the previous subsection we also have the following quantitative non-equidistribution result for rational translates when {y n } n∈N is beyond the above range, generalizing the situation for {R pr n (0, y n )} n∈N . As before, for any Y > 0 we denote by μ Y the probability uniform distribution measure supported on H Y . Theorem 3.10 Let x = p/q be a primitive rational number and let y n = c/n 2 for some constant c > 0. Let ∈ C ∞ c ( ). Then for any n ∈ N q we have δ n,x,y n ( ) = 1 n d|n ϕ n d μ d 2 ck 2 n ( ) + O ,q,c S 2,2 ( )n −1+2 with k n = q/ gcd(n 2 , q). If we further assume that gcd(n, q) = 1, then δ pr n,x,y n ( ) = μ 1 cq 2 ( ) + O ,q,c S 2,2 ( )n −1+ . Proof These two effective estimates follow immediately from Proposition 3.8 by plugging in y n = c/n 2 and noting that a (0, Y ) = 1 0 (x + iY )dx = μ Y ( ). We can now give the Proof of Theorem 1.3. For part (1), in view of Theorem 3.10 only the second equation needs a proof. Since we are taking n ∈ P m going to infinity, it is sufficient to consider n = m ∈ P m with the prime number > q (so that q). For such n, we have gcd(n 2 , q) = gcd(m 2 2 , q) = gcd(m 2 , q). Since by assumption gcd(m 2 , q) | m and m | n, we can apply the first effective estimate in Theorem 3.10 for such n = m ∈ P m . Moreover, for any such n we have k n = q gcd(n 2 , q) = q gcd(m 2 , q) = q gcd(m, q) is a fixed number only depending on m and q. Here for the last equality we used the assumption that gcd(m 2 , q) | m. Now let n = m ∈ P m with q sufficiently large such that μ Y ( ) = 0 whenever Y > 2 /(ck n ) 2 (this can be guaranteed since k n is a fixed number and is compactly supported). In particular, for any d | n, μ d 2 /(ck 2 n ) ( ) = 0 whenever | d. 
This, together with the first estimate in Theorem 3.10 implies that for all such sufficiently large n = m ∈ P m δ n,x,y n ( ) = 1 m d|m ϕ m d μ d 2 ck 2 n ( ) + O ,q,c, ,m −1+2 = − 1 ν m, 1 ck 2 n ( ) + O ,q,c, ,m −1+2 , where for the second estimate we used that gcd(m, ) = 1 and is a prime number. We can now finish the proof by taking n = m → ∞ along the subsequence P m (equivalently, taking → ∞) and plugging in the relation k n = q/ gcd(m, q). For part (2), since R pr n (x, y n ) ⊂ R n (x, y n ), we only need to prove the full escape to the cusp for the sequence {R n (x, y n )} n∈N . Identify (up to a null set) M with the standard fundamental domain F := z ∈ H : Re(z) < 1 2 , |z| > 1 . For any n ∈ N and 0 ≤ j ≤ n − 1 let p j q j be the reduced form of x + j n = p q + j n = pn+q j qn so that by (1.6) x + j n + iy n = − p j q j + i q 2 j y n . Thus using the trivial inequality |q j | ≤ |q|n for all 0 ≤ j ≤ n − 1 and the assumption lim Negative results: in connection with Diophantine approximations Let = SL 2 (Z) and M = \H be the modular surface. Let μ M be the normalized hyperbolic area on M as before. In this section we prove a general result which captures the cusp excursion rate for the sample points R n (x, y n ) in terms of the Diophantine property of the translate x ∈ R/Z ∼ = [0, 1), see Theorem 4.3. Theorem 1.4 will then be an easy consequence of this result. Notation and a preliminary result on cusp excursions In this subsection we prove a preliminary lemma relating cusp excursions on the modular surface to Diophantine approximations. Let us first fix some notation. Finally, we record a distance formula that we will later use. Let d M (·, ·) be the distance function on M induced from the hyperbolic distance function d H on H, i.e., d M ( z 1 , z 2 ) = inf γ ∈ d H (γ z 1 , z 2 ).z ∈ C Y d M ( z 0 , z) ≥ log Y − c. 
(4.1) The estimate (4.1) holds for a general non-compact finite-volume hyperbolic manifold using reduction theory after Garland and Raghunathan [11,Theorem 0.6] combined with a distance estimate by Borel [2, Theorem C]. We give here a selfcontained elementary proof for the special case of the modular surface. Proof of Lemma 4.1. In view of the triangle inequality, we may assume z 0 = i. Note that d H (i, z) ≥ log Y for any z ∈ H with Im(z) ∈ (0, 1/Y ) ∪ (Y , ∞). Thus it suffices to show that if z ∈ C Y , then Im(γ z) ∈ (0, 1/Y ) ∪ (Y , ∞) for any γ ∈ . By the definition of C Y , we may assume z = x + iy ∈ H with y > Y . Now let γ = * * a b ∈ . If a = 0, then Im(γ z) = Im(z) > Y . If a = 0, then Im(γ z) = Im(z) |az + b| 2 = y (ax + b) 2 + a 2 y 2 ≤ 1 y < 1 Y . The following simple lemma is the key observation relating cusp excursions with Diophantine approximation. Then for any 0 ≤ j ≤ n − 1 we have x + j n + i 2Y n 2 ∈ C Y j ,2Y j , where Y j = gcd(n, m + j) 2 Y . (4.2) In particular, we have x + j n + i 2Y n 2 : 0 ≤ j ≤ n − 1 ⊂ C Y . (4.3) Proof The in particular part follows immediately from the inclusion C Y j ,2Y j ⊂ C Y , which in turn follows from the trivial bound Y j ≥ Y . Hence it suffices to prove the first half of the lemma. For simplicity of notation, we set r = 1/(2Y n 2 ). Then by assumption |x − m n | < r . Fix 0 ≤ j ≤ n − 1, and let p q be the reduced form of m+ j n (so that q = n gcd(n,m+ j) ). Then x + j n +ir ∈ H • p/q,r and x + j n +ir ∈ H p/q,r for some r < r < 2r . Take γ ∈ sending H • p/q,r to the region z ∈ H : Im(z) > 1/(2rq 2 ) = Y j . Then we have Im γ (x + j n + ir) > Y j and Im γ (x + j n + ir ) = Y j . Since r < r < 2r we can bound the hyperbolic distance d H γ (x + j n + ir), γ (x + j n + ir ) = log r r < log 2, implying that γ x + j n + ir ∈ z ∈ H : Y j < Im(z) < 2Y j , which implies (4.2). Full escape to the cusp along subsequences for almost every translate In this subsection we prove Theorem 4.3. 
Before stating this theorem, we first recall a definition from Diophantine approximation. Let ψ : N → (0, 1/2) be a non-increasing function. We say that x ∈ R is primitive ψ-approximable if there exist infinitely many n ∈ N such that the inequality x − m n < ψ(n) n (4.4) is satisfied by some m ∈ Z coprime to n. Since we assume ψ(N) ⊂ (0, 1/2), the existence of such an m implies its uniqueness. We prove the following: If x ∈ [0, 1) is primitive ψ-approximable, then R n (x, y n ) ⊂ C r n infinitely often. Remark 4.6 Since R pr n (x, y) ⊂ R n (x, y) for any n ∈ N, x ∈ R and y > 0, Theorem 4.3 also holds for translates of the primitive rational points. Proof of Theorem 4.3 Let x ∈ [0, 1) be primitive ψ-approximable. Then for Y n = 1/ (2nψ(n)), we have by (4.3) that x + j n + i ψ(n) n ∈ M : 0 ≤ j ≤ n − 1 ⊂ C Y n (4.7) for infinitely many n's. For every n ∈ N, set d n := Y n /r n = max {ψ(n)/(ny n ), ny n /ψ(n)}. Then d H (t + iψ(n)/n, t + iy n ) = log(d n ) (4.8) for any t ∈ R. As in the proof of Lemma 4.2, by (4.7) and (4.8) we have R n (x, y n ) ⊂ C Y n /d n for any n in (4.7). We now give a short Proof of Theorem 1.4 Let α = min{β, 2 − β}. For each n ≥ 2, let ψ(n) = 1/(n log n) and let {y n } n∈N be a sequence of positive numbers satisfying y n 1/(n 2 log β n). Then r n as in (4.5) is given by r n = 1 2 min{ψ(n) −2 y n , n −2 y −1 n } log α n. By Theorem 4.3, for any x ∈ [0, 1) primitive ψ-approximable, we have that R n (x, y n ) ⊂ C r n infinitely often. Hence by (4.1), for each such x ∈ R/Z, we have inf z∈R n (x,y n ) d M ( z 0 , z) ≥ log(r n ) + O(1) = α log log n + O(1) infinitely often, implying the inequality (1.10). Finally, since n∈N ψ(n) = ∞ and ψ is decreasing, the set of primitive ψ-approximable numbers in [0, 1) is of full measure by Khintchine's approximation theorem. For every irrational x ∈ R, the Diophantine exponent κ x > 0 is the supremum of κ > 0 for which x is primitive n −κ -approximable. 
Dirichlet's approximation theorem implies that κ x ≥ 1 for any irrational x and by Khintchine's theorem, κ x = 1 for almost every x ∈ R. When κ x > 1, we have the following result that yields much faster cusp excursion rates for our sample points while handling sequences {y n } n∈N decaying polynomially faster than 1/n 2 . lim n→∞ inf z∈R n (x,y n ) d M ( z 0 , z) log n ≥ min{2κ x − β, β − 2}. Proof Take κ ∈ (1, κ x ) and set α = min{2κ − β, β − 2}. Let ψ(n) = 1/n κ . Then x is primitive ψ-approximable since κ < κ x . By Theorem 4.3, we have R n (x, y n ) ⊂ C r n infinitely often with r n = 1 2 min{ψ(n) −2 y n , n −2 y −1 n } n α . This implies that lim n→∞ inf z∈R n (x,y n ) d M ( z 0 , z) log n ≥ α = min{2κ − β, β − 2}. Taking κ → κ x finishes the proof. A non-equidistribution result for all translates In this subsection we prove the following result which, together with part (1) of Theorem 1.3 implies non-equidistribution for all translates: Proof Let x ∈ [0, 1) be primitive ψ c -approximable, that is, there exist infinitely many n ∈ N satisfying |x − m/n| < c/n 2 = y n with some uniquely determined m ∈ Z satisfying gcd(m, n) = 1. For each such n, and for any 0 ≤ j ≤ n − 1, let k = gcd(n, m + j) 2 . Then by (4.2), (x + j/n + iy n ) ∈ C k 2 /(2c),k 2 /c . Moreover, since (k 2 /(2c), k 2 /c) ⊂ [1/(2c), 1/c] ∪ [2/c, 4/c] ∪ [9/(2c), ∞) for any k ∈ N, we have C k 2 /(2c),k 2 /c ⊂ E c for any k ∈ N, implying that R n (x, y n ) ⊂ E c for these infinitely many n ∈ N. Lemma 4.7 For any 0 < c < 3/2, we have μ M (E c ) ≤ 1 − 3 π 1 max{2c,4/c} − 2c 9 < 1. Proof Let U ⊂ M be the projection of the open set {z ∈ H : max {2c, 4/c} < Im(z) < 9/(2c)} . Since 0 < c < 3/2 we have max{2c, 4/c} < 9/(2c) implying that U is nonempty. We will show that E c is disjoint from U. Let U = {z ∈ F : max {2c, 4/c} < Im(z) < 9/(2c)} , E j c = z ∈ F : Im(z) ∈ I j for j = 2, 3. Moreover, since the interval (max {2c, 4/c} , 9/2c) intersects I 2 and I 3 trivially, we have E j c ∩ U = ∅ for j = 2, 3. 
It thus remains to show that E 1 c ∩ U = ∅. For this we note that z ∈ F satisfies the property that Im(z) = max γ ∈ Im(γ z). Hence to show E 1 c ∩ U = ∅, it suffices to show that max γ ∈ Im(γ z) ≤ max {2c, 4/c} for any z = s + it ∈ H with Im(z) = t ∈ I 1 = [1/(2c), 1/c]. For this, using the same discussion as in the proof of Lemma 4.1 we have for any z = s + it ∈ H with t ∈ [1/(2c), 1/c] max γ ∈ Im(γ z) ≤ max t, t −1 ≤ max {1/c, 2c} ≤ max {2c, 4/c} . Finally, using the above description of U and (2.1) we have by direct computation μ M (U) = 3 π 1 max{2c, 4/c} − 2c 9 implying that μ M (E c ) ≤ 1 − 3 π 1 max{2c,4/c} − 2c 9 < 1 (again since 0 < c < 3/2). Proof of Theorem 4.5 Let ψ c (n) = c/n. Since c ≥ 1/ √ 5, any irrational number is primitive ψ c -approximable by the Hurwitz's approximation theorem; see, e.g., [14,Theorem 193]. Hence by Lemma 4.6, for each irrational x ∈ [0, 1), we have R n (x, y n ) ⊂ E c infinitely often. Moreover, since c < 3/2 by Lemma 4.7 we have μ M (E c ) < 1, finishing the proof. Remark 4.9 The condition on the sequence {y n } n∈N in Theorem 4.5 is quite restrictive and the proof of Theorem 4.5 is much more involved than that of Theorem 4.3. We note that this is because we need to take care of the badly approximable numbers, that is, the set of irrational numbers that are not primitive ψ c -approximable for some c > 0. If x ∈ [0, 1) is not badly approximable, then a similar argument as in the proof of Theorem 4.3 using only the crude estimate (4.3) would already be sufficient to prove non-equidistribution of the sample points R n (x, y n ) for any sequence {y n } n∈N satisfying y n 1/n 2 . Second moments of the discrepancy Let = SL 2 (Z) and let M = \H be the modular surface as before. In this section we prove Theorem 1.6. Our proof relies on a second moment computation of the discrepancies |δ n,x,y − μ M | and |δ n,y ( ) respectively. Since we assume = SL 2 (Z) we will also use the notation μ for μ M . 
Relation to Hecke operators In this subsection we prove two preliminary estimates relating these second moments to the Hecke operators defined in Sect. 2.3. where 0 = − μ ( ), T u j/n is the Hecke operator associated to u j/n ∈ SL 2 (Q) defined as in (2.9), the Sobolev norm S( ) is defined by S( ) := S 4,2 ( ) 2 + S 2,2 ( )S 1,0 ( ),(5.3) and the implied constants are absolute. Proof Without loss of generality we may assume that is real-valued. Expanding the square in the left hand side of (5.1), doing a change of variables, and using the left u 1 -invariance of , we have that D n,y ( ) equals 1 n 2 n−1 j 1 , j 2 =0 1 0 (x + j 1 n + iy) (x + j 2 n + iy)dx − 2μ ( ) 1 n n−1 j=0 1 0 (x + j n + iy)dx + μ ( ) 2 = 1 n n−1 j=0 1 0 (x + iy) (x + j n + iy)dx − 2μ ( ) 1 0 (x + iy)dx + μ ( ) 2 . Applying (2.14) to the term 1 0 (x + iy)dx and using the trivial estimate 3/4 2 1/4 |μ ( )| ≤ S 2,2 ( )S 1,0 ( ) ≤ S( ),(5.u −1 j/n u j/n -invariant, we have F j ( ) ∈ C ∞ ( j \H). Moreover, F j ( )(x + iy) = (x + iy) (x + j n + iy). For each 0 ≤ j ≤ n − 1, it is easy to check that u 1 ∈ j and j contains the principal congruence subgroup (n 2 ), hence j satisfies the assumptions in Proposition 2.3. Then by (2.14), 1 0 F j ( )(x + iy)dx = j \H F j ( )(z)dμ j (z) + O F j ( ) 3/4 2 F j ( ) 1/4 2 y 1/2 . Next we note that by (2.3), F j ( ) 3/4 2 F j ( ) 1/4 2 ≤ S j 2,2 F j ( ) = S j 2,2 L u −1 j/n ≤ S j 4,2 ( ) S j 4,2 L u −1 j/n . Using the fact that is left -invariant and L u −1 j/n = S 4,2 ( ) , where for the second equality we used (2.2). Hence we have F j ( ) 3/4 2 F j ( ) 1/4 2 ≤ S j 2,2 F j ( ) ≤ S 4,2 ( ) 2 ≤ S( ) < ∞. (5.6) Thus applying (2.14) to F j ∈ C ∞ ( j \H) and using (5.6) we get 1 0 (x + iy) x + j n + iy dx = , L u −1 j/n L 2 ( j \H) + O S( )y 1/2 . 
(5.7) Plugging (5.7) into (5.5) and using the identities μ ( ) = μ j ( ) = μ j (L u −1 j/n ) (the second equality follows from the left G-invariance of the hyperbolic area μ j ) we get that Let F ⊂ H be a fundamental domain for \H. The disjoint union γ ∈ j \ γ F forms a fundamental domain for j \H. Thus we can conclude the proof of (5.1) by noting that γ ∈ j \ γ F 0 (z) 0 (u j/n z)dμ j (z) = γ ∈ j \ γ F 0 (z) 0 (u j/n z)dμ j (z) = F 0 (z) ⎛ ⎝ 1 [ : j ] γ ∈ j \ 0 (u j/n γ z) ⎞ ⎠ dμ (z) = F 0 (z) T u j/n ( 0 )(z)dμ (z), where for the second equation we did a change of variable z → γ z, used the leftinvariance of and the relation [ : j ]μ j = μ , and for the last equality we used the expression (2.10). Similarly, applying the estimates (2.14) and (5.4) and making change of variables we see that D pr n,y ( ) equals 1 ϕ(n) 2 n−1 j=0 c( j) 1 0 (x + iy) (x + j n + iy)dx − μ ( ) 2 + O S( )y 1/2 , where c( j) := # ([ j 1 ], [ j 2 ]) ∈ (Z/nZ) × × (Z/nZ) × : [ j 2 ] − [ j 1 ] = [ j] .= 1 ϕ(n) 2 n−1 j=0 c( j) 0 , T u j/n ( 0 ) L 2 ( \H) + O(S( )y 1/2 ). Finally we can finish the proof by noting that for each 0 ≤ j ≤ n − 1, c( j) ≤ ϕ(n) (since for each [ j 1 ] ∈ (Z/nZ) × , there is at most one [ j 2 ] ∈ (Z/nZ) × such that [ j 2 ] − [ j 1 ] = [ j]). Second moment estimates Combining Proposition 5.1 and the operator norm bound in Proposition 2.2 we have the following second moment estimates: where θ = 7/64 is the best bound towards the Ramanujan conjecture as before and the Sobolev norm S( ) is as defined in (5.3). Remark 5.9 It is also possible to approach the second moment computation using the spectral bounds on the Fourier coefficients of from Sect. 3.1 rather than Hecke operators. The spectral approach however yields a weaker estimate when y > 0 is small. For comparison, following the spectral approach, one obtains 1 0 |δ n,x,y ( ) − μ ( )| 2 dx n −1 y −2(θ+ ) + y 1/2 S 2,2 ( ). Proof of Theorem 5.2 First we prove (5.8). 
For each 0 ≤ j ≤ n − 1, it is clear that u j/n is of degree n j := n/ gcd(n, j), and thus T u j/n = T n j . Applying For any d | n, #{0 ≤ j ≤ n − 1 : n j = d} = ϕ(d), thus n j=1 n −1+2θ+ /4 j = d|n ϕ(d)d −1+2θ+ /4 < d|n d 2θ+ /4 = σ 2θ+ /4 (n) n 2θ+ /2 , where for the first inequality we used the trivial bound ϕ(d) < d. Finally, we observe that 0 2 ≤ 2 . We now give a quick Proof of Theorem 1. 6 Let α > 0 be the fixed number as in this theorem. Let β := min{ α 2 , 1 − 2θ }. Fix 0 < c < β and let N ⊂ N be an unbounded subsequence such that n∈N n −c < ∞. We want to show that for any {y n } n∈N satisfying y n n −α there exists a full measure subset I ⊂ R/Z such that for any x ∈ I , δ n,x,y n ( ) → μ M ( ) and δ pr n,x,y n ( ) → μ M ( ) for any ∈ C ∞ c (M) as n ∈ N goes to infinity. Since the function space C ∞ c (M) has a dense countable subset, it suffices to prove the above assertion for a fixed . Now we fix ∈ C ∞ c (M) and take > 0 sufficiently small such that β − 2 > c. For any n ∈ N define I n = I 1 n ∪ I 2 n ⊂ R/Z such that Thus by the second moment estimate (5.8), the assumption that y n n −α and Chebyshev's inequality we get |I n | ≤ I 1 n + I 2 n ≤ 2n max D n,y ( ), D pr n,y ( ) , n −β+2 < n −c , implying that n∈N |I n | < ∞. Hence taking I ⊂ R/Z to be the complement of this limsup set lim n∈N n→∞ I n ⊂ R/Z and by the Borel-Cantelli lemma we have I is of full measure. Moreover, for any x ∈ I , x ∈ I c n for all n ∈ N sufficiently large, that is, max δ n,x,y n ( ) − μ M ( ) , δ pr n,x,y n ( ) − μ M ( ) ≤ n − /2 , ∀ n ∈ N sufficiently large. In particular for such x, δ n,x,y n ( ) → μ M ( ) and δ pr n,x,y n ( ) → μ M ( ) as n ∈ N goes to infinity. 
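The proofs above rest on two elementary counting facts: c(j) ≤ φ(n) with Σ_j c(j) = φ(n)² (Proposition 5.1), and the degree n_j = n/gcd(n, j) taking each value d | n exactly φ(d) times (Theorem 5.2). Both are easy to confirm by brute force; the following is our own illustrative check, with hypothetical function names.

```python
from math import gcd
from collections import Counter

def phi(d):
    """Euler's totient, computed naively."""
    return sum(1 for a in range(1, d + 1) if gcd(a, d) == 1)

def c_count(n, j):
    """c(j) = #{(j1, j2) in (Z/nZ)^x x (Z/nZ)^x : j2 - j1 ≡ j (mod n)}."""
    return sum(1 for j1 in range(n)
               if gcd(j1, n) == 1 and gcd((j1 + j) % n, n) == 1)

def degree_counts(n):
    """Multiplicities of the degrees n_j = n / gcd(n, j) over 0 <= j <= n - 1."""
    return Counter(n // gcd(n, j) for j in range(n))

n = 12
counts = [c_count(n, j) for j in range(n)]
assert all(c <= phi(n) for c in counts)      # c(j) <= phi(n)
assert sum(counts) == phi(n) ** 2            # sum_j c(j) = phi(n)^2

degrees = degree_counts(36)
assert all(degrees[d] == phi(d) for d in degrees)  # #{j : n_j = d} = phi(d)
assert sum(degrees.values()) == 36                 # i.e. sum_{d | n} phi(d) = n
```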
Remark 5.10 The second moment D_{n,y}(Φ) is closely related to the sample points (1.2) considered in [12]: using the extra invariance δ_{n,x+1/n,y}(Φ) = δ_{n,x,y}(Φ) and applying a change of variables, one can easily check that

Thus let N ⊂ N be the fixed sequence as in the above proof. By Theorem 5.2 and the same Borel-Cantelli type argument, we have that for almost every x ∈ R/Z the sequence of sample points {Γ(x + j/n + iy_n) : 0 ≤ j ≤ n − 1} equidistributes on M with respect to μ_M as n ∈ N goes to infinity, as long as {y_n}_{n∈N} decays at least polynomially.

Left regular action of normalizing elements

In this section, Γ denotes a congruence subgroup, and we set Γ_1 = SL_2(Z). We moreover assume that there exists some h ∈ SL_2(Q) normalizing Γ, that is, h^{-1}Γh = Γ. It induces the left regular h-action on Γ\H given by Γz ∈ Γ\H → Γhz ∈ Γ\H. Since h normalizes Γ, this map is well defined: suppose Γz = Γz', that is, there exists some γ ∈ Γ such that z' = γz. Then Γhz' = Γhγz = Γ(hγh^{-1})hz = Γhz. The goal of this section is to describe this action on cylindrical cuspidal neighborhoods of Γ\H.

Cusp neighborhoods of congruence surfaces

Since Γ is a congruence subgroup, the set of cusps of Γ can be parameterized by the cosets Γ\(Q ∪ {∞}) (see e.g. [21, p. 222]), where the action of Γ on Q ∪ {∞} is defined via the Möbius transformation. We fix a complete list of coset representatives for Γ\(Q ∪ {∞}). For each cusp representative c, its stabilizer subgroup is given by Γ_c := τ_c N τ_c^{-1} ∩ Γ, where τ_c ∈ Γ_1 is such that τ_c ∞ = c. (More precisely, Γ_c is an index two subgroup of the stabilizer subgroup of c if −I_2 ∈ Γ.) The existence of such τ_c is guaranteed by the transitivity of the action of Γ_1 on Q ∪ {∞}. On the other hand, τ_c is only unique up to right multiplication by any element of ±N. We note that Γ_c is independent of the choice of τ_c, and since c is a cusp, Γ_c is nontrivial. Moreover, τ_c^{-1} Γ_c τ_c is a subgroup of N ∩ Γ_1 = ⟨u_1⟩.
Hence τ −1 c c τ c is a cyclic group generated by a unipotent matrix u ω c for some positive integer ω c , which is called the width of the cusp c. We can now define cusp neighborhoods on the hyperbolic surface \H around a cusp c ∈ . For any Y > 0, C ,c Y ⊂ \H denote the projection of the horodisc {τ c z ∈ H : Im(z) > Y } onto \H. Similarly, for any Y > Y > 0, let C ,c Y ,Y denote the projection of the cylindrical region {τ c z ∈ H : Y < Im(z) < Y } onto \H. We record the following two lemmas for the later purpose of computing the measure of certain unions of cusp neighborhoods. Lemma 6.1 If Y > Y > 1, the set C ,c Y ,Y is in one-to-one correspondence with the set {τ c z ∈ H : Re(z) ∈ R/ω c Z, Im(z) ∈ (Y , Y )}. (6.1) In particular, if −I 2 ∈ then for any Y > Y > 1 μ C ,c Y ,Y = 3ω c π [ 1 : ] 1 Y − 1 Y . (6.2) Proof The one-to-one correspondence is given by the projection of the above rectangular set onto \H. Indeed, since c ⊂ , this map projects the rectangular set in (6.1) onto C ,c Y ,Y . To show that it is also injective, suppose τ c z = τ c z for some z, z from this rectangular set. Then there exists some γ ∈ such that τ −1 c γ τ c z = z . If γ ∈ ± c then τ −1 c γ τ c ∈ ± u ω c , and this implies that z = z . Otherwise, let τ −1 c γ τ c = a b c d ∈ 1 . Since γ / ∈ ± c , c = 0. We easily see this cannot happen since it would imply Im(z ) = Im(z) |cz + d| 2 = Im(z) (cx + d) 2 + c 2 y 2 ≤ 1 y ≤ 1, contradicting that Im(z ) > Y > 1. For the area computation, we use the definition (2.1) of μ 1 together with μ 1 = [ 1 : ]μ (since −I 2 ∈ ). Lemma 6.2 Given two distinct cusps c 1 , c 2 ∈ , and any Y 1 , Y 2 ≥ 1, C ,c 1 Y 1 ∩ C ,c 2 Y 2 = ∅. Proof Since Y 1 , Y 2 ≥ 1, the sets {τ c 1 z ∈ H : Im(z) > Y 1 } and {τ c 2 z ∈ H : Im(z) > Y 2 } are subsets of the interior of the Ford circles based at c 1 and c 2 respectively. Two Ford circles are either disjoint or identical. Suppose z ∈ C ,c 1 Y 1 ∩ C ,c 2 Y 2 . 
Then there exists an isometry γ ∈ that maps the Ford circle at c 1 to the Ford circle at c 2 . Consequently, we must have γ c 1 = c 2 , which is a contradiction. Remark 6. 3 We will later consider sets I y,Y ,c := x ∈ (0, 1) : (x + iy) ∈ C ,c Y for some y > 0, Y > 1 and c ∈ . This set is the intersection of the line segment {x + iy ∈ H : 0 < x < 1} with the preimage of C ,c Y in H (under the natural projection from H to \H). By definition the preimage of C ,c Y is the disjoint (since Y > 1) union of the infinitely many horodiscs {τ c z ∈ H : Im(z) > Y } = H • p/q,1/(2q 2 Y ) for all cusps c = p/q ∈ c. Moreover, note that a necessary condition for such a horodisc intersecting the line segment {x + iy ∈ H : and 0 < x < 1} is that p/q ∈ c ∩ (− 1 2Y , 1 + 1 2Y ) and 1/(q 2 Y ) > y, i.e.τ −1 hc hτ c = √ ω hc /ω c * 0 √ ω c /ω hc ∈ SL 2 (Q). (6.5) Proof Since h normalizes we have h c h −1 = hτ c N τ −1 c h −1 ∩ . Thus to prove (6.4) it suffices to show hτ c N τ −1 c h −1 = τ hc N τ −1 hc . We show that τ −1 hc hτ c is an upper triangular matrix. Indeed, τ −1 hc hτ c ∞ = τ −1 hc (hc) = ∞. This proves (6.4). We moreover conclude that τ −1 hc hτ c = λ * 0 λ −1 (6.6) for some λ = 0, and it remains to show that λ 2 = ω hc /ω c . For this we conjugate the subgroup τ −1 hc hc τ h·c by the matrix τ −1 hc hτ c . We obtain with (6.4) that τ −1 c h −1 τ hc τ −1 hc hc τ hc τ −1 hc hτ c = τ −1 c c τ c = u ω c . On the other hand, using (6.6) and τ −1 hc hc τ hc = u ω hc , we have τ −1 c h −1 τ hc τ −1 hc hc τ hc τ −1 hc hτ c = λ −1 * 0 λ 1 ω hc 0 1 λ * 0 λ −1 = 1 ω hc /λ 2 0 1 . Comparing both equations we conclude that λ 2 = ω hc /ω c . Finally replacing τ hc with −τ hc if necessary, we can ensure λ is positive. Proposition 6.4 Let Y > Y > 0 and c ∈ . If z ∈ C ,c ω c Y ,ω c Y , then hz ∈ C ,hc ω hc Y ,ω hc Y . Similarly, if z ∈ C ,c ω c Y , then hz ∈ C ,hc ω hc Y . Proof The second statement follows from the first one by taking Y → ∞. 
Since z ∈ C ,c ω c Y ,ω c Y , by definition there exists z = x + iy ∈ H with 0 ≤ x < ω c and ω c Y < y < ω c Y and z = τ c z . Consider hτ c z = τ hc z with z = τ −1 hc hτ c z . By (6.5), we have Im(z ) = (ω hc /ω c )Im(z ) ∈ (ω hc Y , ω hc Y ), implying that hz = hτ c z ∈ C ,hc ω hc Y ,ω hc Y . Negative results: horocycles expanding arbitrarily fast In this section using the results from the previous section, we prove Theorems 1.7 and 1.8 which provide new limiting measures for the sequences δ n,x,y n n∈N and δ pr n,x,y n n∈N , allowing {y n } n∈N to decay arbitrarily fast. For any n ∈ N we consider the congruence subgroup n < SL 2 (Z) given by n := a b c d ∈ SL 2 (Z) : n 2 | c, a ≡ d ≡ ±1 (mod n) . (7.1) It is clear that 1 = SL 2 (Z) and that n contains the congruence subgroup 1 (n 2 ) := γ ∈ SL 2 (Z) : γ ≡ 1 * 0 1 (mod n 2 ) . Basic properties of the congruence subgroups 0 n First we show that n is normalized by u j/n for any j ∈ Z. As mentioned in the introduction this simple fact is the starting point of our proofs to Theorems 1.7 and 1.8. Hence if γ ∈ n , that is, n 2 | c and a ≡ d ≡ ±1 (mod n), all the entries are integers with the bottom left entry divisible by n 2 , and a − jc n ≡ a ≡ d ≡ d + jc n ≡ ±1 (mod n). This implies that u −1 j/n n u j/n ⊂ n . Next we prove the following index formula for n . Lemma 7.2 For any integer n ≥ 3, we have [ 1 : n ] = n 3 2 p|n prime 1 − p −2 . (7.2) Proof Let J n < Z/n 2 Z × be the subgroup J n := [a] ∈ Z/n 2 Z × : a ≡ ±1 (mod n) . (7.3) It is easy to check that #(J n ) = 2n. Consider the map h : n → J n sending γ = a b c d ∈ n to [a] ∈ Z/n 2 Z × . Using the definition of n , one can check that h is a group homomorphism with the kernel ker(h) = 1 (n 2 ). For each 0 ≤ k ≤ n − 1, set γ ± k = ± 1+kn 1 −k 2 n 2 1−kn ∈ n . Then h surjects the set γ ± k ∈ n : 0 ≤ k ≤ n − 1 onto J n . Finally we use the index formula for 1 (n 2 ) (see e.g. 
[6, Section 1.2]) to get [ 1 : n ] = [ 1 : 1 (n 2 )] [ n : 1 (n 2 )] = [ 1 : 1 (n 2 )] # J n = n 3 2 p|n prime 1 − p −2 . Next, we study the properties of n relative to its cusps. As in Sect. 6 we denote by n the set of cusps of n . The following lemma computes the width of each cusp of n . Proof Let τ c ∈ 1 be as before such that τ c ∞ = c. Thus the left column of τ c is m l . By direct computation we have τ c N τ −1 c = Thus by (7.1) an element in ( n ) c = τ c N τ −1 c ∩ n is of the form γ = 1−mlt m 2 t −l 2 t 1+mlt ∈ 1 satisfying that n 2 | l 2 t and 1 − mlt ≡ 1 + mlt ≡ ±1 (mod n). Looking at the top right and bottom left entries of γ , we have that m 2 t, l 2 t ∈ Z. Since gcd(m, l) = 1, we have t ∈ Z. Then the condition n 2 | l 2 t is equivalent to n 2 gcd(n,l) 2 | t, and the condition n | mlt is equivalent to that n gcd(n,ml) | t. Moreover, since n gcd(n,ml) | n 2 gcd(n,l) 2 , the condition n gcd(n,ml) | t is implied by the condition n 2 gcd(n,l) 2 | t. We conclude that n 2 | l 2 t implies 1 − mlt ≡ 1 + mlt ≡ ±1 (mod n). Thus ( n ) c = 1 − mlt m 2 t −l 2 t 1 + mlt ∈ 1 : n 2 | l 2 t . Conjugating ( n ) c back via τ c and using the equivalence of the two conditions n 2 | l 2 t and n 2 gcd(n,l) 2 | t we get τ −1 c ( n ) c τ c = u t = 1 t 0 1 ∈ 1 : n 2 gcd(n, l) 2 | t , implying that ω c = n 2 / gcd(n, l) 2 . Next we compute the number of cusps of n . Proposition 7.4 For any integer n ≥ 3 we have # n = n 2 2 p|n prime 1 − p −2 . Remark 7.4 It is easy to check that 2 = 0 (4). Thus [ 1 : 2 ] = 6 and 2 has three cusps which can be represented by ∞, 1/2 and 1 respectively. To prove Proposition 7.4 we first prove a preliminary formula for # n . Lemma 7.5 For any integer n ≥ 3 we have # n = d|n 2 ϕ(n 2 /d)ϕ(d) gcd(n 2 /d, d) 2n . Proof Since −I 2 ∈ n and 1 (n 2 ) < n , we have n = n \ 1 (n 2 ) . On the other hand, by the analysis in [6, p. 
102], the set 1 (n 2 ) is in bijection with the union of cosets d|n 2 ±I 2 \Z d , where for each d | n 2 , For each d | n 2 , using the definition of n , it is easy to check that the linear action of n on Z 2 (by matrix multiplication) induces a well-defined action of n on Z d and that the corresponding action of the subgroup 1 (n 2 ) is trivial. From the proof of Lemma 7.2, we have n / 1 (n 2 ) ∼ = J n , where J n = ±[1 + kn] ∈ (Z/n 2 Z) × : 0 ≤ k ≤ n − 1 ,(7.# J n · ([m], [l]) t = # J n #(J n ) ([m],[l]) = 2n gcd(n 2 /d, d) , proving the claim, and hence also this lemma. We can now give the proof of Proposition 7.4 by simplifying the formula in Lemma 7.5. Proof of Proposition 7.4 Write n = k i=1 p α i i in the prime decomposition form and apply Lemma 7.5 to get # n = 1 2n β∈Z k :0≤β i ≤2α i ϕ k i=1 p β i i ϕ k i=1 p 2α i −β i i k i=1 p min{β i ,2α i −β i } i , where the summation is over all vectors β = (β 1 , . . . , β k ) ∈ Z k satisfying 0 ≤ β i ≤ 2α i for all 1 ≤ i ≤ k, and we used that gcd(n 2 /d, d) = k i=1 p min{β i ,2α i −β i } i for d = k i=1 p β i i . Using the fact that ϕ is multiplicative and interchanging the summation and product signs we get # n = 1 2n k i=1 ⎛ ⎝ 0≤β i ≤2α i ϕ p β i i ϕ p 2α i −β i i p min{β i ,2α i −β i } i ⎞ ⎠ = 1 2n k i=1 ⎛ ⎝ 1≤β i ≤2α i −1 p 2α i i (1 − p −1 i ) 2 p min{β i ,2α i −β i } i + 2 p 2α i i (1 − p −1 i ) ⎞ ⎠ = 1 2n k i=1 p 2α i i (1 − p −1 i ) ⎛ ⎝ (1 − p −1 i ) 1≤β i ≤2α i −1 p min{β i ,2α i −β i } i + 2 ⎞ ⎠ , where for the second equality we used that for 1 ≤ β i ≤ 2α i −1, ϕ p β i i ϕ p 2α i −β i i = p 2α i (1 − p −1 i ) 2 , and for β i = 0 or β i = 2α i , ϕ p β i i ϕ p 2α i −β i i = p 2α i (1 − p −1 i ) zero. Moreover, applying the volume formula (6.2), the index formula in Lemma 7.2 and the cusp number formula in Proposition 7.4 (see also Remark 7.4 for the case when n = 2) we have for any n ∈ N, μ n ( n ) = c∈ n μ n C n,c ω c Y n ,2ω c Y n = c∈ n 3ω c π [ 1 : n ] × 1 2ω c Y n = 3 2π Y n # n [ 1 : n ] 1 nY n . 
(7.6) For any n ∈ N and 0 < y < 1 we define I n (y) := {x ∈ R/Z : n (x + iy) = 1} . By definition, x ∈ I n (y) if and only if n (x + iy) ∈ C n,c ω c Y n ,2ω c Y n ⊂ C n,c ω c Y n for some c ∈ n . Thus Lemma 7.6 implies that I n (y) ⊂ {x ∈ R/Z : R n (x, y) ⊂ C Y n }. This, together with our choice that Y n = max{log n, 1} and the distance formula (4.1), implies that for any n ≥ 3 and for any x ∈ I n (y) inf 1 z∈R n (x,y) d M ( 1 z 0 , 1 z) ≥ log(Y n ) + O(1) = log log n + O(1). It thus suffices to show that there exists a sequence {y n } n∈N satisfying that 0 < y n < c n for all n ∈ N and that the limsup set lim n→∞ I n (y n ) is of full Lebesgue measure in R/Z. For this, we will construct a sequence {y n } n∈N decaying sufficiently fast and then apply the quantitative Borel-Cantelli lemma Corollary 2.6 to the sequence {I n (y n )} n∈N ⊂ R/Z. To ensure the quasi-independence condition (2.20) in Corollary 2.6, we need, for every pair 1 ≤ m < n ∈ N, the two quantities |I m (y m ) ∩ I n (y n )| and |I m (y m )| |I n (y n )| to be sufficiently close to each other. The key observations for this are the following two relations that |I n (y n )| = 1 0 n (x + iy n )dx (7.7) and |I m (y m ) ∩ I n (y n )| = 1 0 m (x + iy m ) n (x + iy n )dx = I m (y m ) n (x + iy n )dx. (7.8) Assuming the limit equation (2.16) holds for the pairs ((0, 1), n ) and (I m (y m ), n ) (we will verify this later), then by relation (7.8) the quantity |I m (y m ) ∩ I n (y n )| is close to the quantity |I m (y m )|μ n ( n ) which in turn is close to |I m (y m )||I n (y n )| by relation (7.7), provided that y n > 0 is sufficiently small. We now implement the above ideas rigorously. We first claim that there exists a sequence {y n } n∈N satisfying, for all n ∈ N, 0 < y n < c n and 1 |I | I n (x + iy n )dx − μ n ( n ) ≤ μ n ( n ) 2n 2 ,(7.9) for any subset I ⊂ R/Z taken from the finite set We now construct such a sequence successively. 
For the base case n = 1 since (7.11) holds for the pair ((0, 1), 1 ) on M = 1 \H, there exists 0 < y 1 < c 1 sufficiently small such that {(0, 1)} {I m (y m ) : 1 ≤ m < n} .(7.1 0 1 (x + iy 1 )dx − μ 1 ( 1 ) < 1 2 μ 1 ( 1 ). For a general integer n ≥ 2, suppose that we already have chosen 0 < y m < c m satisfying (7.9) for all the positive integers m < n. By Remark 6.3 the set I m (y m ) ⊂ R/Z is a disjoint union of finitely many open intervals for any m < n. Thus (7.11) is satisfied for all the pairs ((0, 1), n ) , (I m (y m ), n ), 1 ≤ m < n on n \H. Since there are only finitely many such pairs, we can take 0 < y n < c n sufficiently small such that (7.9) is satisfied for all I ∈ {(0, 1)} {I m (y m ) : 1 ≤ m < n}, which is the set in (7.10). This finishes the proof of the claim. Now let {y n } n∈N be as in the claim. For any n ∈ N apply (7.9) to the pair ((0, 1), n ) we get |I n (y n )| − μ n ( n ) ≤ μ n ( n ) 2n 2 . (7.12) By the triangle inequality, this implies μ n ( n ) ≤ 2|I n (y n )|. (7.13) More generally, for each 1 ≤ m < n apply (7.9) to the pair (I m (y m ), n ) we get |I m (y m ) ∩ I n (y n )| − |I m (y m )| μ n ( n ) ≤ |I m (y m )| μ n ( n ) 2n 2 . (7.14) Using the inequalities (7.12), (7.13), (7.14) together with the triangle inequality we get ||I m (y m ) ∩ I n (y n )| − |I m (y m )| |I n (y n )|| ≤ |I m (y m )| μ n ( n ) n 2 ≤ 2 |I m (y m )| |I n (y n )| n 2 . (7.15) Hence the sequence {I n (y n )} n∈N ⊂ R/Z satisfies the quasi-independence condition (2.20) (with the subset S = N and the exponent η = 2). Moreover, using the inequality (7.12), the volume computation (7.6) and the estimate that Y n log n we have that n∈N |I n (y n )| ≥ n∈N 1 2 μ n ( n ) n∈N 1 n log n = ∞. Thus by Corollary 2.6, lim n→∞ I n (y n ) ⊂ R/Z is of full Lebesgue measure, finishing the proof. Remark 7.16 It is not clear to us whether the rate log log n is the fastest excursion rate for generic translates. 
We note that in principle it can be proved (or disproved) if one can compute the volume of the set E n Y := n z ∈ n \H : 1 u j/n z ∈ C Y for all 0 ≤ j ≤ n − 1 . For instance, if one can show μ n (E n Y ) 1/(nY ) for all n ∈ N and for all Y ≥ 1, then Theorem 1.7 together with a standard application of the Borel-Cantelli lemma would imply that the inequality in (1.13) is indeed an equality for almost every x ∈ R/Z. We also note that our analysis (Lemmas 6.2, 7.6) shows that for any n ∈ N and for any Y ≥ 1 c∈ n C n,c ω c Y ⊂ E n Y ⊂ c∈ n C n,c Y , implying that 1/(nY ) μ n E n Y 1/Y . On the other hand using some elementary arguments (which relies on the width computation Lemma 7.3) one can show that any u 1/n -orbit contains at least one cusp of width one. This fact together with the fact that 1 ≤ ω c ≤ n 2 implies that E n Y = c∈ n C n,c Y when Y ≥ n 2 . However, both estimates are not sufficient for the purpose of obtaining an upper bound. Remark 7.17 Here we give a very brief sketch of the argument communicated to us by Strömbergsson: For each n ∈ N and y > 0, it is not difficult to see that n (x + iy) ∈ C n,c ω c Y n for some c = p q ∈ n with gcd( p, q) = 1 if and only if x − p q 2 < y ω c Y n q 2 − y 2 = y gcd(n, q) 2 n 2 Y n q 2 − y 2 . (7.18) Here Y n = max{log n, 1} is as in the above proof. Definẽ I n (y) := x ∈ R/Z : ∃ primitive p q s.t. n | q, q < 1 2 √ yY n , x − p q < √ y 2 √ Y n q . One can easily check that elements inĨ n (y) satisfy the inequality (7.18). Hence by Lemma 7.6 we haveĨ n (y) ⊂ {x ∈ R/Z : R n (x, y) ⊂ C Y n }. (7.19) Moreover, using some standard techniques from analytic number theory one can show that for any subinterval I ⊂ R/Z (or more generally, any finite disjoint union of subintervals), lim y→0 + |I | −1 Ĩ n (y) ∩ I = c n Y n with c n = 3 π 2 ϕ(n) n 2 p n (1− p −2 ) −1 ϕ(n) n 2 . This limit equation is the analog of (7.11). 
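To make the quantity |Ĩ_n(y)| concrete, here is an illustrative tabulation straight from the definition above (an aside of ours, not part of Strömbergsson's sketch). It assumes y is small enough that the intervals around distinct fractions p/q are pairwise disjoint, so that summing their lengths gives the measure; the function name is hypothetical.

```python
from math import gcd, sqrt, log

def I_tilde_measure(n, y):
    """Total length of I~_n(y): around each primitive p/q with n | q and
    q < 1/(2 sqrt(y Y_n)), an interval of radius sqrt(y)/(2 sqrt(Y_n) q),
    where Y_n = max(log n, 1).  Assumes y is small enough that these
    intervals are pairwise disjoint."""
    Y_n = max(log(n), 1.0)
    Q = 1 / (2 * sqrt(y * Y_n))
    total, q = 0.0, n
    while q < Q:                                    # q runs over multiples of n
        phi_q = sum(1 for a in range(q) if gcd(a, q) == 1)
        total += phi_q * sqrt(y) / (sqrt(Y_n) * q)  # phi(q) intervals of length 2r
        q += n
    return total

print(I_tilde_measure(5, 1e-6))
```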
Another input is the divergence of the series n∈N c n Y n n∈N ϕ(n) n 2 log n , which follows from the estimate ϕ(n) n/ log log n. With these two inputs one can then mimic the arguments in the above proof to construct a sequence {y n } n∈N decaying sufficiently fast and then apply Corollary 2.6 to get a full measure limsup set lim n→∞Ĩn (y n ) ⊂ R/Z. Finally, we note that the relation (7.19) can be checked directly using the definition of the setĨ n (y). Hence this argument can be carried over without going into the congruence covers n \H. Proof of Theorem 1.8 We prove Theorem 1.8 in this subsection. The strategy is similar to that of Theorem 1.7 with the sequence of cuspidal sets approaching the cusps replaced by a sequence of compact cylinders approaching certain closed horocycles. Let n ∈ N be an integer and let n z ∈ n \H be a point close to a cusp c ∈ n . For any 0 ≤ j ≤ n−1, the analysis in Sect. 6 gives exact information about the height of the companion point n u j/n z with respect to the cusp u j/n c. While this is sufficient for Theorem 1.7 (cusp excursions), to realize the limiting measure ν m,Y in Theorem 1.8 one needs more refined information about the spacing of these companion points along the closed horocycles they lie on. For this, we further analyze the left regular u 1/n -action on points near certain type cusps which we now define. We say c ∈ n is of simple type if c can be represented by a primitive rational number m/q satisfying that gcd(n 2 , q) | n, and we denote by sim n ⊂ n the set of simple type cusps. (This notion of simple type cusps is closely related to the condition n ∈ N q in Theorem 1.2. In fact, let p/q be a primitive rational number then the condition n ∈ N q is equivalent to that the cusp c ∈ n represented by p/q is of simple type.) If m /q is another representative for c, that is, m /q is primitive and m /q = γ (m/q) for some γ ∈ n , then using the definition of n , it is easy to check that gcd(n 2 , q) = gcd(n 2 , q ). 
Hence the simple type cusps are well-defined. As mentioned in Sect. 3.3 the condition gcd(n 2 , q) | q implies the further decomposition q = kl with l = gcd(n, q) | n and k = q/l satisfying gcd(k, n) = 1. We can thus reparameterize a simple type c by m/(kl) with gcd(m, kl) = gcd(k, n) = 1 and l | n. The main new ingredient of our proof to Theorem 1.8 is the following decomposition of the sample points which generalizes (3.19). m a kl b ∈ 1 , and for each 1 ≤ j ≤ n − 1 let τ u j/n c = p j v j q j w j ∈ 1 , where p j , q j are as in the proof of Lemma 7.8, a, b, v j , w j are some integers such that τ c , τ u j/n c ∈ 1 , that is, mb − kla = 1 and m n l + jk w j − knv j = d j (7.20) with d j = gcd(m n l + jk, n) as in the proof of Lemma 7.8. By direct computation and using Lemmas 6.3 and 7.8 (and the relation ω c = d 2 0 = n 2 /l 2 ) we have (Here we used the assumption that mkl = 0.) Hence we have for any 0 ≤ j ≤ n − 1 n u j/n z = n u j/n τ c (x + iy ) = n τ u j/n c τ −1 u j/n c u j/n τ c (x + iy ) (7.21) = n τ u j/n c Here for the first equality we used the assumption that n z = n τ c z and the fact that u j/n normalizes n . Now as in the proof of Proposition 3.7 for any d | n, we define Use the second relation in (7.20) to get for j ∈ D d , w j (m n l + jk)/d ≡ 1 (mod k n d ). where for the last equality we used the identities gcd(n 2 /d, d) = d (since d | n) and ϕ n 2 d = n 2 d p|(n 2 /d) prime (1 − p −1 ) = n d × n p|n prime (1 − p −1 ) = nϕ(n) d , where for the second equality we used the fact that n 2 /d and n share the same set of prime divisors. Hence for n = m we have # c ∈ sim n : ω c ≥ m 2 = ϕ(n) 2 We note that the first condition is guaranteed by the facts that t n ≥ (m 2 Y − 1) −1 and that lim n∈P m n→∞ t n = ∞. For the second condition, we note that by the definitions of Y n and Y n , 1 Y n − 1 Y n = 1 Y t n . 
Moreover, using the fact that there are only finitely many prime numbers dividing m we get where the divergence of the rightmost series follows from the estimate j j log j which is an easy consequence of the prime number theorem. Here j ∈ P 1 denotes the j-th prime number. We now give the δ pr m /d,x n,c,d ,d 2 Y ( ). We now conclude by taking n → ∞ along the subsequence N x and noting that lim n∈N x n→∞ log Y n /Y n = 0 (since lim n∈P m n→∞ Y n /Y n = 1 which follows from the assumption lim n∈P m n→∞ Y n = lim n∈P m n→∞ Y n = Y ). Remark 7.26 It is clear that we can take a sequence {y n } n∈N decaying sufficiently fast such that the conditions (7.9) and (7.25) (for any finitely many pairs (m, Y ) with m 2 Y > 1) are all satisfied and hence (noting that the intersection of finitely many full measure sets is still of full measure) for such a sequence the conclusions of Theorems 1.7 and 1.8 (for any finitely many pairs (m, Y ) with m 2 Y > 1) hold simultaneously. Lemma 2.5 [29, Chapter I, Lemma 10] Let (X , B, ν) be a probability space with B a σ -algebra of subsets of X and ν : X → [0, 1] a probability measure on X with respect to B. Let {A i } i∈N be a sequence of measurable subsets in B. For any n, m ∈ N we denote by R n,m : 3. 1 [ 18 , 118Lemmata 3.7 and 3.13] For any m = 0 and for any > 0 we have 3.6) (and the estimate S α 0 ( ) ≤ S 2,2 ( ) by Remark 3.7) to the first sum and Proposition 3.3 to the second, we have m =0 |a (qm, y)| 2 ( ) q θ y 1/2− (qy) −(1+θ) + q −4/3+θ+ y −5/6 (qy) 1/3−θ− = S 2,2 ( )q −1 y −(1/2+θ+ ) , Proposition 3. 5 a 5Let M be the modular surface. For any ∈ C ∞ c (M), for any x ∈ R/Z and y > 0, we haveδ n,x,y ( ) = a (0, y) + O S 2,2 ( )n −1 y −(1/2+θ+ ) (3.12) and δ pr n,x,y ( ) = a (0, y) + O S 2,2 ( )n −1+ y −(1/2+θ+ ) . (3.13) Proof Let J ⊂ R/Z ∼ = [0, 1) be a finite subset and for any m ∈ Z denote by W J (m) := 1 |J | t∈J e(mt). 
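The simple type condition and the factorization q = kl used throughout this subsection can be checked mechanically. The following sketch (our own naming, not from the paper) tests gcd(n², q) | n and produces k, l with l = gcd(n, q) | n and gcd(k, n) = 1.

```python
from math import gcd

def is_simple_type(n, q):
    """c = m/q (primitive) represents a simple type cusp of Lambda_n
    iff gcd(n^2, q) divides n."""
    return n % gcd(n * n, q) == 0

def kl_decomposition(n, q):
    """For simple type data, write q = k * l with l = gcd(n, q) | n
    and gcd(k, n) = 1, as in the reparameterization above."""
    assert is_simple_type(n, q)
    l = gcd(n, q)
    k = q // l
    assert q == k * l and n % l == 0 and gcd(k, n) == 1
    return k, l

assert is_simple_type(6, 15)          # gcd(36, 15) = 3 divides 6
assert not is_simple_type(6, 9)       # gcd(36, 9) = 9 does not divide 6
print(kl_decomposition(6, 15))        # k = 5, l = 3
```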
We note that 1 |J | t∈J (t + iy) equals δ n,x,y ( ) when J = {x + j/n : 0 ≤ j ≤ n − 1}and equals δ pr n,x,y ( ) when J = {x + j/n : 0 ≤ j ≤ n − 1, gcd( j, n) = 1}. Applying the Fourier expansion (3.(m, y)e(mt) = m∈Z a (m, y) 1 |J | t∈J e(mt) = a (0, y) + m =0 a (m, y)W J (m). Now for (3.12) we take J = {x + j/n : 0 ≤ j ≤ n − 1} and note that for such J , |W J (m)| equals 1 if n | m and equals 0 otherwise. Hence m =0 a (m, y)W J (m) ≤ m =0 n|m |a (m, y)| n −1 y −(1/2+θ+ ) , where for the last estimate we applied Corollary 3.4. For (3.13) we take J = {x + j/n : 0 ≤ j ≤ n − 1, gcd( j, n) = 1} and note the identity j∈(Z/nZ) × e m j n = μ(n m )ϕ(n) ϕ(n m ) for the Ramanujan's sum, where n m := n/ gcd(n, m) and μ : N → {0, ±1} is the Möbius function; see e.g. [14, Theorem 272]. Then [m n l l+ jk] ∈ Z/nZ : 0 ≤ j ≤ n − 1 = Z/nZ and hence [m n l + jk] ∈ Z/nZ : j ∈ D d = {[ j] ∈ Z/nZ : gcd( j, n) = d} . (3.22) n d +b d k = 1. Now for each d | n we let x d ∈ [0, 1), x d ≡ − dlmna d k (mod 1) so that it remains to show −[ (m n l + jk)/d * b d ] ∈ (Z/(n/d)Z) × : j ∈ D d = (Z/(n/d)Z) × . Proposition 3. 8 a 8Let x = p/q be a primitive rational number and let n ∈ N q . Then for any ∈ C ∞ c (M) and y > 0 we haveδ n,x,y ( ) = 1 n d|n ϕ n d a 0, d 2 k 2 n 2 y + O ,q S 2,2 ( )n 2θ+4 y 1/2+θ+ ,where k := q/ gcd(n, q). If we further assume that gcd(n, q) = 1, then δ pr n,x,y ( ) = a 0, 1 q 2 n 2 y + O ,q S 2,2 ( )n 2θ+3 y 1/2+θ+ .Proof For any positive divisor d | n, let y d = d 2 /(k 2 n 2 y) with k := q/ gcd(n, q) as above and let x d ∈ R/Z be as in(3.19). Then by(3.19) for x = p/q we have δ n,x,y ( ) (0, y d ) + O S 2,2 ( ) = Proof of Theorem 3.9. In view of Proposition 3.8 and the assumption y n = c/n α μ M ( ) + O ,c, n α/2−1+ with k := q/ gcd(n, q), and that (under the extra assumption gcd(n, q) = 1) a 0, 1 q 2 n 2 y n = μ M ( ) + O c S 2,2 ( )n α/2−1 . 
t + i d 2 k 2 n 2 22(3.5) (and the trivial estimate |k| ≥ 1) for the first sum and applying the estimate a 0, d 2 k 2 n 2 y n n 2 y n = 0, we have R(x, y n ) ⊂ z ∈ F : Im(z) ≥ 1 q 2 n 2 y n n→∞ − −−→ cusp of M. For any Y > 0, we denote by C Y ⊂ M the image of the region {z ∈ H : Im(z) > Y } under the natural projection from H to M = \H. As Y goes to infinity, the sets C Y diverge to the cusp of M, and we call C Y a cusp neighborhood of M. Similarly, for any Y > Y > 0, we denote by C Y ,Y the projection onto M of the open setz ∈ H : Y < Im(z) < Y .For any primitive rational number m/n, and for any r > 0 we denote byH m/n,r := z = x + iy ∈ H : (x − m/n) 2 + (y − r ) 2 = r 2the horocycle tangent to ∂H at m/n with Euclidean radius r . We denote byH • m/n,r := z = x + iy ∈ H : (x − m/n) 2 + (y − r ) 2 < r 2the open horodisc enclosed by H m/n,r . We have the following geometric description of Lemma 3.6: Let γ = m * n * be an element in . Then γ sends the horizontal horocycle {z ∈ H : Im(z) = Y } to the horocycle H m/n,r with r = 1/(2Y n 2 ), while the open region {z ∈ H : Im(z) > Y } is mapped to the horodisc H • m/n,r . On the other hand, for any primitive rational number m/n, there is γ ∈ of the form γ = m * n * . Thus for any Y > 0 and for any z ∈ H, z ∈ C Y if and only if z ∈ H • m/n,r for some primitive rational number m/n with r = 1/(2Y n 2 ). Lemma 4. 1 1Let z 0 ∈ M be a fixed base point. Then there exists a constant c > 0 (which may depend on z 0 ) such that for any Y > 1 and for any Lemma 4. 2 2Let x ∈ [0, 1) be a real number. Suppose there exist a primitive rational number m/n and n > 0, and a real number Y > 0 satisfying x − m n < 1 2Y n 2 . Theorem 4. 3 3Let ψ : N → (0, 1/2) be a non-increasing function such that lim n→∞ nψ(n) = 0. Let {y n } n∈N be a sequence of positive numbers satisfying r n := 1 2 min ψ(n) −2 y n , n −2 y Theorem 4. 4 4Let z 0 ∈ M be a fixed base point. 
Let x ∈ [0, 1) with Diophantine exponent κ_x > 1 and let {y_n}_{n∈N} be a sequence of positive numbers satisfying y_n ≍ n^{−β} for some fixed 2 < β < 2κ_x. Then …

Theorem 4.5 Let 1/√5 ≤ c < 3/2 and let y_n = c/n². Then there exists a closed measurable subset E_c ⊂ M, depending only on c, with μ_M(E_c) < 1, and such that for each irrational x ∈ [0, 1), R_n(x, y_n) ⊂ E_c infinitely often.

The set E_c in Theorem 4.5 is explicit: for any c > 0, E_c ⊂ M is defined to be the image of the closed set {z ∈ H : Im(z) ∈ …} under the natural projection from H to M. It is clear from the definition that E_c ⊂ M is closed. Theorem 4.5 is a direct consequence of the following two lemmas.

Lemma 4.6 For any c > 0 let y_n = c/n² and let ψ_c(n) = c/n. Then if x ∈ [0, 1) is primitive ψ_c-approximable, we have R_n(x, y_n) ⊂ E_c infinitely often.

Let I₁ = [1/(2c), 1/c], I₂ = [2/c, 4/c] and I₃ = [9/(2c), ∞), and for 1 ≤ j ≤ 3, define E_c^j to be the projection onto M of {z ∈ H : Im(z) ∈ I_j}, so that E_c = ∪_{j=1}^{3} E_c^j. It thus suffices to show that E_c^j ∩ U ≠ ∅ for each 1 ≤ j ≤ 3. For this, we identify (up to a null set) M with the standard fundamental domain F := {z ∈ H : |Re(z)| < 1/2, |z| > 1}. Since 0 < c < 3/2, we have max{2c, 4/c} > 2/c > 2/(3/2) > 1. Thus we have …

… |δ^pr_{n,x,y} − μ_M| along the closed horocycle H_y. Throughout this section, we abbreviate the second moments ∫₀¹ |δ_{n,x,y}( ) − μ_M( )|² dx and ∫₀¹ |δ^pr_{n,x,y}( ) − μ_M( )|² dx by D_{n,y}( ) and D^pr_{n,y}( ).

Proposition 5.1 For any n ∈ N, y > 0 and ∈ C_c^∞(M),

… ⟨ , T_{u_{j/n}}( ₀)⟩ + O(S( ) y^{1/2}). (5.2)

… ∫₀¹ (x + iy) (x + j/n + iy) dx − μ( )² + O(S( ) y^{1/2}). (5.5)

For each 0 ≤ j ≤ n − 1, let Γ_j := Γ_{u_{j/n}} = Γ ∩ u_{j/n}^{−1} Γ u_{j/n} and define F_j( ) := · L_{u_{j/n}^{−1}} ∈ C^∞(H). Since is left Γ-invariant, and L_{u_{j/n}^{−1}} is left … ‖·‖²_{L²(Γ_j\H)} + O(S( ) y^{1/2}). Now similar as before we can apply the estimate (5.7), the identities μ( ) = μ_{Γ_j}( ) = μ_{Γ_j}( L_{u_{j/n}^{−1}}) and ∑_{j=0}^{n−1} c(j) = ϕ(n) … + O(S( ) y^{1/2}).

Theorem 5.2 For any n ∈ N, y > 0 and ∈ C_c^∞(M) we have

max{D_{n,y}( ), D^pr_{n,y}( )} ≪ n^{−1+2θ+ε} ‖ ‖₂² + S( ) y^{1/2}, (5.8)

Applying the estimate ϕ(n) … n^{−1+ε/2} and the operator norm bound in Proposition 2.2 to the terms ⟨ ₀, T_{n_j} ₀⟩, we get max{D_{n,y}( ), …} …

I¹_n := {x ∈ R/Z : |δ_{n,x,y_n}( ) − μ_M( )| > n^{−ε/2}}, and I²_n := {x ∈ R/Z : |δ^pr_{n,x,y_n}( ) − μ_M( )| > n^{−ε/2}}.

… q² < 1/(yY). Thus there are only finitely many such horodiscs intersecting {x + iy ∈ H : 0 < x < 1}. Moreover, each such intersection is an open interval, and the set I_{y,Y,c} ⊂ (0, 1) is thus the disjoint union of these finitely many open intervals. Similarly, for any Y' > Y > 1 the set {x ∈ (0, 1) : Γ(x + iy) ∈ C^{Γ,c}_{Y,Y'}} = I_{y,Y,c} \ I_{y,Y',c} is also a disjoint union of finitely many open intervals.

6.2 Left regular action of normalizing elements

Let h ∈ SL₂(Q) be a group element normalizing Γ. The action of h on Q ∪ {∞} (by Möbius transformation) induces a well-defined action on Γ\(Q ∪ {∞}), the set of cusps of Γ.

Lemma 6.3 For each cusp c, we have

h Γ_c h^{−1} = Γ_{hc}. (6.4)

Lemma 7.1 For any n ∈ N and for any j ∈ Z, the unipotent matrix u_{j/n} normalizes Γ_n.

Proof By direct computation, for any γ = ( a b ; c d ) ∈ Γ₁ and for any j ∈ Z we have u_{j/n}^{−1} γ u_{j/n} = ( a − jc/n , b + (a−d)·… ; … ) …

Lemma 7.3 Let n ∈ N and let c = m/l be a cusp of Γ_n with gcd(m, l) = 1 (if c = ∞, m/l is understood as 1/0). Then we have ω_c = n²/gcd(n, l)².

Z_d := {([m], [l])ᵗ : [m] ∈ (Z/dZ)^×, [l] ∈ Z/n²Z, gcd(n², l) = d},

where ([m], [l])ᵗ is the transpose of the row vector ([m], [l]), and the bijection is induced by the map sending m/l ∈ Q ∪ {∞} with gcd(m, l) = 1 to ([m], [l])ᵗ ∈ Z_d with d = gcd(n², l). Note that #Z_d = ϕ(n²/d)ϕ(d).

Using the relations in (7.20) the top right entry becomes

w_j a + b(j w_j/n − v_j) = w_j a + (1 + kla)·… (j w_j/n − v_j) = a(w_j m n + j w_j k l − k l v_j n) + j w_j − v …

D_d := {0 ≤ j ≤ n − 1 : d_j = d}, so that

R_n(x, y) = ∪_{d|n} {Γ₁ u_{j/n} z ∈ M : j ∈ D_d}, (7.22)

and

{[(m_n l + jk)/d] ∈ (Z/(n/d)Z)^× : j ∈ D_d} = (Z/(n/d)Z)^×.
(7.23)

Proof …

Lemma 7.10 Let m ∈ N and Y > 0 satisfy m²Y > 1. Let P_m = {n = ℓm ∈ N : ℓ is a prime number and ℓ ∤ m} be as in (1.8). Then there exist sequences of positive numbers {Y_n}_{n∈P_m} and {Y'_n}_{n∈P_m} satisfying:

(1) Y_n > Y > Y'_n > m^{−2} for any n ∈ P_m, and lim_{n∈P_m, n→∞} Y_n = lim_{n∈P_m, n→∞} Y'_n = Y;

For each n = ℓm ∈ P_m, take Y_n := (1 − (2t_n)^{−1})^{−1} Y and Y'_n := (1 + (2t_n)^{−1})^{−1} Y with t_n = max{(m²Y − 1)^{−1}, log log …}.

… (7.5), which is of size 2n. Hence the action of Γ_n on Z_d induces the action of J_n on Z_d given by

[a] · ([m], [l])ᵗ = ([ām], [āl])ᵗ,

with ([m], [l])ᵗ ∈ Z_d and ā the multiplicative inverse of a modulo n². We note that [ām] ∈ (Z/dZ)^× is well-defined since d | n².

We conclude that … is in bijection with the union of cosets ⊔_{d|n²} J_n\Z_d. Hence we want to compute the size of the coset J_n\Z_d for each d | n². For this we claim that for any ([m], [l])ᵗ ∈ Z_d, the orbit J_n · ([m], [l])ᵗ is of size 2n/gcd(n²/d, d), implying that … We note that Lemma 7.5 then follows immediately from this claim. To prove this claim, it suffices to compute the size of the stabilizer

(J_n)_{([m],[l])} := {[a] ∈ J_n : [a] · ([m], [l])ᵗ = ([m], [l])ᵗ ∈ Z_d}.

Since by definition [a] · ([m], [l])ᵗ = ([ām], [āl])ᵗ, we have [a] ∈ (J_n)_{([m],[l])} if and only if ām ≡ m (mod d) and āl ≡ l (mod n²). Since d = gcd(n², l) and [m] ∈ (Z/dZ)^×, these two conditions are equivalent to ā ≡ 1 (mod d) and ā ≡ 1 (mod n²/d), which are equivalent to ā ≡ 1 (mod lcm(n²/d, d)). Hence, using the description (7.5) of J_n and the facts that n | lcm(n²/d, d) and lcm(n²/d, d) gcd(n²/d, d) = n², we have

(J_n)_{([m],[l])} = {[1 + lcm(n²/d, d) j] ∈ J_n : 0 ≤ j ≤ gcd(n²/d, d) − 1},

which is of size gcd(n²/d, d). This implies that

⊔_{d|n²} Γ_n\Z_d = ⊔_{d|n²} J_n\Z_d,

implying that #… = ∑_{d|n²} #(J_n\Z_d) = ∑_{d|n²} #Z_d / (2n/gcd(n²/d, d)) = ∑_{d|n²} ϕ(n²/d) ϕ(d) gcd(n²/d, d) / (2n).
(7.10)

For this, first note that by Remark 2.17, for any I ⊂ R/Z ≅ [0, 1) a disjoint union of finitely many open intervals, we have

lim_{y→0⁺} (1/|I|) ∫_I  n(x + iy) dx = μ_n( n). (7.11)

{( 1 − mlt , m²t ; −l²t , 1 + mlt ) ∈ G : t ∈ R}.

Acknowledgements The first named author would like to thank Alex Kontorovich for explanations and …

… and min{β_i, 2α_i − β_i} = 0. We note that the term ∑_{1 ≤ β_i ≤ 2α_i − 1} p^… … Hence we have …, finishing the proof.

Proof of Theorem 1.7 For simplicity of notation, we abbreviate the cusp neighborhoods C^{Γ_n,c}_Y and C^{Γ_n,c}_{Y,Y'} by C^{n,c}_Y and C^{n,c}_{Y,Y'} respectively, and the set of cusps of Γ_n by …_n. We first prove the following key lemma, which says that if Γ_n z visits a cusp neighborhood on Γ_n\H, then all companion points Γ₁ u_{j/n} z, 0 ≤ j ≤ n − 1, make excursions to some cusp neighborhood on M = Γ₁\H, the modular surface. We recall that C_Y is the projection onto M of the region {z ∈ H : Im(z) > Y}.

Lemma 7.6 Let Y > 0 and n ∈ N. If Γ_n z ∈ C^{n,c}_{ω_c Y} for some cusp c of Γ_n, then Γ₁ u_{j/n} z ∈ C_Y for all 0 ≤ j ≤ n − 1.

Proof Fix 0 ≤ j ≤ n − 1. By Lemma 7.1, u_{j/n} normalizes Γ_n. Assuming that Γ_n z ∈ C^{n,c}_{ω_c Y} and applying Proposition 6.4 to h = u_{j/n}, we get Γ_n u_{j/n} z ∈ C^{n,hc}_{ω_{hc} Y}. By definition, there exists z' ∈ H with Im(z') > ω_{hc} Y ≥ Y such that Γ_n τ_{hc} z' = Γ_n u_{j/n} z. Since τ_{hc} ∈ Γ₁, this implies Γ₁ u_{j/n} z = Γ₁ z' ∈ C_Y.

We can now give the

Proof of Theorem 1.7 For any n ∈ N let Y_n = max{log n, 1}, and let  n be the indicator function of the union … Since for any cusp c of Γ_n, ω_c Y_n ≥ Y_n ≥ 1, by Lemma 6.1, each C^{n,c}_{ω_c Y_n, 2ω_c Y_n} is a Borel set with boundary of measure zero; and by Lemma 6.2 the above union is disjoint. Thus  n is the indicator function of a Borel set with boundary of measure …

Proposition 7.7 Fix n ∈ N, z = x + iy ∈ H and c ∈ sim_n. Then … where z' = x' + iy' ∈ H is such that Γ_n z = Γ_n τ_c z', and x'_{d,c} ∈ R/Z depends only on x', c and d.

We first prove a simple lemma computing the width of elements in the orbits u_{1/n} c when c ∈ sim_n is of simple type.
Lemma 7.8 Fix n ∈ N and c ∈ sim_n a simple type cusp. Then for any 0 ≤ j ≤ n − 1 we have … where m/(kl) is a representative for c with gcd(m, kl) = gcd(k, n) = 1 and l | n.

Proof For any 0 ≤ j ≤ n − 1, … Hence d_j = gcd(m_n l + jk, n) | n. Now by Lemma 7.3 and the assumption that gcd(k, n) = 1 we have …

We can now combine ideas from Sects. 3.3 and 6 to give the

Proof of Proposition 7.7 Assume c = m/(kl) with gcd(m, kl) = gcd(k, n) = 1 and l | n. Up to changing the representatives for c, we may assume mkl ≠ 0. Let τ_c = … Solving the above congruence equation as in the proof of Lemma 3.6 we get … where for any integer t, t̄ denotes the multiplicative inverse modulo k, t^∗ denotes the multiplicative inverse modulo n/d, and e = e_d, f = f_d ∈ Z are two fixed integers such that e(n/d) + f k = 1. Plugging this relation into (7.21) and using the relation ω_c = n²/l² we get for any d | n and for any j ∈ D_d, … Thus in view of (7.22) and the above relation it suffices to show … But this follows from (7.23) and the fact that gcd(f, n/d) = 1 (since gcd(f, n/d) = gcd(fk, n/d) = gcd(1 − e(n/d), n/d) = 1), and we have thus finished the proof.

We will also need the following lemma estimating the number of cusps in sim_n satisfying certain restrictions on the width.

Lemma 7.9 Let m ∈ N be a fixed integer and let n = ℓm ≥ 3 for some prime number ℓ not dividing m. Then we have …

Proof Recall from the proof of Lemma 7.5 that Γ̄_n is in bijection with the disjoint union ⊔_{d|n²} J_n\Z_d. On the other hand, by definition of the simple type cusps, sim_n corresponds to the subset ⊔_{d|n} J_n\Z_d. Moreover, let c = m/l ∈ sim_n with gcd(m, l) = 1 be a simple type cusp corresponding to an element in J_n\Z_d for some d | n, that is, d = gcd(n², l). Since d | n, this implies that d = gcd(n², l) = gcd(n, l). Hence by Lemma 7.3, ω_c = n²/d².
Therefore for each d | n,

#{c ∈ sim_n : ω_c = n²/d²} = |J_n\Z_d| = …

Proof of Theorem 1.8 Fix throughout the proof m ∈ N and Y > 0 with m²Y > 1 and let P_m be as above. Let {Y_n}_{n∈P_m} and {Y'_n}_{n∈P_m} be two sequences satisfying the conditions in Lemma 7.10. For any n ∈ P_m, let  n ∈ L²(Γ_n\H) be the indicator function of the union … Since Y'_n > m^{−2} for any n ∈ P_m, ω_c Y'_n > 1 for any c ∈ sim_n with ω_c ≥ m². Hence, similar as in the proof of Theorem 1.7, by Lemmas 6.1 and 6.2 the above union is disjoint and  n is the indicator function of a Borel set with boundary of measure zero. By the disjointness and the volume formula (6.2) we have for any n ∈ P_m …

Note that for n = ℓm ∈ P_m, by Lemma 7.2, [Γ₁ : Γ_n] … m³. Hence by Lemma 7.9 and the above relation we get for any n = ℓm ∈ P_m …

Similar as in the proof of Theorem 1.7, for any n ∈ P_m and 0 < y < 1 we define … We first show that there exists a sequence {y_n}_{n∈P_m} satisfying 0 < y_n < c_n for all n ∈ P_m and such that the limsup set lim sup_{n∈P_m, n→∞} I_n(y_n) ⊂ R/Z is of full measure. As in the proof of Theorem 1.7, we can use Remark 2.17, together with Remark 6.3 and Lemma 6.1, to construct a sequence {y_n}_{n∈P_m} successively satisfying, for any n ∈ P_m, 0 < y_n < c_n and that … for all subsets I ⊂ R/Z taken from the finite set {(0, 1)} ∪ {I_l(y_l) : l ∈ P_m, l < n}.

Again as before one can show that condition (7.25) implies that the sequence {I_n(y_n)}_{n∈P_m} ⊂ R/Z satisfies the quasi-independence condition (2.20) (with the subset S = P_m and exponent η = 2). Moreover, using the estimate (7.24) and our assumptions on {Y_n}_{n∈P_m} and {Y'_n}_{n∈P_m}, we have … Hence by Corollary 2.6, lim sup_{n∈P_m, n→∞} I_n(y_n) ⊂ R/Z is of full Lebesgue measure.

Now take x ∈ lim sup_{n∈P_m, n→∞} I_n(y_n); then there exists an unbounded subsequence N_x ⊂ P_m such that x ∈ I_n(y_n) for all n ∈ N_x. It thus suffices to show that for any … with ν_{m,Y} defined as in (1.9).
For any n ∈ N_x ⊂ P_m, since x ∈ I_n(y_n), by definition we have Γ_n(x + iy_n) ∈ C^{n,c}_{ω_c Y'_n, ω_c Y_n} for some c ∈ sim_n of simple type; that is, there exist some c ∈ sim_n and z_n = x_n + iω_c y'_n ∈ H satisfying Γ_n(x + iy_n) = Γ_n τ_c z_n with Y'_n < y'_n < Y_n. Then by Proposition 7.7, we have … for some x_{n,c,d} ∈ R/Z. This implies that for any n ∈ N_x

δ_{n,x,y_n}( ) = (1/n) ∑_{d|n} ϕ(n/d) δ …

where for the second equality we used the facts that ℓ is a prime number and gcd(m, ℓ) = 1 and applied the effective estimate (3.13) to each of the terms …

References

[1] Bersudsky, M.: On the image in the torus of sparse points on dilating analytic curves. arXiv:2003.04112 (arXiv preprint) (2020)
[2] Borel, A.: Some metric properties of arithmetic quotients of symmetric spaces and an extension theorem. J. Differ. Geometry 6, 543-560 (1972)
[3] Clozel, L., Oh, H., Ullmo, E.: Hecke operators and equidistribution of Hecke points. Invent. Math. 144(2), 327-351 (2001)
[4] Clozel, L., Ullmo, E.: Equidistribution des pointes de Hecke. In: Contribution to Automorphic Forms, Geometry and Number Theory, pp. 193-254. Johns Hopkins University Press, Baltimore (2004)
[5] Demirci Akarsu, E.: Short incomplete Gauss sums and rational points on metaplectic horocycles. Int. J. Number Theory 10(6), 1553-1576 (2014)
[6] Diamond, F., Shurman, J.: A First Course in Modular Forms, Volume 228 of Graduate Texts in Mathematics. Springer, New York (2005)
[7] El-Baz, D., Huang, B., Lee, M.: Effective joint equidistribution of primitive rational points on expanding horospheres. arXiv:1811.04019 (arXiv preprint) (2018)
[8] Einsiedler, M., Luethi, M., Shah, N.: Primitive rational points on expanding horocycles in products of the modular surface with the torus. Ergodic Theory Dyn. Syst. 20, 1-45 (2020)
[9] Einsiedler, M., Mozes, S., Shah, N., Shapira, U.: Equidistribution of primitive rational points on expanding horospheres. Compos. Math. 152(4), 667-692 (2016)
[10] Goldstein, D., Mayer, A.: On the equidistribution of Hecke points. Forum Math. 15(2), 165-189 (2003)
[11] Garland, H., Raghunathan, M.S.: Fundamental domains for lattices in (R-)rank 1 semisimple Lie groups. Ann. Math. (2) 92, 279-326 (1970)
[12] Hejhal, D.A.: On value distribution properties of automorphic functions along closed horocycles. In: XVIth Rolf Nevanlinna Colloquium (Joensuu, 1995), pp. 39-52. de Gruyter, Berlin (1996)
[13] Hejhal, D.A.: On the uniform equidistribution of long closed horocycles. Asian J. Math. 4(4), 839-853 (2000) (Loo-Keng Hua: a great mathematician of the twentieth century)
[14] Hardy, G.H., Wright, E.M.: An Introduction to the Theory of Numbers, 6th edn. Oxford University Press, Oxford (2008) (Revised by D.R. Heath-Brown and J.H. Silverman, with a foreword by Andrew Wiles)
[15] Iwaniec, H.: Spectral Methods of Automorphic Forms, Volume 53 of Graduate Studies in Mathematics, 2nd edn. American Mathematical Society, Providence; Revista Matemática Iberoamericana, Madrid (2002)
[16] Jana, S.: Joint equidistribution on the product of the circle and the unit cotangent bundle of the modular surface. J. Number Theory 226, 271-283 (2021)
[17] Kelmer, D., Kontorovich, A.: Effective equidistribution of shears and applications. Math. Ann. 370(1-2), 381-421 (2018)
[18] Kelmer, D., Kontorovich, A.: Exponents for the equidistribution of shears and applications. J. Number Theory 208, 1-46 (2020)
[19] Kim, H.H., Sarnak, P.: Refined estimates towards the Ramanujan and Selberg conjectures, appendix to H.H. Kim, Functoriality for the exterior square of GL_4 and the symmetric fourth of GL_2, with appendix 1 by Dinakar Ramakrishnan and appendix 2 by Kim and Peter Sarnak. J. Am. Math. Soc. 16, 1 (2003)
[20] Kelmer, D., Yu, S.: Shrinking target problems for flows on homogeneous spaces. Trans. Am. Math. Soc. 372(9), 6283-6314 (2019)
[21] Lang, S.: SL_2(R). Addison-Wesley Publishing Co., Reading (1975)
[22] Li, H.: Effective limit distribution of the Frobenius numbers. Compos. Math. 151(5), 898-916 (2015)
[23] Lee, M., Marklof, J.: Effective equidistribution of rational points on expanding horospheres. Int. Math. Res. Not. 21, 6581-6610 (2018)
[24] Luethi, M.: Primitive rational points on expanding horospheres in Hilbert modular surfaces. J. Number Theory 225, 327-359 (2021)
[25] Margulis, G.A.: On Some Aspects of the Theory of Anosov Systems. Springer Monographs in Mathematics. Springer, Berlin (2004) (With a survey by Richard Sharp: Periodic orbits of hyperbolic flows; translated from the Russian by Valentina Vladimirovna Szulikowska)
[26] Marklof, J.: The asymptotic distribution of Frobenius numbers. Invent. Math. 181(1), 179-207 (2010)
[27] Marklof, J., Strömbergsson, A.: Equidistribution of Kronecker sequences along closed horocycles. Geom. Funct. Anal. 13(6), 1239-1280 (2003)
[28] Sarnak, P.: Asymptotic behavior of periodic orbits of the horocycle flow and Eisenstein series. Comm. Pure Appl. Math. 34(6), 719-739 (1981)
[29] Sprindžuk, V.G.: Metric Theory of Diophantine Approximations. V.H. Winston & Sons, Washington, D.C. (1979) (Translated from the Russian and edited by Richard A. Silverman, with a foreword by Donald J. Newman, Scripta Series in Mathematics)
[30] Strömbergsson, A.: On the uniform equidistribution of long closed horocycles. Duke Math. J. 123(3), 507-547 (2004)
[31] Sarnak, P., Ubis, A.: The horocycle flow at prime times. J. Math. Pures Appl. (9) 103(2), 575-618 (2015)
[32] Zagier, D.: Eisenstein series and the Riemann zeta function. In: Automorphic Forms, Representation Theory and Arithmetic (Bombay, 1979), Volume 10 of Tata Institute of Fundamental Research Studies in Mathematics, pp. 275-301. Tata Institute of Fundamental Research, Bombay (1981)

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
title: Symmetries in Synaptic Algebras
authors: David J Foulis, Sylvia Pulmannová
abstract: A synaptic algebra is a generalization of the Jordan algebra of selfadjoint elements of a von Neumann algebra. We study symmetries in synaptic algebras, i.e., elements whose square is the unit element, and we investigate the equivalence relation on the projection lattice of the algebra induced by finite sequences of symmetries. In case the projection lattice is complete, or even centrally orthocomplete, this equivalence relation is shown to possess many of the properties of a dimension equivalence relation on an orthomodular lattice.
doi: 10.2478/s12175-014-0238-2
pdfurls: https://arxiv.org/pdf/1304.4378v1.pdf
corpusid: 119620997
arxivid: 1304.4378
pdfsha: 01296b17c308a5e106c19fe83c64eab3e046070b
Symmetries in Synaptic Algebras

David J Foulis, Sylvia Pulmannová

arXiv:1304.4378v1 [math-ph], 16 Apr 2013

1 Introduction

Synaptic algebras, which were introduced in [6] and further studied in [10,25], tie together the notions of an order-unit normed space [1, p. 69], a special Jordan algebra [22], a convex effect algebra [14], and an orthomodular lattice [2,18]. The self-adjoint part of a von Neumann algebra is an example of a synaptic algebra; see [6,10,25] for numerous additional examples.

Our purpose in this article is to study symmetries s in a synaptic algebra A and the equivalence relation ∼ induced by finite sequences of symmetries on the orthomodular lattice P of all projections p in A. For a symmetry s, we have s² = 1 (the unit element in A), and p² = p for a projection p. If P is a complete lattice, or even centrally orthocomplete, i.e., every family of projections that is dominated by an orthogonal family of central projections has a supremum, then we show that ∼ acquires many of the properties of a dimension equivalence relation on an orthomodular lattice [21].

In Section 2 we review the definition and basic properties of a synaptic algebra. Since the projections in A form an orthomodular lattice (OML) P, we sketch in Section 3 a portion of the theory of OMLs that will be needed for our subsequent work.
In Section 4 we focus on the special properties of the OML P that are acquired due to the fact that P ⊆ A. In Section 5 we introduce the notion of a symmetry s in A, study exchangeability of projections by a symmetry, and relate symmetries to the notion of perspectivity of projections. The condition of central orthocompleteness is defined in Section 6, and it is observed that, if P is centrally orthocomplete, then the center of A is a complete boolean algebra and A hosts a central cover mapping. From Section 6 onward, it is assumed that P is, at least, centrally orthocomplete. The equivalence relation ∼ on P induced by finite sequences of symmetries is introduced in Section 7, where we investigate the extent to which ∼ is a dimension equivalence relation. Finally, in Section 8 we cover some of the features of the relation of exchangeability of projections by symmetries that require completeness of the OML P.

2 Basic Properties of a Synaptic Algebra

In this section, we review the definition of a synaptic algebra and sketch some basic facts about synaptic algebras. For more details, see [6,10,25]. We use the notation := for "equals by definition" and "iff" abbreviates "if and only if."

2.1 Definition ([6, Definition 1.1]). Let R be a linear associative algebra with unity element 1 over the real numbers R, and let A be a real vector subspace of R. Let a, b ∈ A. We understand that the product ab is calculated in R, and that it may or may not belong to A. We write aCb iff a and b commute (i.e. ab = ba), and we define C(a) := {b ∈ A : aCb} and CC(a) := C(C(a)).

The vector space A is a synaptic algebra with enveloping algebra R iff the following conditions are satisfied:

SA1. A is a partially ordered archimedean real vector space with positive cone A⁺ = {a ∈ A : 0 ≤ a}, 1 ∈ A⁺ is an order unit in A, and ‖·‖ is the corresponding order unit norm on A.

SA2. If a ∈ A then a² ∈ A⁺.

SA3. If a, b ∈ A⁺, then aba ∈ A⁺.

SA4. If a ∈ A and b ∈ A⁺, then aba = 0 ⇒ ab = ba = 0.

SA5.
If a ∈ A⁺, there exists b ∈ A⁺ ∩ CC(a) such that b² = a.

SA6. If a ∈ A, there exists p ∈ A such that p = p² and, for all b ∈ A, ab = 0 ⇔ pb = 0.

SA7. If 1 ≤ a ∈ A, there exists b ∈ A such that ab = ba = 1.

SA8. If a, b ∈ A, a₁ ≤ a₂ ≤ a₃ ≤ · · · is an ascending sequence of pairwise commuting elements of C(b) and lim_{n→∞} ‖a − a_n‖ = 0, then a ∈ C(b).

We define P := {p ∈ A : p = p²} and we refer to elements p ∈ P as projections. Elements e in the "unit interval" E := {e ∈ A : 0 ≤ e ≤ 1} are called effects. The set C(A) is called the center of A. We understand that subsets of A such as P, E, and C(A) are partially ordered by the respective restrictions of the partial order ≤ on A. If p, q ∈ P and p ≤ q, we say that p is a subprojection of q, or equivalently, that q dominates p.

Standing Assumptions. For the remainder of this article, A is a synaptic algebra with unit 1, with enveloping algebra R, with E as its unit interval, and with P as its set of projections. To avoid trivialities, we shall assume that A is "non-degenerate," i.e., 0 ≠ 1. Also, we shall follow the usual convention of identifying each real number λ ∈ R with the element λ1 ∈ A, so that R ⊆ C(A).

As A is an order unit space with order unit 1, the order-unit norm ‖·‖ is defined on A by ‖a‖ := inf{0 < λ ∈ R : −λ ≤ a ≤ λ}. If a ∈ A, then by [6, Theorem 8.11], C(a) is norm closed in A. In fact, it can be shown that, in the presence of axioms SA1-SA7, axiom SA8 is equivalent to the condition that C(a) is norm closed in A for all a ∈ A.

Since A is closed under squaring, it is a special Jordan algebra under the Jordan product

a ∘ b := (1/2)((a + b)² − a² − b²) = (1/2)(ab + ba) ∈ A

for all a, b ∈ A. If a, b ∈ A, then ab + ba = 2(a ∘ b) ∈ A and aCb ⇒ ab = ba = a ∘ b = b ∘ a ∈ A. Also, aba = 2a ∘ (a ∘ b) − a² ∘ b ∈ A.

2.3 Definition ([6, Definition 4.1]). If a ∈ A, the mapping J_a : A → A defined for b ∈ A by J_a(b) := aba is called the quadratic mapping determined by a.
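A concrete finite-dimensional model is helpful here: the real symmetric n × n matrices (inside the enveloping algebra of all n × n matrices) form a synaptic algebra, with A⁺ the cone of positive semidefinite matrices. The sketch below is the editor's own numerical sanity check of axioms SA2 and SA3 and of the identity aba = 2a ∘ (a ∘ b) − a² ∘ b in this model; all function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(n):
    # random real symmetric matrix
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2

def jordan(a, b):
    # Jordan product a ∘ b := (ab + ba)/2
    return (a @ b + b @ a) / 2

def psd(m, tol=1e-9):
    # membership in the positive cone A+ (positive semidefinite)
    return bool(np.all(np.linalg.eigvalsh(m) >= -tol))

n = 4
for _ in range(100):
    a, b = sym(n), sym(n)
    # SA2: a^2 is in A+
    assert psd(a @ a)
    # SA3 for positive elements: take p = a^2, q = b^2 in A+, then pqp is in A+
    p, q = a @ a, b @ b
    assert psd(p @ q @ p)
    # the Jordan identity aba = 2 a∘(a∘b) − a²∘b
    lhs = a @ b @ a
    rhs = 2 * jordan(a, jordan(a, b)) - jordan(a @ a, b)
    assert np.allclose(lhs, rhs, atol=1e-9)
```

In particular, the last check confirms that aba, though defined via the associative product of the enveloping algebra, is expressible purely in terms of the Jordan product and therefore lies in A.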
If p ∈ P, then the quadratic mapping J_p is called the compression determined by p [5]. If a ∈ A, then by [6, Theorem 4.2 and Lemma 4.4] the quadratic mapping J_a : A → A is linear, order preserving, and norm-continuous. In particular, if 0 ≤ b ∈ A, then 0 ≤ J_a(b) = aba, which is a stronger version of axiom SA3. If a, b, c ∈ A, then abc belongs to R, but not necessarily to A. However, we have the following.

2.4 Lemma. If a, b, c ∈ A, then abc + cba ∈ A.

Proof. abc + cba = (a + c)b(a + c) − aba − cbc = J_{a+c}(b) − J_a(b) − J_c(b) ∈ A.

By [6, Theorem 2.2], each element a ∈ A⁺ has a uniquely determined square root a^{1/2} ∈ A⁺ such that (a^{1/2})² = a; moreover, a^{1/2} ∈ CC(a). If a ∈ A, then a² ∈ A⁺, whence a has an absolute value |a| := (a²)^{1/2} ∈ CC(a²) ⊆ CC(a) which is uniquely determined by the properties |a| ∈ A⁺ and |a|² = a².

By [6, Lemma 7.1 and Theorem 7.2], an element a ∈ A has an inverse a⁻¹ ∈ A such that aa⁻¹ = a⁻¹a = 1 iff there exists 0 < ε ∈ R such that ε ≤ |a|; moreover, if a is invertible (i.e., a⁻¹ exists in A), then a⁻¹ ∈ CC(a).

If a ∈ A, then by [6, Theorem 3.3], a⁺ := (1/2)(|a| + a) ∈ A⁺ ∩ CC(a) and a⁻ := (1/2)(|a| − a) ∈ A⁺ ∩ CC(a). Moreover, we have a = a⁺ − a⁻, |a| = a⁺ + a⁻, and a⁺a⁻ = a⁻a⁺ = 0.

Clearly, P ⊆ E ⊆ A. An effect e ∈ E is said to be sharp iff the only effect f ∈ E such that f ≤ e and f ≤ 1 − e is f = 0. Obviously, the unit interval E is convex; in fact, E forms a convex effect algebra [14] under the partial binary operation obtained by restriction to E of the addition operation on A. By [6, Theorem 2.6], P is the set of all sharp effects, and it is also the set of all extreme points of the convex set E.
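In the symmetric-matrix model, |a|, a⁺ and a⁻ can be computed spectrally (applying |·| and max(·, 0) to the eigenvalues). The following check, again the editor's own illustration with hypothetical helper names, confirms the decomposition a = a⁺ − a⁻, |a| = a⁺ + a⁻ and a⁺a⁻ = 0:

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_apply(a, f):
    # apply f to the eigenvalues of the symmetric matrix a
    w, v = np.linalg.eigh(a)
    return (v * f(w)) @ v.T

for _ in range(50):
    m = rng.standard_normal((4, 4))
    a = (m + m.T) / 2
    abs_a = spectral_apply(a, np.abs)   # |a| = (a^2)^{1/2}
    a_plus = (abs_a + a) / 2            # a+ := (|a| + a)/2
    a_minus = (abs_a - a) / 2           # a- := (|a| - a)/2
    assert np.allclose(abs_a @ abs_a, a @ a, atol=1e-9)      # |a|^2 = a^2
    assert np.allclose(a, a_plus - a_minus, atol=1e-12)      # a = a+ - a-
    assert np.allclose(abs_a, a_plus + a_minus, atol=1e-12)  # |a| = a+ + a-
    assert np.allclose(a_plus @ a_minus, 0 * a, atol=1e-9)   # a+ a- = 0
    # a+ lies in the positive cone, and |a| commutes with a (|a| ∈ CC(a))
    assert np.all(np.linalg.eigvalsh(a_plus) >= -1e-9)
    assert np.allclose(abs_a @ a, a @ abs_a, atol=1e-9)
```

The orthogonality a⁺a⁻ = 0 falls out because a⁺ and a⁻ are supported on the positive and negative spectral subspaces of a respectively.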
The generalized Hermitian algebras, introduced and studied in [9,12], are special cases of synaptic algebras; in fact, the synaptic algebra A is a generalized Hermitian algebra iff it satisfies the condition that every bounded ascending sequence a₁ ≤ a₂ ≤ · · · of pairwise commuting elements in A has a supremum a in A and a ∈ CC({a_n : n ∈ N}) [9, Section 6].

If (A_i : i ∈ I) is a nonempty family of synaptic algebras and R_i is the enveloping algebra of A_i for each i ∈ I, then with coordinatewise operations and partial order, the cartesian product ×_{i∈I} A_i is again a synaptic algebra with ×_{i∈I} R_i as its enveloping algebra.

3 Review of orthomodular lattices

As we have mentioned, it turns out that the set P of projections in the synaptic algebra A forms an orthomodular lattice (OML) [2,18]; hence we devote this section to a brief review of some of the theory of OMLs that we shall require in what follows.

Let L be a nonempty set partially ordered by ≤. If there is a smallest element, often denoted by 0, and a largest element, often denoted by 1, in L, then we say that L is bounded. If, for every p, q ∈ L, the meet p ∧ q (i.e., the greatest lower bound, or infimum) and the join p ∨ q (i.e., the least upper bound, or supremum) of p and q exist in L, then L is called a lattice. If L is a bounded lattice, then elements p, q ∈ L are said to be complements of each other iff p ∧ q = 0 and p ∨ q = 1. If every subset of L has an infimum and a supremum, then L is called a complete lattice.

A subset S of L is said to be sup/inf-closed in L iff whenever a nonempty subset Q of S has a supremum s := ∨Q (respectively, an infimum t := ∧Q) in L, then s ∈ S, whence s is the supremum of Q as calculated in S (respectively, t ∈ S, whence t is the infimum of Q as calculated in S).

Let L be a bounded lattice. A mapping p → p⊥ on L is called an orthocomplementation iff, for all p, q ∈ L, (i) p⊥ is a complement of p in L, (ii) (p⊥)⊥ = p, and (iii) p ≤ q ⇒ q⊥ ≤ p⊥.
We say that L is an orthomodular lattice (OML) iff it is equipped with an orthocomplementation p → p⊥ that satisfies the orthomodular law: p ≤ q ⇒ q = p ∨ (q ∧ p⊥) for all p, q ∈ L. If L is an OML, then elements p, q ∈ L are orthogonal, in symbols p ⊥ q, iff p ≤ q⊥. For the remainder of this section, we assume that L is an OML.

The following De Morgan duality holds in L: If Q ⊆ L and the supremum ∨Q (respectively, the infimum ∧Q) exists in L, then (∨Q)⊥ = ∧{q⊥ : q ∈ Q} (respectively, (∧Q)⊥ = ∨{q⊥ : q ∈ Q}).

The elements p, q ∈ L are said to be (Mackey) compatible in L iff there are pairwise orthogonal elements p₁, q₁, d ∈ L such that p = p₁ ∨ d and q = q₁ ∨ d. For instance, if p ≤ q, or if p ⊥ q, then p and q are compatible; also, if p and q are compatible, then so are p and q⊥. As is well-known (e.g., see [18, Proposition 4, p. 24] or [24, Prop. 1.3.8]), compatibility is preserved under the formation of arbitrary existing suprema or infima in L.

Computations in L are facilitated by the following result: If p, q, r ∈ L and one of the elements p, q, or r is compatible with the other two, then the distributive relations (p ∨ q) ∧ r = (p ∧ r) ∨ (q ∧ r) and (p ∧ q) ∨ r = (p ∨ r) ∧ (q ∨ r) hold [4].

The subset of L consisting of all elements of L that are compatible with every element of L is called the center of L. As is well-known [18, p. 26], the center of L forms a boolean algebra, i.e., a bounded, complemented, distributive lattice [26], and it is sup/inf-closed in L.

For each p ∈ L, the mapping φ_p : L → L defined for q ∈ L by φ_p q := p ∧ (p⊥ ∨ q) is called the Sasaki projection corresponding to p. The Sasaki projection has the following properties for all p, q, r ∈ L:

(i) φ_p q ⊥ r ⇔ q ⊥ φ_p r.
(ii) φ_p : L → L is order preserving.
(iii) φ_p(φ_p q) = φ_p q.
(iv) p and q are compatible iff φ_p q = p ∧ q iff φ_p q ≤ q.
(v) p ⊥ q iff φ_p q = 0.
(vi) φ_p preserves arbitrary existing suprema in L.
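The Sasaki projection can be explored concretely in the OML of subspaces of Rⁿ (the prototypical projection lattice). The sketch below, the editor's own illustration under the assumption that meets, joins and orthocomplements are computed by elementary linear algebra, checks idempotence (property (iii)) and the Hilbert-lattice fact that φ_p(q) is the subspace obtained by projecting range q into range p:

```python
import numpy as np

def orth(A, tol=1e-10):
    # orthonormal basis (columns) for the column space of A
    if A.size == 0:
        return np.zeros((A.shape[0], 0))
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > tol]

def complement(U, n):
    # orthogonal complement in R^n of the span of the columns of U
    full, s, _ = np.linalg.svd(np.hstack([U, np.zeros((n, 1))]), full_matrices=True)
    rank = int((s > 1e-10).sum())
    return full[:, rank:]

def join(U, V):
    return orth(np.hstack([U, V]))

def meet(U, V, n):
    # De Morgan: U ∧ V = (U⊥ ∨ V⊥)⊥
    return complement(join(complement(U, n), complement(V, n)), n)

def sasaki(U, V, n):
    # φ_U(V) = U ∧ (U⊥ ∨ V)
    return meet(U, join(complement(U, n), V), n)

def proj(U):
    return U @ U.T

rng = np.random.default_rng(0)
n = 4
for _ in range(30):
    U = orth(rng.standard_normal((n, rng.integers(1, n))))
    V = orth(rng.standard_normal((n, rng.integers(1, n))))
    s1 = sasaki(U, V, n)
    # (iii) idempotence: φ_U(φ_U V) = φ_U V
    assert np.allclose(proj(sasaki(U, s1, n)), proj(s1), atol=1e-8)
    # φ_U(V) equals the span of Proj_U(range V)
    assert np.allclose(proj(orth(proj(U) @ V)), proj(s1), atol=1e-8)
# (v): orthogonal subspaces have trivial Sasaki projection
e1, e2 = np.eye(n)[:, :1], np.eye(n)[:, 1:2]
assert sasaki(e1, e2, n).shape[1] == 0
```

Subspaces are compared through their orthogonal projection matrices, so two basis matrices spanning the same subspace test as equal.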
If p ∈ L, the p-interval, defined and denoted by L[0, p] := {q ∈ L : 0 ≤ q ≤ p}, is a sublattice of L with greatest element p and it forms an OML in its own right with q → q ⊥p = q ⊥ ∧ p as the orthocomplementation. If c belongs to the center of L, it is easy to see that c ∧ p belongs to the center of L[0, p]. If, conversely, for every p ∈ L, every element d of the center of L[0, p] has the form d = c ∧ p for some c in the center of L, then L is said to have the relative center property [3]. (i) φ p q r = φ p r, i.e., φ p q is the restriction to L[0, p] of the Sasaki projection φ p on L. (ii) φ p q (r ⊥p ) = φ p (r ⊥ ). Proof. (i) Since r = r ∧ p, q = q ∧ p, and p is compatible with both q ⊥ and r, we have φ p q r = q ∧ (q ⊥p ∨ r) = q ∧ ((q ⊥ ∧ p) ∨ (r ∧ p)) = q ∧ p ∧ (q ⊥ ∨ r) = q ∧ (q ⊥ ∨ r) = φ q (r). (ii) Similarly, φ p q (r ⊥p ) = q ∧ (q ⊥p ∨ r ⊥p ) = q ∧ ((q ⊥ ∧ p) ∨ (r ⊥ ∧ p)) = q ∧ p ∧ (q ⊥ ∨ r ⊥ ) = q ∧ (q ⊥ ∨ r ⊥ ) = φ q (r ⊥ ). If p and q share a common complement in L, they are said to be perspective and if p and q are perspective in L[0, p ∨ q], they are said to be strongly perspective. Strongly perspective elements are perspective, but in general, not conversely. In fact, L is modular (i.e., for all p, q, r ∈ L, p ≤ r ⇒ p ∧ (q ∨ r) = (p ∧ q) ∨ r) iff perspective elements in L are always strongly perspective [15, Theorem 2]. The transitive closure of the relation of perspectivity is an equivalence relation on L called projectivity. If L is modular and complete as a lattice, then by classic results of von Neumann [23] and Kaplansky [20], perspectivity is transitive on L, and therefore it coincides with projectivity. Proof of the following lemma is a straightforward OML-calculation. If p, q ∈ L, we have the parallelogram law asserting that (p ∨ q) ∧ p ⊥ = φ p ⊥ (q) is strongly perspective to q ∧ (p ∧ q) ⊥ = φ q (p ⊥ ) (see the proof of [15,Corollary 1]). 
Replacing p by p ⊥ , we obtain an alternative version of the parallelogram law asserting that φ p (q) is strongly perspective to φ q (p). (Another version of the parallelogram law is given in Theorem 5.9 (ii) below.) The following theorem provides an analogue for strong perspectivity of [21, Lemma 43] for a dimension equivalence relation on an OML. Theorem. Suppose p, q, e, f ∈ P with p ⊥ q, e ⊥ f and p∨q = e∨f . Put p 1 := p ∧ (p ∧ f ) ⊥ , p 2 := p ∧ f , q 1 := q ∧ e, q 2 := q ∧ (q ∧ e) ⊥ , e 1 := e ∧ (e ∧ q) ⊥ , and f 2 := f ∧ (f ∧ p) ⊥ . Then: (i) p 1 and e 1 are strongly perspective. (ii) q 2 and f 2 are strongly perspective. (iii) p 1 ⊥ p 2 with p 1 ∨ p 2 = p, q 1 ⊥ e 1 with q 1 ∨ e 1 = e, p 2 ⊥ f 2 with p 2 ∨ f 2 = f , and q 1 ⊥ q 2 with q 1 ∨ q 2 = q. (iv) p 1 ⊥ q 1 and p 1 ∨ q 1 is strongly perspective to e. (v) p 2 ⊥ q 2 and p 2 ∨ q 2 is strongly perspective to f . Proof. (i) Let k := p ∨ q = e ∨ f . By the parallelogram law in the OML L[0, k], and with the notation and results of Lemma 3.1, we find that φ k p (e) = φ k p (f ⊥ k ) = φ p (f ⊥ ) = p ∧ (p ⊥ ∨ f ⊥ ) = p ∧ (p ∧ f ) ⊥ = p 1 and φ k e (p) = φ k e (q ⊥ k ) = φ e (q ⊥ ) = e ∧ (e ⊥ ∨ q ⊥ ) = e ∧ (e ∧ q) ⊥ = e 1 have a common complement in (L[0, k])[0, p 1 ∨ e 1 ] = L[0, p 1 ∨ e 1 ], proving (i). (ii) Since k = f ∨ e = q ∨ p, (ii) follows from (i) by symmetry. (iii) The assertions in (iii) are obvious. (iv) We have p 1 ≤ p ≤ q ⊥ ≤ q ⊥ ∨ e ⊥ = q ⊥ 1 , so p 1 ⊥ q 1 . By (i) there exists v 1 ∈ L[0, p 1 ∨ e 1 ] such that p 1 ∨ v 1 = e 1 ∨ v 1 = p 1 ∨ e 1 and p 1 ∧ v 1 = e 1 ∧ v 1 = 0. We claim that v 1 is also a common complement of p 1 ∨ q 1 and e in L[0, (p 1 ∨ q 1 ) ∨ e]. We note that (p 1 ∨ q 1 ) ∨ e = p 1 ∨ q 1 ∨ e 1 ∨ q 1 = p 1 ∨ q 1 ∨ e 1 = p 1 ∨ e. Also, (p 1 ∨ q 1 ) ∨ v 1 = (p 1 ∨ v 1 ) ∨ q 1 = p 1 ∨ e 1 ∨ q 1 = p 1 ∨ e and e ∨ v 1 = q 1 ∨ e 1 ∨ v 1 = q 1 ∨ p 1 ∨ e 1 = p 1 ∨ e. 
Moreover, p 1 ≤ q ⊥ 1 and e 1 = e ∧ (e ∧ q) ⊥ ≤ (e ∧ q) ⊥ = q ⊥ 1 , whence v 1 ≤ p 1 ∨ e 1 ≤ q ⊥ 1 , and we have v 1 ⊥ q 1 . Thus, since q 1 is orthogonal to, hence compatible with, both p 1 and v 1 , it follows that (p 1 ∨ q 1 ) ∧ v 1 = (p 1 ∧ v 1 ) ∨ (q 1 ∧ v 1 ) = 0. Similarly, as q 1 is orthogonal to both e 1 and v 1 , we have (e 1 ∨ q 1 ) ∧ v 1 = (e 1 ∧ v 1 ) ∨ (q 1 ∧ v 1 ) = 0, completing the proof of our claim. (v) Since k = q ∨ p = f ∨ e, (v) follows from (iv) by symmetry. Remark. We recall that the OML L is organized into an effect algebra [11, p. 284] in which every element is principal [11, p. 286] by defining the orthosum p ⊕ q := p ∨ q of p and q in L iff p ⊥ q. Then the effect-algebra partial order coincides with the partial order on L and the effect-algebra orthosupplementation is the orthocomplementation on L. Thus the theory of effect algebras is applicable to OMLs. As is easily seen, if the OML L is regarded as an effect algebra, then a family of elements in L is orthogonal iff it is pairwise orthogonal, such an orthogonal family is orthosummable iff it has a supremum, and if the family is orthosummable, then its supremum is its orthosum [11, p. 286]. If every orthogonal family in an effect algebra is orthosummable, then the effect algebra is called orthocomplete [17]. If the OML L is regarded as an effect algebra, then L is orthocomplete iff it is complete as a lattice [16]. The orthomodular lattice of projections By [6, Theorem 5.6], under the partial order inherited from A, the set P of projections forms an orthomodular lattice with p → p ⊥ := 1 − p as the orthocomplementation. As P ⊆ A, the OML P acquires several special properties not enjoyed by OMLs in general. Let p, q ∈ P . By [6, Theorem 2.4 and Lemma 5 .3], p ≤ q ⇔ p = pq ⇔ p = qp ⇔ p = qpq = J q (p) and p ≤ q ⇒ q − p = q ∧ p ⊥ . Moreover, pCq ⇒ pq = qp = p ∧ q. Evidently, p ⊥ q iff p + q ≤ 1. Also by [6, Lemma 5.3], p ⊥ q ⇔ pq = qp = 0 and p ⊥ q ⇒ p ∨ q = p + q. 
We refer to p + q as the orthogonal sum of p and q iff p ⊥ q. A simple argument yields the important result that p and q are compatible iff pCq [7, Theorem 3.11]. By [6, Theorems 2.7 and 2.10], each element a ∈ A has a carrier projection a o ∈ P such that, for all b ∈ A, ab = 0 ⇔ a o b = 0; moreover, a o ∈ CC(a), a = aa o = a o a, a o = |a| o , and for all b ∈ A, ab = 0 ⇔ a o b o = 0 ⇔ b o a o = 0 ⇔ ba = 0. Furthermore, if q ∈ P , then aq = a ⇔ qa = a ⇔ a o ≤ q. The carrier projection a o is uniquely characterized by the property ap = 0 ⇔ a o p = 0 for all p ∈ P , or equivalently, by the property that a o is the smallest projection q ∈ P such that a = aq. Each element a ∈ A has a spectral resolution [6, Section 8], [8] that both determines and is determined by a, namely the right continuous ascending family (p a,λ : λ ∈ R) of projections in CC(a) given by 4.1 Lemma. If a 1 , a 2 , ..., a n ∈ A + , then ( n i=1 a i ) o = n i=1 (a i ) o . Proof. By [25,p a,λ := 1 − ((a − λ) + ) o = (((a − λ) + ) o ) ⊥ for all λ ∈ R. By [6, Theorems 8.4 and 8.5], L := sup{λ ∈ R : p a,λ = 0} ∈ R, U := inf{λ ∈ R : p a,λ = 1} ∈ R, and a = U L−0 λ dp a,λ , where the Riemann-Stieltjes type integral converges in norm. By [10,Theorem 8.3], any one of the following conditions is sufficient to guarantee modularity of the projection lattice P : (i) If p, q ∈ P , there exists 0 < ǫ ∈ R such that ǫ(pqp) o ≤ pqp. (ii) If p, q ∈ P , then pqp is an algebraic element of A; (iii) A is finite dimensional over R; (iv) P satisfies the ascending chain condition. Let p ∈ P . Then, according to [6,Theorem 4.9], pAp := {pap : a ∈ A} = {a ∈ A : pa = ap = a} = {a ∈ A : a o ≤ p} = J p (A) is norm closed in A, and with the partial order inherited from A, it is a synaptic algebra (degenerate if p = 0) with p as its order unit, pRp as its enveloping algebra, and the order unit norm on pAp is the restriction to pAp of the order unit norm on A. 
Moreover, if a, b ∈ pAp, then a • b, a o , |a|, a + , a − ∈ pAp, and if a ∈ A + , then a 1/2 ∈ pAp. Consequently, if a ∈ pAp, then pp a,λ = p a,λ p = p ∧ p a,λ for all λ ∈ R, and the spectral resolution of a as calculated in pap is (pp a,λ : λ ∈ R). Clearly, the OML of projections in the synaptic algebra pAp is the p-interval P [0, p] in P , and the orthocomplementation on P [0, p] is given by q → q ⊥p = p − q = J p (q ⊥ ) = pq ⊥ = q ⊥ p = p ∧ q ⊥ for q ∈ P [0, p]. Suppose that B ⊆ A. Then C(B) = b∈B C(b) is norm closed in A, and with the partial order inherited from A, C(B) is a synaptic algebra with order unit 1 and enveloping algebra R. Moreover, if a, c ∈ C(B), then a • c, a o , |a|, a + , a − ∈ C(B) and 0 ≤ a ⇒ a 1/2 ∈ C(B). Consequently, if a ∈ C(B), then the spectral resolution of a is the same whether calculated in A or in C(B). Also it is clear that the OML of projections in the synaptic algebra C(B) is just P ∩ C(B), and we have the following result. 4.2 Lemma. If B ⊆ A, then P ∩ C(B) is sup/inf-closed in P . Proof. As C(B) = b∈B C(b), it will be sufficient to prove the lemma for the special case B = {b}. Thus, assume that Q ⊆ P ∩ C(b) and that h = Q exists in P . For the projections in the spectral resolution of b, we have C(b) ⊆ C(p b,λ ), whence Q ⊆ P ∩ C(p b,λ ) for every λ ∈ R. We recall that compatibility is preserved under the formation of arbitrary existing suprema and infima. Thus, h ∈ C(p b,λ ) for all λ ∈ R, and it follows from [6, Theorem 8.10] that h ∈ C(b). A similar argument applies to the infimum k, if it exists in P . We shall make extensive use of the next theorem, often without explicit attribution. 4.3 Theorem. The center of P is P ∩ C(P ) = P ∩ C(A). Proof. As two projections in P are compatible iff they commute, the center of P is P ∩C(P ). Clearly, P ∩C(A) ⊆ P ∩C(P ). Conversely, by [6,Theorem 8.10], P ∩ C(P ) ⊆ P ∩ C(A), so P ∩ C(A) = P ∩ C(P ). 
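The carrier projection and the spectral resolution can likewise be checked numerically in the symmetric-matrix model. The following sketch (assuming numpy; col_proj, pos_part, and spectral are ad hoc helper names, not the paper's notation) verifies a a° = a, tests ab = 0 together with a° b° = 0 on an example, and reassembles a from the family p_{a,λ} = 1 − ((a − λ)⁺)°.

```python
import numpy as np

def col_proj(M, tol=1e-9):
    # carrier projection M° = projection onto ran(M)
    U, sv, _ = np.linalg.svd(M)
    rank = int((sv > tol).sum())
    return U[:, :rank] @ U[:, :rank].T

def pos_part(a):
    # a⁺: keep only the positive spectral part of a symmetric matrix
    w, V = np.linalg.eigh(a)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

a = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])      # eigenvalues -1, 0, 3

ao = col_proj(a)
assert np.allclose(a @ ao, a)                    # a = a a°

# ab = 0 goes together with a° b° = 0
b = np.diag([0.0, 0.0, 5.0])
assert np.allclose(a @ b, 0.0)
assert np.allclose(ao @ col_proj(b), 0.0)

# spectral projections p_{a,λ} = 1 − ((a − λ)⁺)°
def spectral(a, lam):
    return np.eye(len(a)) - col_proj(pos_part(a - lam * np.eye(len(a))))

# the ascending family recovers a as a sum over its eigenvalues
prev = np.zeros((3, 3))
rebuilt = np.zeros((3, 3))
for lam in (-1.0, 0.0, 3.0):
    step = spectral(a, lam) - prev
    rebuilt += lam * step
    prev = spectral(a, lam)
assert np.allclose(rebuilt, a)
```

In this finite-dimensional setting the Riemann–Stieltjes integral of the text degenerates to the finite sum over eigenvalues used in the loop.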
In view of Theorem 4.3, if we say that c is a central projection in A, we mean that c ∈ P ∩ C(A), or what is the same thing, that c belongs to the center P ∩ C(P ) of the OML P . As is easily seen, if P is regarded as an effect algebra, then the center P ∩C(P ) of P coincides with the effect-algebra center of P [11, p. 287]. Remarks. Suppose that c is a central projection in A. Then c ⊥ is also a central projection and A is the (internal) direct sum of the synaptic algebras cAc = cA = Ac and c ⊥ Ac ⊥ = c ⊥ A = Ac ⊥ in the sense that: (1) As a vector space, A is the (internal) direct sum of the vector subspaces cA and c ⊥ A. (2) If a = x + y with x ∈ cA and y ∈ c ⊥ A, then 0 ≤ a ⇔ 0 ≤ x and 0 ≤ y. (3) If a i = x i + y i with x i ∈ cA and y i ∈ c ⊥ A for i = 1, 2, then a 1 a 2 = x 1 x 2 + y 1 y 2 (the products being calculated in R). With coordinatewise operations and partial order, the cartesian product cA×c ⊥ A is a synaptic algebra with order unit (c, c ⊥ ) and enveloping algebra cRc × c ⊥ Rc ⊥ , and A is order, linear, and Jordan isomorphic to cA × c ⊥ A under the mapping a → (ca, c ⊥ a). Naturally, A is called a commutative synaptic algebra iff aCb for all a, b ∈ A, i.e., iff A = C(A). Thus, a commutative synaptic algebra is a commutative associative archimedean partially ordered linear algebra with a unity element. In [12,Section 4], the following result is stated without proof. 4.5 Theorem. The synaptic algebra A is commutative iff P is a boolean algebra i.e., iff the lattice P is distributive. Proof. If A is commutative, then P ∩ C(A) = P ∩ A = P , i.e., P is its own center, whence P is a boolean algebra. Conversely, if P is a boolean algebra, then any two projections p, q ∈ P are compatible, and therefore p, q ∈ P ⇒ pCq. Let a, b ∈ A. Then, for all λ, µ ∈ R, p a,λ Cp b,µ , whence aCp b,µ by [6, Theorem 8.10]. Therefore, aCb by a second application of [6,Theorem 8.10], and it follows that A is commutative. 
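Theorem 4.5 can be illustrated concretely: for commuting (e.g., diagonal) projections the lattice operations reduce to coordinatewise boolean operations and distribute, while for non-commuting projection matrices distributivity already fails in dimension 2. A minimal numerical sketch (numpy assumed; join, meet, and col_proj are helper names introduced here):

```python
import numpy as np

I = np.eye(2)

def col_proj(M, tol=1e-9):
    U, sv, _ = np.linalg.svd(M)
    rank = int((sv > tol).sum())
    return U[:, :rank] @ U[:, :rank].T

def join(a, b): return col_proj(np.hstack([a, b]))
def meet(a, b): return I - join(I - a, I - b)

# diagonal (commuting) projections: distributivity holds
p, q, r = np.diag([1.0, 0.0]), np.diag([0.0, 1.0]), np.diag([1.0, 0.0])
assert np.allclose(meet(p, join(q, r)), join(meet(p, q), meet(p, r)))

# non-commuting projections: distributivity fails
e = np.diag([1.0, 0.0])
f = np.diag([0.0, 1.0])
g = np.array([[0.5, 0.5], [0.5, 0.5]])   # projection onto span{(1,1)}
lhs = meet(g, join(e, f))                # g ∧ 1 = g
rhs = join(meet(g, e), meet(g, f))       # 0 ∨ 0 = 0
assert not np.allclose(lhs, rhs)
```

So the projection lattice of the full symmetric 2 × 2 matrices is not boolean, matching the fact that this algebra is not commutative.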
It can be shown that every boolean algebra can be realized as the lattice of projections in a commutative synaptic algebra. The center C(A) is a commutative synaptic algebra, and if B is a subset of A consisting of pairwise commuting elements, then CC(B) is a commutative synaptic algebra. In particular, CC(a) is a commutative synaptic algebra for any choice of a ∈ A.

Symmetries and perspectivities

Although there is some overlap between this section and [25, Section 3], the material here is arranged a little differently, so for the reader's convenience, we give proofs of most of our results.

5.1 Definition. An element s ∈ A is called a symmetry iff s 2 = 1. An element t ∈ A is called a partial symmetry iff t 2 ∈ P .

Proofs of the following statements are straightforward. (i) If t ∈ A is a partial symmetry with p := t 2 , then t is a symmetry in the synaptic algebra pAp. (ii) If a ∈ A, then the element t := sgn(a) in the polar decomposition a = t|a| is a partial symmetry with t 2 = a o . (iii) If s is a symmetry, then −s is a symmetry as well. (iv) There is a bijective correspondence p ↔ s between symmetries in A and projections in P given by s = 2p − 1 and p = (1/2)(1 + s). (v) If s is a symmetry, then ‖s‖ 2 = ‖s 2 ‖ = ‖1‖ = 1, so ‖s‖ = 1. (vi) If s is a symmetry, then as 0 ≤ (1/2)(1 ± s) ∈ P , it follows that −1 ≤ s ≤ 1.

If p ∈ P and s := 2p−1 = p−(1−p) is the corresponding symmetry, then s is the difference of the orthogonal projections p and 1 − p. More generally, by the following theorem, a difference of two orthogonal projections p and q is a partial symmetry and vice versa.

5.2 Theorem. If p and q are orthogonal projections, then t := p − q is a partial symmetry with t 2 = p + q = p ∨ q ∈ P . Conversely, if t is a partial symmetry, then both p := t + and q := t − are projections, pq = 0, t = p − q, and, with u := t 2 , |t| = (t 2 ) 1/2 = u 1/2 = u.
Moreover, as 0 ≤ t + = p, 0 ≤ t − = q and u ∈ P , we have 0 ≤ p ≤ p + q = u ≤ 1, so p ∈ E, and it follows from [6, Theorem 2.4] that p = pu = p(p + q) = p 2 + pq = p 2 , so p ∈ P and likewise, q ∈ P . Then s := t + (1 − t 2 ) = p − q + (1 − (p + q)) = 1 − 2q is the symmetry corresponding to the projection 1 − q. If t is a partial symmetry, then we refer to the symmetry s := t + (1 − t 2 ) in Theorem 5.2 as the canonical extension of t to a symmetry s. If s ∈ A is a symmetry, then the quadratic mapping J s : A → A is called the symmetry transformation corresponding to s. (b) = J s (b)J s (a) ∈ A. (iv) If J s (a)J s (b) = J s (b)J s (a), then ab = ba ∈ A. (v) (J s (a)) o = J s (a o ). Proof. (i) As J s is a quadratic mapping, it is both linear and order preserving. Also, for a ∈ A, J s (a 2 ) = sa 2 s = sassas = (J s (a)) 2 , and it follows that J s is a Jordan homomorphism of A. Since J s (J s (a)) = ssass = a, it follows that J s is its own inverse on A; hence it is a linear, order, and Jordan automorphism. (ii) If e ∈ P , then (J s (e)) 2 = J s (e 2 ) = J s (e), so J s maps P into (and clearly onto) P . Thus the restriction of J s to P is an order automorphism of P , and as J s (1 − e) = 1 − J s (e), it is an OML automorphism. (iii) If ab = ba, then ab = ba = a • b = b • a, so part (iii) follows from the fact that J s is a Jordan automorphism. (iv) Assume the hypothesis of (iv). Then by (iii) with a replaced by J s (a) and b replaced by J s (b), we have ab = J s (J s (a))J s (J s (b)) = J s (J s (a)J s (b)) = J s (J s (b)J s (a)) = J s (J s (b))J s (J s (a)) = ba. Theorem. Let t ∈ A be a partial symmetry that exchanges the projections e, f ∈ P and let s := t + (1 − t 2 ) be the canonical extension of t to a symmetry. Then s exchanges e and f . Proof. Assume the hypotheses and let u := t 2 . Then u ∈ P and we have e = tf t = t 2 et 2 = ueu, so e = ue = eu, and therefore (1 − u)e = e(1 − u) = 0. Consequently, ses = (t + (1 − u))e(t + (1 − u)) = tet = f . 
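A concrete instance of Theorem 5.5 in the symmetric-matrix model: the coordinate-swap partial symmetry below exchanges two orthogonal rank-one projections, and its canonical extension s = t + (1 − t²) is a genuine symmetry that still exchanges them. (Sketch assuming numpy; the particular matrices are our own illustration, not taken from the paper.)

```python
import numpy as np

I = np.eye(3)

e = np.diag([1.0, 0.0, 0.0])
f = np.diag([0.0, 1.0, 0.0])

# a partial symmetry: swaps the first two coordinates, kills the third
t = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
u = t @ t
assert np.allclose(u, np.diag([1.0, 1.0, 0.0]))   # t² = e ∨ f ∈ P
assert np.allclose(t @ e @ t, f) and np.allclose(t @ f @ t, e)

# canonical extension s := t + (1 − t²) is a symmetry...
s = t + (I - u)
assert np.allclose(s @ s, I)
# ...and still exchanges e and f, as in Theorem 5.5
assert np.allclose(s @ e @ s, f)

# the symmetry ↔ projection correspondence s = 2p − 1
p = 0.5 * (I + s)
assert np.allclose(p @ p, p) and np.allclose(s, 2.0 * p - I)
```
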
The following lemma provides a weak version of finite additivity for the relation of exchangeability by a symmetry. Proof. Let p i = e i ∨ f i for i = 1, 2. As e 1 ≤ e ⊥ 2 , f ⊥ 2 , we have e 1 ≤ p ⊥ 2 . Also, f 1 ≤ e ⊥ 2 , f ⊥ 2 , so f 1 ≤ p ⊥ 2 , and it follows that p 1 = e 1 ∨ f 1 ≤ p ⊥ 2 , whence p 1 p 2 = 0. Set u = s 1 p 1 and v = s 2 p 2 . From s 1 p 1 s 1 = s 1 (e 1 ∨ f 1 )s 1 = s 1 e 1 s 1 ∨ s 1 f 1 s 1 = e 1 ∨ f 1 = p 1 , it follows that s 1 commutes with p 1 . Likewise s 2 commutes with p 2 , whence both u and v belong to A and are partial symmetries with u 2 = p 1 , v 2 = p 2 , ue 1 u = p 1 f 1 p 1 = f 1 , ve 2 v = p 2 f 2 p 2 = f 2 , and uv = 0. Straightforward calculation using the data above shows that s := u + v + (1 − p 1 − p 2 ) is a symmetry and that ses = f . Lemma. If e, f ∈ P and a := e − f , then e, f ∈ C(|a|). Proof. We have a 2 = (e − f ) 2 = e − ef − f e + f , from which it follows that ea 2 = a 2 e = e − ef e and f a 2 = a 2 f = f − f ef . Thus, e, f ∈ C(a 2 ), and as |a| ∈ CC(a 2 ) it follows that e, f ∈ C(|a|). Theorem. If e, f ∈ P , there exists a symmetry s ∈ A such that sef es = f ef , i.e., J s (ef e) = J s (J e (f )) = J f (e) = f ef . Proof. Let e, f ∈ P . By Lemma 5.7 with f replaced by 1−f and a := e+f −1, we have e, f ∈ C(|a|). Put t := sgn(a), so that t 2 = a o ∈ P . Thus, t is a partial symmetry with |a| = at = ta, and we have Proof. Let q := e ∧ p. Then q ≤ e and q ≤ p, so eq = pq = q. Thus, sq = (2p − 1)q = q, so f q = sesq = seq = sq = q, whence q ≤ f . But q ≤ e, so e ∧ p = q ≤ e ∧ f = 0. Now let r := e ⊥ ∧ p ⊥ . Then er = pr = 0, so sr = (2p − 1)r = −r, whence f r = sesr = −ser = 0, and we have r ≤ f ⊥ . But r ≤ e ⊥ , so r ≤ e ⊥ ∧ f ⊥ = 0. Therefore, p is a complement of e, and by a similar argument, p is also a complement of f . |a|f = taf = (te − t(1 − f ))f = tef and f |a| = f at = f (et − (1 − f )t) = f et. 
In the following theorem we improve the result in Lemma 5.10 by dropping the hypothesis that e and f are complements, and by concluding that e and f are not only perspective, but strongly perspective. Theorem. Let e, f ∈ P be exchanged by a symmetry s ∈ A. Then e and f are strongly perspective in P . In fact, if p := e ∨ f , r := p − (e ∧ f ), t := rsr, and q := 1 2 (r + t), then t is a symmetry in rAr, q is a projection in P [0, r], and k := q ∨ (r ⊥ ∧ p) is a common complement of e and f in P [0, p]. Proof. Assume the hypotheses. By Theorem 5.3 (ii), sps = s(e ∨ f )s = J s (e ∨ f ) = J s (e) ∨ J s (f ) = ses ∨ sf s = f ∨ e = p, whence sp = ps. Likewise, s(e ∧ f )s = e ∧ f and it follows that srs = r, whence t = rsr = sr = rs. Therefore, t ∈ rAr with t 2 = r, i.e., t is a symmetry in the synaptic algebra rAr. Clearly, both e and f commute with r, whence t(e ∧ r)t = rs(er)sr = rsesr = rf r = f r = rf = f ∧ r, and therefore t exchanges the projections e ∧ r and f ∧ r in P [0, r]. We have (e ∧ r) ∧ (f ∧ r) = (e ∧ f ) ∧ p ∧ (e ∧ f ) ⊥ = 0, and as r ≤ p, (e ∧ r) ∨ (f ∧ r) = (e ∨ f ) ∧ r = p ∧ r = r, so e ∧ r and f ∧ r are complements in P [0, r]. Thus, working in the synaptic algebra rAr with unit r, and applying Lemma 5.10, we find that e ∧ r and f ∧ r are perspective in P [0, r] with q = 1 2 (r + t) as a common complement. Therefore, (e ∧ r) ∨ k = (e ∧ r) ∨ q ∨ (r ⊥ ∧ p)) = r ∨ (p − r) = p. Also, as q ≤ r ≤ p and e∧r ≤ r ≤ p, it follows that both q and e∧r commute with r ⊥ ∧ p, whence (e ∧ r) ∧ k = (e ∧ r) ∧ (q ∨ (r ⊥ ∧ p)) = ((e ∧ r) ∧ q) ∨ (e ∧ r ∧ r ⊥ ∧ p) = 0. Likewise, (f ∧ r) ∨ k = p and (f ∧ r) ∧ k = 0. Here x and y belong to the enveloping algebra R, but not necessarily to A; however, x + y ∈ A by Lemma 2. Theorem 5.15 below, is a version of additivity for projections exchanged by a symmetry, but it requires completeness of P and the rather strong hypothesis that the suprema of the two orthogonal families involved are themselves orthogonal. 
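The construction in the proof of Theorem 5.11 can also be traced numerically: with e and f exchanged by a symmetry s, form p = e ∨ f, r = p − (e ∧ f), t = rsr, q = (1/2)(r + t), and k = q ∨ (r⊥ ∧ p), and check that k is a common complement of e and f below p. (Sketch in the symmetric-matrix model, assuming numpy; col_proj, join, and meet are helper names introduced here.)

```python
import numpy as np

I = np.eye(3)

def col_proj(M, tol=1e-9):
    U, sv, _ = np.linalg.svd(M)
    rank = int((sv > tol).sum())
    return U[:, :rank] @ U[:, :rank].T

def join(a, b): return col_proj(np.hstack([a, b]))
def meet(a, b): return I - join(I - a, I - b)

e = np.diag([1.0, 0.0, 0.0])
f = np.diag([0.0, 1.0, 0.0])
s = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])      # symmetry with ses = f
assert np.allclose(s @ s, I) and np.allclose(s @ e @ s, f)

p = join(e, f)                       # e ∨ f
r = p - meet(e, f)                   # here e ∧ f = 0, so r = p
t = r @ s @ r                        # the symmetry t = rsr in rAr
q = 0.5 * (r + t)                    # a projection in P[0, r]
k = join(q, meet(I - r, p))          # candidate common complement

for x in (e, f):                     # k complements e and f in P[0, p]
    assert np.allclose(join(x, k), p)
    assert np.allclose(meet(x, k), np.zeros((3, 3)))
```

Here k turns out to be the projection onto span{(1, 1, 0)}, visibly "halfway between" e and f, which is the geometric content of the common complement.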
The next two lemmas will aid in its proof. (ii) Clearly, xe = x, ey = y, and since ef = f e = 0, xf = 0, and f y = 0. Moreover, x 2 = sese = f e = 0, y 2 = eses = f s = f es = 0, ex = ese = sf sse = sf e = 0, yf = esf = esses = es = y, and similarly, y 2 = 0, ye = 0, and f x = x. (iii) We have x + y ∈ A, whence p ∈ A. A straightforward computation using the data in (ii) shows that p 2 = p. (iv) Since ex = ye = 0 and ef = 0, it follows that 2epe = exe + eye + e + ef e = e. Similarly, 2pe = xe + ye + e = x + e, so 2pep = xp + ep = 1 2 (x 2 + xy + xe + xf + ex + ey + e) = 1 2 (f + x + y + e) = p. (v) As in the proof of (iii), the proof of (iv) is a straightforward computation using the data in (ii). 5.14 Lemma. Suppose that P is a complete OML, let (q i ) i∈I be an orthogonal family in P , put q := i∈I q i , let i ∈ I, r ∈ P , and suppose that q j r = 0 for all j ∈ I with j = i. Then qr = q i r and rq = rq i . Proof. Define q ′ i := j∈I, j =i q j . As q j ⊥ q i for all j ∈ I with j = i, it follows that q i ⊥ q ′ i with q = q i ∨q ′ i = q i +q ′ i . Also, since q j ⊥ r for j ∈ I with j = i, it follows that q ′ i ⊥ r, whence q ′ i r = rq ′ i = 0, and therefore qr = (q i + q ′ i )r = q i r and rq = r(q i + q ′ i ) = rq i . We are going to show that s is the required symmetry. We claim that (p i ) i∈I is an orthogonal family. Indeed, suppose i, j ∈ I with i = j. Then 4p i p j = (x i + y i + e i + f i )(x j + y j + e j + f j ) and it will be sufficient to show that the sixteen terms that result from an expansion of the latter product are all zero. This follows from the facts that, for i = j, e i e j = e i f j = f i e j = f i f j = 0 together with the data in Lemma 5.13 (ii). For instance, x i x j = x i e i f j x j = 0. As in the argument above, p j e i = 0 for i, j ∈ I with j = i, and it follows from Lemma 5.14 with (q i ) i∈I = (e i ) i∈I and r = p i that ep i = e i p i and p i e = p i e i for all i ∈ I. 
Likewise, by Lemma 5.14, this time with (q i ) i∈I = (p i ) i∈I and r = e i , we have pe i = p i e i and e i p = e i p i for all i ∈ I.

Let us write h = ses = (2p−1)e(2p−1) = 4pep−2ep−2pe+e, noting that, since s is a symmetry, h is a projection. Using the facts that ef = f e = 0, 2pep = p, and 2f pf = f we find that f hf = f (4pep − 2ep − 2pe + e)f = 4f pepf = 2f pf = f. Similarly, using the facts that ef = f e = 0, 2pf p = p, and 2epe = e, hf h = (2p−1)e(2p−1)f (2p−1)e(2p−1) = (2p−1)(2ep−e)f (2pe−e)(2p−1) = (2p − 1)(4epf pe)(2p − 1) = (2p − 1)(2epe)(2p − 1) = (2p − 1)e(2p − 1) = h. Therefore f (1 − h)f = f − f hf = f − f = 0, whence f (1 − h) = 0, so f ≤ h. Likewise, since hf h = h, it follows that h ≤ f , and we have h = f .

Central orthocompleteness

If P is regarded as an effect algebra (Remark 3.4) then the following definition of central orthocompleteness for the OML P is equivalent to the effect-algebra definition of central orthocompleteness [11, Definition 6.1].

6.1 Definition. A family (p i ) i∈I in the OML P is centrally orthogonal iff there is a pairwise orthogonal family (c i ) i∈I in the center P ∩ C(A) of P such that p i ≤ c i for all i ∈ I. The projection lattice P is centrally orthocomplete iff every centrally orthogonal family (p i ) i∈I in P has a supremum p = ⋁ i∈I p i in P .

Obviously, if P is complete as a lattice, then it is centrally orthocomplete.

Standing Assumption. For the remainder of this article, we assume that the OML P of projections in A is centrally orthocomplete.

Proof. (i) Suppose that c is a central projection in dA and let a ∈ A. Then, as c ≤ d, we have c = cd = dc and ca = cda = dac = ac, so c ∈ P ∩ C(A). The converse is obvious, and (i) is proved.

6.5 Lemma ([11, Theorem 6.8 (ii)]). For each p ∈ P , there is a smallest central projection c ∈ P ∩ C(A) such that p ≤ c.

6.6 Definition. If a ∈ A, then the smallest central projection c ∈ P ∩ C(A) such that a o ≤ c is called the central cover of a and denoted by γ(a). The mapping γ : A → P ∩ C(P ) is called the central cover mapping.
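The central cover is easy to visualize in a direct sum: in the block-diagonal algebra of symmetric matrices with a 2 + 2 block structure (the internal direct sum cA ⊕ c⊥A described earlier), the central projections are sums of block units, and γ(a) is the sum of the units of the blocks on which a is nonzero. A sketch (numpy assumed; central_cover is our helper, not the paper's notation):

```python
import numpy as np

# block units of the direct sum: these are the nontrivial central
# projections of the block-diagonal algebra
c1 = np.diag([1.0, 1.0, 0.0, 0.0])
c2 = np.diag([0.0, 0.0, 1.0, 1.0])

def central_cover(a, tol=1e-9):
    # γ(a): smallest central projection c with ac = a, found by
    # testing which blocks carry a nonzero compression of a
    g = np.zeros((4, 4))
    for c in (c1, c2):
        if np.linalg.norm(c @ a @ c) > tol:
            g += c
    return g

# a lives entirely in the first block:
a = np.zeros((4, 4)); a[0, 1] = a[1, 0] = 1.0
g = central_cover(a)
assert np.allclose(g, c1)
assert np.allclose(a @ g, a)              # a γ(a) = a

# a projection strictly below its central cover:
pp = np.diag([1.0, 0.0, 0.0, 0.0])
assert np.allclose(central_cover(pp), c1)

# an element touching both blocks has central cover 1:
assert np.allclose(central_cover(np.diag([1.0, 0.0, 1.0, 0.0])), np.eye(4))
```
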
Since a o is the smallest projection p ∈ P such that ap = a, it follows that γa is the smallest central projection c ∈ P ∩ C(A) such that ac = a. Moreover, by [11,Theorems 5.2 and 6.10], the central cover mapping γ has the following properties. 6.7 Theorem. Let p, q ∈ P . Then: 7.3 Definition. Let p, q ∈ P . (i) The projections p and q are J -equivalent, in symbols p ∼ q, iff there exists J ∈ J such that J(p) = q. If p ∼ q and Jequivalence is understood, we may simply say that p and q are equivalent. (ii) The projections p and q are related iff there are nonzero projections p 1 ≤ p and q 1 ≤ q such that p 1 ∼ q 1 . If p and q are not related, we say that they are unrelated. (iii) p is invariant iff it is unrelated to its orthocomplement p ⊥ . (iv) If there exists a projection q 1 ≤ q such that p ∼ q 1 , we say that p is sub-equivalent to q, in symbols, p q. Proof. As e ∼ f , there are symmetries s 1 , s 2 , ..., s n ∈ A such that f = s n s n−1 · · · s 2 s 1 es 1 s 2 · · · s n−1 s n . The proof is by induction on n. For n = 1 the desired conclusion is obvious. Suppose that the conclusion holds for all sequences of symmetries of length n − 1 and let r := s n f s n = s n−1 · · · s 2 s 1 es 1 s 2 · · · s n−1 . By the induction hypothesis, there are nonzero subprojections p ≤ e and q ≤ r and a symmetry s ∈ S such that sps = q. Let k := s n qs n ≤ s n rs n = f . If p ⊥ k, then by SK4, there are nonzero subprojections p 1 ≤ p ≤ e and k 1 ≤ k ≤ f that are exchanged by a symmetry, and our proof is complete. Thus, we can and do assume that p ⊥ k. Therefore, since k = s n spss n , it follows from Theorem 5.12 (ii) that p and k are exchanged by a symmetry. In [13, §7], for an SK-congruence, the infimum of all the invariant elements that dominate an element is called the hull of that element. Thus, by the following theorem, the central cover mapping is an analogue for the equivalence relation ∼ of a hull mapping for an SK-congruence. Theorem. Let h ∈ P . 
Then the following conditions are mutually equivalent: (i) h is invariant. (ii) If p ∈ P , s is a symmetry in A, and sps ≤ h, then p ≤ h. (iii) If p ∈ P and p h, then p ≤ h. (iv) If q ∈ P , then q ∧ h = 0 ⇒ q ⊥ h. (v) h is central. Proof. (i) ⇒ (ii). Let h be invariant p ∈ P , and let s ∈ A be a symmetry such that h 1 := sps ≤ h. Aiming for a contradiction, we assume that p is related to h ⊥ , i.e., there exist subprojections 0 = p 1 ≤ p and 0 = q 1 ≤ h ⊥ with p 1 ∼ q 1 . Now h 1 = sps = s(p 1 ∨ (p ∧ p ⊥ 1 ))s = sp 1 s ∨ s(p ∧ p ⊥ 1 )s, whence 0 = h 2 := sp 1 s ≤ h 1 ≤ h. But then h ≥ h 2 ∼ p 1 ∼ q 1 ≤ h ⊥ , so h is related to h ⊥ , contradicting the invariance of h. Therefore, p is unrelated to h ⊥ and it follows from SK4 that p ⊥ h ⊥ , i.e., p ≤ h. (ii) ⇒ (iii). If p h, there are symmetries s 1 , s 2 , ...s n ∈ A such that s n s n−1 · · · s 1 ps 1 s 2 · · · s n ≤ h, whence (iii) follows from (ii) by induction on n. (iii) ⇒ (iv). Assume that (iii) holds and that q ⊥ h. By SK4, there exist subprojections 0 = q 1 ≤ q and 0 = h 1 ≤ h with q 1 ∼ h 1 . Then q 1 h, so q 1 ≤ h by (iii), and we have 0 = q 1 ≤ q ∧ h, whence q ∧ h = 0. (iv) ⇒ (v). Assume that (iv) holds and let q ∈ P . Then φ q ⊥ (h ⊥ ) ∧ h = (q ∨ h ⊥ ) ∧ q ⊥ ∧ h = 0, so φ q ⊥ (h ⊥ ) ≤ h ⊥ by (iii) . Therefore, q ⊥ is compatible with h ⊥ , and it follows that q is compatible with h. Since q is an arbitrary element of P , it follows that h is in the center of P . (iv) ⇒ (i). Assume that h ∈ P ∩ C(A) and, aiming for a contradiction, suppose that h is related to h ⊥ . Then there exist subprojections 0 = h 1 ≤ h and 0 ≤ q 1 ≤ h ⊥ such that h 1 ∼ q 1 . Thus there are symmetries s 1 , s 2 , ..., s n ∈ A such that q 1 = s n s n−1 · · · s 1 h 1 s 1 · · · s n−1 s n . As h 1 ≤ h, we have s 1 h 1 s 1 ≤ s 1 hs 1 = hs 2 1 = h, and by induction on n, q 1 ≤ h. Consequently, q 1 ≤ h∧h ⊥ = 0, contradicting q 1 = 0. 7.6 Corollary. If p ∈ P and c ∈ P ∩ C(A), then p and c are unrelated iff p ⊥ c. Proof. 
If p and c are unrelated, then p ⊥ c by SK4. Conversely, suppose p ⊥ c, p 1 ≤ p, c 1 ≤ c and p 1 ∼ c 1 . Then p 1 c, so p 1 ≤ c by Theorem 7.5. But p 1 ≤ p ≤ c ⊥ , so p 1 ≤ c ∧ c ⊥ = 0; hence p and c are unrelated. 7.7 Theorem. Suppose that P is a complete OML and p ∈ P . Then γp = {q ∈ P ; q p}. Proof. Since P is complete, h := {q ∈ P : q p} exists in P . If q p, then since p ≤ γp, we have q γp ∈ P ∩ C(A), whence q ≤ γp by Theorem 7.5. Therefore h ≤ γp. We claim that h is a central projection. Indeed, suppose r ∈ P , s ∈ A is a symmetry, and srs ≤ h. By Theorem 7.5 it will be sufficient to show that r ≤ h. But srs ≤ h implies that r ≤ shs = {sqs : q p}, and since q p implies sqs p, it follows that {sqs : q p} ≤ h. Therefore, r ≤ h, whence h ∈ P ∩ C(A). Obviously, p ≤ h, so γp ≤ γh = h, and we have h = γp. By [13,Definition 7.14], an SK-congruence is a dimension equivalence relation (DER) iff unrelated elements have orthogonal hulls. Therefore, by Corollary 7.8 (i), if P is complete, then the equivalence relation ∼ is an analogue of a DER. 8 The case of a complete projection lattice Some of the results above, notably Theorems 5.15 and 7.7 require the completeness of the OML P . In this section, we present some additional results involving exchangeability of projections by symmetries that also require completeness of P . Standing Assumption. In this section, we assume that P is a complete lattice. Therefore, P is centrally orthocomplete, and the central cover mapping γ : A → P ∩ C(A) exists. 8.2 Theorem. Suppose that p ∈ P , and S is the set of all symmetries in A. Then γp = {sqs : s ∈ S, q ∈ P, and q ≤ p}. Proof. Let h := {sqs : q ∈ P and q ≤ p}. If s ∈ S, q ∈ P , and q ≤ p, then q ≤ γp, whence sqs ≤ s(γp)s = γp, and therefore h ≤ γp. Aiming for a contradiction, we assume that h = γp, i.e., that r := γp − h = γp ∧ h ⊥ = 0. Then r ≤ (γp) ⊥ so by Corollary 7.8, r and p are related. 
Therefore, there exist 0 = r 1 ≤ r and 0 = p 1 ≤ p with r 1 ∼ p 1 ; hence, by Theorem 7.4 there exist 0 = r 2 ≤ r 1 ≤ r ≤ h ⊥ and 0 = p 2 ≤ p 1 ≤ p such that r 2 and p 2 are exchanged by a symmetry s ∈ A. Thus, r 2 = sp 2 s ≤ h, so r 2 ≤ h ∧ h ⊥ = 0, contradicting r 2 = 0. Lemma. If e, f ∈ P are orthogonal projections, then there exist projections e 1 , e 2 , f 1 , f 2 ∈ P such that e 1 ⊥ e 2 , f 1 ⊥ f 2 , e = e 1 ∨ e 2 = e 1 + e 2 , f = f 1 ∨ f 2 = f 1 + f 2 , e 1 and f 1 are exchanged by a symmetry, and e 2 is unrelated to f 2 , whence γe 2 ⊥ γf 2 . Proof. Let (e i , f i ) i∈I be a maximal family of pairs of projections such that (e i ) i∈I is an orthogonal family of subprojections of e, (f i ) i∈I is an orthogonal family of subprojections of f , and for each i ∈ I, there is a symmetry s i ∈ A that exchanges e i and f i . We can assume that the natural numbers 1, 2, 3 and 4 do not belong to the indexing set I. Put e 1 := i∈I e i and f 1 := i∈I f i . Then e 1 ≤ e and f 1 ≤ f , so e 1 ⊥ f 1 . By Theorem 5.15, e 1 and f 1 are exchanged by a symmetry. Let e 2 := e − e 1 and f 2 : = f − f 1 . Then e 2 ≤ e ∧ e ⊥ i and f 2 ≤ f ∧ f ⊥ i for all i ∈ I. Suppose that e 2 is related to f 2 ; then they have nonzero subprojections 0 = e 3 ≤ e 2 and 0 = f 3 ≤ f 2 with e 3 ∼ f 3 . Thus, by Theorem 7.4, there are nonzero subprojections 0 = e 4 ≤ e 3 ≤ e 2 and 0 = f 4 ≤ f 3 ≤ f 2 that are exchanged by a symmetry s 4 ∈ A. Evidently, e 4 ≤ e ∧ e ⊥ i and f 4 ≤ f ∧ f ⊥ i for all i ∈ I. But then we can enlarge the family (e i , f i ) i∈I by appending the pair (e 4 , f 4 ), contradicting maximality. Therefore e 2 is unrelated to f 2 , whence γe 2 ⊥ γf 2 by Corollary 7.8. In the next theorem we improve Lemma 8.3, by removing the hypothesis that e and f are orthogonal. Since e 13 ⊥ f 11 and e 11 ⊥ f 13 , Lemma 5.6 provides a symmetry s exchanging e 1 := e 11 + e 13 and f 1 := f 11 + f 13 . In the next theorem we improve Lemma 8.5 by removing the hypothesis that e and f are orthogonal. 
= f 1 + (f 3 + f 2 (1 − h)), (1)

whence

s(f 1 + (f 3 + f 2 (1 − h)))s = e 1 + (e 2 h + e 3 ). (2)

Multiplying both sides of (1) by h, and both sides of (2) by 1 − h, we find that s(eh)s = (f 1 +

For the case under consideration in which P is a complete OML, the generalized comparability theorem above can be used to prove that P has the relative center property. Our proof of the following theorem is suggested by the proof of [3, Proposition 1] in which ∼ is replaced by strong perspectivity.

8.7 Theorem. The OML P has the relative center property.

Proof. Let p ∈ P and suppose that d is a central element of the p-interval P [0, p]. Then pAp is a synaptic algebra with unit p, P [0, p] is the projection lattice of pAp, and d commutes with every element of pAp. Applying Theorem 8.6 to the projections d and p ∧ d ⊥ = p − d, we find that there is a symmetry s ∈ A and a central element h ∈ P ∩ C(A) such that sdhs ≤ (p − d)h and s(p − d)(1 − h)s ≤ d(1 − h). Put q := dh ∨ sdhs ∈ P . Then sqs = sdhs ∨ dh = q, so sq = qs. Also, dh ≤ d ≤ p, sdhs ≤ (p − d)h ≤ p − d ≤ p, and we have dh ≤ q ≤ p. Let s 0 := sq = qs and t := s 0 + (p − q). Then (s 0 ) 2 = q, t 2 = p, and sdhs = s 0 dhs 0 = tdht. Also, ptp = t, so t ∈ pAp, and it follows that dt = td; hence dh = tdht = sdhs ≤ (p − d)h ≤ d ⊥ , and it follows that dh = 0. An analogous argument shows that (p − d)(1 − h) = 0, and it follows that d = d + (p − d)(1 − h) = p(1 − h).

C(a) := {b ∈ A : aCb}. If B ⊆ A, then C(B) := ⋂ b∈B C(b), CC(B) := C(C(B)), and CC(b) := C(C(b)).

3.1 Lemma. Let p ∈ L, let q, r ∈ L[0, p], and let φ p q : L[0, p] → L[0, p] be the Sasaki projection determined by q on the OML L[0, p]. Then:

3.2 Lemma. If e, f, p ∈ L and if e and f are perspective in L[0, p], then e and f are perspective in L. In fact, if q ∈ L[0, p] is a common complement of e and f in L[0, p], then q ∨ p ⊥ is a common complement of e and f in L.

Lemma 3.1], the lemma holds for the case n = 2, and the general case then follows by mathematical induction.
If a ∈ A, then sgn(a) := (a^+)° − (a^−)° is called the signum of a, and by [6, Theorem 3.6], sgn(a) ∈ CC(a), (sgn(a))^2 = a°, and a = sgn(a)|a| = |a| sgn(a), the latter formula being called the polar decomposition of a.

5.3 Theorem. Let s be a symmetry, a, b ∈ A, and e, f ∈ P. Then: (i) The symmetry transformation J_s(a) := sas is an order, linear, and Jordan automorphism of A and (J_s)^{−1} = J_s. (ii) J_s restricted to P is an OML-automorphism of P. (iii) If ab = ba, then J_s(ab) = J_s(a)J_s(b).

(v) By (ii), J_s(a°) ∈ P, and since aa° = a°a = a, (iii) implies that J_s(a)J_s(a°) = J_s(aa°) = J_s(a). Suppose that f ∈ P with J_s(a)f = J_s(a). It will be sufficient to prove that J_s(a°) ≤ f. We have f J_s(a) = J_s(a) = J_s(a)f. Put e := J_s(f). Then e ∈ P and J_s(e) = f, so J_s(e)J_s(a) = J_s(a)J_s(e); hence, by (iv), ea = ae, and therefore by (iii), J_s(a) = J_s(a)f = J_s(a)J_s(e) = J_s(ae), whence a = ae and therefore a° ≤ e, whereupon J_s(a°) ≤ J_s(e) = f.

5.4 Definition. A symmetry s ∈ A is said to exchange the projections e, f ∈ P iff ses = f, i.e., iff J_s(e) = f. A partial symmetry t is said to exchange the projections e, f ∈ P iff both tet = f and tft = e hold. Let s be a symmetry and let e, f ∈ P. Clearly, s exchanges e and f iff J_s(e) = f iff J_s(f) = e iff s exchanges f and e.

5.6 Lemma. Let e, e_1, e_2, f, f_1, f_2 ∈ P with e_1 ⊥ f_2, e_2 ⊥ f_1, e_1 ⊥ e_2, f_1 ⊥ f_2, e = e_1 + e_2 and f = f_1 + f_2, and suppose that e_i and f_i are exchanged by a symmetry s_i ∈ A for i = 1, 2. Then there is a symmetry s ∈ A exchanging e and f.

Therefore, since ea = e + ef − e = ef and |a| commutes with e and f,

t(efe)t = (tef)et = |a|fet = |a|f|a| = f|a||a| = fet|a| = fea = fef.

Now let s := t + (1 − t^2) be the canonical extension of t to a symmetry (Theorem 5.2).
Since tef = |a|f, and af = ef + f − f = ef, it follows that t^2 efe = t(tef)e = t|a|fe = afe = efe, so (1 − t^2)efe = 0, and therefore s(efe)s = t(efe)t = fef.

5.9 Theorem. Let e, f ∈ P. Then: (i) φ_e f and φ_f e are exchanged by a symmetry in A. (ii) (Symmetry Parallelogram Law) e − (e ∧ f) and (e ∨ f) − f are exchanged by a symmetry in A. (iii) If e and f are complements in P, then e and f^⊥ are exchanged by a symmetry in A. (iv) If e is not orthogonal to f, there are nonzero subelements 0 ≠ e_1 ≤ e and 0 ≠ f_1 ≤ f that are exchanged by a symmetry.

Proof. (i) According to [6, Definition 4.8 and Theorem 5.6], (efe)° = φ_e(f) and (fef)° = φ_f(e). Also, by Theorem 5.8, there exists a symmetry s ∈ A such that J_s(efe) = fef, whence by Theorem 5.3 we have J_s(φ_e f) = J_s((efe)°) = (J_s(efe))° = (fef)° = φ_f e. (ii) We have e ∧ (e ∧ f)^⊥ = e ∧ (e^⊥ ∨ f^⊥) = φ_e(f^⊥) and (e ∨ f) ∧ f^⊥ = φ_{f^⊥}(e), so (ii) follows from (i). (iii) If e ∧ f = 0 and e ∨ f = 1, then the symmetry s in (ii) satisfies ses = J_s(e) = f^⊥. (iv) If e is not orthogonal to f, then 0 ≠ e_1 := φ_e f ≤ e and 0 ≠ f_1 := φ_f(e) ≤ f, and by (i), e_1 and f_1 are exchanged by a symmetry.

5.10 Lemma. If the projections e and f are complements in P and s ∈ A is a symmetry exchanging e and f, then e and f are perspective. In fact, the projection p := ½(1 + s) ∈ P corresponding to the symmetry s is a common complement of both e and f.

Theorem. Let e, f ∈ P. Then: (i) If e and f are perspective, then there are symmetries s_1, s_2 ∈ A with s_2 s_1 e s_1 s_2 = J_{s_2}(J_{s_1}(e)) = f. (ii) Suppose e and f are orthogonal and there are symmetries s_1, s_2 ∈ A with s_2 s_1 e s_1 s_2 = f. Then there is a symmetry s exchanging e and f. (iii) If e and f are both perspective and orthogonal, then there is a symmetry s exchanging e and f.

Proof. (i) Let p be a common complement of e and f. By Theorem 5.9 (iii), there exist symmetries s_1, s_2 ∈ A with s_1 e s_1 = p^⊥ = s_2 f s_2, and it follows that s_2 s_1 e s_1 s_2 = f.
(ii) Assume the hypotheses of (ii). Then e = s_1 s_2 f s_2 s_1. Let x := s_2 s_1 e and y := e s_1 s_2. As ef = fe = 0, it follows that xf = fy = 0. We have xy = f, yx = e, xe = x, and fx = s_2 s_1 e s_1 s_2 s_2 s_1 e = x, so x^2 = xefx = 0. Also, ey = y, and yf = e s_1 s_2 s_2 s_1 e s_1 s_2 = y, so y^2 = yfey = 0. Furthermore, ye = e s_1 s_2 e = s_1 s_2 f s_2 s_1 s_1 s_2 e = s_1 s_2 fe = 0 and ex = e s_2 s_1 e = e s_2 s_1 s_1 s_2 f s_2 s_1 = ef s_2 s_1 = 0. Now put s := (x + y) + 1 − e − f. Using the data above, a straightforward computation shows that s is a symmetry in A and s exchanges e and f. (iii) Part (iii) follows from (i) and (ii).

5.13 Lemma. Let e, f ∈ P with e ⊥ f, suppose that e and f are exchanged by a symmetry s ∈ A, put x := se, y := es, and p := ½(x + y + e + f). Then: (i) xy = f and yx = e. (ii) x = xe = fx, y = ey = yf, and x^2 = y^2 = ex = xf = ye = fy = 0. (iii) p ∈ P. (iv) 2epe = e and 2pep = p. (v) 2fpf = f and 2pfp = p.

Proof. (i) We have ses = f, sfs = e, so xy = f. Also yx = e s^2 e = e^2 = e.

5.15 Theorem. Suppose that the OML P is complete, let (e_i)_{i∈I} and (f_i)_{i∈I} be orthogonal families in P with e = ⋁_{i∈I} e_i and f = ⋁_{i∈I} f_i. Then, if e and f are orthogonal and if, for each i ∈ I, e_i and f_i are exchanged by a symmetry s_i ∈ A, then there is a symmetry s ∈ A exchanging e and f.

Proof. Our proof is suggested by the proof of [19, Lemma 3.1]. We begin by noting that e ⊥ f implies e_i ⊥ f_j for all i, j ∈ I. Also, for i ∈ I, we have s_i e_i s_i = f_i and s_i f_i s_i = e_i. Let x_i := s_i e_i, y_i := e_i s_i, and p_i := ½(x_i + y_i + e_i + f_i). By parts (i) and (iii) of Lemma 5.13, y_i x_i = e_i, x_i y_i = f_i, and p_i ∈ P. Put p := ⋁_{i∈I} p_i and s := 2p − 1. Then pe_i = p_i e_i and e_i p = e_i p_i for all i ∈ I. By Lemma 5.13 (iv), 2e_i p_i e_i = e_i, whence 2epe_i = 2ep_i e_i = 2e_i p_i e_i = e_i, and we have (2ep − 1)e_i = 0, whereupon (2ep − 1)° e_i = 0 for all i ∈ I.
Therefore, e_i ≤ ((2ep − 1)°)^⊥ for all i ∈ I, and it follows that e ≤ ((2ep − 1)°)^⊥, whence (2ep − 1)° e = 0, and consequently (2ep − 1)e = 0, i.e., 2epe = e. By similar arguments, 2pep = p, 2fpf = f, and 2pfp = p.

6.3 Theorem ([11, Theorem 6.8 (i)]). The center P ∩ C(A) of P is a complete boolean algebra.

6.4 Lemma. Let d ∈ P ∩ C(A) and let c be a projection in the synaptic algebra dAd = dA = Ad. Then: (i) c is a central projection in dA iff c is a central projection in A. (ii) The OML P[0, d] of projections in dA is centrally orthocomplete.

8.4 Theorem. If e and f are any two projections, then we can write orthogonal sums e = e_1 + e_2 and f = f_1 + f_2, where e_1 and f_1 are exchanged by a symmetry and γe_2 ⊥ γf_2.

Proof. Write e_11 := φ_e(f) = e ∧ (e_12)^⊥, where e_12 := e ∧ f^⊥, and write f_11 := φ_f(e) = f ∧ (f_12)^⊥, where f_12 := e^⊥ ∧ f. Then e_12 commutes with both e and (e_12)^⊥, whence e = e_11 ∨ e_12 = e_11 + e_12, and likewise f = f_11 + f_12. Also, by Theorem 5.9 (i), e_11 and f_11 are exchanged by a symmetry s_1. Moreover, since e_12 ⊥ f_12, Lemma 8.3 provides orthogonal decompositions e_12 = e_13 + e_2 and f_12 = f_13 + f_2, where e_13 and f_13 are exchanged by a symmetry s_2 and γe_2 ⊥ γf_2. Thus e = e_11 + e_12 = e_11 + e_13 + e_2 and f = f_11 + f_12 = f_11 + f_13 + f_2.

8.5 Lemma. If e and f are orthogonal projections, then there is a central projection h ∈ P such that eh and a subprojection of fh are exchanged by a symmetry s, and f(1 − h) and a subprojection of e(1 − h) are exchanged by the symmetry s.

Proof. Let e = e_1 + e_2, f = f_1 + f_2 be the decompositions of Theorem 8.4 and set h = γf_2. Then h is a central projection, f_2 h = f_2, h ⊥ γe_2, and there is a symmetry s ∈ A with s e_1 s = f_1. Thus, eh = e_1 h + e_2 h = e_1 h + e_2 γe_2 h = e_1 h, and s(eh)s = s(e_1 h)s = f_1 h ≤ fh. Also f(1 − h) = f_1(1 − h) + f_2(1 − h) = f_1(1 − h) and s(f(1 − h))s = s(f_1(1 − h))s = e_1(1 − h) ≤ e(1 − h).

8.6 Theorem (Generalized Comparability). Given any two projections e, f there is a central projection h and a symmetry s with s(eh)s ≤ fh and s(f(1 − h))s ≤ e(1 − h).

Proof. By Theorem 5.9 (i), the subprojections e_1 := φ_e(f) ≤ e and f_1 := φ_f(e) ≤ f are exchanged by a symmetry s_1. Set e_2 = e ∧ f^⊥ and f_2 = e^⊥ ∧ f. Then e_1 = e ∧ e_2^⊥, f_1 = f ∧ f_2^⊥, e_1 ⊥ e_2, f_1 ⊥ f_2, e = e_1 + e_2, and f = f_1 + f_2. Since e_2 ⊥ f_2, Lemma 8.5 applies, giving a symmetry s_2 and a central projection h with f_3 := s_2(e_2 h)s_2 ≤ f_2 h and e_3 := s_2(f_2(1 − h))s_2 ≤ e_2(1 − h). We note that f_3(1 − h) = 0 and e_3 h = 0.

We claim that the projections e_1, e_2 h and e_3 are pairwise orthogonal. Indeed, as e_1 ⊥ e_2, we have e_1 ⊥ e_2 h. Also, e_1 ⊥ e_2(1 − h), so e_1 ⊥ e_3. Moreover, e_3 ≤ 1 − h, so e_2 h ⊥ e_3. Thus, e_1 + (e_2 h + e_3) is a projection. Similarly, f_1 + (f_3 + f_2(1 − h)) is a projection. Since e_1 ≤ e ≤ e ∨ f^⊥ = f_2^⊥ and f_3 ≤ f_2, it follows that e_1 ⊥ (f_3 + f_2(1 − h)). Similarly, f_1 ⊥ (e_2 h + e_3). Thus, as s_1 exchanges e_1 and f_1 and s_2 exchanges e_2 h + e_3 and f_3 + f_2(1 − h), it follows from Lemma 5.6 that there is a symmetry s ∈ A such that s(e_1 + (e_2 h + e_3))s = f_1 + (f_3 + f_2(1 − h)), whence s(eh)s = (f_1 + f_3)h ≤ fh and s(f(1 − h))s = (e_1 + e_3)(1 − h) ≤ e(1 − h).

t^2 = p + q = |t| ∈ P, and s := t + (1 − t^2) = t + (1 − |t|) = 1 − 2q is a symmetry.

Proof. The first statement of the lemma is obvious. So assume that u := t^2 ∈ P, p := t^+ and q := t^−. Then t = p − q and pq = 0. Also, as 0 ≤ u and u^2 = u, we have p + q = |t| = (t^2)^{1/2}.

γ1 = 1, γp = 0 ⇔ p = 0, and γ(P) := {γp : p ∈ P} = P ∩ C(A). If (p_i)_{i∈I} is a family of elements in P and the supremum ⋁_{i∈I} p_i exists in P, then ⋁_{i∈I} γp_i exists in P and γ(⋁_{i∈I} p_i) = ⋁_{i∈I} γp_i.

Theorem.
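The central cover mapping γ just described can be made concrete in a small hypothetical example (our own, not from the text): in the block algebra A = M_2(R) ⊕ R, realized as block-diagonal 3×3 matrices, the central projections are exactly the block-diagonal 0/1 patterns, and γp "fills out" the blocks that p meets.

```python
# Hypothetical illustration (not from the text): the central cover in the
# synaptic algebra A = M_2(R) ⊕ R, realized as block-diagonal 3x3 matrices.
# The central projections are 0, diag(1,1,0), diag(0,0,1), and 1; the
# central cover gamma(p) is the smallest central projection above p.

def diag(*d):
    return tuple(tuple(d[i] if i == j else 0 for j in range(3)) for i in range(3))

centrals = [diag(0, 0, 0), diag(1, 1, 0), diag(0, 0, 1), diag(1, 1, 1)]

def leq(p, q):
    # p <= q for a projection p and a commuting projection q means pq = p
    prod = tuple(tuple(sum(p[i][k] * q[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))
    return prod == p

def gamma(p):
    # smallest central projection above p (the identity always qualifies)
    return min((c for c in centrals if leq(p, c)),
               key=lambda c: sum(c[i][i] for i in range(3)))

p = diag(1, 0, 0)                 # projection supported in the M_2 block
assert gamma(p) == diag(1, 1, 0)  # the cover fills out the whole M_2 block
assert gamma(diag(0, 0, 0)) == diag(0, 0, 0)   # gamma(0) = 0
assert gamma(diag(1, 0, 1)) == diag(1, 1, 1)   # support meets both blocks
```

In this toy model the properties γ1 = 1 and γp = 0 ⇔ p = 0 can be read off directly from the list of central projections.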
(i) The center P ∩ C(A) of P is sup/inf-closed in P. (ii) Let (c_i)_{i∈I} be a family in P ∩ C(A). Since P ∩ C(A) is a complete boolean algebra, the supremum p and the infimum q of (c_i)_{i∈I} exist in P ∩ C(A); moreover, p and q are, respectively, the supremum and the infimum of (c_i)_{i∈I} in P.

Proof. Part (i) follows from the fact that the center of any OML is sup/inf-closed in the OML. Using the central cover mapping, one proceeds as in the proof of [11, Theorem 5.2 (xiii)] to prove part (ii).

Definition. Let J be the set of all mappings J : A → A of the form J = J_{s_n} J_{s_{n−1}} ··· J_{s_1}, where s_1, s_2, ..., s_n are symmetries in A. Thus, J is the group under composition generated by the symmetry transformations on A. As a consequence of Theorem 5.3, the transformations J ∈ J have the following properties.

Theorem. Let J ∈ J, a, b ∈ A, and e, f ∈ P. Then: (i) J is an order, linear, and Jordan automorphism of A. (ii) J restricted to P is an OML-automorphism of P. (iii) If ab = ba, then J(ab) = J(a)J(b) = J(b)J(a) ∈ A. (iv) If J(a)J(b) = J(b)J(a), then ab = ba ∈ A.
(v) (J(a))° = J(a°).

Each J ∈ J maps a symmetry to a symmetry and, as a consequence of Theorems 5.11 and 5.12, two projections p and q are equivalent iff they are projective in the OML P. Now we investigate the extent to which the equivalence relation ∼ is a Sherstnev-Kalinin (SK-) congruence on the OML P [13, §7]. By definition, an SK-congruence satisfies axioms SK1-SK4 in [13, Definition 7.2 and Remarks].

• (SK2) Axiom SK2, complete additivity (and even finite additivity) of ∼, is problematic. Theorem 5.15, which assumes completeness of the OML P, is a weak substitute for axiom SK2, and Lemma 5.6 is a weak substitute for finite additivity.

• (SK3d) Axiom SK3d (divisibility) holds; in fact, we have the following complete divisibility property [21, p. 4]: If (e_i)_{i∈I} is an orthogonal family in P, p ∈ P, and p ∼ ⋁_{i∈I} e_i, then there exists an orthogonal family (p_i)_{i∈I} such that p = ⋁_{i∈I} p_i and p_i ∼ e_i for all i ∈ I.
Indeed, if J ∈ J with p = J(⋁_{i∈I} e_i) = ⋁_{i∈I} J(e_i), then p_i := J(e_i) ∼ e_i for all i ∈ I.

• (SK3e) Combining Theorems 3.3 and 5.12 (i), we find that ∼ satisfies axiom SK3e: If p, q, e, f ∈ P, p ⊥ q, e ⊥ f, and p ∨ q = e ∨ f, then there exist p_1, p_2, q_1, q_2 ∈ P such that p_1 ⊥ p_2, q_1 ⊥ q_2, p_1 ⊥ q_1, p_2 ⊥ q_2, p_1 ∨ p_2 =

• (SK4) Non-orthogonal projections are related; in fact, they have nonzero subprojections that are exchanged by a symmetry (Theorem 5.9 (iv)).

As we shall see, in spite of the fact that ∼ may not qualify as an SK-congruence, it does enjoy a number of important properties.

Corollary. Let P be a complete OML. Then: (i) If e, f ∈ P, then γe ⊥ γf iff e ⊥ γf iff e and f are unrelated. (ii) P is irreducible, i.e., P ∩ C(A) = {0, 1}, iff every pair of nonzero projections are related.

Proof. (i) If γe ⊥ γf, then as e ≤ γe, it follows that e ⊥ γf. If e ⊥ γf, then since γf ∈ P ∩ C(A), Corollary 7.6 implies that e and γf are unrelated, and since f ≤ γf, it follows that e is unrelated to f.
To complete the proof of (i), suppose it is not the case that γe ≤ (γf)^⊥. Then e is not below (γf)^⊥, else γe ≤ γ((γf)^⊥) = (γf)^⊥. Thus, by Theorem 7.7 and De Morgan duality, e is not below (γf)^⊥ = ⋀{q^⊥ : q ∼ f}, whence there exists a projection q ∼ f such that e is not orthogonal to q. Therefore, by SK4, e is related to q, and since q ∼ f, it follows that e is related to f. (ii) Part (ii) follows immediately from (i).

References

[1] Alfsen, E.M., Compact Convex Sets and Boundary Integrals, Springer-Verlag, New York, 1971, ISBN 0-387-05090-6.
[2] Beran, L., Orthomodular Lattices, An Algebraic Approach, Mathematics and its Applications, Vol. 18, D. Reidel Publishing Company, Dordrecht, 1985.
[3] Chevalier, G., Around the relative center property in orthomodular lattices, Proc. Amer. Math. Soc. 112 (1991) 935-948.
[4] Foulis, D.J., A note on orthomodular lattices, Portugal. Math. 21 (1962) 65-72.
[5] Foulis, D.J., Compressions on partially ordered abelian groups, Proc. Amer. Math. Soc. 132 (2004) 3581-3587.
[6] Foulis, D.J., Synaptic algebras, Math. Slovaca 60, no. 5 (2010) 631-654.
[7] Foulis, D.J. and Pulmannová, S., Monotone sigma-complete RC-groups, J. London Math. Soc. 73, no. 2 (2006) 304-324.
[8] Foulis, D.J. and Pulmannová, S., Spectral resolution in an order unit space, Rep. Math. Phys. 62, no. 3 (2008) 323-344.
[9] Foulis, D.J. and Pulmannová, S., Generalized Hermitian algebras, Int. J. Theor. Phys. 48, no. 5 (2009) 1320-1333.
[10] Foulis, D.J. and Pulmannová, S., Projections in synaptic algebras, Order 27, no. 2 (2010) 235-257.
[11] Foulis, D.J. and Pulmannová, S., Centrally orthocomplete effect algebras, Algebra Univers. 64 (2010) 283-307, DOI 10.1007/s00012-010-0100-5.
[12] Foulis, D.J. and Pulmannová, S., Regular elements in generalized Hermitian algebras, Math. Slovaca 61, no. 2 (2011) 155-172.
[13] Foulis, D.J. and Pulmannová, S., Hull mappings and dimension effect algebras, Math. Slovaca 61, no. 3 (2011) 1-38.
[14] Gudder, S., Pulmannová, S., Bugajski, S., and Beltrametti, E., Convex and linear effect algebras, Rep. Math. Phys. 44, no. 3 (1999) 359-379.
[15] Holland, S.S., Jr., Distributivity and perspectivity in orthomodular lattices, Trans. Amer. Math. Soc. 112 (1964) 330-343.
[16] Holland, S.S., Jr., An m-orthocomplete orthomodular lattice is m-complete, Proc. Amer. Math. Soc. 24 (1970) 716-718.
[17] Jenča, G. and Pulmannová, S., Orthocomplete effect algebras, Proc. Amer. Math. Soc. 131 (2003) 2663-2671.
[18] Kalmbach, G., Orthomodular Lattices, Academic Press, London, New York, 1983.
[19] Kaplansky, I., Projections in Banach algebras, Annals of Mathematics 53, no. 2 (1951) 235-249.
[20] Kaplansky, I., Any orthocomplemented complete modular lattice is a continuous geometry, Ann. of Math. 61 (1955) 524-541.
[21] Loomis, L.H., The lattice theoretic background of the dimension theory of operator algebras, Memoirs of AMS No. 13 (1955) 1-36.
[22] McCrimmon, K., A taste of Jordan algebras, Universitext, Springer-Verlag, New York, 2004, ISBN 0-387-95447-3.
[23] von Neumann, J., Continuous geometry, Princeton Univ. Press, Princeton, 1960.
[24] Pták, P. and Pulmannová, S., Orthomodular Structures as Quantum Logics, Kluwer Academic Publ., Dordrecht, Boston, London, 1991.
[25] Pulmannová, S., A note on ideals in synaptic algebras, Math. Slovaca 62, no. 6 (2012) 1091-1104.
[26] Sikorsky, R., Boolean Algebras, 2nd ed., Academic Press, New York and Springer-Verlag, Berlin, 1964.
ON UNIQUENESS AND NONUNIQUENESS OF ANCIENT OVALS

Wenkui Du, Robert Haslhofer

11 Mar 2022, arXiv:2105.13830v2 [math.DG]

In this paper, we prove that any nontrivial SO(k) × SO(n + 1 − k)-symmetric

Introduction

An ancient oval is an ancient compact noncollapsed mean curvature flow that is not self-similar. We recall that a mean curvature flow M_t is called ancient if it is defined for all t ≪ 0, and noncollapsed if it is mean-convex and there exists a constant α > 0 such that each point p ∈ M_t admits interior and exterior balls of radius at least α/H(p), cf. [SW09, And12, HK17a]. The existence of ancient ovals has been proved first by White [Whi03]. Later, Hershkovits and the second author carried out White's construction in more detail [HH16], which in particular yielded O(k) × O(n + 1 − k)-symmetric ancient ovals for every 1 ≤ k ≤ n. Moreover, Angenent predicted, based on formal matched asymptotics [Ang13], that for t → −∞ such ovals should look like small perturbations of ellipsoids with k long axes of length √(2|t| log |t|) and n − k short axes of length √(2(n − k)|t|).

Ancient ovals play an important role as potential singularity models in mean-convex flows [Whi00, Whi03, HK17a]. Moreover, they appear in the canonical neighborhood theorem, which is crucial for the construction of flows with surgery [HS09, BH16, HK17b]. Related to this, they also appeared in the recent proof of the mean-convex neighborhood conjecture [CHH, CHHW]. Furthermore, ancient ovals are also tightly related to the fine structure of singularities [CM16, CHH21a], including questions about accumulation of neckpinch singularities and finiteness of singular times.

In a recent breakthrough [ADS19, ADS20], Angenent-Daskalopoulos-Sesum proved a uniqueness theorem for uniformly two-convex ancient ovals. The mechanism for non-uniqueness is that the widths in all the R^k-directions can be different.
See Theorem 1.9 below for a more detailed description of our ancient ovals with reduced symmetry. The symmetry of our examples, possibly up to reflection, seems to be optimal. Indeed, by the Brendle-Choi neck improvement [BC19, BC] and its generalization to bubble-sheets by Zhu [Zhu20], it is reasonable to expect that any ancient oval satisfying (1.2) inherits the SO(n + 1 − k)-symmetry from its tangent flow at −∞. Finally, let us mention that Bourni-Langford-Tinaglia [BLT17, BLT20] recently obtained existence and uniqueness results for SO(n)-symmetric ancient pancakes, and moreover constructed examples of polygonal pancakes that are not SO(n)-symmetric. However, pancake solutions are collapsed, and thus less relevant for singularity analysis.

Let us now outline the main steps of our proofs: To begin with, note that for any SO(k) × SO(n + 1 − k)-symmetric ancient oval M_t (possibly after replacing k by n − k) the tangent flow at −∞ is given by (1.2). Any such ancient oval can be described by a profile function, namely

(1.3) M_t = {(x′, x″) ∈ R^k × R^{n+1−k} : |x″| = U(|x′|, t), |x′| ≤ d(t)}.

The profile function U(r, t) is positive on a maximal interval 0 ≤ r < d(t) and satisfies lim_{r→d(t)} U(r, t) = 0. In terms of the profile function, the mean curvature flow equation takes the form

(1.4) U_t = U_rr / (1 + U_r²) + ((k − 1)/r) U_r − (n − k)/U.

It is also useful to consider the renormalized profile function

(1.5) u(ρ, τ) = e^{τ/2} U(e^{−τ/2} ρ, −e^{−τ}),

which geometrically is the profile function of the renormalized flow e^{τ/2} M_{−e^{−τ}}. Moreover, we consider the inverse profile function Y(·, τ), defined as the inverse function of u(·, τ), and its zoomed-in version

(1.6) Z(s, τ) = |τ|^{1/2} (Y(|τ|^{−1/2} s, τ) − Y(0, τ)).

In Section 2, we establish the following sharp asymptotics:

Theorem 1.7 (sharp asymptotics).
Any SO(k) × SO(n + 1 − k)-symmetric ancient oval (possibly after replacing k by n − k) satisfies the following sharp asymptotics:

• Parabolic region: For every 0 < M < ∞, the renormalized profile function satisfies

u(ρ, τ) = √(2(n − k)) (1 − (ρ² − 2k)/(4|τ|)) + o(|τ|^{−1})

uniformly for ρ ≤ M and τ ≪ 0. In particular, we have d(t) = √(2|t| log |t|) (1 + o(1)) for t → −∞.

Let us now outline the proof of the sharp asymptotics in Theorem 1.7. Generally speaking, while the sharp asymptotics in [ADS19] are based on fine neck analysis, our proof of Theorem 1.7 is based on fine cylindrical analysis. To begin with, observe that by (1.2) we have

(1.7) lim_{τ→−∞} u(·, τ) = √(2(n − k))

smoothly and locally uniformly. We then consider the function

(1.8) v(ρ, τ) := u(ρ, τ) − √(2(n − k)).

The evolution of v is governed by the radial Ornstein-Uhlenbeck operator

(1.9) L = ∂²/∂ρ² + ((k − 1)/ρ) ∂/∂ρ − (ρ/2) ∂/∂ρ + 1.

This operator is self-adjoint with respect to the Gaussian L²-norm

(1.10) ‖f‖²_H = ∫₀^∞ f(ρ)² e^{−ρ²/4} ρ^{k−1} dρ.

The operator L has one positive eigenvalue 1 with eigenfunction 1, one zero eigenvalue with eigenfunction ρ² − 2k, and all other eigenvalues are negative. Using results from [DH21], we show that the neutral eigenfunction

(1.11) ρ² − 2k

must be dominant for τ → −∞. We next consider the truncated function

(1.12) v̂(ρ, τ) = v(ρ, τ) φ(ρ/ρ(τ)),

where φ is a suitable cutoff function and ρ(τ) is a suitable cylindrical radius function, similarly to the neck radius function in [CHH]. Carefully analyzing the evolution of the coefficient α(τ) in the expansion

(1.13) v̂(ρ, τ) = α(τ)(ρ² − 2k) + ...

we then obtain the sharp asymptotics in the parabolic region. Next, to promote information from the parabolic region to the intermediate region, we use suitable barriers and supersolutions.
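As a quick sanity check (our own computation, using only the definition of L above), one can verify directly that 1 and ρ² − 2k are eigenfunctions of the radial Ornstein-Uhlenbeck operator with eigenvalues 1 and 0, respectively:

```latex
\mathcal{L}\,1 = 1, \qquad
\mathcal{L}(\rho^2 - 2k)
  = 2 + \frac{k-1}{\rho}\,(2\rho) - \frac{\rho}{2}\,(2\rho) + (\rho^2 - 2k)
  = 2 + 2(k-1) - \rho^2 + \rho^2 - 2k = 0.
```

This is why the expansion (1.13) isolates exactly the slowly decaying (neutral) part of v.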
While the upper bound

(1.14) lim sup_{τ→−∞} ū(σ, τ) ≤ √((n − k)(2 − σ²))

follows from a straightforward adaption of the argument in [ADS19, Section 6], establishing a matching lower bound is more delicate. To this end, we take the (n + 1 − k)-dimensional barriers Σ_a from [ADS19], shift them by η, and rotate them, to build SO(k) × SO(n + 1 − k)-symmetric barriers Γ^η_a, related to the ones in [CHH21b, DH21]. Carefully choosing η = η(τ) → 0 slowly enough as τ → −∞, we are then able to derive the lower bound

(1.15) lim inf_{τ→−∞} ū(σ, τ) ≥ √((n − k)(2 − σ²)).

In particular, recalling that ρ = √(log |t|) σ, the sharp asymptotics in the intermediate region imply that d(t) = max_{M_t} |x′| for t → −∞ satisfies

(1.16) d(t) = √(2|t| log |t|) (1 + o(1)).

Finally, rescaling by λ(t) = √(log |t|/|t|) around any point p_t ∈ M_t that maximizes |x′| and passing to a limit, we obtain an ancient noncollapsed flow that splits off k − 1 lines. Together with the classification from Brendle-Choi [BC19, BC], this yields the sharp asymptotics in the tip region. This concludes the outline of the proof of Theorem 1.7.

In Section 3, we upgrade the sharp asymptotics to uniqueness by adapting the argument from [ADS20, Sections 3-8] to our setting as follows: Let M_1(t) and M_2(t) be two SO(k) × SO(n + 1 − k)-symmetric ancient ovals. The goal is to show that, after suitable time-shift and parabolic dilation, the difference of the renormalized profile functions w := u_1 − u_2 vanishes, and so does the difference of the inverse profile functions W := Y_1 − Y_2. For estimating the difference, it is useful to analyze different regions in turn, similarly as in [ADS20]. Specifically, fixing θ > 0 sufficiently small and L < ∞ sufficiently large, we consider the cylindrical region

(1.17) C_θ = {u_1 ≥ θ/2},

and the tip region

(1.18) T_θ = {u_1 ≤ 2θ}.

The tip region can be further decomposed as the union of the soliton region S_L = {u_1 ≤ L/√|τ|} and the collar region K_{θ,L} = {L/√|τ| ≤ u_1 ≤ 2θ}.
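The parabolic and intermediate asymptotics are mutually consistent; with σ = ρ/√|τ| (our own check, using the expansions above), both asymptotic formulas expand to the same expression to leading order in σ:

```latex
\sqrt{2(n-k)}\Bigl(1-\frac{\rho^2-2k}{4|\tau|}\Bigr)
 = \sqrt{2(n-k)}\Bigl(1-\frac{\sigma^2}{4}\Bigr)+O(|\tau|^{-1}),
\qquad
\sqrt{(n-k)(2-\sigma^2)}
 = \sqrt{2(n-k)}\,\sqrt{1-\tfrac{\sigma^2}{2}}
 = \sqrt{2(n-k)}\Bigl(1-\frac{\sigma^2}{4}+O(\sigma^4)\Bigr).
```

In particular, evaluating the intermediate profile where it vanishes, at σ = √2, is what produces the length scale d(t) = √(2|t| log |t|)(1 + o(1)).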
The analysis in the cylindrical region is governed by the radial Ornstein-Uhlenbeck operator L from (1.9). To deal with the nonnegative eigenvalues, for any β, γ ∈ R we consider the time-shifted and parabolically dilated flow

(1.19) M₂^{βγ}(t) = e^{γ/2} M₂(e^{−γ}(t − β)).

We denote by u₂^{βγ} the renormalized profile function of M₂^{βγ}(t). In Section 3.2, using a degree theory argument similarly as in [ADS20, Section 4], we show that given any τ₀ ≪ 0, we can find suitable parameters β, γ such that at time τ₀ the difference w = u₁ − u₂^{βγ} satisfies the orthogonality condition

(1.20) ⟨1, φ_C w⟩_H = 0 = ⟨ρ² − 2k, φ_C w⟩_H,

where φ_C is a suitable cutoff function that localizes in the cylindrical region. To establish uniqueness, it then suffices to prove:

Theorem 1.8 (uniqueness). If the parameters β, γ are chosen as above, then the difference functions w = u₁ − u₂^{βγ} and W = Y₁ − Y₂^{βγ} vanish identically.

The proof is based on energy estimates. In addition to the Gaussian L²-norm from (1.10), we also consider the Gaussian H¹-norm

(1.21) ‖f‖²_D = ∫₀^∞ (f(ρ)² + f_ρ(ρ)²) e^{−ρ²/4} ρ^{k−1} dρ,

and the Gaussian H^{−1}-norm ‖f‖_{D*} = sup_{‖g‖_D ≤ 1} ⟨f, g⟩. Moreover, for time-dependent functions, we consider the corresponding parabolic norms

(1.22) ‖f‖_{X,∞} := sup_{τ≤τ₀} ( ∫_{τ−1}^τ ‖f(·, σ)‖²_X dσ )^{1/2},

where X = H, D or D*. Furthermore, in the tip region we consider the norm

(1.23) ‖F‖_{2,∞} := sup_{τ≤τ₀} (1/|τ|^{1/4}) ( ∫_{τ−1}^τ ∫₀^{2θ} F(u, σ)² e^{μ(u,σ)} du dσ )^{1/2},

where μ is a carefully chosen weight with μ(u, τ) = −¼ Y₁(u, τ)² for u ≥ θ/2.

In Section 3.3, we prove that in the cylindrical region the truncated difference w_C = φ_C w satisfies the energy estimate

(1.24) ‖ŵ_C‖_{D,∞} ≤ ε (‖w_C‖_{D,∞} + ‖w χ_{D_θ}‖_{H,∞}),

where D_θ = {θ/2 ≤ u₁ ≤ θ} and ŵ_C = w_C − (⟨w_C, ρ² − 2k⟩_H / ‖ρ² − 2k‖²_H) (ρ² − 2k). The proof is similar to the one in [ADS20, Section 6].
There are some extra terms, coming for example from the differential operator ((k − 1)/ρ) ∂_ρ for k ≥ 2, but the energy method is robust enough to easily handle these extra terms.

In Section 3.4, we prove that in the tip region the truncated difference W_T = φ_T W satisfies the energy estimate

(1.25) ‖W_T‖_{2,∞} ≤ (C/√|τ₀|) ‖W χ_{[θ,2θ]}‖_{2,∞}.

The proof is similar to the one in [ADS20, Section 7]. There are again some new terms, but the energy method is robust enough to handle them. Finally, in Section 3.5, we conclude similarly as in [ADS20, Section 8]. Namely, combining the inequalities (1.24) and (1.25), and the equivalence of norms in the transition region, we conclude that w and W vanish identically.

The argument we sketched above crucially relies on certain a priori estimates. We establish the needed a priori estimates in Section 3.1. Specifically, the two main a priori estimates are the quadratic concavity estimate

(1.26) (u²)_ρρ ≤ 0,

and the cylindrical estimate, which says that for τ ≪ 0 we have

(1.27) |u_ρ| + u|u_ρρ| ≤ ε

at all points where u(ρ, τ) ≥ L/√|τ|. The proof of (1.26) relies on the maximum principle, and is thus less robust. For k ≥ 2 there are some new terms compared to [ADS20, Section 5], but fortunately it turns out that these new terms have the good sign. The proof of the cylindrical estimate (1.27) for k = 1 in [ADS19, ADS20] was based on a rather involved combination of the maximum principle and the Huisken-Sinestrari convexity estimate. Here, we observe instead that any suitable limit away from the tips splits off k lines and apply [HK17a, Lemma 3.14]; this considerably simplifies the argument even for k = 1.

In Section 4, we prove the existence of ancient ovals with reduced symmetry by combining methods from [Whi03, HH16] and Hoffman-Ilmanen-Martin-White [HIMW19].
The idea is to consider ovals that are obtained as limits of flows with ellipsoidal initial conditions, and to do a continuity argument and degeneration analysis in the ellipsoidal parameters. For example in R 4 our construction would, loosely speaking, produce a one-parameter family of 3d-ovals that interpolates between R×2d-oval and 2d-oval×R. To describe this in more detail, for any numbers a 1 , . . . , a k > 0 with a 1 + . . . + a k = 1 and any ℓ < ∞ consider the ellipsoid (1.28) E ℓ,a := { x ∈ R n+1 : j≤k (a 2 j /ℓ 2 ) x 2 j + j≥k+1 x 2 j = 2(n − k) }. We choose time-shifts t ℓ,a and dilation factors λ ℓ,a so that the flow (1.29) M ℓ,a t := λ ℓ,a · E ℓ,a λ −2 ℓ,a (t−t ℓ,a ) , where (E ℓ,a s ) denotes the mean curvature flow starting from the ellipsoid E ℓ,a , satisfies the normalization (1.30) ∫ M ℓ,a −1 (4π) −n/2 e −|x| 2 /4 = (σ n−k + σ n−k+1 )/2, where σ j denotes the entropy of the j-sphere. We then consider sequences a i = (a i 1 , . . . , a i k ) as above and ℓ i → ∞ and introduce the class of flows (1.31) A • := lim i→∞ M ℓ i ,a i t : the limit along a i , ℓ i exists and is compact . We prove that this gives ancient ovals with the desired properties, in particular that the class A • contains a (k − 1)-parameter family of geometrically distinct elements with prescribed values of the reciprocal width ratio: Theorem 1.9 (existence with reduced symmetry and further properties). Given any numbers µ 1 , . . . , µ k > 0 with µ 1 + . . . + µ k = 1 there exists an M t ∈ A • satisfying (1.32) (max x∈M −1 |x j |) −1 / Σ k j ′ =1 (max x∈M −1 |x j ′ |) −1 = µ j for 1 ≤ j ≤ k. In particular, whenever the reciprocal width ratio parameters µ 1 , . . . , µ k are not all equal to 1/k, then the solution is not O(k) × O(n + 1 − k)-symmetric. Furthermore, for any M t ∈ A • we have the following properties: • It is an ancient compact noncollapsed mean curvature flow. • It is Z k 2 × O(n + 1 − k)-symmetric. • It is uniformly (k + 1)-convex and the tangent flow at −∞ is lim λ→0 λM λ −2 t = R k × S n−k ( 2(n − k)|t|). • It becomes extinct at time 0 and satisfies ∫ M −1 (4π) −n/2 e −|x| 2 /4 = (σ n−k + σ n−k+1 )/2.
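As a sanity check on the normalization in (1.32), one can evaluate the reciprocal width ratios of Theorem 1.9 on the initial ellipsoid E ℓ,a itself (rather than on the evolved time −1 slice): there max|x j | = ℓ√(2(n − k))/a j for j ≤ k, so the normalized reciprocal widths return exactly (a 1 , . . . , a k ). A short sketch with hypothetical sample parameters:

```python
import math

n, k, ell = 7, 3, 25.0        # sample values (illustration only)
a = (0.5, 0.3, 0.2)           # ellipsoidal parameters with a_1 + ... + a_k = 1

# On E^{l,a}, the maximum of |x_j| (j <= k) is attained on the j-th axis,
# where (a_j^2/l^2) x_j^2 = 2(n-k), i.e. max|x_j| = l*sqrt(2(n-k))/a_j.
widths = [ell * math.sqrt(2.0 * (n - k)) / aj for aj in a]

recip = [1.0 / w for w in widths]
ratios = [r / sum(recip) for r in recip]   # normalized reciprocal widths
```

Since the a j sum to 1, the ratios coincide with the a j themselves; for the evolved flows the reciprocal width ratio map is of course no longer the identity, which is exactly what the degree argument handles.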
To show this, we relax the condition that the ellipsoidal parameters have to be strictly positive, i.e. we now consider a in the compact (k − 1)-simplex ∆ k−1 , and study the reciprocal width ratio map (1.33) F ℓ k : ∆ k−1 → ∆ k−1 , a → (sup x∈M a,ℓ −1 |x j |) −1 / Σ k j ′ =1 (sup x∈M a,ℓ −1 |x j ′ |) −1 . Analyzing degenerations and using induction on the cell dimension we prove that F ℓ k maps the boundary ∂∆ k−1 onto itself. Together with continuity this yields that F ℓ k is onto, and thus allows us to obtain existence with prescribed reciprocal width ratios. This concludes our outline of the proofs. 2. Sharp asymptotics The goal of this section is to prove Theorem 1.7 (sharp asymptotics). Let M t = ∂K t be an ancient compact noncollapsed mean curvature flow in R n+1 . By [HK17a, Theorem 1.14] the hypersurfaces M t are smooth and convex. Assuming the flow is SO(k) × SO(n + 1 − k)-symmetric, but not a family of round shrinking spheres, it follows from [HK17a] that (possibly after replacing k by n − k) the tangent flow of {K t } at −∞ is (2.1) lim λ→0 λK λ −2 t = R k × D n+1−k ( 2(n − k)|t|). We can assume throughout the paper that k ≥ 2, since the case k = 1 has already been dealt with in [ADS19, ADS20]. We will often write points in R n+1 in the form x = (x ′ , x ′′ ), where (2.2) x ′ = (x 1 , . . . , x k ) and x ′′ = (x k+1 , . . . , x n+1 ). We consider the renormalized flow (2.3)M τ = e τ 2 M −e −τ . Then, as τ → −∞ the hypersurfacesM τ converge to the cylinder (2.4) Γ = R k × S n−k ( 2(n − k)). Similarly as in the introduction, we denote by u(ρ, τ ) the renormalized profile function. In other words (2.5)M τ = (y ′ , y ′′ ) ∈ R k × R n+1−k : |y ′′ | = u(|y ′ |, τ ), |y ′ | ≤d(τ ) , whered(τ ) = e τ 2 d(−e −τ ) . In particular, thanks to (2.4) we have (2.6) lim τ →−∞ u(·, τ ) = 2(n − k) smoothly and uniformly on compact sets. 2.1. Foliations and barriers. In this section, we discuss some foliations and barriers that we will use frequently.
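Before turning to the foliations, a quick check that the renormalization (2.3) maps the shrinking cylinder from (2.1) to the static cylinder (2.4): the radius √(2(n − k)|t|) at time t = −e −τ , multiplied by e τ /2 , is identically √(2(n − k)). A numerical sketch (the sample dimensions are illustrative assumptions):

```python
import math

n, k = 7, 3   # sample dimensions (illustration only)

def radius_unrescaled(t):
    # radius of the shrinking cylinder R^k x S^{n-k}(sqrt(2(n-k)|t|)), t < 0
    return math.sqrt(2.0 * (n - k) * abs(t))

def radius_renormalized(tau):
    # renormalization (2.3): rescale by e^{tau/2} at time t = -e^{-tau}
    t = -math.exp(-tau)
    return math.exp(tau / 2.0) * radius_unrescaled(t)

static = math.sqrt(2.0 * (n - k))   # radius of the cylinder Gamma in (2.4)
radii = [radius_renormalized(tau) for tau in (-30.0, -10.0, 0.0, 5.0)]
```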
This is closely related to [CHH21b,DH21], but for convenience of the reader we give a self-contained exposition. We recall from Angenent-Daskalopoulos-Sesum [ADS19, Figure 1 and Section 8] that there is some L 0 > 1 such that for every a ≥ L 0 and b > 0, there are d-dimensional shrinkers in R d+1 , Σ a = {surface of revolution with profile r = u a (y 1 ), 0 ≤ y 1 ≤ a}, (2.7)Σ b = {surface of revolution with profile r =ũ b (y 1 ), 0 ≤ y 1 < ∞}. Here, the parameter a captures where the concave function u a meets the y 1 -axis, namely u a (a) = 0, and the parameter b is the asymptotic slope of the convex functionũ b , namely lim y 1 →∞ũ ′ b (y 1 ) = b. We now choose d = n + 1 − k, and shift and rotate these d-dimensional surfaces to construct a suitable foliation in R n+1 : Definition 2.1 (cylindrical foliation, c.f. [CHH21b,DH21]). Given η > 0, for every a ≥ L 0 , we denote by Γ η a the SO(k) × SO(n + 1 − k)-symmetric hypersurface in R n+1 given by (2.8) Γ η a = {(y ′ , y ′′ ) : (|y ′ | − η, y ′′ ) ∈ Σ a }. Similarly, for every b > 0, we denote byΓ η b the SO(k) × SO(n + 1 − k)symmetric hypersurface in R n+1 given by (2.9)Γ η b = {(y ′ , y ′′ ) : (|y ′ | − η, y ′′ ) ∈Σ b }. Now, we set L 1 = L 0 + 2(k − 1)η −1 . Lemma 2.2 (foliation lemma, c.f. [CHH21b, DH21]). There exist constants a 0 > L 0 , b 0 < ∞ and δ > 0 such that the families of hypersurfaces {Γ η a } a≥a 0 and {Γ η b } b≤b 0 together with the cylinder Γ = R k × S n−k ( 2(n − k)) foliate the region Ω = (y 1 , . . . , y n+1 )|y 2 1 + . . . + y 2 k ≥ L 2 1 , y 2 k+1 + . . . + y 2 n+1 ≤ 2(n − k) + δ . Moreover, denoting by ν fol the outward unit normal of this foliation, we have (2.10) div(ν fol e −|y| 2 /4 ) ≤ 0 inside the cylinder, and (2.11) div(ν fol e −|y| 2 /4 ) ≥ 0 outside the cylinder. Proof. By [ADS19, Lemma 4.9 and 4.10] there exist constants a 0 > L 0 , b 0 < ∞ and δ > 0 such that the shrinkers {Σ a } a≥a 0 and {Σ b } b≤b 0 together with the cylinder Σ = {x 2 2 + . . . 
x 2 d+1 = 2(d − 1)} ⊂ R d+1 foliate the region (y 1 , . . . , y d+1 )|y 1 ≥ L 0 , y 2 2 + . . . + y 2 d+1 ≤ 2(d − 1) + δ . Hence, by Definition 2.1 our hypersurfaces foliate the region Ω. Now, observe that for every element in Γ * in the foliation of Ω, we have (2.12) div(ν fol e −|y| 2 /4 ) = H Γ * − 1 2 y, ν fol e −|y| 2 /4 . By symmetry, it suffices to compute the curvatures H Γ * of Γ * in the region {y 1 > 0, y 2 = · · · = y k = 0}, where we can identify points and unit normals in Γ * with the corresponding ones in Σ * , by disregarding the y 2 , . . . , y k components. The relation between the mean curvature of a surface Σ * and its (unshifted) rotation Γ * ⊂ R n+1 on points with y 1 > 0, y 2 = · · · = y k = 0 is given by (2.13) H Γ * = H Σ * + k − 1 y 1 e 1 , ν fol , where e 1 = (1, 0, . . . , 0) ∈ R n−k+2 . For the shrinkers Σ a , the concavity of u a implies e 1 , ν fol ≥ 0, so we infer that (2.14) H Γ η a = 1 2 y − ηe 1 , ν fol + k − 1 y 1 e 1 , ν fol ≤ 1 2 y, ν fol , since in Ω ∩ {y 1 > 0, y 2 = · · · = y k = 0} we have y 1 ≥ L 1 ≥ 2(k − 1)η −1 . For the shrinkersΣ b , the convexity ofũ b implies that e 1 , ν fol ≤ 0, so similarly we have (2.15) HΓη b ≥ 1 2 y, ν fol . This proves the lemma. Corollary 2.3 (inner barriers, c.f. [CHH21b, DH21]). Let {K τ } τ ∈[τ 1 ,τ 2 ] be compact domains, whose boundary evolves by renormalized mean curvature flow. If Γ η a is contained in K τ for every τ ∈ [τ 1 , τ 2 ], and ∂K τ ∩ Γ η a = ∅ for all τ < τ 2 , then (2.16) ∂K τ 2 ∩ Γ η a ⊆ ∂Γ η a . Proof. Lemma 2.2 implies that the vector H + y ⊥ 2 points outwards of Γ η a . The result now follows from the maximum principle. 2.2. Parabolic region. The goal of this subsection is to establish the sharp asymptotics in the parabolic region. Our first aim is to find a suitable cylindrical radius function. 
Note that thanks to (2.6) there exists a smooth positive function ρ 0 (τ ) satisfying (2.17) lim τ →−∞ ρ 0 (τ ) = ∞ and − ρ 0 (τ ) ≤ ρ ′ 0 (τ ) ≤ 0, such that for all τ ≪ 0 we have (2.18) u(·, τ ) − 2(n − k) C 2n ([0,2ρ 0 (τ )]) ≤ ρ 0 (τ ) −2 . In the following discussion, we let (2.19) v(ρ, τ ) := u(ρ, τ ) − 2(n − k). SinceM τ moves by renormalized mean curvature flow, the evolution of the graph function v is governed by the radial Ornstein-Uhlenbeck operator (2.20) L = ∂ 2 ∂ρ 2 + k − 1 ρ ∂ ∂ρ − ρ 2 ∂ ∂ρ + 1. Denote by H the Hilbert space of functions f on R + satisfying (2.21) f 2 H = ∞ 0 f 2 e − ρ 2 4 ρ k−1 dρ < ∞. Analyzing the spectrum of L, we can decompose the Hilbert space H as (2.22) H = H + ⊕ H 0 ⊕ H − , where (2.23) H + = span{1}, and (2.24) H 0 = span{ρ 2 − 2k}. We fix a smooth nonnegative cutoff function ϕ satisfying ϕ(s) = 1 for s ≤ 1 and ϕ(s) = 0 for s ≥ 2, and consider the quantities (2.25) α(τ ) = ∞ 0 v 2 (ρ, τ )ϕ 2 ρ ρ 0 (τ ) e − ρ 2 4 ρ k−1 dρ 1/2 , and (2.26) β(τ ) = sup σ≤τ α(σ). Moreover, we set (2.27) ρ(τ ) = β(τ ) −1/5 . Proposition 2.4 (cylindrical radius). The function ρ(τ ) is an admissible cylindrical radius function. Namely, we have (2.28) lim τ →−∞ ρ(τ ) = ∞, and for τ ≪ 0 it holds that (2.29) − ρ(τ ) ≤ ρ ′ (τ ) ≤ 0 and (2.30) v(·, τ ) C 2n ([0,2ρ(τ )]) ≤ ρ(τ ) −2 . Proof. First, iterating the inverse Poincare inequality from [DH21, Proposition 4.1], similarly as in [CHH, Lemma 4.17], we see that (2.31) ∞ 0 v 2 (ρ, τ )ϕ 2 ρ ρ 0 (τ ) e − ρ 2 4 ρ k−1 dρ ≤ C L 1 0 v 2 (ρ, τ )e − ρ 2 4 ρ k−1 dρ. Since the right hand side converges to 0 as τ → −∞, this implies (2.32) lim τ →−∞ ρ(τ ) = ∞. Also, note that ρ(τ ) is monotone as a direct consequence of the definitions. Next, by [DH21, Proposition 4.2] the truncated graph function (2.33)v(ρ, τ ) = v(ρ, τ )ϕ ρ ρ 0 (τ ) satisfies the evolution equation (2.34)v τ = Lv +Ê, where the error term can be estimated by (2.35) Ê H ≤ Cρ −1 0 v H . Since the largest eigenvalue of L is 1, it follows that (2.36) d dτ α 2 ≤ 2α 2 + o(α 2 ).
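The spectral facts behind the splitting (2.22)-(2.24) can be verified pointwise: the operator L from (2.20) satisfies L1 = 1 and L(ρ 2 − 2k) = 0. A finite-difference check, with the sample dimension k = 3 as an illustrative assumption:

```python
import math

k = 3   # sample dimension (illustration only)

def L(f, rho, h=1e-4):
    # radial Ornstein-Uhlenbeck operator (2.20) via central differences
    fpp = (f(rho + h) - 2.0 * f(rho) + f(rho - h)) / (h * h)
    fp = (f(rho + h) - f(rho - h)) / (2.0 * h)
    return fpp + ((k - 1) / rho - rho / 2.0) * fp + f(rho)

const = lambda r: 1.0                  # spans H_+, eigenvalue +1
neutral = lambda r: r * r - 2.0 * k    # spans H_0, eigenvalue 0

err_neutral = max(abs(L(neutral, r)) for r in (0.5, 1.0, 2.5, 4.0))
err_const = max(abs(L(const, r) - 1.0) for r in (0.5, 1.0, 2.5, 4.0))
```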
In particular, for τ ≪ 0 this gives (2.37) − ρ(τ ) ≤ ρ ′ (τ ). Finally, to prove (2.30), we need the following claim: Claim 2.5 (barrier estimate, c.f. [CHH,Proposition 4.18]). There are constants c > 0 and C < ∞ such that (2.38) |v(ρ, τ )| ≤ Cβ(τ ) 1/2 holds for ρ ≤ cβ(τ ) −1/4 and τ ≪ 0. Proof of Claim 2.5. By parabolic estimates (c.f. [CHH21b, Appendix A]) there is a constant K < ∞ such that for τ ≪ 0 we have (2.39) v(·, τ ) L ∞ ([0,2L 1 ]) ≤ Kβ(τ ). Now, givenτ ≪ 0, consider the hypersurface Γ η a from the cylindrical foliation (Definition 2.1) with parameters η = 1 and (2.40) a = c 0 (Kβ(τ )) −1/2 . By [ADS19, Lemma 4.4], provided we choose c 0 > 0 small enough, the profile function u a of the shrinker Σ a satisfies (2.41) u a (L 1 − 1) ≤ 2(n − k) − Kβ. Combining this with (2.39), the inner barrier principle (Corollary 2.3) implies that Γ η a is enclosed byM τ for ρ ≥ L 1 and τ ≤τ . Since u 2 a ( √ a) ≥ 2(n − k) − 2 a , c.f. [CHH,equation (195)], this yields (2.42) v(ρ, τ ) ≥ −Cβ(τ ) 1 2 for ρ ≤ cβ − 1 4 and τ ≪ 0. Finally, by convexity, remembering also (2.39), the lower bound implies a corresponding upper bound. This finishes the proof of the claim. Using Claim 2.5 together with convexity and standard derivative estimates, and recalling also that ρ = β −1/5 , we conclude that (2.43) v(·, τ ) C 2n ([0,2ρ(τ )]) ≤ ρ(τ ) −2 for all τ ≪ 0. This finishes the proof of the proposition. From now on, we work with the truncated graph function (2.44)v(ρ, τ ) = v(ρ, τ )ϕ ρ ρ(τ ) , where ρ(τ ) denotes the cylindrical radius function from Proposition 2.4. Proposition 2.6 (parabolic region). The functionv satisfies (2.45) lim τ →−∞ |τ |v(ρ, τ ) + 2(n − k)(ρ 2 − 2k) 4 H = 0 . Moreover, there exists T > −∞ and an increasing function δ : (−∞, T ) → (0, 1/100) with lim τ →−∞ δ(τ ) = 0 such that for τ ≤ T we have (2.46) sup ρ≤δ −1 (τ ) v(ρ, τ ) + 2(n − k)(ρ 2 − 2k) 4|τ | ≤ δ(τ ) |τ | . Proof. 
By [DH21, Proposition 5.1], temporarily viewingv(·, τ ) as a function on R k , every sequence τ i → −∞ has a subsequence τ im such that (2.47) lim τ im →−∞v (·, τ im ) v(·, τ im ) H = yQy ⊤ − 2trQ, in H-norm, where Q = {q ij } is a normalized semi-negative definite k × k matrix. By symmetry, all eigenvalues of Q must be equal. Hence, the subsequential convergence entails full convergence, and we have (2.48) lim τ →−∞v (·, τ ) v(·, τ ) H = −ψ 0 in H-norm, where (2.49) ψ 0 = c 0 ρ 2 − 2k , with (2.50) c 0 = ρ 2 − 2k −1 H . Now, we consider the evolution of the coefficient (2.51) α 0 := v, ψ 0 H . Similarly as in [CHH, Lemma 4.20 and Proposition 4.21], we have (2.52)v τ = Lv − 1 2 2(n − k)v 2 + E, where (2.53) | E, ψ 0 H | ≤ Cβ 2+ 1 5 . Using also Lψ 0 = 0, we infer that (2.54) d dτ α 0 = − 1 2 2(n − k) v 2 , ψ 0 H + E, ψ 0 H = −cα 2 0 + o(β 2 ), where (2.55) c = ψ 2 0 , ψ 0 H 2 2(n − k) . Claim 2.7 (expansion coefficient). For τ → −∞, we have (2.56) α 0 (τ ) = −1/(c|τ |) + o(|τ | −1 ). Proof of Claim 2.7. By (2.48), we have α 0 (τ ) < 0 for τ ≪ 0. Now, we consider the function (2.57) β 0 (τ ) := sup σ≤τ |α 0 |(σ). For τ ≪ 0, equation (2.54) implies (2.58) d dτ α 0 + cα 2 0 ≤ (c/10) β 2 0 . Then, we fix T ≪ 0 and consider the set I = {τ ≤ T : −α 0 (τ ) = β 0 (τ )}. Note that I ≠ ∅, since α 0 (τ ) → 0 as τ → −∞. Also, clearly I is a closed set. For each τ 0 ∈ I, by (2.58) we have d dτ α 0 (τ 0 ) < 0. Together with α 0 < 0, this implies that there is some δ > 0 such that (τ 0 − δ, τ 0 ] ⊂ I. Hence, I is also open, and thus I = (−∞, T ]. Therefore, for τ ≪ 0 we get (2.59) d dτ α 0 = −cα 2 0 + o(α 2 0 ). Solving this ODE yields the claim. By Claim 2.7, we see that (2.60) v(ρ, τ ) = α 0 ψ 0 + o(|τ | −1 ) = −c 0 (ρ 2 − 2k)/(c|τ |) + o(|τ | −1 ) holds in H-norm. To explicitly compute the prefactor, we consider the eigenfunctions (2.61) ψ +1 = c +1 1, ψ −1 = c −1 ρ 4 − (8 + 4k)ρ 2 + (8 + 4k)k .
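The quartic in (2.61) is exactly the eigenfunction of L with eigenvalue −1: writing h(ρ) = ρ 4 − (8 + 4k)ρ 2 + (8 + 4k)k, one can check Lh = −h, as well as the expansion (ρ 2 − 2k) 2 = h + 8(ρ 2 − 2k) + 8k used to compute the prefactor. A numerical sketch (the dimension k = 3 is an illustrative assumption):

```python
k = 3   # sample dimension (illustration only)

def h(r):
    # the quartic from (2.61), with the normalization constant c_{-1} dropped
    return r**4 - (8 + 4 * k) * r**2 + (8 + 4 * k) * k

def L(f, rho, eps=1e-4):
    # radial Ornstein-Uhlenbeck operator (2.20) via central differences
    fpp = (f(rho + eps) - 2.0 * f(rho) + f(rho - eps)) / (eps * eps)
    fp = (f(rho + eps) - f(rho - eps)) / (2.0 * eps)
    return fpp + ((k - 1) / rho - rho / 2.0) * fp + f(rho)

# eigenvalue check: L h = -h
eig_err = max(abs(L(h, r) + h(r)) for r in (0.5, 1.5, 3.0))

# expansion of the squared neutral mode in eigenfunctions
poly_err = max(abs((r * r - 2 * k) ** 2 - (h(r) + 8 * (r * r - 2 * k) + 8 * k))
               for r in (0.3, 1.0, 2.2, 4.1))
```

Since eigenfunctions for distinct eigenvalues are H-orthogonal, the expansion immediately gives the identity ψ 2 0 , ψ 0 H = 8c 0 stated in (2.63).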
A straightforward computation shows that Lψ ±1 = ±ψ ±1 , and (2.62) ψ 0 c 0 2 = ψ −1 c −1 + 8 ψ 0 c 0 + 8k ψ +1 c +1 . Together with the fact that eigenfunctions corresponding to different eigenvalues are orthogonal this yields (2.63) ψ 2 0 , ψ 0 H = 8c 0 . Hence,v (ρ, τ ) = − 2(n − k)(ρ 2 − 2k) 4|τ | + o(|τ | −1 ) in H-norm, which proves (2.45). Finally, together with standard parabolic estimates this yields (2.46). This finishes the proof of the proposition. Intermediate region. To capture the intermediate region, we consider the function (2.64)ū(σ, τ ) = u(|τ | 1 2 σ, τ ). The goal of this subsection is to prove the following proposition: Proposition 2.8 (intermediate region). For any compact subset K ⊂ [0, √ 2), we have (2.65) lim τ →−∞ sup σ∈K |ū(σ, τ ) − (n − k)(2 − σ 2 )| = 0. Proof. We will adapt the proof from [ADS19, Section 6] to our setting. Then, forτ ≤ T , we have uâ (τ ) (L(τ ) +η(τ )) − 2(n − k) ≤ − √ n − k[(L(τ ) +η(τ )) 2 − 3] √ 2â 2 (τ ) (2.69) ≤ − √ n − kL 2 (τ ) 2 √ 2|τ | . On the other hand, by Proposition 2.6 (parabolic region) we have (2.70) u(L(τ ), τ ) − 2(n − k) ≥ − √ n − kL 2 (τ ) 2 √ 2|τ | for τ ≤τ . Hence, by Corollary 2.3 (barrier principle), we infer that (2.71) u(ρ, τ ) ≥ uâ (τ ) (L(τ ) +η(τ )) holds for all ρ ≥ L(τ ) and τ ≤τ . Therefore, for any ε ∈ (0, √ 2), we can find a T 1 ≤ T , such that (2.72)ū(σ, τ ) ≥ uâ (τ ) (|τ | 1 2 σ) holds for σ ∈ [|τ | − 1 100 , √ 2 − ε] and τ ≤ T 1 . Thus, by [ADS19, Lemma 4.3], we can find some δ 1 (τ ) > 0 with lim τ →−∞ δ 1 (τ ) = 0, such that (2.73)ū(σ, τ ) + δ 1 (τ ) ≥ √ n − k 2 − (1 + L(τ ) −1 )σ 2 holds for all σ ∈ [|τ | − 1 100 , √ 2 − ε] and τ ≤ T 1 . Finally, using the convexity of the flow, we conclude that (2.74) lim inf τ →−∞ inf σ≤ √ 2−ε ū(σ, τ ) − (n − k)(2 − σ 2 ) ≥ 0. Upper bound: By the evolution equation (1.4) and the chain rule, the renormalized profile function u satisfies (2.75) u τ = u ρρ 1 + u 2 ρ + k − 1 ρ u ρ − ρ 2 u ρ − n − k u + u 2 . 
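Two consistency checks on the evolution equation (2.75): the static cylinder u ≡ √(2(n − k)) is a fixed point, and linearizing the zeroth-order terms −(n − k)/u + u/2 around it produces the coefficient +1 appearing in the operator L from (2.20). A numerical sketch (the sample dimensions are illustrative assumptions):

```python
import math

n, k = 7, 3   # sample dimensions (illustration only)

def rhs(u, up, upp, rho):
    # right-hand side of (2.75) evaluated at given u, u_rho, u_rhorho, rho
    return (upp / (1.0 + up * up) + (k - 1) / rho * up
            - rho / 2.0 * up - (n - k) / u + u / 2.0)

cyl = math.sqrt(2.0 * (n - k))

# the static cylinder (u = cyl, u_rho = u_rhorho = 0) is a fixed point
fixed_err = max(abs(rhs(cyl, 0.0, 0.0, r)) for r in (0.5, 2.0, 6.0))

# zeroth-order linearization: d/du[-(n-k)/u + u/2] at u = cyl equals
# (n-k)/cyl^2 + 1/2 = 1, the "+1" in the Ornstein-Uhlenbeck operator L
s = 1e-6
coeff = (rhs(cyl + s, 0.0, 0.0, 1.0) - rhs(cyl, 0.0, 0.0, 1.0)) / s
```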
Since the flow is convex, we have u ρ ≤ 0 and u ρρ ≤ 0, hence (2.76) u τ ≤ − n − k u + 1 2 (u − ρu ρ ). Thus, w := u 2 − 2(n − k) satisfies (2.77) w τ ≤ w − 1 2 ρw ρ . Hence, for every ρ 0 > 0 we have (2.78) d dτ (e −τ w(ρ 0 e τ 2 , τ )) ≤ 0 for τ ≪ 0. Integrating this inequality, we obtain for every λ ∈ (0, 1] that (2.79) w(y, τ ) ≤ λ −2 w(λy, τ + 2 log λ). On the other hand, by Proposition 2.6 (parabolic region), given any A < ∞, the inequality (2.80) w(ρ, τ ) ≤ |τ | −1 (n − k)(2k − ρ 2 ) + o(|τ | −1 ) holds for ρ ≤ A. Thus, for ρ ≥ A we obtain (2.81) w(ρ, τ ) ≤ − (1 − 2A −2 )(n − k)ρ 2 |τ | + 2 log(ρ/A) + o(|τ − 2 log(ρ/A)| −1 ). Hence, (2.82)ū(σ, τ ) ≤ √ n − k 2 − (1 − 2A −2 )σ 2 + o(1) holds uniformly on σ ≥ A|τ | −1/2 . In addition, using the concavity ofū, we obtain (2.83)ū(σ, τ ) ≤ √ n − k 2 − (1 − 8A −2 )σ 2 + o(1) for σ ≤ A|τ | −1/2 . By the arbitrariness of A, we conclude that (2.84) lim sup τ →−∞ sup σ≤ √ 2−ε ū(σ, τ ) − (n − k)(2 − σ 2 ) ≤ 0. This finishes the proof of the proposition. 2.4. Tip region. Set λ(s) = |s| −1 log |s|, and let p s ∈ M s be any point that maximizes |x ′ |. We consider the rescaled flows (2.85) M s t = λ(s) · (M s+λ −2 (s)t − p s ). Let s i → −∞ be any sequence for which lim i→∞ ps i |ps i | exists. The goal of this section is to prove: Proposition 2.9 (tip region). As i → ∞ the flows M s i t converge to R k−1 × N t , where N t is the SO(n + 1 − k)-symmetric bowl soliton in R n+2−k with speed 1/2. Proof. Using Proposition 2.8 (intermediate region), and recalling also that σ = ρ/|τ | 1/2 , we see thatd(τ ) = max Mτ |y ′ | satisfies (2.86)d(τ ) = 2|τ |(1 + o(1)). By the global convergence theorem [HK17a, Theorem 1.12] the sequence of flows M s i t converges subsequentially to a limit M ∞ t . By SO(k) symmetry, M ∞ t contains k − 1 lines. Hence, it splits isometrically as M ∞ t = R k−1 × N t .
Moreover, by construction N t is a noncompact ancient noncollapsed SO(n + 1 − k)-symmetric flow in R n+2−k , whose time zero slice is contained in a halfspace with mean curvature 1/2 at the base point. Together with the classification by Brendle-Choi [BC19, BC], this implies the assertion. Theorem 1.7 now follows from Proposition 2.6, Proposition 2.8 and Proposition 2.9. 3. Uniqueness The goal of this section is to upgrade the sharp asymptotics to uniqueness. We recall that the renormalized profile function u(ρ, τ ) is defined for ρ ≤d(τ ) and satisfies the evolution equation (3.1) u τ = u ρρ /(1 + u 2 ρ ) + ((k − 1)/ρ) u ρ − (ρ/2) u ρ − (n − k)/u + u/2. As before we also work with the inverse profile function Y (·, τ ) defined as the inverse function of u(·, τ ). Differentiating the identity Y (u(ρ, τ ), τ ) = ρ we see that Y u u ρ = 1, Y u u τ + Y τ = 0, and u ρρ = −Y −3 u Y uu , hence (3.2) Y τ = Y uu /(1 + Y 2 u ) + ((n − k)/u) Y u + (1/2)(Y − uY u ) − (k − 1)/Y . Finally, we recall that the zoomed in profile function Z is defined by (3.3) Z(s, τ ) = |τ | 1/2 (Y (|τ | −1/2 s, τ ) − Y (0, τ )). 3.1. A priori estimates. The goal of this subsection is to establish certain a priori estimates that will be used frequently in the following. Specifically, the two main estimates of this subsection are the quadratic concavity estimate in Proposition 3.1 and the cylindrical estimate in Proposition 3.4. Proposition 3.1 (quadratic concavity). For τ ≪ 0 we have (u 2 ) ρρ ≤ 0. To prove this proposition, we will adapt the argument from [ADS20, Section 5] to our setting. We start with the following lemma: Lemma 3.2. Given any L < ∞, if max u≥L/ √ |τ | (u 2 ) ρρ > 0 for some τ , then we have (u 2 ) ρρτ < 0 at any interior maximum. Proof. For the following maximum principle argument, it is convenient to work with the unrescaled profile function U (r, t) instead of the renormalized profile function u(ρ, τ ). By the chain rule we have (u 2 ) ρρ = (U 2 ) rr . Hence, it suffices to show that (U 2 ) rrt < 0 at any positive maximum of (U 2 ) rr . Set Q := U 2 .
Using the evolution equation (1.4), we infer that (3.5) Q t = (4QQ rr − 2Q 2 r )/(4Q + Q 2 r ) + ((k − 1)/r) Q r − 2(n − k). Differentiating this equation with respect to r yields (3.6) Q rt = 4QQ rrr /(4Q + Q 2 r ) + 4Q r (2 + Q rr )(Q 2 r − 2QQ rr )/(4Q + Q 2 r ) 2 − ((k − 1)/r 2 )(Q r − rQ rr ). Differentiating this equation again, we see that at any critical point of Q rr , we have (3.7) Q rrt = Q rrrr /(1 + U 2 r ) − (1/(2Q)) (2 + Q rr )(Q rr − 2U 2 r ) [(1 − 3U 2 r )Q rr − 8U 2 r ]/(1 + U 2 r ) 3 + (2(k − 1)/r 3 )(Q r − rQ rr ). Now, at any positive interior maximum of Q rr we have Q rrrr ≤ 0 and Q rr > 0. Moreover, by convexity of our hypersurface we have the inequalities Q rr − 2U 2 r = 2U U rr < 0 and Q r = 2U U r ≤ 0. Furthermore, if 3U 2 r < 1, then (1 − 3U 2 r )Q rr − 8U 2 r < Q rr − 8U 2 r < −6U 2 r < 0. Finally, if 3U 2 r ≥ 1, then the inequality (1 − 3U 2 r )Q rr − 8U 2 r < 0 also holds. Combining these facts, we conclude that Q rrt < 0 at any positive interior maximum of Q rr . This proves the lemma. Using the lemma, we can now prove Proposition 3.1: Proof of Proposition 3.1. By Theorem 1.7 (sharp asymptotics) the zoomed in profile function Z(s, τ ) converges for τ → −∞ to the profile function Z(s) of the (n + 1 − k)-dimensional bowl soliton with speed 1/ √ 2. Hence, applying [ADS20, Lemma 4.4], we get that for sufficiently negative τ in the soliton region u ≤ L/ |τ | we have (u 2 ) ρρ < 0. Now, suppose towards a contradiction there is a sequence τ i → −∞ such that max(u 2 ) ρρ (·, τ i ) > 0. By the above, choosing ρ i with (u 2 ) ρρ (ρ i , τ i ) = max(u 2 ) ρρ (·, τ i ) we have u(ρ i , τ i ) |τ i | → ∞. Applying Lemma 3.2, we see that the sequence (u 2 ) ρρ (ρ i , τ i ) is monotone increasing. In particular, we have (u 2 ) ρρ (ρ i , τ i ) ≥ c for some c > 0. Together with (u 2 ) ρρ = 2uu ρρ + 2u 2 ρ < 2u 2 ρ , which holds by concavity, we infer that u 2 ρ (ρ i , τ i ) ≥ c/2.
This is in contradiction with u(ρ i , τ i ) |τ i | → ∞ and the fact that the soliton region converges to a bowl soliton, and thus proves the proposition. In particular, we see that Y ∼ C exp(−u 2 /(4(n − k))) in the collar region: Corollary 3.3 (almost Gaussian collar). Given η > 0, if θ > 0 is small enough and L < ∞ is large enough, then for τ ≪ 0 we have (3.8) |1 + uY /(2(n − k)Y u )| ≤ η in the collar region {L/ |τ | ≤ u ≤ 2θ}. Proof. It is enough to show that (3.9) 1 − η ≤ −ρ(u 2 ) ρ /(4(n − k)) ≤ 1 + η. To this end, using the description of the intermediate region from Theorem 1.7 (sharp asymptotics), we see that in the region {u ≤ 2θ} we have (3.10) 2|τ |(1 − 4θ 2 ) ≤ ρ ≤ 2|τ |(1 + o(1)), and moreover it holds that (3.11) − (u 2 ) ρ | u=2θ = 2 √ 2(n − k) |τ | 1 − 2θ 2 n − k + o(1) . Furthermore, using the description of the tip region from Theorem 1.7 (sharp asymptotics) we see that (3.12) − (u 2 ) ρ | u=L/ √ |τ | ≤ 2 √ 2(n − k) |τ | (1 + CL −1 ). Recalling the monotonicity of ρ → (u 2 ) ρ (ρ, τ ) for τ ≪ 0, which holds by Proposition 3.1 (quadratic concavity), and combining it with the above inequalities, the assertion follows. We conclude this subsection with the following cylindrical estimate: Proposition 3.4 (cylindrical estimate). For any ε > 0, there exist L < ∞ and τ * ≪ 0 such that (3.13) |u ρ | + u|u ρρ | ≤ ε at all points where u(ρ, τ ) ≥ L/ |τ | and τ ≤ τ * . Proof. By the sharp asymptotics in the tip region from Theorem 1.7, and convexity, for every ε 1 > 0, there exist L 1 < ∞ and τ 1 ≪ 0 such that (3.14) |u ρ | ≤ ε 1 at all points where u(ρ, τ ) ≥ L 1 / |τ | and τ ≤ τ 1 . Now suppose towards a contradiction there are times τ i → −∞ and radii ρ i such that (3.15) u(ρ i , τ i ) |τ i | → ∞, but (3.16) u|u ρρ | ≥ ε/2. Set t i = −e −τ i and p i = |t i |(ρ i , 0, . . . , 0, u(ρ i , τ i ), 0, . . . , 0) ∈ M t i . By the noncollapsing property, we have (3.17) H(p i , t i ) ≥ 1/( |t i |u(ρ i , τ i )).
Let M i t be the sequence of flows that is obtained from M t by shifting (p i , t i ) to the origin, and parabolically rescaling by H(p i , t i ) −1 . By the global convergence theorem [HK17a, Theorem 1.12], we can pass to a subsequential limit M ∞ t . It follows from (3.14), (3.15) and (3.17), together with the symmetry, that M ∞ t splits off k lines. Hence, applying [HK17a, Lemma 3.14], whose assumptions are satisfied thanks to Lemma 3.5 below, we see that M ∞ t must be a round shrinking R k × S n−k . This contradicts (3.16), and thus proves the proposition. In the above proof we used the following lemma: Lemma 3.5 (uniform k-convexity). Any compact ancient noncollapsed flow, whose tangent flow at −∞ is given by (1.2), is uniformly k-convex. Proof. By the strict maximum principle, our flow M t is strictly convex. Suppose towards a contradiction there is some sequence p i ∈ M t i such that (3.18) (λ 1 + . . . + λ k+1 )/H (p i , t i ) → 0. Let M i t be the sequence of flows that is obtained from M t by shifting (p i , t i ) to the origin, and parabolically rescaling by H(p i , t i ) −1 . By the global convergence theorem [HK17a, Theorem 1.12], we can pass to a subsequential limit M ∞ t . By the strict tensor maximum principle, M ∞ t splits off k + 1 lines. Hence, by [HK17a, Theorem 1.14], the tangent flow of M ∞ t at −∞ must be R ℓ × S n−ℓ for some ℓ ≥ k + 1. This contradicts the fact that by (1.2) the entropy of M ∞ t is less than or equal to the entropy of R k × S n−k . This proves the lemma. The cylindrical estimate also implies the following corollary: Corollary 3.6 (derivative estimates). The estimate (3.19) |u ρ | + |u ρρ | + |u ρρρ | ≤ C(θ)/|τ | 1/2 holds in the cylindrical region C θ for τ ≪ 0. Proof. By Theorem 1.7 (sharp asymptotics) and convexity, we have (3.20) |u ρ | ≤ C(θ)/|τ | 1/2 . Together with standard interior estimates this yields the assertion. 3.2. Difference between solutions. From now on M 1 (t) and M 2 (t) denote two SO(k) × SO(n + 1 − k)-symmetric ancient ovals.
Similarly as in [ADS20, Figure 1] we consider the following regions, which for concreteness are defined via the renormalized profile function u 1 : Definition 3.7 (regions). Fixing θ > 0 sufficiently small and L < ∞ sufficiently large, we call C θ = {u 1 ≥ θ/2} the cylindrical region and T θ = {u 1 ≤ 2θ} the tip region. The tip region is the union of the soliton region S L = {u 1 ≤ L/ |τ |} and the collar region K θ,L = {L/ |τ | ≤ u 1 ≤ 2θ}. To localize in the cylindrical region, we fix a smooth monotone function Φ satisfying Φ(ξ) = 1 for ξ ≤ 2 − θ 2 /(n − k) and Φ(ξ) = 0 for ξ ≥ 2 − θ 2 /4(n − k) and set (3.21) ϕ C (ρ, τ ) = Φ(ρ 2 /|τ |). Observe that by Theorem 1.7 (sharp asymptotics) we have (3.22) spt(ϕ C ) ⋐ C θ and ϕ C ≡ 1 on C 2θ . Similarly, to localize in the tip region, we fix a smooth monotone function 0 ≤ ϕ T (u) ≤ 1 such that (3.23) spt(ϕ T ) ⋐ T θ and ϕ T ≡ 1 on T θ/2 . Now, for any real numbers β and γ, consider the time-shifted and parabolically dilated flow (3.24) M βγ 2 (t) = e γ/2 M 2 (e −γ (t − β)). We denote by u βγ 2 the renormalized profile function of M βγ 2 (t). Proposition 3.8 (orthogonality). For any ε > 0 there exists τ * ≪ 0, such that for any τ 0 ≤ τ * we can find parameters β, γ with |β| ≤ ε |τ 0 |e τ 0 and |γ| ≤ ε|τ 0 | such that at time τ 0 we have (3.25) 1, ϕ C (u 1 − u βγ 2 ) H = 0 = ρ 2 − 2k, ϕ C (u 1 − u βγ 2 ) H . Proof. We will use a degree theory argument, similarly as in [ADS20, Section 4]. For convenience, we write u i = 2(n − k)(1 + v i ) and set (3.26) B = (1 + βe τ ) 1/2 − 1, Γ = (γ − log(1 + βe τ ))/τ . Then, we have (3.27) v βγ 2 (ρ, τ ) = B + (1 + B) v 2 (ρ/(1 + B), (1 + Γ)τ ). Our goal is to find a suitable zero of the map (3.28) Φ(B, Γ) := ( 1, ϕ C (v βγ 2 − v 1 ) H , ρ 2 − 2k, ϕ C (v βγ 2 − v 1 ) H ), where β = β(B, Γ) and γ = γ(B, Γ) are defined via (3.26). To this end, we start with the following estimate: Claim 3.9.
For every η > 0 there exists τ η > −∞, such that for all τ ≤ τ η and all B ∈ [−1/|τ |, 1/|τ |] and Γ ∈ [−1/2, 1/2] we have (3.29) | 1, ϕ C (v βγ 2 − v 1 ) H / 1 2 H − B | + | ρ 2 − 2k, ϕ C (v βγ 2 − v 1 ) H / ρ 2 − 2k 2 H − Γ/(4(1 + Γ)|τ |) | ≤ η/|τ |. Proof of Claim 3.9. By Proposition 2.6, the truncated functionsv i := ϕ C v i satisfy (3.30)v i (ρ, τ ) = − ρ 2 − 2k 4|τ | + o(|τ | −1 ) in H-norm. Since |B| ≤ 1/|τ | and |Γ| ≤ 1/2, this implies (3.31)v 1 −v βγ 2 = −(ρ 2 − 2k)/(4|τ |) − B + (1 + B)((ρ/(1 + B)) 2 − 2k)/(4(1 + Γ)|τ |) + o(|τ | −1 ) = −B − (Γ/(1 + Γ))(ρ 2 − 2k)/(4|τ |) + o(|τ | −1 ) in H-norm. This yields the assertion. By Claim 3.9, fixing η small enough, for τ ≪ 0 the map Φ and the model map Ψ given by the leading terms in Claim 3.9 are homotopic to each other when restricted to the boundary of (3.32) D := {(B, Γ) : |τ | 2 B 2 + Γ 2 ≤ η 2 }, where the homotopy can be chosen through maps avoiding the origin. Because the winding number of Ψ| ∂D around the origin is 1, the map Φ must hit the origin for some (B 0 , Γ 0 ) ∈ D. This proves the proposition. From now on, given τ 0 ≪ 0, we simply write u 2 = u βγ 2 , where the parameters β and γ are from Proposition 3.8 (orthogonality), and set (3.33) w := u 1 − u 2 . Similarly, we denote by (3.34) W := Y 1 − Y 2 the difference of the inverse profile functions Y 1 and Y 2 = Y βγ 2 . 3.3. Energy estimates in the cylindrical region. The goal of this subsection is to prove the following energy estimate for w = u 1 − u 2 in the cylindrical region C θ = {u 1 ≥ θ/2}: Proposition 3.10 (energy estimate in cylindrical region). For every small ε, θ > 0 there exists τ * > −∞, such that if w satisfies w C , 1 H = 0 at some time τ 0 ≤ τ * , then (3.35) ŵ C D,∞ ≤ ε ( w C D,∞ + χ D θ w H,∞ ), where D θ = {θ/2 ≤ u 1 ≤ θ} andŵ C = w C − w C , ρ 2 − 2k H (ρ 2 − 2k)/ ρ 2 − 2k 2 H . Proof. We will adapt the argument from [ADS20, Section 6] to our setting.
Using (3.1) we see that the difference w = u 1 − u 2 satisfies the equation (3.36) (∂ τ − L)w = E[w], where L is the radial Ornstein-Uhlenbeck operator from (1.9), and (3.37) E[w] = − u 2 1,ρ 1 + u 2 1,ρ w ρρ − (u 1,ρ + u 2,ρ )u 2,ρρ (1 + u 2 1,ρ )(1 + u 2 2,ρ ) w ρ + 2(n − k) − u 1 u 2 2u 1 u 2 w. Now, given θ > 0, we recall that w C = ϕ C w, where ϕ C is the cutoff function from (3.21) that localizes in the cylindrical region C θ . It follows that (3.38) (∂ τ − L)w C = E[w C ] +Ē[w, ϕ C ], where (3.39)Ē[w, ϕ C ] = ϕ C,τ − ϕ C,ρρ + u 2 1,ρ 1 + u 2 1,ρ ϕ C,ρρ + (u 1,ρ + u 2,ρ )u 2,ρρ (1 + u 2 1,ρ )(1 + u 2 2,ρ ) ϕ C,ρ + ρ 2 ϕ C,ρ − k − 1 ρ ϕ C,ρ w + 2u 2 1,ρ 1 + u 2 1,ρ ϕ C,ρ − 2ϕ C,ρ w ρ . By Lemma 3.11 below, thanks to w C (·, τ 0 ), ρ 2 − 2k H = 0 we infer that (3.40) ŵ C D,∞ ≤ C E[w C ] +Ē[w, ϕ C ] D * ,∞ . To estimate the right hand side, recall that by Corollary 3.6 (derivative estimates) for (y, τ ) ∈ C θ and i = 1, 2 it holds that (3.41) |u i,ρ | + |u i,ρρ | + |u i,ρρρ | ≤ C(θ) |τ | . Arguing similarly as in [ADS20, proof of Lemma 6.8] this yields (3.42) E[w C ] D * ,∞ ≤ ε w C D,∞ . Equation (3.39) in contrast to [ADS20, Equation (6.11)] contains the extra term k−1 ρ ϕ C,ρ w. To estimate this extra term, we observe that ϕ C,ρ is supported in the transition region D θ = {θ/2 ≤ u 1 ≤ θ} and satisfies (3.43) |ϕ C,ρ | ≤ C(θ) |τ | . Moreover, we have (3.44) ρ ≥ |τ | in D θ . Using this, we can estimate (3.45) k − 1 ρ ϕ C,ρ w D * ,∞ ≤ ε 2 χ D θ w H,∞ . Then, arguing similarly as in [ADS20, proof of Lemma 6.9], we conclude that (3.46) Ē [w, ϕ C ] D * ,∞ ≤ ε χ D θ w H,∞ . Combining the above inequalities, this proves the proposition. In the above proof, we used the following lemma: Lemma 3.11. For any differentiable function f : (−∞, τ 0 ] → D we have (3.47) f D,∞ ≤ C (∂ τ − L)f D * ,∞ , wheref denotes the projection of f to the orthogonal complement of ker(L). Proof. 
By Ecker's weighted Sobolev inequality [Eck00, page 109] we have (3.48) ∞ 0 ρ 2 f (ρ) 2 e − ρ 2 4 ρ k−1 dρ ≤ C(n) ∞ 0 f (ρ) 2 + f ρ (ρ) 2 e − ρ 2 4 ρ k−1 dρ. Hence, L : D → D * is a bounded linear elliptic operator. Using this, the same standard parabolic energy method as in [ADS20, Section 6.4] yields the assertion. 3.4. Energy estimates in the tip region. In this subsection, we prove an energy estimate in the tip region T θ = {u 1 ≤ 2θ}. To state the estimate precisely, we fix a smooth monotone cutoff function ζ satisfying ζ(u) = 1 for u ≥ θ/2 and ζ(u) = 0 for u ≤ θ/4, and consider the weight function (3.49) µ(u, τ ) = −Y 2 (θ, τ )/4 − ∫ θ u [ ζ(v)(−Y 2 /4) v + (1 − ζ(v))(n − k)(1 + Y 2 v )/v ] dv, where here and in the following we write Y = Y 1 for brevity. This weight function µ is used in the definition of the weighted norm (3.50) F 2,∞ = sup τ ≤τ 0 1 |τ | 1/4 τ τ −1 2θ 0 F (u, σ) 2 e µ(u,σ) du dσ 1/2 . The goal of this subsection is to prove the following estimate: Proposition 3.12 (energy estimate in tip region). For every θ ≪ 1 there is a constant C < ∞, such that for any τ 0 ≪ 0 we have (3.51) W T 2,∞ ≤ C |τ 0 | W χ [θ,2θ] 2,∞ , where W T = ϕ T W is the truncated difference of the inverse profile functions. To prove this, we adapt the argument from [ADS20, Section 7] to our setting. First, we need the following lemma to control the weight function: Lemma 3.13 (tip estimates, c.f. [ADS20, Lemma 7.4 and 7.5]). For any η > 0, there exist θ > 0 and τ * ≪ 0 such that in the tip region T θ for τ ≤ τ * we have (3.52) |τ | 1/2 /(2(n + 1 − k)) ≤ |Y u |/u ≤ |τ | 1/2 , |Y τ | ≤ η |Y u |/u, and (3.53) |uµ u /((n − k)(1 + Y 2 u )) − 1| ≤ η, |µ τ | ≤ η|τ |. Proof. First observe that the inequalities (3.52) hold in the soliton region S L , since by the sharp asymptotics in the tip region from Theorem 1.7, the function Z(s, τ ) = |τ | 1/2 (Y (|τ | −1/2 s, τ ) − Y (0, τ )) converges locally smoothly to the profile functionZ(s) of the (n + 1 − k)-dimensional bowl.
On the other hand, by the sharp asymptotics in the intermediate region from Theorem 1.7, the estimate (3.54) |τ | 2(n + 1 − k) ≤ |Y u | u ≤ |τ | also holds for u = 2θ. Together with the fact that the function u → |Y u |/u is monotone by Proposition 3.1 (quadratic concavity), we infer that the estimate (3.54) holds in the whole tip region T θ . Next, to check that |Y τ | ≤ η|Y u |/u also holds in the collar region K θ,L , we rewrite the evolution equation (3.2) in the form (3.55) Y τ = Y uu 1 + Y 2 u + (n − k)Y u u 1 + uY 2(n − k)Y u − u 2 2(n − k) − (k − 1)u (n − k)Y u Y . By Proposition 3.4 (cylindrical estimate), we can choose L large enough such that (3.56) Y uu 1 + Y 2 u ≤ η 4 |Y u | u . By Corollary 3.3 (almost Gaussian collar), for θ small enough we get (3.57) 1 + uY 2(n − k)Y u ≤ η 4(n − k) . Since u ≤ 2θ in the tip region, possibly after decreasing θ we have (3.58) u 2 2(n − k) ≤ η 4(n − k) . Finally, using also (3.54) and the fact that Y ≥ |τ | in the tip region by Theorem 1.7 (sharp asymptotics), we can arrange that (3.59) (k − 1)u (n − k)Y u Y ≤ η 4(n − k) . Combining the above inequalities yields (3.60) |Y τ | ≤ η |Y u | u . Next, by definition of the weight function µ we have (3.61) uµ u = ζ(u) −uY Y u 2 + (1 − ζ(u)) (n − k)(1 + Y 2 u ). Hence, it suffices to show that for θ/4 ≤ u ≤ 2θ, we have the estimate (3.62) uY Y u 2(n − k)(1 + Y 2 u ) + 1 ≤ η. This easily follows from Corollary 3.3 (almost Gaussian collar) since in the region under consideration we have |Y u | ≫ 1. This proves (3.63) uµ u (n − k)(1 + Y 2 u ) − 1 ≤ η. Finally, using the estimates that we already established and arguing similarly as in [ADS20, proof of Lemma 7.5], we obtain |µ τ | ≤ η|τ |. This concludes the proof of the lemma. Corollary 3.14 (Poincare inequality, c.f. [ADS20, Proposition 7.6]). There are C 0 < ∞ and τ * ≪ 0 such that for any θ ≪ 1 and all τ ≤ τ * , we have Proof. Set u 0 := 4(n + 1 − k)/ |τ |. 
For u ≤ u 0 and τ ≪ 0, by our choice of weight function, we have (3.65) µ(u, τ ) − µ(u 0 , τ ) = (n − k) log u u 0 + (n − k) u u 0 1 v Y 2 v dv. The bound |Y v | ≤ |τ |v from Lemma 3.13 (tip estimates) implies (3.66) u u 0 1 v Y 2 v dv ≤ 1 2 |τ |u 2 0 ≤ 8(n + 1 − k) 2 . This yields (3.67) C −1 e µ(u 0 ,τ ) u u 0 n−k ≤ e µ(u,τ ) ≤ Ce µ(u 0 ,τ ) u u 0 n−k , as needed for integrating in polar coordinates in R n+1−k . Using this and the estimates from Lemma 3.13 (tip estimates), the argument from [ADS20, proof of Proposition 7.6] with n replaced by n + 1 − k goes through. Using the above results, we can now prove the main energy estimate: Proof of Proposition 3.12. Using equation (3.2), we see that W = Y 1 − Y 2 satisfies the evolution equation (3.68) W τ = W uu 1 + Y 2 1,u + n − k u − u 2 + D W u + 1 2 + k − 1 Y 1 Y 2 W, where (3.69) D = − (Y 1,u + Y 2,u )Y 2,uu (1 + Y 2 1,u )(1 + Y 2 2,u ) . Multiplying this evolution equation by W ϕ 2 e µ , where ϕ = ϕ T is the cutoff function from (3.23), and integrating by parts gives (3.70) 1 2 d dτ W 2 ϕ 2 e µ = − W 2 u 1 + Y 2 1,u ϕ 2 e µ + G W u W ϕ 2 e µ − 2 1 1 + Y 2 1,u W u W ϕ u ϕe µ + W 2 ϕ 2 1 2 + µ τ + k − 1 Y 1 Y 2 e µ , where (3.71) G = n − k u − u 2 − µ u 1 + Y 2 1,u + 2Y 1,u Y 1,uu (1 + Y 2 1,u ) 2 + D. For the second term we estimate GW u W ≤ 1 2 W 2 u 1 + Y 2 1,u + 1 2 G 2 (1 + Y 2 1,u )W 2 , (3.72) and for the third term we estimate (3.73) − W u W ϕ u ϕ = −(ϕW ) u W ϕ u + W 2 ϕ 2 u ≤ 1 8 ((ϕW ) u ) 2 + 2W 2 ϕ 2 u . Recalling also that spt(ϕ u ) ⊆ [θ, 2θ], it follows that W T = ϕW satisfies (3.74) d dτ W 2 T e µ ≤ − 1 2 (W T ) 2 u 1 + Y 2 1,u e µ + Ḡ W 2 T e µ +C(θ) 2θ θ W 2 1 + Y 2 1,u e µ , where (3.75)Ḡ = G 2 (1 + Y 2 1,u ) + 1 + 2µ τ + 2(k − 1) Y 1 Y 2 . Using Lemma 3.13 (tip estimates) and arguing similarly as in [ADS20, proof of Claim 7.7], given any η > 0, we see that for θ ≪ 1 small enough (depending on η) in the tip region T θ for τ ≪ 0 we have (3.76)Ḡ ≤ η|τ |. 
Indeed, the only new term is 1 Y 1 Y 2 which can be easily controlled since Y i ≥ |τ | in the tip region. Moreover, note that the estimates from Lemma 3.13 indeed also apply to Y 2 = Y βγ 2 , since |β| ≤ εe τ 0 /|τ 0 | and γ ≤ ε|τ 0 | implies that Z βγ 2 (s, τ ) →Z(s) locally smoothly as τ → −∞. Together with Corollary 3.14 (Poincare inequality), we infer that (3.77) Ḡ W 2 T e µ ≤ η|τ | W 2 T e µ ≤ C 0 η 2θ 0 (W T ) 2 u 1 + Y 2 1,u e µ . Choosing η = 1 4C 0 , and using also that Y 2 1,u ≥ c(θ)|τ | for u ∈ [θ, 2θ] by Lemma 3.13 (tip estimates), our energy inequality thus becomes (3.78) d dτ W 2 T e µ ≤ −η|τ | W 2 T e µ + C(θ) |τ | (W χ [θ,2θ] ) 2 e µ . Integrating this differential inequality, we conclude that (3.79) W T 2,∞ ≤ C |τ 0 | W χ [θ,2θ] 2,∞ . This completes the proof of the proposition. 3.5. Conclusion of the proof. In this subsection, we will combine the results from the previous sections to conclude the proof of our main uniqueness theorem: Proof of Theorem 1.4. Let M 1 (t) and M 2 (t) be two SO(k) × SO(n + 1 − k)symmetric ancient ovals. Assuming k denotes the number of long directions, which is always the case possibly after replacing k by n − k, the tangent flow at −∞ is given by (1.2). We now fix small enough constants ε > 0 and θ > 0, a large enough constant L < ∞, and a negative enough time τ 0 ≪ 0. As before, we denote by u 1 and Y 1 the renormalized profile function and inverse profile function of M 1 (t). By Proposition 3.8 (orthogonality), we can find parameters β, γ with |β| ≤ εe −τ 0 /|τ 0 | and |γ| ≤ ε|τ 0 | such that letting u βγ 2 be the renormalized profile function of M βγ 2 (t) = e γ/2 M 2 (e −γ (t−β)), the truncated difference w C = ϕ C (u 1 − u βγ 2 ) satisfies the orthogonality condition (3.80) 1, w C (·, τ 0 ) H = 0 = ρ 2 − 2k, w C (·, τ 0 ) H . Our goal is to show that w = u 1 − u βγ 2 and W = Y 1 − Y βγ 2 vanish identically. Recall that our weight function satisfies µ(u, τ ) = − 1 4 Y 1 (u, τ ) 2 for u ≥ θ/2. 
Hence, similarly as in [ADS20, Lemma 8.1], in the transition region D θ = {θ/2 ≤ u 1 ≤ θ}, we have the equivalence of norms (3.81) C −1 W χ [θ/2,θ] 2,∞ ≤ wχ D θ H,∞ ≤ C W χ [θ/2,θ] 2,∞ , where C = C(θ) < ∞. Moreover, as before, we let (3.82)ŵ C (·, τ ) = w C (·, τ ) − w C (·, τ ), ψ 0 H ψ 0 , where ψ 0 (ρ) = c 0 (ρ 2 − 2k) with c 0 = ρ 2 − 2k −1 H . Combining Proposition 3.10 (energy estimate in the cylindrical region), Proposition 3.12 (energy estimate in the tip region) and (3.81) we infer that ŵ C D,∞ ≤ ε ( w C D,∞ + wχ D θ H,∞ ) ≤ ε ( w C D,∞ + C W T 2,∞ ) ≤ ε w C D,∞ + C |τ 0 | w C H,∞ ≤ 2ε w C D,∞ . (3.83) We now consider the expansion coefficient (3.84) a(τ ) = w C (·, τ ), ψ 0 H . Using (3.38) and (2.63) we see that (3.85) d dτ a(τ ) = 2a(τ ) |τ | + F (τ ), where (3.86) F (τ ) = Ē [w, ϕ C ], ψ 0 ψ 0 2 + E[w C ] − 4 −1 |τ | −1 a(τ )ψ 2 0 , ψ 0 ψ 0 2 . Since a(τ 0 ) = 0 thanks to (3.80), we infer that (3.87) a(τ ) = − τ 0 τ σ 2 F (σ) dσ τ 2 . Now, we consider the function Using Theorem 1.7 (sharp asymptotics) and our a priori estimates from Section 3.1, and arguing similarly as in [ADS20, Proof of Claim 8.3], we see that (3.89) τ τ −1 |F (σ)|dσ ≤ ε |τ | A(τ ). Chopping [τ, τ 0 ] into unit intervals and applying this repeatedly, we infer that (3.90) τ 0 τ σ 2 F (σ) dσ ≤ ε|τ | 2 A(τ ). Hence, (3.91) |a(τ )| ≤ εA(τ 0 ) for all τ ≤ τ 0 . This in turn implies A(τ 0 ) ≤ εA(τ 0 ), and thus proves that We conclude that w and W vanish identically. This completes the proof of our uniqueness theorem. Nonuniqueness The goal of this final section is to prove Theorem 1.9 (existence with reduced symmetry and further properties). Proof of Theorem 1.9. We work with ellipsoidal parameters in the (k − 1)simplex Given any a ∈ ∆ k−1 and any ℓ < ∞ we consider the (possibly degenerate) ellipsoid (4.2) E ℓ,a :=    x ∈ R n+1 : j≤k a 2 j ℓ 2 x 2 j + j≥k+1 x 2 j = 2(n − k)    . 
Note that for a ∈ Int(∆ k−1 ) this is a compact ellipsoid, while if a j = 0 for some index j then E ℓ,a can be expressed as the product of an R-factor in x j -direction and an ellipsoid in one dimension lower. Now, using the explicit formula for ellipsoids it is not hard to see that there are uniform constants α, β > 0 such that the hypersurfaces E ℓ,a are α-noncollapsed and satisfy λ 1 + . . . + λ k+1 ≥ βH. Also recall that αnoncollapsing and β-uniform (k + 1)-convexity are preserved under mean curvature flow, rescalings, and passing to limits [And12,HS09,HK17a]. Likewise, the Z k 2 × O(n + 1 − k)-symmetry of E ℓ,a is also preserved. Next, observe that (4.3) lim ℓ→∞ E ℓ,a = R k × S n−k ( 2(n − k)). By symmetry and [Hui84] the mean curvature evolution E ℓ,a t becomes extinct in a round point at the origin at some t ℓ,a < ∞. Using (4.3) and continuous dependence of the mean curvature flow on the initial data we see that Then, for any large enough ℓ we can find a unique t ′ ℓ,a such that (4.6) Θ ℓ,a (t ′ ℓ,a ) = σ n−k + σ n−k+1 2 , where σ j denotes the entropy of the j-sphere. Using again (4.3) and continuous dependence of the mean curvature flow on the initial data we see that The flow M ℓ,a t is defined for t ∈ (T ℓ,a , 0), where T ℓ,a = −λ 2 ℓ,a t ℓ,a , and thanks to (4.4) and (4.7) we have Using this we can now define the reciprocal width ratio map (4.13) F ℓ k : ∆ k−1 → ∆ k−1 , a → w ℓ j (a) −1 k j ′ =1 w ℓ j ′ (a) −1 . Claim 4.1 (reciprocal width ratio map). F ℓ k is continuous and surjective. Proof. We first check continuity. Let a i ∈ ∆ k−1 be a sequence that converges to a ∈ ∆ k−1 . Then, clearly E ℓ,a i → E ℓ,a , and arguing similarly as above we also see that t ℓ,a i → t ℓ,a and λ ℓ,a i → λ ℓ,a . Now, by the global convergence theorem [HK17a] there is a subsequence such that M ℓ,a i t converges to some noncollasped limit. By the above and by uniqueness of mean curvature flow this limit must be equal to M ℓ,a t . 
Hence, it follows that F ℓ k (a i ) → F ℓ k (a). Now, if a ∈ ∂∆ k−1 is such that its j-th component a j vanishes, then then E ℓ,a can be expressed as the product of an R-factor in x j -direction and an ellipsoid in one dimension lower. Hence, w ℓ j (a) −1 = 0 and consequently F ℓ k (∂∆ k−1 ) ⊆ ∂∆ k−1 . Moreover, it also follows that the map F ℓ k restricted to the face {a j = 0} ⊆ ∆ k−1 agrees with the map F ℓ k−1 (here the dimension n, which we suppressed in the notation, also decreases by one), namely (4.14) F ℓ k (a 1 , . . . , a j−1 , 0, a j+1 , . . . , a k ) = F ℓ k −1 (a 1 , . . . , a j−1 , a j+1 , . . . , a k ) . Consequently, using induction on k and the topological fact that a disc cannot be retracted to its boundary, we conclude that F ℓ k is surjective. Continuing the proof of the theorem, given µ i → µ ∈ Int(∆ k−1 ) and ℓ i → ∞, by the above we can find a i ∈ Int(∆ k−1 ) so that F ℓ i k (a i ) = µ i . Now, by the global convergence theorem [HK17a] theorem, after passing to a subsequence M ℓ i ,a i t converges to an ancient noncollapsed uniformly (k+1)convex mean curvature flow {M t } t<0 that satisfies (4.15) (max x∈M −1 |x j |) −1 k j ′ =1 (max x∈M −1 |x j ′ |) −1 = µ j for 1 ≤ j ≤ k. This proves the existence with prescribed reciprocal width ratio. Finally, let us verify the additional properties. We have already addressed noncollapsing, uniform (k + 1)-convexity and the symmetries. Furthermore, it is also clear by construction that any M t ∈ A • becomes extinct at the origin at time 0 and satisfies (4.16) M ℓ,a −1 1 (4π) n/2 e − |x| 2 4 = σ n−k + σ n−k+1 2 . By [HK17a], any tangent flow at −∞ must be a generalized cylinder, and together with (4.16) and the symmetries it follows that (4.17) lim λ→0 λM λ −2 t = R k × S n−k ( 2(n − k)|t|). This concludes the proof of Theorem 1.9. • Intermediate , τ ) = (n − k)(2 − σ 2 )uniformly on every compact subset of σ < √ 2. 
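As a small sanity check on the cylindrical-region analysis above: the function ψ₀(ρ) = c₀(ρ² − 2k) used in (3.82) lies in the kernel of the radial Ornstein–Uhlenbeck operator, while constants are an eigenfunction with eigenvalue 1. The snippet below verifies this numerically. The operator's exact form is taken from the standard setting, L f = f″ + ((k − 1)/ρ − ρ/2) f′ + f; the source refers to equation (1.9), which is not reproduced in this excerpt, so that form is our assumption.

```python
def L_op(f, fp, fpp, rho, k):
    """Radial Ornstein-Uhlenbeck operator (assumed form of (1.9)):
    L f = f'' + ((k - 1)/rho - rho/2) f' + f."""
    return fpp(rho) + ((k - 1) / rho - rho / 2.0) * fp(rho) + f(rho)

for k in (2, 3, 5):
    # f(rho) = rho^2 - 2k, with hand-computed first and second derivatives
    f = lambda r, k=k: r**2 - 2 * k
    fp = lambda r: 2.0 * r
    fpp = lambda r: 2.0
    for rho in (0.5, 1.0, 3.0):
        # rho^2 - 2k is annihilated by L (it spans the neutral direction used
        # in the orthogonality condition (3.80))
        assert abs(L_op(f, fp, fpp, rho, k)) < 1e-12
        # constants are an eigenfunction of L with eigenvalue 1
        assert L_op(lambda r: 1.0, lambda r: 0.0, lambda r: 0.0, rho, k) == 1.0
```

Algebraically, L(ρ² − 2k) = 2 + 2(k − 1) − ρ² + ρ² − 2k = 0 for every k, which is what the assertions confirm pointwise.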
• Tip region: For τ → −∞, the function Z(s, τ ) converges locally smoothly to the profile functionZ(s) of the (n + 1 − k)-dimensional round bowl soliton with speed 1/ √ 2, namely to the solution of the Lower bound: By [ADS19, Lemma 4.4] there exists an increasing positive function M (a) with lim a→∞ M (a) = ∞, such that(2.66) u a (r) ≤ 2(n − k) 1 − r 2 − 3 2a 2 for 0 ≤ r ≤ M (a). We fix T negative enough, and for τ ≤ T let (2.67) L(τ ) = min{δ(τ ) −1 , M where δ(τ ) is the function from Proposition 2.6. Now, we set (2.68)η(τ ) = 3(k − 1)L(τ ) −1 andâ(τ ) = 2|τ | 1 + L(τ ) −1 . Proposition 3. 1 1(quadratic concavity, c.f. [ADS20, Prop 5.2]). There is some τ 0 ≪ 0, such that for τ ≤ τ 0 we have (3.4) (u 2 ) ρρ ≤ 0. Corollary 3. 3 3(almost Gaussian collar, c.f. [ADS20, Lemma 5.7]). Corollary 3.6 (derivative estimates, c.f. [ADS19, Lem 4.1]). For any θ > 0, there is a constant C(θ) < ∞ such that (3.19) (u,τ ) du for all smooth functions f satisfying f ′ (0) = 0 and spt(f ) ⊆ [0, 2θ]. ( 3 . 392) a(τ ) = 0 for all τ ≤ τ 0 . Together with (3.82) and (3.83), this yields (3.93) w C (·, τ ) = 0 for all τ ≤ τ 0 . Finally, by Proposition 3.12 (energy estimate in the tip region), this implies (3.94) W T (·, τ ) = 0 for all τ ≤ τ 0 . 1 , . . . , a k ) : a 1 ≥ 0, . . . , a k ≥ 0, (t ℓ,a − t)) n/2 e t ℓ,a −t) . Now, for j = 1, . . . , k we consider the width at time −1 along the x j -axis, By the Merle-Zaag alternative[DH21, Proposition 4.3], for τ → −∞, either the unstable mode from (2.23) is dominant or the neutral mode from (2.24) is dominant. In the former case, by [DH21, Theorem 6.4] associated to our solution, there would be a fine cylindrical vector (a 1 , . . . , a k ) pointing in some specific direction; this contradicts our symmetry assumption. Hence, the neutral mode must be dominant. This implies(2.36) d dτ Hamilton's Harnack inequality [Ham95], we have(2.87) d dt H(p t ) ≥ 0. 
Together with the ODE

(2.88) (d/dτ) d̄(τ) = ½ d̄(τ) − H̄(τ),

where H̄(τ) = √|t| H(p_t) and τ = −log|t|, it follows that

(2.89) H(p_t) / (2|t|^{−1/2} log|t|) = H̄(τ) / (2|τ|) = ½ + o(1).

Due to symmetry, the fine-tuning rotation S is simply the identity matrix.

Acknowledgments. This research was supported by the NSERC Discovery Grant and the Sloan Research Fellowship of the second author.

References

S. Angenent, P. Daskalopoulos, and N. Sesum. Unique asymptotics of ancient convex mean curvature flow solutions. J. Differential Geom., 111(3):381-455, 2019.
S. Angenent, P. Daskalopoulos, and N. Sesum. Uniqueness of two-convex closed ancient solutions to the mean curvature flow. Ann. of Math. (2), 192(2):353-436, 2020.
B. Andrews. Noncollapsing in mean-convex mean curvature flow. Geom. Topol., 16(3):1413-1418, 2012.
S. Angenent. Formal asymptotic expansions for symmetric ancient ovals in mean curvature flow. Netw. Heterog. Media, 8(1):1-8, 2013.
S. Brendle and K. Choi. Uniqueness of convex ancient solutions to mean curvature flow in higher dimensions. Geom. Top. (to appear).
S. Brendle and K. Choi. Uniqueness of convex ancient solutions to mean curvature flow in R^3. Invent. Math., 217(1):35-76, 2019.
S. Brendle and G. Huisken. Mean curvature flow with surgery of mean convex surfaces in R^3. Invent. Math., 203(2):615-654, 2016.
T. Bourni, M. Langford, and G. Tinaglia. A collapsing ancient solution of mean curvature flow in R^3. arXiv:1705.06981, 2017.
T. Bourni, M. Langford, and G. Tinaglia. The atomic structure of ancient grain boundaries. arXiv:2006.16338, 2020.
K. Choi, R. Haslhofer, and O. Hershkovits. Ancient low entropy flows, mean convex neighborhoods, and uniqueness. Acta Math. (to appear).
B. Choi, R. Haslhofer, and O. Hershkovits. A note on the selfsimilarity of limit flows. Proc. Amer. Math. Soc., 149(3):1239-1245, 2021.
K. Choi, R. Haslhofer, and O. Hershkovits. A non-existence result for wing-like mean curvature flows in R^4. arXiv:2105.13100, 2021.
K. Choi, R. Haslhofer, O. Hershkovits, and B. White. Ancient asymptotically cylindrical flows and applications. Invent. Math. (to appear).
T. Colding and W. Minicozzi. Differentiability of the arrival time. Comm. Pure Appl. Math., 69(12):2349-2363, 2016.
W. Du and R. Haslhofer. The blowdown of ancient noncollapsed mean curvature flows. arXiv:2106.04042, 2021.
K. Ecker. Logarithmic Sobolev inequalities on submanifolds of Euclidean space. J. Reine Angew. Math., 522:105-118, 2000.
R. Hamilton. Harnack estimate for the mean curvature flow. J. Differential Geom., 41(1):215-226, 1995.
R. Haslhofer and O. Hershkovits. Ancient solutions of the mean curvature flow. Comm. Anal. Geom., 24(3):593-604, 2016.
D. Hoffman, T. Ilmanen, F. Martín, and B. White. Graphical translators for mean curvature flow. Calc. Var. Partial Differential Equations, 58(4):Art. 117, 2019.
R. Haslhofer and B. Kleiner. Mean curvature flow of mean convex hypersurfaces. Comm. Pure Appl. Math., 70(3):511-546, 2017.
R. Haslhofer and B. Kleiner. Mean curvature flow with surgery. Duke Math. J., 166(9):1591-1626, 2017.
G. Huisken and C. Sinestrari. Mean curvature flow with surgeries of two-convex hypersurfaces. Invent. Math., 175(1):137-221, 2009.
G. Huisken. Flow by mean curvature of convex surfaces into spheres. J. Differential Geom., 20(1):237-266, 1984.
G. Huisken. Asymptotic behavior for singularities of the mean curvature flow. J. Differential Geom., 31(1):285-299, 1990.
W. Sheng and X. Wang. Singularity profile in the mean curvature flow. Methods Appl. Anal., 16(2):139-155, 2009.
B. White. The size of the singular set in mean curvature flow of mean convex sets. J. Amer. Math. Soc., 13(3):665-695, 2000.
B. White. The nature of singularities in mean curvature flow of mean-convex sets. J. Amer. Math. Soc., 16(1):123-138, 2003.
J. Zhu. SO(2) symmetry of the translating solitons of the mean curvature flow in R^4. arXiv:2012.09319, 2020.
An optical example for classical Zeno effect
Li-Gang Wang (Department of Physics and ITP, The Chinese University of Hong Kong, Hong Kong, China; Department of Physics, Zhejiang University, Hangzhou 310027, China), Shi-Jian Gu and Hai-Qing Lin (Department of Physics and ITP, The Chinese University of Hong Kong, Hong Kong, China)
In this brief report, we present a proposal to observe the classical Zeno effect via frequent measurement in optics.
arXiv:1003.5545 — https://arxiv.org/pdf/1003.5545v1.pdf
An optical example for classical Zeno effect
Li-Gang Wang, Shi-Jian Gu, and Hai-Qing Lin
(Dated: March 30, 2010) arXiv:1003.5545v1 [quant-ph]

The Zeno paradox, such as the problem of the so-called flying arrow, contrary to the evidence of our senses, seems never to happen in the real classical world. This paradox is believed to be realized uniquely in the quantum world, where it is known as the quantum Zeno effect proposed by Misra and Sudarshan [1] in 1977, because of the probabilistic nature of quantum states and the projective measurement in quantum mechanics (for a review, see Ref. [2]). Recently, one of us (Gu) argued that the classical Zeno effect can be recovered with Super Mario's intelligent feedback [3]. Later, we further showed that the decay of a classical state in classical noise channels can be significantly suppressed with the aid of successive repeaters [4]; in this sense we claim that the classical Zeno effect may exist in classical stochastic processes. In this report, we present a proposal to observe the classical Zeno effect in optics. The evolution of the polarized-light intensity in the designed system is strongly affected by the number of measurements.

FIG. 1: (Color online) (a) Setup for successive polarization rotation with a series of Faraday media and (b) setup for the observation of the optical Zeno effect with vertical-polarization measurements after each Faraday medium. Each Faraday medium with length L/N induces a polarization rotation angle of π/2N, and L is the total length of all Faraday media.

As shown in Fig. 1(a), when a linearly polarized light beam with initial intensity I_0 is incident on a series of successive Faraday media, the polarization direction of the beam gradually changes with the increasing number of Faraday media. Assuming that the initial direction of light polarization is along the y direction and the polarization rotation angle from input to output is π/2, the intensity of the y component of the linearly polarized beam is given by

I(z) = I_0 cos²(πz/(2L)),  (1)

where L is the total length of all Faraday media that together induce the π/2 rotation of the light polarization, and z is the distance inside the Faraday media. Since there is no measurement in (a), the intensity of the y component evolves smoothly as a function of z. However, once measurements are present, as in Fig. 1(b), the evolution of the resulting intensity changes dramatically. In Fig. 1(b), we perform a vertical-polarization measurement (along the y direction) after each Faraday medium; the resulting intensity of the y component becomes

I(z) = I_0 [cos²(π/(2N))]^(i−1) cos²[(π/(2L))(z − (i − 1)L/N)],  (2)

where N is the total number of Faraday media, each with the same polarization rotation angle π/(2N), and i denotes the i-th Faraday medium in which the distance z is located. Therefore, the intensity of the y component of the polarized light beam at the output end finally becomes

I_out = I_0 [cos²(π/(2N))]^N.  (3)

As the number of measurements becomes infinite, i.e., N → ∞, the output intensity approaches I_0. This indicates that the initial intensity of the y component of the linearly polarized light beam, after passing through many Faraday media with small polarization rotation angles and infinitely many measurements, does not decay.
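Equation (3) is easy to check numerically. The short sketch below (plain Python written for this note; the function name is ours, not from the paper) evaluates the ratio I_out/I_0 = [cos²(π/(2N))]^N and shows that it approaches 1 as the number of measurements N grows, while a single π/2 rotation followed by one measurement (N = 1) transmits nothing:

```python
import math

def output_intensity_ratio(N: int) -> float:
    """I_out / I_0 after N Faraday media of rotation pi/(2N),
    each followed by a vertical-polarization measurement, Eq. (3)."""
    return math.cos(math.pi / (2 * N)) ** (2 * N)

for N in (1, 2, 5, 20, 100):
    print(N, output_intensity_ratio(N))
```

For large N the ratio behaves like exp(−π²/(4N)), so the decay induced by the polarization rotation is frozen out by frequent measurement — the classical analogue of the Zeno effect described above.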
Figure 2(a) clearly shows the intensity of the y component of the linearly polarized light for different numbers of measurements: with an increasing number of measurements, the decay of the intensity becomes slower. Figure 2(b) shows that the output intensity at the output end grows larger and larger, gradually tending to unity as the number of measurements N increases.

In summary, instead of the memory effect of the previous scheme [4], in the present scenario we use the polarization property of light to recover the Zeno effect. The measurement of the light polarization plays the same role as the projective measurement in quantum mechanics. However, all the quantities involved here are classical. In a word, the Zeno effect does happen in the classical world.

PACS numbers: 05.40.-a, 03.65.Xp, 03.67.-a

FIG. 2: (Color online) (a) Intensity evolution of the y component of the polarized beam as a function of distance z for different numbers of measurements. (b) Dependence of the output intensity of the polarized beam on the number of measurements N.

[1] B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977).
[2] K. Koshino and A. Shimizu, Phys. Rep. 412, 191 (2005).
[3] S. J. Gu, EPL 88, 20007 (2009).
[4] S. J. Gu, L. G. Wang, Z. G. Wang, and H. Q. Lin, arXiv:0911.5388 (2009).
Rigorous derivation of the formula for the buckling load in axially compressed circular cylindrical shells
Yury Grabovsky and Davit Harutyunyan
The goal of this paper is to apply the recently developed theory of buckling of arbitrary slender bodies to a tractable yet non-trivial example of buckling in axially compressed circular cylindrical shells, regarded as three-dimensional hyperelastic bodies. The theory is based on a mathematically rigorous asymptotic analysis of the second variation of 3D, fully nonlinear elastic energy, as the shell's slenderness parameter goes to zero. Our main results are a rigorous proof of the classical formula for buckling load and the explicit expressions for the relative amplitudes of displacement components in single Fourier harmonics buckling modes, whose wave numbers are described by Koiter's circle. This work is also a part of an effort to quantify the sensitivity of the buckling load of axially compressed cylindrical shells to imperfections of load and shape.
10.1007/s10659-015-9513-x
arXiv:1405.0714 — https://arxiv.org/pdf/1405.0714v1.pdf
Rigorous derivation of the formula for the buckling load in axially compressed circular cylindrical shells

Yury Grabovsky, Davit Harutyunyan

May 6, 2014

The goal of this paper is to apply the recently developed theory of buckling of arbitrary slender bodies to a tractable yet non-trivial example of buckling in axially compressed circular cylindrical shells, regarded as three-dimensional hyperelastic bodies. The theory is based on a mathematically rigorous asymptotic analysis of the second variation of 3D, fully nonlinear elastic energy, as the shell's slenderness parameter goes to zero. Our main results are a rigorous proof of the classical formula for buckling load and the explicit expressions for the relative amplitudes of displacement components in single Fourier harmonics buckling modes, whose wave numbers are described by Koiter's circle. This work is also a part of an effort to quantify the sensitivity of the buckling load of axially compressed cylindrical shells to imperfections of load and shape.

Introduction

The buckling of rods, shells and plates is traditionally described in mechanics textbooks as an instability in the framework of nonlinear shell theory obtained by semi-rigorous dimension reduction of three-dimensional nonlinear elasticity. While these theories are effective in describing large deformations of rods and shells (including buckling), their heuristic nature obscures the source of the discrepancy between theoretical and experimental results, as is the case for axially compressed circular cylindrical shells [17]. At the same time, a rigorously derived theory of bending of shells [3] captures deformations in the vicinity of relatively smooth isometries of the middle surface. Unfortunately, the isometries of the straight circular cylinder are nonsmooth [16].
Our approach, originating in [6], is capable of giving a mathematically rigorous treatment of buckling of slender bodies and determining whether the tacit assumptions of the classical derivation are the source of the discrepancy with experiment. In this paper, we apply our theory and obtain a mathematically rigorous proof of the classical formula for buckling load [11,14]. This result justifies the generally accepted assumption that the paradoxical behavior of cylindrical shells in buckling is due to the high sensitivity of the buckling load to imperfections [1,13,15]. This phenomenon is commonly explained by the instability of equilibrium states in the vicinity of the buckling point on the bifurcation diagram [9,15,2]. However, the exact mechanisms of imperfection sensitivity are not fully understood, nor is there a reliable theory capable of predicting experimentally observed buckling loads [10,17,7]. While a full bifurcation analysis is necessary to understand the stability of equilibria near the critical point, our method's singular focus on the stability of the trivial branch gives access to the scaling behavior of key measures of structural stability in the thin shell limit. We have argued in [5] that axially compressed circular cylindrical shells are susceptible to scaling instability of the critical load, whereby the scaling exponent, and not just its coefficient, can be affected by imperfections. The new analytical tools developed in [4] give hope for a path towards quantification of imperfection sensitivity. Our approach is based on the observation that the pre-buckled state is governed by equations of linear elasticity [6]. At the critical load, the linear elastic stress reaches a level at which the trivial branch becomes unstable within the framework of 3D hyperelasticity. The origin of this instability is completely geometric: the frame-indifference of the energy density function implies non-convexity in the compressive strain region.
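The geometric origin of this non-convexity can be illustrated by a textbook observation, which we add here for orientation (it is not part of the original argument): any frame-indifferent energy vanishes on all of SO(3), and SO(3) is a curved manifold, so W cannot be convex. For instance, with R_π the rotation by π about e_z,

```latex
W(R) \;=\; W(R\cdot I) \;=\; W(I) \;=\; 0 \quad \text{for all } R\in SO(3),
\qquad
\tfrac12\bigl(I + R_\pi\bigr) \;=\; \operatorname{diag}(0,\,0,\,1),
```

so for any W that is positive away from SO(3), the midpoint of the segment joining I and R_π is a singular matrix with W > 0, while W vanishes at both endpoints; convexity fails, and the failure occurs at laterally compressive strains.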
Since buckling occurs at relatively small compressive loads, the material's stress-strain response is locally linear. This explains why all classical formulas for buckling loads of various slender structures involve only linear elastic moduli and hold regardless of the material response model. The significance of our approach is two-fold. First, it provides a common platform to study buckling of arbitrary slender bodies. Second, its conclusions are mathematically rigorous and its underlying assumptions explicitly specified. The goal of this paper is to demonstrate the power and flexibility of our method on the non-trivial, yet analytically solvable example of the axially compressed circular cylindrical shell. Our analysis is powered by asymptotically sharp Korn-like inequalities [8,12], where instead of bounding the L 2 norm of the displacement gradient by the L 2 norm of the strain tensor, we bound the L 2 norm of individual components of the gradient by the L 2 norm of the strain tensor. These inequalities have been derived in our companion paper [4]. The method of buckling equivalence [6] provides flexibility by furnishing a systematic way of discarding asymptotically insignificant terms, while simplifying the variational functionals that characterize buckling. The paper is organized as follows. In Section 2, we describe the loading and corresponding trivial branch of an axially compressed cylindrical shell treated as 3-dimensional hyperelastic body. We define stability of the trivial branch in terms of the second variation of energy. Next, we describe our approach from [6] and recall all necessary technical results from [4,5] for the sake of completeness. In Section 3, we give the rigorous derivation of the classical buckling load and identify the explicit form of buckling modes. 
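To fix ideas about the inequalities just mentioned (an illustration we add here; the precise statements appear in (2.23), (2.26) and Theorem 2.8 below), the standard Korn inequality controls the full gradient by the symmetrized gradient, while the component-wise versions control individual entries of the gradient at different rates in h:

```latex
\|\nabla\phi\|_{L^2}^2 \;\le\; \frac{1}{K(V_h)}\,\|e(\phi)\|_{L^2}^2,
\qquad
e(\phi) = \tfrac12\bigl(\nabla\phi + (\nabla\phi)^T\bigr),
\qquad
\|\phi_{r,z}\|_{L^2}^2 \;\le\; \frac{C(L)}{h}\,\|e(\phi)\|_{L^2}^2 .
```

The first inequality is simply the definition of the Korn constant K(V_h) rearranged; the second is an instance of the component-wise estimates proved in the companion paper [4].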
Our two most delicate results are a rigorous proof of the existence of a buckling mode that is a single Fourier harmonic, and the linearization of the dependence of this buckling mode on the radial variable: the two assumptions that are commonly made in the classical derivation of the critical load formula. Axially compressed cylindrical shell In this section we give a mathematical formulation of the problem of buckling of an axially compressed cylindrical shell. Boundary conditions and trivial branch Consider the circular cylindrical shell given in cylindrical coordinates (r, θ, z) as follows: C h = I h × T × [0, L], I h = [1 − h/2, 1 + h/2], where T is a 1-dimensional torus (circle) describing 2π-periodicity in θ. Here h is the slenderness parameter, equal to the ratio of the shell thickness to the radius. In this paper we consider the axial compression of the shell, where the Lipschitz deformation y : C h → R 3 satisfies the boundary conditions, given in cylindrical coordinates by y θ (r, θ, 0) = y z (r, θ, 0) = y θ (r, θ, L) = 0, y z (r, θ, L) = (1 − λ)L. (2.1) The loading is parametrized by the compressive strain λ > 0 in the axial direction. The trivial deformation y(x) = x satisfies the boundary conditions for λ = 0. By a stable deformation we mean a Lipschitz function y(x; h, λ), satisfying boundary conditions (2.1) and being a weak local minimizer of the energy functional E(y) = ∫ C h W (∇y(x))dx, where the energy density W is assumed to satisfy (P1) absence of prestress: W F (I) = 0, (P2) frame indifference: W (RF ) = W (F ) for all R ∈ SO(3), and (P3) local stability of the trivial deformation y(x) = x: ⟨L 0 ξ, ξ⟩ ≥ α L 0 |ξ| 2 , ξ ∈ Sym(R 3 ), (2.2) where Sym(R 3 ) is the space of symmetric 3 × 3 matrices, and L 0 = W F F (I) is the linearly elastic tensor of material properties. Here, and elsewhere, we use the notation ⟨A, B⟩ = Tr (AB T ) for the Frobenius inner product on the space of 3 × 3 matrices. While this is not needed for the general theory, in this paper we will also assume that W (F ) is isotropic: W (F R) = W (F ) for every R ∈ SO(3).
(2.3) This assumption is necessary to obtain an explicit formula for the critical load. Our goal is to examine stability of the homogeneous trivial branch y(x; h, λ) given in cylindrical coordinates by y r = (a(λ) + 1)r, y θ = 0, y z = (1 − λ)z, (2.4) where the function a(λ) is determined by the natural boundary conditions P (∇y)e r = 0, r = 1 ± h/2, P (∇y)e z · e r = 0, z = 0, L, (2.5) since uniform deformations always satisfy equations of equilibrium. Here P (F ) = W F (F ), the gradient of W with respect to F , is the Piola-Kirchhoff stress tensor. We observe that, by frame indifference, W (F ) = Ŵ (C), where C = F T F . The isotropy (2.3) implies that Ŵ (RCR T ) = Ŵ (C) for all R ∈ SO(3). Differentiating this relation in R at R = I we conclude that Ŵ C (C) must commute with C. In particular, this implies that the matrix Ŵ C (C) must be diagonal, whenever C is diagonal. We compute that in cylindrical coordinates F = ∇y = diag(1 + a, 1 + a, 1 − λ) and C = F T F = diag((1 + a) 2 , (1 + a) 2 , (1 − λ) 2 ) in the basis (e r , e θ , e z ). Hence, P (F ) is diagonal, and conditions (2.5) reduce to a single scalar equation Ŵ C ((1 + a) 2 (e r ⊗ e r + e θ ⊗ e θ ) + (1 − λ) 2 e z ⊗ e z )e r · e r = 0, (2.6) where the left-hand side of (2.6) is a twice continuously differentiable function of (λ, a). Condition (P1) implies that (λ, a) = (0, 0) is a solution. The existence of a smooth solution branch a(λ) with a(0) = 0 is then guaranteed by the implicit function theorem, whose non-degeneracy condition reduces to (1/2)L 0 (e r ⊗ e r + e θ ⊗ e θ )e r · e r ≠ 0. (2.7) By assumption, L 0 is isotropic, so that L 0 ξ = 2µξ + (κ − 2µ/3)(Tr ξ)I, and the non-degeneracy condition (2.7) becomes κ + µ/3 ≠ 0, which is satisfied due to (P3). Here κ and µ are the bulk and shear moduli, respectively. It is important that, as h → 0, the trivial branch does not blow up. In fact, in our case the trivial branch is independent of h. The general theory of buckling [6] is designed to detect the first instability of a trivial branch in a slender body Ω h that is well-described by linear elasticity. Here is the formal definition from [6,5].
Definition 2.2. We call the family of Lipschitz equilibria y(x; h, λ) of E(y) a linearly elastic trivial branch if there exist h 0 > 0 and λ 0 > 0, so that for every h ∈ [0, h 0 ] and λ ∈ [0, λ 0 ] (i) y(x; h, 0) = x, (ii) there exists a family of Lipschitz functions u h (x), independent of λ, such that ∥∇y(x; h, λ) − I − λ∇u h (x)∥ L ∞ (Ω h ) ≤ Cλ 2 , (2.8) (iii) ∥∂(∇y)/∂λ (x; h, λ) − ∇u h (x)∥ L ∞ (Ω h ) ≤ Cλ, (2.9) where the constant C is independent of h and λ. We remark that the leading order asymptotics u h (x) of the nonlinear trivial branch is nothing else but a linear elastic displacement, which can be found by solving the equations of linear elasticity ∇ · (L 0 e(u h )) = 0 with the appropriate boundary conditions. In our case u h (x) = ∂y(x; h, λ)/∂λ| λ=0 = a′(0)re r − ze z = νre r − ze z is independent of h. Here we computed that a′(0) = ν (Poisson's ratio) by differentiating (2.6) in λ at λ = 0. Stability of the trivial branch We define the critical strain λ crit in terms of the second variation of energy δ 2 E(φ; h, λ) = ∫ C h (W F F (∇y(x; h, λ))∇φ, ∇φ)dx, (2.10) defined on the space of admissible variations V • h = {φ ∈ W 1,∞ (C h ; R 3 ) : φ θ (r, θ, 0) = φ z (r, θ, 0) = φ θ (r, θ, L) = φ z (r, θ, L) = 0}. By density of W 1,∞ (C h ; R 3 ) in W 1,2 (C h ; R 3 ) we extend the space of admissible variations from V • h to its closure V h in W 1,2 : V h = {φ ∈ W 1,2 (C h ; R 3 ) : φ θ (r, θ, 0) = φ z (r, θ, 0) = φ θ (r, θ, L) = φ z (r, θ, L) = 0}. (2.11) The critical strain λ crit can be defined as follows: λ crit (h) = inf{λ > 0 : δ 2 E(φ; h, λ) < 0 for some φ ∈ V h }. (2.12) While this definition is unambiguous, it is inconvenient, since the critical strain strongly depends on the choice of the nonlinear energy density function. Instead, we will focus only on the leading order asymptotics of the critical strain, as h → 0. The corresponding buckling mode, to be defined below, will also be understood in an asymptotic sense. Definition 2.3.
We say that a function λ(h) → 0, as h → 0 is a buckling load if lim h→0 λ(h) λ crit (h) = 1. (2.13) A buckling mode is a family of variations φ h ∈ V h \ {0}, such that lim h→0 δ 2 E(φ h ; h, λ crit (h)) λ crit (h) ∂(δ 2 E) ∂λ (φ h ; h, λ crit (h)) = 0. (2.14) Targeting only the leading order asymptotics allows us to determine critical strain and buckling modes from a constitutively linearized second variation [6]: δ 2 E cl (φ; h, λ) = C h { L 0 e(φ), e(φ) + λ σ h , ∇φ T ∇φ }dx, φ ∈ V h ,(2. 15) and σ h is the linear elastic stress σ h (x) = L 0 e(u h (x)). (2.16) Since the first term in (2.15) is always non-negative we define the set A h = φ ∈ V h : σ h , ∇φ T ∇φ < 0 (2.17) of potentially destabilizing variations. The constitutively linearized critical load will then be determined by minimizing the Rayleigh quotient R(h, φ) = − Ω h L 0 e(φ), e(φ) dx Ω h σ h , ∇φ T ∇φ dx . (2.18) The functional R(h, φ) expresses the relative strength of the destabilizing compressive stress, measured by the functional C h (φ) = Ω h σ h , ∇φ T ∇φ dx (2.19) and the reserve of structural stability measured by the functional S h (φ) = Ω h L 0 e(φ), e(φ) dx. (2.20) Definition 2.4. The constitutively linearized buckling load λ cl (h) is defined by λ cl (h) = inf φ∈A h R(h, φ). (2.21) We say that the family of variations {φ h ∈ A h : h ∈ (0, h 0 )} is a constitutively linearized buckling mode if lim h→0 R(h, φ h ) λ cl (h) = 1. (2.22) In [6] we have defined a measure of "slenderness" of the body in terms of the Korn constant K(V h ) = inf φ∈V h e(φ) 2 L 2 (Ω h ) ∇φ 2 L 2 (Ω h ) . (2.23) It is obvious, that if K(V h ) stays uniformly positive, then so does the constitutively linearized second variation δ 2 E cl (φ; h, λ(h)) as a quadratic form on V h , for any λ(h) → 0, as h → 0. Definition 2.5. We say that the body Ω h is slender if lim h→0 K(V h ) = 0. 
(2.24) This notion of slenderness requires not only geometric slenderness of the domain but also traction-dominated boundary conditions conveniently encoded in the subspace V h , satisfying W 1,2 0 (Ω h ; R 3 ) ⊂ V h ⊂ W 1,2 (Ω h ; R 3 ). We can now state sufficient conditions, established in [5], under which the constitutively linearized buckling load and buckling mode, defined in (2.21)-(2.22), verify Definition 2.3. Theorem 2.6. Suppose that the body is slender in the sense of Definition 2.5. Assume that the constitutively linearized critical load λ cl (h), defined in (2.21), satisfies λ cl (h) > 0 for all sufficiently small h and lim h→0 λ cl (h) 2 /K(V h ) = 0. (2.25) Then λ cl (h) is the buckling load and any constitutively linearized buckling mode φ h is a buckling mode in the sense of Definition 2.3. Now we will show that Theorem 2.6 applies to the axially compressed circular cylindrical shells. The asymptotics of the Korn constant K(V h ), as h → 0, was established in [4]. Theorem 2.7. Let V h be given by (2.11). Then, there exist positive constants c(L) < C(L), depending only on L, such that c(L)h 3/2 ≤ K(V h ) ≤ C(L)h 3/2 . (2.26) In order to establish (2.25) we need to estimate λ cl (h). For the trivial branch (2.4) we compute σ h = −Ee z ⊗ e z , (2.27) where E is the Young's modulus. (Indeed, e(u h ) = ν(e r ⊗ e r + e θ ⊗ e θ ) − e z ⊗ e z , and applying the isotropic tensor L 0 to this uniaxial compression with lateral Poisson expansion gives zero radial and hoop stresses and (σ h ) zz = −E.) Hence, C h (φ) = −E(∥φ r,z ∥ 2 + ∥φ z,z ∥ 2 + ∥φ θ,z ∥ 2 ), (2.28) where from now on ∥ · ∥ will always denote the L 2 -norm on C h . In order to estimate λ cl (h) we need to prove Korn-like inequalities for the gradient components φ r,z , φ z,z , and φ θ,z . This was done in [4]. Theorem 2.8. There exists a constant C(L) > 0 depending only on L such that for any φ ∈ V h one has ∥φ θ,z ∥ 2 ≤ (C(L)/√h)∥e(φ)∥ 2 , (2.29) ∥φ r,z ∥ 2 ≤ (C(L)/h)∥e(φ)∥ 2 .
(2.30) Moreover, the powers of h in the inequalities (2.26)-(2.30) are optimal, achieved simultaneously by the ansatz φ h r (r, θ, z) = −W ,ηη (θh −1/4 , z), φ h θ (r, θ, z) = rh 1/4 W ,η (θh −1/4 , z) + (r − 1)h −1/4 W ,ηηη (θh −1/4 , z), φ h z (r, θ, z) = (r − 1)W ,ηηz (θh −1/4 , z) − h 1/2 W ,z (θh −1/4 , z), (2.31) where W (η, z) can be any smooth compactly supported function on (−1, 1) × (0, L), with the understanding that the above formulas hold on a single period θ ∈ [0, 2π], while the function φ h (r, θ, z) is 2π-periodic in θ. Corollary 2.9. ch ≤ λ cl (h) ≤ Ch. (2.32) Proof. This is an immediate consequence of Theorem 2.8. The lower bound follows from inequalities (2.2), (2.29) and (2.30) (and also the obvious inequality ∥φ z,z ∥ ≤ ∥e(φ)∥). The upper bound follows from using a test function (2.31) in the constitutively linearized second variation. Inequalities (2.26) and (2.32) imply that the condition (2.25) in Theorem 2.6 is satisfied for the axially compressed circular cylindrical shell. Buckling equivalence The problem of finding the asymptotic behavior of the critical strain λ crit and the corresponding buckling mode, as h → 0, now reduces to minimization of the Rayleigh quotient (2.18), which is expressed entirely in terms of linear elastic data. Even though this already represents a significant simplification of our problem, its explicit solution is still technically difficult. However, the asymptotic flexibility of the notion of buckling load and buckling mode permits us to replace R(h, φ h ) with an equivalent, but simpler functional. The notion of buckling equivalence was introduced in [6] and developed further in [5]. Here we give the relevant definition and theorems for the sake of completeness. Definition 2.10. Assume that J(h, φ) is a variational functional defined on B h ⊂ A h .
We say that the pair (B h , J(h, φ)) characterizes buckling if the following three conditions are satisfied: (a) Characterization of the buckling load: λ(h) = inf φ∈B h J(h, φ) is a buckling load. (b) Existence of a buckling mode: there exists φ h ∈ B h such that lim h→0 J(h, φ h )/λ(h) = 1. (2.33) (c) Characterization of the buckling mode: if φ h ∈ B h satisfies (2.33) then it is a buckling mode. Definition 2.11. Two pairs (B h , J(h, φ)) and (B h , J (h, φ)) are called buckling equivalent if the pair (B h , J(h, φ)) characterizes buckling if and only if (B h , J (h, φ)) does. Of course this definition becomes meaningful only if the pairs (B h , J(h, φ)) and (B h , J (h, φ)) are related. The following lemma has been proved in [5]. Lemma 2.12. Suppose that the pair (B h , J(h, φ)) characterizes buckling and that B′ h ⊂ B h contains a buckling mode. Then the pair (B′ h , J(h, φ)) characterizes buckling as well. The key tool for simplification of functionals characterizing buckling is the following theorem, [5]. Theorem 2.13 (Buckling equivalence). Suppose that λ(h) is a buckling load in the sense of Definition 2.3. If either lim h→0 λ(h) sup φ∈B h |1/J 1 (h, φ) − 1/J 2 (h, φ)| = 0, (2.34) or lim h→0 (1/λ(h)) sup φ∈B h |J 1 (h, φ) − J 2 (h, φ)| = 0, (2.35) then the pairs (B h , J 1 (h, φ)) and (B h , J 2 (h, φ)) are buckling equivalent in the sense of Definition 2.11. As an application we will simplify the denominator in the functional R(h, φ), given by (2.18). Theorem 2.8 suggests that ∥φ r,z ∥ 2 can be much larger than ∥φ z,z ∥ 2 and ∥φ θ,z ∥ 2 . Hence, we will prove that we can replace C h (φ), given by (2.28), with −E∥φ r,z ∥ 2 . Hence, we define a simplified functional R 1 (h, φ) = ∫ C h ⟨L̄ 0 e(φ), e(φ)⟩dx / ∫ C h |φ r,z | 2 dx, L̄ 0 = L 0 /E. Lemma 2.14. The pair (A h , R 1 (h, φ)) characterizes buckling. Proof. By Theorem 2.8 we have |1/R(h, φ) − 1/R 1 (h, φ)| = (∥φ θ,z ∥ 2 + ∥φ z,z ∥ 2 ) / ∫ C h ⟨L̄ 0 e(φ), e(φ)⟩dx ≤ C/√h for every φ ∈ V h . Condition (2.34) now follows from (2.32). Thus, by Theorem 2.13, the pair (A h , R 1 (h, φ)) characterizes buckling. 3 Rigorous derivation of the classical formula for the buckling load In this section we prove the classical asymptotic formula for the critical strain [11,14] λ crit (h) ∼ h/√(3(1 − ν 2 )).
(3.1) Restriction to a single Fourier mode The goal of this section is to show that even if we shrink the space of admissible variations to the set of single Fourier modes in (θ, z), we still retain the ability to characterize buckling. The first step is to define Fourier modes by constructing an appropriate 2L-periodic extension of φ in the z variable. Since no continuous 2L-periodic extension φ̃ has the property that e(φ̃)(r, θ, −z) = ±e(φ̃)(r, θ, z), we will have to navigate around various sign changes in components of e(φ). We can handle this difficulty if L 0 is isotropic, which we have already assumed. It is easy to check that there are only two possibilities that work: odd extension for φ r , φ θ , even for φ z , and even for φ r , φ θ , odd for φ z . Since φ r is unconstrained at the boundary z = 0, L, only the latter possibility is available to us. Hence, we expand φ r and φ θ in the cosine series in z, while φ z is represented by the sine series: φ r (r, θ, z) = Σ n∈Z Σ ∞ m=0 φ̂ r (r, m, n)e inθ cos(πmz/L), φ θ (r, θ, z) = Σ n∈Z Σ ∞ m=0 φ̂ θ (r, m, n)e inθ cos(πmz/L), φ z (r, θ, z) = Σ n∈Z Σ ∞ m=0 φ̂ z (r, m, n)e inθ sin(πmz/L). (3.2) While functions in V h can be represented by the expansion (3.2), single Fourier modes do not belong to V h . Yet, the convenience of working with such simple test functions outweighs this unfortunate circumstance, and hence, we switch (for the duration of technical calculations) to the space Ṽ h = {φ ∈ W 1,2 (C h ; R 3 ) : φ z (r, θ, 0) = φ z (r, θ, L) = ∫ L 0 φ θ (r, θ, z)dz = 0 ∀(r, θ) ∈ I h × T}. (3.3) We will come back at the very end to the space V h to get the desired result for our original boundary conditions. The space Ṽ h appears in our companion paper [4] as V 3 h , where the inequalities (2.26), (2.29) and (2.30) have been proved for it. As a consequence, the estimates (2.32) hold for λ(h) = inf φ∈Ã h R(h, φ).
(3.4) We conclude that the pair (Ã h , R(h, φ)) characterizes buckling (for the new boundary conditions associated with the space Ṽ h ). In that case the proof of Lemma 2.14 carries over with no change for the space Ṽ h . Hence, the pair (Ã h , R 1 (h, φ)) characterizes buckling as well. We now define the single Fourier mode spaces F(m, n). For any complex-valued function f (r) = (f r (r), f θ (r), f z (r)) and any m ≥ 1, n ≥ 0 we define Φ m,n (f ) = (f r (r) cos(πmz/L), f θ (r) cos(πmz/L), f z (r) sin(πmz/L))e inθ , and F(m, n) = {Re(Φ m,n (f )) : f ∈ C 1 (I h ; C 3 )}, m ≥ 1, n ≥ 0. (3.5) Let Ã h be given by (2.17) with V h replaced by Ṽ h . We define λ 1 (h) = inf φ∈Ã h R 1 (h, φ), λ m,n (h) = inf φ∈F (m,n) R 1 (h, φ). (3.6) Theorem 3.1. (i) λ 1 (h) = inf m≥1,n≥0 λ m,n (h). (3.7) (ii) The infimum in (3.7) is attained at m = m(h) and n = n(h) satisfying m(h) ≤ C(L)/√h, n(h) 2 /m(h) ≤ C(L)/√h (3.8) for some constant C(L) depending only on L. (iii) The pair (F(m(h), n(h)), R 1 (h, φ)) characterizes buckling. Proof. Part (i). Denote α(h) = inf m≥1,n≥0 λ m,n (h). It is clear that λ m,n (h) ≥ λ 1 (h) for any m ≥ 1 and n ≥ 0, since F(m, n) ⊂ Ã h . Therefore, α(h) ≥ λ 1 (h). Let us prove the reverse inequality. By definition of α(h) we have ∫ C h ⟨L̄ 0 e(φ), e(φ)⟩dx ≥ α(h)∥φ r,z ∥ 2 (3.9) for any φ ∈ F(m, n), and any m ≥ 1 and n ≥ 0. Any φ ∈ Ã h can be expanded in the Fourier series in θ and z: φ(r, θ, z) = Σ ∞ m=0 Σ ∞ n=0 φ (m,n) (r, θ, z), where φ (m,n) (r, θ, z) ∈ F(m, n) for all m ≥ 1, n ≥ 0. If L 0 is isotropic, then the sine and cosine series in z do not couple and the Plancherel identity implies that the quadratic form ⟨L̄ 0 e(φ), e(φ)⟩ diagonalizes in Fourier space: ∫ C h ⟨L̄ 0 e(φ), e(φ)⟩dx = Σ ∞ m=0 Σ ∞ n=0 ∫ C h ⟨L̄ 0 e(φ m,n ), e(φ m,n )⟩dx. (3.10) We also have ∥φ r,z ∥ 2 = Σ ∞ m=0 Σ ∞ n=0 ∥φ (m,n) r,z ∥ 2 = Σ ∞ m=1 Σ ∞ n=0 ∥φ (m,n) r,z ∥ 2 . Inequality (3.9) implies that ∫ C h ⟨L̄ 0 e(φ m,n ), e(φ m,n )⟩dx ≥ α(h)∥φ (m,n) r,z ∥ 2 , m ≥ 1, n ≥ 0. (3.11) Summing up, we obtain that ∫ C h ⟨L̄ 0 e(φ), e(φ)⟩dx ≥ Σ ∞ m=1 Σ ∞ n=0 ∫ C h ⟨L̄ 0 e(φ m,n ), e(φ m,n )⟩dx ≥ α(h)∥φ r,z ∥ 2 for every φ ∈ Ã h . It follows that λ 1 (h) ≥ α(h), and Part (i) is proved.
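The diagonalization (3.10) rests on the elementary orthogonality relations, which we spell out here for convenience (our addition):

```latex
\int_0^{2\pi} e^{in\theta}\,e^{-in'\theta}\,d\theta = 2\pi\,\delta_{nn'},
\qquad
\int_0^{L} \cos\frac{\pi m z}{L}\,\cos\frac{\pi m' z}{L}\,dz
= \int_0^{L} \sin\frac{\pi m z}{L}\,\sin\frac{\pi m' z}{L}\,dz
= \frac{L}{2}\,\delta_{mm'},\quad m, m' \ge 1,
```

together with the fact that, for isotropic elasticity, the quadratic form pairs cosine-type strain components only with cosine-type ones (and sine-type with sine-type), so all cross terms between distinct modes integrate to zero.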
To establish Part(ii) we require a new delicate Korn-type inequality, proved in [4]. It is a weighted Korn inequality in Nazarov's terminology [12]. To prove the first estimate in (3.8) we apply inequality (3.12) to φ h and then estimate e(φ h ) via (3.13): m 2 π 2 L 2 φ h r 2 = φ h r,z 2 ≤ ∇φ h 2 ≤ C(L) m 2 h + m √ h φ h r 2 . Hence h + 1 m √ h ≥ c(L) for some constant c(L) > 0, independent of h. Therefore, we obtain a uniform in h ∈ (0, 1) upper bound on m √ h. To estimate n we write n 2 φ h r 2 = φ h r,θ 2 ≤ C 0 ( (∇φ h ) rθ 2 + φ h θ 2 ). By the Poincaré inequality φ h θ 2 ≤ L 2 π 2 φ h θ,z 2 ≤ L 2 π 2 (∇φ h ) θz 2 , and hence n 2 φ h r 2 ≤ C(L) (∇φ h ) 2 . Applying (3.12) again and estimating e(φ h ) via (3.13) we obtain n 2 ≤ C(L) hm 2 + m √ h , from which (3.8) 2 follows via (3.8) 1 . Part (ii) is proved now. Part (iii). Now, let m(h), n(h) be the minimizers in (3.7). It is sufficient to show, due to Lemma 2.12, that F(m(h), n(h)) contains a buckling mode. By definition of the infimum in (3.6), for each h ∈ (0, h 0 ) there exists ψ h ∈ F(m(h), n(h)) ⊂à h such that λ 1 (h) = λ m(h),n(h) (h) ≤ R 1 (h, ψ h ) ≤ λ 1 (h) + ( λ 1 (h)) 2 . Therefore, lim h→0 R 1 (h, ψ h ) λ 1 (h) = 1. Hence, ψ h ∈ F(m(h), n(h)) is a buckling mode, since the pair (à h , R 1 (h, φ)) characterizes buckling. Linearization in r In this section we prove that the buckling load and a buckling mode can be captured by single Fourier harmonics whose θ and z components are linear in r. In fact, we specify an explicit structure for buckling mode candidates. We define the linearization operator as follows: L(φ) = (φ r (r, θ, z), rφ θ (1, θ, z) − (r − 1)φ r,θ (1, θ, z), φ z (1, θ, z) − (r − 1)φ r,z (1, θ, z)). We show now that the buckling mode can be found among the linearized single Fourier modes F lin (m, n) = {L(φ) : φ ∈ F(m, n)}, m ≥ 1, n ≥ 0. (3.14) Lemma 3.3. 
There exists C(L) > 0 depending only on L, so that for every h ∈ (0, 1), every m ≥ 1 and n ≥ 0, satisfying (3.8), and every φ ∈ F(m, n) we have the estimate R 1 (h, L(φ)) ≤ (1 + C(L)h)R 1 (h, φ). (3.15) Proof. We will perform linearization in r sequentially, first in φ θ and then in φ z . Step 1 (Linearization of φ θ .) We introduce the operator of linearization of φ θ component. L θ (φ) = (φ r (r, θ, z), rφ θ (1, θ, z) − (r − 1)φ r,θ (1, θ, z), φ z (r, θ, z)), For φ ∈ F lin (m, n) we define φ (1) = L θ (φ). Then, it is easy to see that φ (1) ∈ F lin (m, n). It is clear that e(φ (1) ) rr = e(φ) rr , e(φ (1) ) zr = e(φ) zr , e(φ (1) ) zz = e(φ) zz , Thus we can estimate: e(φ (1) ) 2 ≤ e(φ) 2 + e(φ (1) ) rθ 2 + 2 e(φ (1) ) θθ − e(φ) θθ 2 + 2 e(φ (1) ) θz − e(φ) θz 2 , Tr (e(φ (1) )) − Tr (e(φ)) = e(φ (1) ) θθ − e(φ) θθ 2 . We also have e(φ (1) ) θθ − e(φ) θθ ≤ 2 φ (1) θ,θ − φ θ,θ , e(φ (1) ) θz − e(φ) θz ≤ φ (1) θ,z − φ θ,z . Therefore, (3.16) and e(φ (1) ) 2 ≤ e(φ) 2 + e(φ (1) ) rθ 2 + 2( φ (1) θ,θ − φ θ,θ 2 + φ (1) θ,z − φ θ,z 2 ), Tr (e(φ (1) )) − Tr (e(φ)) ≤ 2 φ (1) θ,θ − φ θ,θ 2 . (3.17) Recalling that {φ, φ (1) } ⊂ F(m, n), and that the inequalities (3.8) imply that n 2 ≤ C(L)/h, we obtain φ (1) θ,θ − φ θ,θ 2 = n 2 φ (1) θ − φ θ 2 ≤ C(L) h φ (1) θ − φ θ 2 ,(3. 18) due to (3.8). Similarly, φ (1) θ,z − φ θ,z 2 = π 2 m 2 L 2 φ (1) θ − φ θ 2 ≤ C(L) h φ (1) θ − φ θ 2 , (3.19) Observe that e(φ (1) ) rθ 2 = φ r,θ − φ r,θ (1, θ, z) r 2 = n 2 1 r r 1 φ r,r (t, θ, z)dt 2 . Using the inequality (3.20) and the bounds on wave numbers (3.8) we obtain I h r 1 f (t)dt 2 dr ≤ h 2 4 I h f (r) 2 dr,e(φ (1) ) rθ 2 ≤ 2n 2 C(L)h 2 φ r,r 2 ≤ C(L)h e(φ) 2 . We now proceed to estimate φ (1) θ − φ θ . Let w(r, θ, z) = φ θ,r + φ r,θ − φ θ = 2e(φ) rθ − (1 − r)(∇φ) rθ . Therefore, w 2 ≤ 8 e(φ) 2 + h 2 ∇φ 2 ≤ 8 e(φ) 2 + C(L) √ h e(φ) 2 due to Korn's inequality (2.26). Thus, w ≤ C(L) e(φ) . 
We can express φ (1) θ − φ θ in terms of w as follows φ θ − φ (1) θ = r 1 w(t, θ, z)dt + r 1 (φ θ (t, θ, z) − φ θ (1, θ, z))dt − r 1 (φ r,θ (t, θ, z) − φ r,θ (1, θ, z))dt. Hence, by (3.20), we have φ θ − φ (1) θ 2 ≤ 3h 2 4 ( w 2 + φ θ − φ θ (1, θ, z) 2 + φ r,θ − φ r,θ (1, θ, z) 2 ). By the Poincaré inequality followed by the application of Korn's inequality (2.26) we obtain, φ θ − φ θ (1, θ, z) 2 ≤ h 2 φ θ,r 2 ≤ C(L) √ h e(φ) 2 . Similarly, by the Poincaré inequality and (3.8) we estimate φ r,θ − φ r,θ (1, θ, z) 2 = n 2 φ r − φ r (1, θ, z) 2 ≤ C(L)n 2 h 2 φ r,r 2 ≤ C(L)h e(φ) 2 . We conclude that φ θ − φ (1) θ 2 ≤ C(L)h 2 e(φ) 2 . Hence, (3.16) and (3.17) become respectively, e(φ (1) ) 2 ≤ e(φ) 2 (1 + C(L)h),(3.21) and Tr (e(φ (1) )) 2 ≤ Tr (e(φ)) 2 + C(L)h e(φ) 2 . L 0 e(φ (1) ), e(φ (1) ) dx = 1 1 + ν ν 1 − 2ν Tr (e(φ (1) )) 2 + e(φ (1) ) 2 ≤ (1 + C(L)h) C h L 0 e(φ), e(φ) dx. (3.23) Step 2 (Linearization of φ z .) In this step we define φ (2) = L(φ) = L(φ (1) ), and proceed exactly as in Step 1. We observe that e(φ (2) ) rr = e(φ (1) ) rr , e(φ (2) ) rθ = e(φ (1) ) rθ , e(φ (2) ) θθ = e(φ (1) ) θθ , and hence, e(φ (2) ) 2 ≤ e(φ (1) ) 2 + e(φ (2) ) rz 2 + 2 e(φ (2) ) θz − e(φ (1) ) θz 2 + 2 e(φ (2) ) zz − e(φ (1) ) zz 2 , and Tr (e(φ (1) )) − Tr (e(φ (2) )) ≤ 2 φ (1) z,z − φ (2) z,z 2 . (3.24) We also have e(φ (2) ) θz − e(φ (1) ) θz ≤ 2 φ (1) z,θ − φ (2) z,θ , e(φ (2) ) zz − e(φ (1) ) zz ≤ φ (1) z,z − φ (2) z,z . (3.25) For functions {φ (1) , φ (2) } ⊂ F(m, n) we obtain φ (1) z,θ − φ (2) z,θ = n φ (1) z − φ (2) z ≤ C(L) h φ (1) z − φ (2) z , and φ (1) z,z − φ (2) z,z = πm L φ (1) z − φ (2) z ≤ C(L) h φ (1) z − φ (2) z , where the bounds (3.8) on wave numbers have been used. Hence, e(φ (2) ) θz − e(φ (1) ) θz 2 ≤ C(L) h φ (1) z − φ (2) z 2 , e(φ (2) ) zz − e(φ (1) ) zz 2 ≤ C(L) h φ (1) z − φ (2) z 2 . (3.26) For e(φ (2) ) rz we obtain e(φ (2) ) rz 2 = φ r,z − φ r,z (1, θ, z) 2 = π 2 m 2 L 2 φ r − φ r (1, θ, z) 2 = π 2 m 2 L 2 r 1 φ r,r (t, θ, z)dt 2 . 
Applying inequalities (3.20) and (3.8) we obtain e(φ (2) ) rz 2 ≤ C(L)m 2 h 2 φ r,r 2 ≤ C(L)h e(φ (1) ) 2 . Finally, we estimate the norm φ (1) z − φ (2) z . Integrating the identity φ (1) z,r = 2e(φ (1) ) rz − φ(1) r,z we get φ (1) z (r, θ, z) − φ (1) z (1, θ, z) = 2 r 1 e(φ (1) ) rz (t, θ, z)dt − r 1 φ (1) r,z (t, θ, z)dt. Therefore, φ (1) z − φ (2) z = 2 r 1 e(φ (1) ) rz (t, θ, z)dt − r 1 (φ (1) r,z (t, θ, z) − φ (1) r,z (1, θ, z))dt. Applying inequalities (3.20) and (3.8) we get φ (1) z − φ (2) z 2 ≤ h 2 ( e(φ (1) ) 2 + φ (1) r,z (r, θ, z) − φ (1) r,z (1, θ, z) 2 ) = h 2 ( e(φ (1) ) 2 + π 2 m 2 L 2 r 1 φ (1) r,r (t, θ, z)dt 2 ) ≤ h 2 ( e(φ (1) ) 2 + π 2 m 2 h 2 L 2 φ (1) r,r 2 ) ≤ h 2 (1 + π 2 m 2 h 2 L 2 ) e(φ (1) ) 2 ≤ C(L)h 2 e(φ (1) ) 2 . Applying this estimate to (3.26) and (3.24) we obtain e(φ (2) ) θz − e(φ (1) ) θz 2 ≤ C(L)h e(φ (1) ) 2 , e(φ (2) ) zz − e(φ (1) ) zz 2 ≤ C(L)h e(φ (1) ) 2 , and Tr (e(φ (2) )) 2 ≤ Tr (e(φ (1) )) 2 + C(L)h e(φ (1) ) 2 . We conclude that e(φ (2) ) 2 ≤ e(φ (1) ) 2 (1 + C(L)h), Tr (φ (2) ) 2 ≤ Tr (e(φ (1) )) 2 + C(L)h e(φ (1) ) 2 , and hence, by coercivity of L 0 we have C h L 0 e(φ (2) ), e(φ (2) ) dx ≤ (1 + C(L)h) C h L 0 e(φ (1) ), e(φ (1) ) dx. (3.27) Combining (3.23) and (3.27) we obtain (3.15). Lemma 3.3 permits us to look for a buckling mode among those single Fourier modes, whose θ and z components are linear in r. Let C(L) be a constant, whose existence is guaranteed by Lemma 3.3. Let Proof. By Lemma 2.12 it is sufficient to show that F h lin contains a buckling mode. Let (m(h), n(h)) be minimizers in (3.7). Then, according to Theorem 3.1, (m(h), n(h)) ∈ M h and F(m(h), n(h)) contains a buckling mode. Let ψ h ∈ F(m(h), n(h)) be a buckling mode. Let us show that L(ψ h ) ∈ F h lin is also a buckling mode. Indeed, by Lemma 3.3 1 ≤ R 1 (h, L(ψ h )) λ 1 (h) ≤ (1 + C(L)h) R 1 (h, ψ h ) λ 1 (h) . Taking a limit as h → 0 and using the fact that ψ h is a buckling mode, we obtain lim h→0 R 1 (h, L(ψ h )) λ 1 (h) = 1. 
Hence, L(ψ h ) is also a buckling mode, since, by Theorem 3.1, the pair (F(m(h), n(h)), R 1 (h, φ)) characterizes buckling. Simplification via buckling equivalence The linearization Lemma 3.3 allowed us to reduce the set of buckling modes significantly. Yet, even for functions φ ∈ F lin (m, n) the explicit representation of the functional R 1 (h, φ) is extremely messy. This can be dealt with by further simplification of the functional via buckling equivalence that permits us to eliminate lower order terms that do not influence the asymptotic behavior of the functional. Our first step is to simplify the denominator in R 1 (h, φ) by replacing the unknown function f r (r) in φ r = f r (r) cos(mz)e inθ with f r (1). Here, in order to simplify the formulas we use m in place of πm/L. Hence, we define a new simplified functional R 2 (h, φ) = C h L 0 e(φ), e(φ) dx C h |φ r,z (1, θ, z)| 2 dx . Lemma 3.5. The functionals R 1 (h, φ) and R 2 (h, φ) are buckling equivalent. Proof. We observe that |φ r,z (r, θ, z) − φ r,z (1, θ, z)| = m r 1 φ r,r (t, θ, z) . Hence, due to (3.20) φ r,z (r, θ, z) − φ r,z (1, θ, z) ≤ mh e(φ) . Therefore, C h |φ r,z (r, θ, z)| 2 dx − C h |φ r,z (1, θ, z)| 2 dx ≤ mh e(φ) φ r,z ≤ m √ h e(φ) 2 , due to Theorem 2.8. Hence, 1 R 1 (h, φ) − 1 R 2 (h, φ) ≤ Cm √ h, by coercivity of L 0 . For (m, n) ∈ M h we conclude that, due to (2.32) and (3.8), lim h→0 λ(h) 1 R 1 (h, φ) − 1 R 2 (h, φ) = 0. Theorem 2.13 applies and hence the functionals R 1 (h, φ) and R 2 (h, φ) are buckling equivalent. We can also simplify the numerator of R 2 (h, φ) by replacing r with 1 in those places, where it does not affect the asymptotics. The simplification now proceeds at the level of individual components of e(φ). We may, without loss of generality, restrict our attention to φ ∈ F lin (m, n), such that φ r = f r (r) cos(nθ) cos(mz). (3.28) Of course, choosing sin(nθ) instead of cos(nθ) in (3.28) works just as well. 
The choice between sin(nθ) and cos(nθ) in the remaining components becomes uniquely determined by the requirement that every entry in e(φ) must be made up of terms that have the same kind of trigonometric function in nθ. (We have already taken care of the same requirement in z.) Hence, the θ and z components of φ ∈ F lin (m, n) must have the form φ θ = (ra θ + (r − 1)nf r (1)) sin(nθ) cos(mz), φ z = (a z + (r − 1)mf r (1)) cos(nθ) sin(mz), (3.29) where a θ and a z are real scalars that determine the amplitude of the Fourier modes. We compute,                                        e(φ) rr = f r (r) cos(nθ) cos(mz), e(φ) rθ = n(f r (1) − f r (r)) 2r sin(nθ) cos(mz), e(φ) rz = m(f r (1) − f r (r)) 2 cos(nθ) sin(mz), e(φ) θθ = n(ra θ + (r − 1)nf r (1)) + f r (r) r cos(nθ) cos(mz), e(φ) θz = − mr 2 a θ + na z + (r 2 − 1)mnf r (1) 2r sin(nθ) sin(mz), e(φ) rz = m(a z + (r − 1)mf r (1)) cos(nθ) cos(mz). We can now replace e(φ) with a much simpler matrix E(φ), given by                                          E(φ) rr = f r (r) √ r cos(nθ) cos(mz), E(φ) rθ = 0, E(φ) rz = 0, E(φ) θθ = n(ra θ + (r − 1)nf r (1)) + f r (1) √ r cos(nθ) cos(mz), E(φ) θz = − mr 2 a θ + na z + (r 2 − 1)mnf r (1) 2 √ r sin(nθ) sin(mz), E(φ) rz = m(a z + (r − 1)mf r (1)) √ r cos(nθ) cos(mz) Lemma 3.6. The functionals R 2 (h, φ) and R 3 (h, φ) = C h L 0 E(φ), E(φ) dx C h |φ r,z (1, θ, z)| 2 dx (3.30) are buckling equivalent. Proof. Observing that f r (r) − f r (1) = r 1 f r (t)dt, we obtain via (3.20) that e(φ) rθ 2 ≤ Cn 2 h 2 f r 2 ≤ Cn 2 h 2 e(φ) rr 2 . Similarly, e(φ) rz 2 ≤ Cm 2 h 2 e(φ) rr 2 . Hence, for every (m, n) ∈ M h we have e(φ) rθ 2 + e(φ) rz 2 ≤ Ch e(φ) rr 2 . For the components (rr), (θz) and (zz) we have E(φ) rr = e(φ) rr √ r , E(φ) θz = √ re(φ) θz , E(φ) zz = e(φ) zz √ r . Therefore, |E(φ) rr − e(φ) rr | ≤ Ch|e(φ) rr |, |E(φ) θz − e(φ) θz | ≤ Ch|e(φ) θz |, |E(φ) zz − e(φ) zz | ≤ Ch|e(φ) zz |. 
Finally we compute E(φ) θθ − e(φ) θθ = ( √ r − 1)e(φ) θθ − f r (r) − f r (1) √ r cos(nθ) cos(mz), which implies E(φ) θθ − e(φ) θθ ≤ Ch( e(φ) θθ + e(φ) rr ). We conclude that that E(φ) − e(φ) ≤ C √ h e(φ) , and thus C h L 0 E(φ), E(φ) dx − C h L 0 e(φ), e(φ) dx ≤ C √ h e(φ) 2 ≤ C √ h C h L 0 e(φ), e(φ) dx, by coercivity of L 0 . It follows that |R 3 (h, φ) − R 2 (h, φ)| ≤ C √ hR 2 (h, φ) ≤ C √ hR 3 (h, φ) + C √ h|R 3 (h, φ) − R 2 (h, φ)|. Thus, |R 3 (h, φ) − R 2 (h, φ)| ≤ C √ h 1 − C √ h R 3 (h, φ). Dividing this inequality by R 2 (h, φ)R 3 (h, φ) we obtain 1 R 2 (h, φ) − 1 R 3 (h, φ) ≤ C √ h R 2 (h, φ) . Therefore, sup φ∈F h linλ (h) 1 R 2 (h, φ) − 1 R 3 (h, φ) ≤ Cλ(h) √ h inf φ∈F h lin R 2 (h, φ) . It follows that, due to (2.32), lim h→0 sup φ∈F h lin λ(h) 1 R 2 (h, φ) − 1 R 3 (h, φ) = 0. The application of Theorem 2.13 completes the proof. The formula for the buckling load At this point the strategy for finding the asymptotic formula for the buckling load can be stated as follows. We first compute λ 3 (h; m, n) = inf φ∈F lin (m,n) The goal of the section is to prove that R 3 (h, φ),(3.lim h→0 λ 3 (h) λ * (h) = 1, λ * (h) = h 3(1 − ν 2 ) . (3.33) The functional R 3 (h, φ) given by (3.30) will now be analyzed in its explicit form. R 3 (h, φ) = 1 2(ν + 1)hm 2 |f r (1)| 2 I h {(mr 2 a θ + na z + (r 2 − 1)mnf r (1)) 2 + + 2(f r ) 2 + 2(nra θ + (r − 1)n 2 f r (1) + f r (1)) 2 + 2m 2 (a z + (r − 1)mf r (1)) 2 Λ(f r (r) + nra θ + (r − 1)n 2 f r (1) + f r (1) + ma z + (r − 1)m 2 f r (1)) 2 }dr, where Λ = 2ν 1−2ν . We minimize the numerator in f r (r) with prescribed value f r (1). This can be done by minimizing the numerator in f r (r) treating it as a scalar variable for each fixed r: f r (r) = − Λ Λ + 2 p(r), where p(r) = nra θ + (r − 1)n 2 f r (1) + f r (1) + ma z + (r − 1)m 2 f r (1). 
Thus, we reduce the problem of computing λ 3 (h; m, n) to finite-dimensional unconstrained minimization: λ 3 (h; m, n) = min a θ ,az,fr(1) I h { 2Λ Λ+2 p(r) 2 + q(r)}dr 2(ν + 1)hm 2 |f r (1)| 2 , (3.34) where q(r) = (mr 2 a θ + na z + (r 2 − 1)mnf r (1)) 2 + 2(nra θ + (r − 1)n 2 f r (1) + f r (1)) 2 + 2m 2 (a z + (r − 1)mf r (1)) 2 . Since the function to be minimized in (3.34) is homogeneous of degree zero in the vector variable (a θ , a z , f r (1)), we can set f r (1) = 1, without loss of generality. Then, evaluating the integral in r we obtain λ 3 (h; m, n) = min a θ ,az 1 2(ν + 1)m 2 Q (0) m,n (a θ , a z ) + h 2 12 Q (1) m,n (a θ , a z ) + h 4 80 Q (2) m,n (a θ , a z ) , where Q (0) m,n = 2Λ Λ + 2 (1 + na θ + ma z ) 2 + 2(na θ + 1) 2 + 2m 2 a 2 z + (ma θ + na z ) 2 , Q (1) m,n = 2Λ Λ + 2 (na θ + m 2 + n 2 ) 2 + 2n 2 (a θ + n) 2 + 2m 4 + 4m 2 (a θ + n) 2 + 2m(a θ + n)(ma θ + na z ), Q (2) m,n = m 2 (a θ + n) 2 . Let us show that the last term in Q (1) m,n , as well as Q (2) m,n can be discarded. Let Q (1) m,n (a θ ) = 2Λ Λ + 2 (na θ + m 2 + n 2 ) 2 + 2n 2 (a θ + n) 2 + 2m 4 + 4m 2 (a θ + n) 2 be the simplified version of Q (1) m,n . We observe that 2m|(a θ + n)(ma θ + na z )| ≤ hm 2 (a θ + n) 2 + 1 h (ma θ + na z ) 2 ≤ h 4Q (1) m,n + 1 h Q (0) m,n . Therefore, h 4 80 m 2 (a θ + n) 2 + h 2 6 m|(a θ + n)(ma θ + na z )| ≤ (h 2 + h) Q (0) m,n + h 2 12Q (1) m,n . Hence, (1 − h − h 2 ) Q (0) m,n + h 2 12Q (1) m,n ≤ Q (0) m,n + Buckling modes In this section we return to the original boundary conditions and the space V h , defined in (2.11). Let λ 1 (h) = inf φ∈A h R 1 (h, φ). (3.44) Even though, technically speaking, V h is not a subspace ofṼ h , it is helpful to think of it as such. Hence, our next lemma is natural (but not entirely obvious). This is done by repeating the arguments in the proof of the analogous inequality in Theorem 3.1. 
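The finite-dimensional reduction above is easy to probe numerically. The sketch below is our own code, not from the paper: it assembles the quadratic Q⁰ + (h²/12)Q¹ + (h⁴/80)Q² with f_r(1) = 1, minimizes it in (a_θ, a_z) by solving the 2×2 stationarity system in closed form, and checks that for ν = 1/3 the axisymmetric mode n = 0, at its optimal m, reproduces the Koiter load h/√(3(1−ν²)) appearing in (3.33). Here m is treated as a continuous variable rather than πm̄/L.

```python
import math

def numerator(h, m, n, at, az, nu=1.0/3.0):
    """Q0 + h^2/12 * Q1 + h^4/80 * Q2 from the finite-dimensional reduction,
    with f_r(1) = 1 and Lam = 2*nu/(1 - 2*nu)."""
    Lam = 2*nu/(1 - 2*nu)
    k = 2*Lam/(Lam + 2)
    Q0 = k*(1 + n*at + m*az)**2 + 2*(n*at + 1)**2 + 2*m**2*az**2 + (m*at + n*az)**2
    Q1 = (k*(n*at + m**2 + n**2)**2 + 2*n**2*(at + n)**2 + 2*m**4
          + 4*m**2*(at + n)**2 + 2*m*(at + n)*(m*at + n*az))
    Q2 = m**2*(at + n)**2
    return Q0 + h**2/12*Q1 + h**4/80*Q2

def lam3(h, m, n, nu=1.0/3.0):
    """min over (a_theta, a_z) of numerator/(2*(nu+1)*m^2).  The numerator is an
    inhomogeneous quadratic, so we extract its coefficients by evaluation and
    solve the 2x2 linear stationarity system."""
    f = lambda at, az: numerator(h, m, n, at, az, nu)
    c = f(0, 0)
    A11 = (f(1, 0) + f(-1, 0) - 2*c)/2
    A22 = (f(0, 1) + f(0, -1) - 2*c)/2
    A12 = (f(1, 1) - f(1, 0) - f(0, 1) + c)/2
    b1 = (f(1, 0) - f(-1, 0))/2
    b2 = (f(0, 1) - f(0, -1))/2
    det = A11*A22 - A12*A12
    at = -(A22*b1 - A12*b2)/(2*det)
    az = -(A11*b2 - A12*b1)/(2*det)
    return f(at, az)/(2*(nu + 1)*m**2)

h, nu = 1e-3, 1.0/3.0
m_opt = (32.0/3.0)**0.25/h**0.5          # axisymmetric optimum: m^2 = sqrt(32/3)/h
lam_star = h/math.sqrt(3*(1 - nu**2))    # Koiter load h/sqrt(3(1 - nu^2))
print(lam3(h, m_opt, 0.0)/lam_star)      # prints a value equal to 1 up to rounding
```

For n = 0 the a_θ-dependence is a pure positive quadratic, so a_θ = 0, and the closed-form optimum in a_z gives λ₃(h; m, 0) = 1/m² + 3h²m²/32, whose minimum over m is exactly h/√(3(1−ν²)) when ν = 1/3.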
The argument is based on the fact that the 2L-periodic extension of φ ∈ A_h ⊂ V_h, such that φ_r and φ_θ are even and φ_z is odd, is still of class H¹, and the expansion (3.10) is valid. The inequality (3.45) follows from (3.7) and the inequality (3.11), which is valid for each single Fourier mode. In order to prove that the asymptotic formula (3.1) holds for λ₁(h) (and hence for λ_crit(h)) it is sufficient to find a test function φ^h ∈ A_h such that (3.46) holds and that φ^h ∈ A_h is a buckling mode. We construct the buckling mode φ^h ∈ V_h as a 2-term Fourier expansion (3.2). For this purpose we choose m = m(h) → ∞, as h → 0, and n = n(h) to lie on Koiter's circle, and

φ^h_r(r, θ, z) = Σ_{m ∈ {m(h), m(h)+2}} f_r(r, m, n(h)) cos(n(h)θ) cos(m̄z),
φ^h_θ(r, θ, z) = Σ_{m ∈ {m(h), m(h)+2}} f_θ(r, m, n(h)) sin(n(h)θ) cos(m̄z),
φ^h_z(r, θ, z) = Σ_{m ∈ {m(h), m(h)+2}} f_z(r, m, n(h)) cos(n(h)θ) sin(m̄z),   (3.47)

where now, in order to avoid confusion, we distinguish between m ∈ Z and m̄ = πm/L. To ensure that φ^h ∈ V_h we require f_θ(r, m(h)+2, n(h)) = −f_θ(r, m(h), n(h)). A Maple calculation verifies that the test function φ^h satisfies (3.46). Figure 1 shows buckling modes for m(h) = 2λ*(h)^α, α = 1/4, 1/2, 3/4.
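As a side check, the closed-form strain components used throughout this section for displacements of the form (3.28)-(3.29) can be compared against centered finite differences of the linearized cylindrical strain e(φ) = (∇φ + (∇φ)ᵀ)/2. The sketch below is ours, not from the paper: the sample parameters f_r(r) = r², n = 3, m = 2, a_θ = 0.4, a_z = −0.7 and the evaluation point are illustrative choices, and the e(φ)_rr entry is read as f_r′(r) cos(nθ) cos(mz).

```python
import math

# Illustrative parameters (ours, not the paper's): f_r(r) = r^2, so f_r(1) = 1.
n, m, at, az = 3.0, 2.0, 0.4, -0.7
fr = lambda r: r * r
fr1 = fr(1.0)

def phi(r, th, z):
    """Displacement field of the form (3.28)-(3.29)."""
    pr = fr(r) * math.cos(n * th) * math.cos(m * z)
    pt = (r * at + (r - 1.0) * n * fr1) * math.sin(n * th) * math.cos(m * z)
    pz = (az + (r - 1.0) * m * fr1) * math.cos(n * th) * math.sin(m * z)
    return pr, pt, pz

def num_strain(r, th, z, h=1e-5):
    """Linearized strain in cylindrical coordinates via centered differences."""
    d = lambda i, j: (phi(*(p + h * (k == j) for k, p in enumerate((r, th, z))))[i]
                      - phi(*(p - h * (k == j) for k, p in enumerate((r, th, z))))[i]) / (2 * h)
    pr, pt, pz = phi(r, th, z)
    err = d(0, 0)
    ett = d(1, 1) / r + pr / r
    ezz = d(2, 2)
    ert = 0.5 * (d(0, 1) / r + d(1, 0) - pt / r)
    erz = 0.5 * (d(0, 2) + d(2, 0))
    etz = 0.5 * (d(1, 2) + d(2, 1) / r)
    return err, ett, ezz, ert, erz, etz

def closed_strain(r, th, z):
    """Closed forms listed in the text (with f_r'(r) = 2r for this choice)."""
    cc = math.cos(n * th) * math.cos(m * z)
    sc = math.sin(n * th) * math.cos(m * z)
    cs = math.cos(n * th) * math.sin(m * z)
    ss = math.sin(n * th) * math.sin(m * z)
    err = 2.0 * r * cc
    ert = n * (fr1 - fr(r)) / (2.0 * r) * sc
    erz = m * (fr1 - fr(r)) / 2.0 * cs
    ett = (n * (r * at + (r - 1) * n * fr1) + fr(r)) / r * cc
    etz = -(m * r * r * at + n * az + (r * r - 1) * m * n * fr1) / (2.0 * r) * ss
    ezz = m * (az + (r - 1) * m * fr1) * cc
    return err, ett, ezz, ert, erz, etz

point = (1.1, 0.5, 0.3)
diff = max(abs(a - b) for a, b in zip(num_strain(*point), closed_strain(*point)))
print(diff)  # at the finite-difference error level
```

The maximum deviation is at the rounding level of the second-order differences, confirming the component formulas.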
Here then λ(h) is a buckling load in the sense of Definition 2.3. (b) Minimizing property of the buckling mode: If φ h ∈ B h is a buckling mode in the sense of Definition 2.3, then Lemma 2 . 12 . 212Suppose the pair (B h , J(h, φ)) characterizes buckling. Let B h ⊂ B h be such that it contains a buckling mode. Then the pair (B h , J(h, φ)) characterizes buckling 3 . ( iii) Let (m(h), n(h)) be a minimizer in (3.7). Then the pair (F(m(h), n(h)), R 1 (h, φ)) characterizes buckling in the sense of Definition 2.10. Proof. Part (i). Let α(h) = inf m≥1 n≥0λ m,n (h). Theorem 3 . 2 . 32There exists a constant C(L) depending only on L such that (∇φ) 2 ≤ C( φ ∈Ṽ h .Observe that, according to the estimatec(L)h ≤ λ 1 (h) ≤ C(L)h. h = {(m, n) :λ m,n (h) ≤ 2C(L)h}.Let us show that the bounds (3.8) hold for all (m, n) ∈ S h . In particular, the sets S h are finite for all h > 0, and hence, the infimum in (3.7) is attained. Let h > 0 and (m, n) ∈ S h be fixed. Then, by definition of the infimum there exists φ h ∈ F(m, n) such that R 1 (h, φ h ) ≤ 3C(L)h. Hence, there exists a possibly different constant C(L) (not relabeled, but independent of m, n and h), such that e(φ h ) 2 ≤ C(L) by (3.21),(3.22) and the coercivity of L 0 , we haveC h F lin (m, n). Corollary 3. 4 . 4The pair (F h lin , R 1 ) characterizes buckling. 31)and then we find m(h) and n(h) as minimizers in h − h 2 )λ 3 (h; m, n) ≤ λ 3 (h; m, n) ≤ (1 + h + h 2 )λ 3 (h; n (a θ , a z ) in a z we obtain a z = − m(2ν + (ν + 1)na θ ) 2m 2 + (1 − ν)n 2 .(3.37) Lemma 3. 8 . 8Let λ 1 (h) andλ 1 (h) be given by(3.44) and (3.6), respectively, thenλ 1 (h) ≥λ 1 (h).(3.45)Proof. In view of Theorem 3.1 it is sufficient to prove the inequalityλ 1 (h) ≥ inf m≥1 n≥0λm,n (h). Figure 1 :F 1Koiter circle buckling modes corresponding, left to right, to m(h) ∼ h −1/8 , h −1/4 and h −3/8 . 
Poisson's ratio ν = 1/3.The structure of coefficients f (r, m, n) is determined by optimality at each of the two values of m separately, since the expansion (3.10) is valid for φ ∈ V h . In particular, we choosef θ (r, m(h), n(h)) = ra θ (h) + (r − 1)n(h), z (r, m, n, h) = a z (m, n, h) + (r − 1)m, a z (m, n, h) = −m (2ν + (ν + 1)na θ (h)) 2m 2 + (1 − ν)n 2 , F r (r, m, n, h) = 1 − ν(r − 1) 1 − ν (na θ (h) + 1 +ma z (m, n, h)) − ν(r − 1) 2 2(1 − ν) (na θ (h) + n 2 +m 2 ).Then f r (r, m(h), n(h)) = F r (r, m(h), n(h), h), f r (r, m(h) + 2, n(h)) = −F r (r, m(h) + 2, n(h), h), f z (r, m(h), n(h)) = F z (r, m(h), n(h), h), f z (r, m(h) + 2, n(h)) = −F z (r, m(h) + 2, n(h), h). The assumption that the trivial deformation is stress-free is also essential. 2 A deformation y is called a weak local minimizer, if it delivers the smallest value of the energy E(y) among all Lipschitz function satisfying boundary conditions (2.1) that are sufficiently close to y in the W 1,∞ norm. This lemma highlights the fact that Part (b) in Definition 2.10 is designed to capture the buckling mode. We make no attempt to characterize an infinite set of geometrically distinct, yet energetically equivalent buckling modes that exist in our example. Meaning that each component of e(φ) and its trace either changes sign or remains unchanged. (h; m * (h), n * (h)) λ * 3 (h; m * (h), n * (h)), and (3.43) follows.We have now achieved our goal, since (3.33) follows from (3.36) and Lemma 3.7. Acknowledgments. The authors are grateful to Eric Clement and Mark Peletier for their valuable comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 1008092.The minimization in a θ was too tedious to be done by hand. 
Using computer algebra software (Maple), we have obtained the following expression for λ̃₃(h; m, n), where r₁(m, n) is a polynomial in (m, n) of degree 6, r₂(m, n) is a polynomial in (m, n) of degree 8 and r₃(m, n) is a polynomial in (m, n) of degree 4. The minimum was achieved at

a_θ = − [n(n² + (ν + 2)m²) + H s₁(m, n)] / [(m² + n²)² + H s₂(m, n)],

where s₁(m, n) is a polynomial in (m, n) of degree 5, and s₂(m, n) is a polynomial in (m, n) of degree 4. Let us show that the terms r_i(m, n) do not affect the asymptotics of λ̃₃(h). It is clear from the degrees of the polynomials r₂(m, n) and r₃(m, n) that

sup_{m ≥ π/L, n ≥ 0} H r₂(m, n) / (2(m² + n²)⁴) ≤ CH,

and similarly for r₃(m, n), for some constant C independent of m, n, and h. In order to show that we can also eliminate H r₁(m, n) from the numerator of λ̃₃(h; m, n), we observe that for any constant C the corresponding limit vanishes as h → 0.

B. O. Almroth. Postbuckling behaviour of axially compressed circular cylinders. AIAA J., 1:627-633, 1963.
B. Budiansky and J. Hutchinson. A survey of some buckling problems. Technical Report CR-66071, NASA, February 1966.
G. Friesecke, R. D. James, M. G. Mora, and S. Müller. Derivation of nonlinear bending theory for shells from three-dimensional nonlinear elasticity by gamma-convergence. Comptes Rendus Mathematique, 336(8):697-702, 2003.
Y. Grabovsky and D. Harutyunyan. Exact scaling exponents in Korn and Korn-type inequalities for cylindrical shells. Submitted.
Y. Grabovsky and D. Harutyunyan. Scaling instability of the buckling load in axially compressed circular cylindrical shells. Submitted.
Y. Grabovsky and L. Truskinovsky. The flip side of buckling. Cont. Mech. Thermodyn., 19(3-4):211-243, 2007.
J. Horák, G. J. Lord, and M. A. Peletier. Cylinder buckling: the mountain pass as an organizing center. SIAM J. Appl. Math., 66(5):1793-1824 (electronic), 2006.
C. O. Horgan. Korn's inequalities and their applications in continuum mechanics. SIAM Rev., 37(4):491-511, 1995.
W. T. Koiter. On the stability of elastic equilibrium. PhD thesis, Technische Hogeschool (Technological University of Delft), Delft, Holland, 1945.
E. Lancaster, C. Calladine, and S. Palmer. Paradoxical buckling behaviour of a thin cylindrical shell under axial compression. International Journal of Mechanical Sciences, 42(5):843-865, 2000.
R. Lorenz. Die nicht achsensymmetrische Knickung dünnwandiger Hohlzylinder. Physikalische Zeitschrift, 12(7):241-260, 1911.
S. A. Nazarov. Korn inequalities for elastic junctions of massive bodies, thin plates, and rods. Russian Mathematical Surveys, 63(1):35, 2008.
R. C. Tennyson. An experimental investigation of the buckling of circular cylindrical shells in axial compression using the photoelastic technique. Tech. Report 102, University of Toronto, Toronto, ON, Canada, 1964.
S. Timoshenko. Towards the question of deformation and stability of cylindrical shell. Vesti Obshestva Tekhnologii, 21:785-792, 1914.
V. I. Weingarten, E. J. Morgan, and P. Seide. Elastic stability of thin-walled cylindrical and conical shells under axial compression. AIAA J., 3:500-505, 1965.
Y. Youshimura. On the mechanism of buckling of a circular shell under axial compression. Technical Report 1390, National Advisory Committee for Aeronautics, Washington, DC, 1955.
E. Zhu, P. Mandal, and C. Calladine. Buckling of thin cylindrical shells: an attempt to resolve a paradox. International Journal of Mechanical Sciences, 44(8):1583-1601, 2002.
[]
[ "Universal structure of two and three dimensional self-gravitating systems in the quasi-equilibrium state", "Universal structure of two and three dimensional self-gravitating systems in the quasi-equilibrium state" ]
[ "Tohru Tashiro \nDepartment of Physics\nOchanomizu University\n2-1-1 Ohtuka112-8610BunkyoTokyoJapan\n" ]
[ "Department of Physics\nOchanomizu University\n2-1-1 Ohtuka112-8610BunkyoTokyoJapan" ]
[]
We study a universal structure of two and three dimensional self-gravitating systems in the quasiequilibrium state. It is shown numerically that the two dimensional self-gravitating system in the quasi-equilibrium state has the same kind of density profile as the three dimensional one. We develop a phenomenological model to describe this universal structure by using a special Langevin equation with a distinctive random noise to self-gravitating systems. We find that the density profile derived theoretically is consistent well with results of observations and simulations.
10.1103/physreve.93.020103
[ "https://arxiv.org/pdf/1505.02014v1.pdf" ]
18,389,525
1505.02014
e7fc66d53caa197364534ea55fbc045b13b1c397
Universal structure of two and three dimensional self-gravitating systems in the quasi-equilibrium state

Tohru Tashiro
Department of Physics, Ochanomizu University, 2-1-1 Ohtuka, Bunkyo, Tokyo 112-8610, Japan
(Dated: May 11, 2015)

INTRODUCTION

Systems with long range forces exhibit various specific properties which systems with short range forces do not have. A prime example is the presence of another stable state differing from the thermal equilibrium state; in this paper, we call it the quasi-equilibrium state (QES). It has been found numerically that the distribution of the QES depends strongly on initial conditions, whereas the thermal equilibrium is independent of the initial state. The QES of the Hamiltonian mean field model, a toy model of systems with long range interactions, varies from a ferromagnetic state to a paramagnetic (homogeneous) state as the initial total energy increases [1].
We have obtained numerically the result that the number density N of a three dimensional self-gravitating system (3DSGS) in real space can be approximated by the following representation when the initial virial ratio V (≡ 2K/Ω) is 0, where K and Ω are the total kinetic and gravitational energies respectively [2,3]:

N(r) ≃ N(0) / (1 + r²/a²)^κ,   (1)

with κ ∼ 3/2, where r is the distance from the center of the system. The number density of globular clusters, which consist of about hundreds of thousands of stars and are the best example of a 3DSGS, can also be depicted by Eq. (1) with κ ∼ 3/2 [4-6]. This density profile is universal for 3DSGS. Recent observations have made it clear that the universality is not limited to 3DSGS. As will be discussed later, the cylindrically symmetric filamentary structures of molecular clouds can be treated as two dimensional self-gravitating systems (2DSGS). The Herschel Space Observatory revealed 27 filamentary structures in IC 5146, a reflection nebula in Cygnus, and the number density of molecular clouds has a cylindrical symmetry around the axis of each filament [7]. All of the number densities of molecular clouds around the filament axes can be fitted by Eq. (1) with κ from 0.75 to 1.25 [7], where r now means the distance from the axis of the filament. On the other hand, Eq. (1) with κ = 1 was utilized to describe the number density of a filament in Taurus [8]. So 2DSGS has a similar universality, depicted by Eq. (1) with κ ∼ 1. The interaction potential energies of 2D and 3DSGS differ in being bounded or unbounded, as will be explained later; the universality, however, is beyond this difference. Obviously, the universal number density cannot be derived by assuming the system is in isothermal equilibrium: the equilibrium state for a 2DSGS has an exact solution for the number density, represented by Eq. (1) with κ = 2 [9,10].
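Fitting the index κ in Eq. (1) is straightforward because log N is linear in κ once the core radius a is fixed: log N(r) = log N(0) − κ log(1 + r²/a²). A minimal least-squares sketch (our own code on synthetic, noise-free data; the function name is illustrative, not from the observational papers):

```python
import math

def fit_kappa(radii, density, a):
    """Least-squares slope of log(density) versus log(1 + (r/a)^2); returns kappa."""
    xs = [math.log(1.0 + (r/a)**2) for r in radii]
    ys = [math.log(d) for d in density]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x*x for x in xs)
    sxy = sum(x*y for x, y in zip(xs, ys))
    return -(n*sxy - sx*sy)/(n*sxx - sx*sx)

# Synthetic profile with kappa = 3/2 (the globular-cluster value) and a = 1:
radii = [0.1*i for i in range(1, 31)]
dens = [(1.0 + r*r)**-1.5 for r in radii]
print(fit_kappa(radii, dens, 1.0))  # prints 1.5 up to rounding
```

With real data, a would be fitted jointly with κ (e.g. by scanning a), but the linearity in κ at fixed a is what makes profiles of the form (1) convenient.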
On the other hand, the number density of a 3DSGS in thermal equilibrium decays as r⁻² at large radius r [6,11]. Therefore, a new theory explaining the physical mechanism behind the universality is necessary. In this paper, we shall derive the universal number density, Eq. (1), from a phenomenological model particular to gravity by utilizing a special Langevin equation for the QES and the corresponding Fokker-Planck equation in μ dimensional space, where μ is 2 or 3. We treat them in the over-damped limit. Indeed, the normal Langevin equation in this limit and the corresponding Fokker-Planck equation (i.e., the Smoluchowski equation) are appropriate for describing a 3DSGS enclosed in a box near isothermal equilibrium [11], since the Maxwell-Boltzmann distribution is stable for a 3DSGS in the thermal equilibrium state with the total mass and energy fixed, provided the radius of the system is less than a critical value [12,13]. However, it is well known empirically that SGSs without boundary are trapped in another stable state, the QES, so we must modify the Langevin and Fokker-Planck equations to describe this state. In particular, we shall introduce another noise whose physical foundation is quite simple. This paper is organized as follows. Firstly, we review two dimensional gravity and show that a cylindrically symmetric 3DSGS is mathematically equivalent to a 2DSGS; moreover, we exhibit results of N-body simulations of it, which agree with observations of molecular clouds. Next, a Fokker-Planck equation describing the universal number density is derived; in doing so, we start from a Langevin equation specialized for gravity. Lastly, numerical solutions of the Fokker-Planck equation are shown, and we reveal that the results fit well with observations and numerical simulations.
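The r⁻² decay of the 3D isothermal case mentioned above can be reproduced directly. In standard dimensionless form, the isothermal sphere obeys (1/ξ²) d/dξ(ξ² dψ/dξ) = e^(−ψ) with ψ(0) = ψ′(0) = 0 and ρ/ρ(0) = e^(−ψ), whose density approaches the singular solution 2/ξ² at large ξ. A sketch (our own code; fixed-step RK4, with the step size and the small-ξ starting series ψ ≈ ξ²/6 as our numerical choices):

```python
import math

def isothermal_density(xi_max, dxi=0.01):
    """Integrate the isothermal sphere equation with RK4 and return
    rho/rho(0) = exp(-psi) at xi_max."""
    xi = dxi
    psi, dpsi = xi*xi/6.0, xi/3.0      # small-xi series: psi ~ xi^2/6
    def rhs(xi, psi, dpsi):
        return dpsi, math.exp(-psi) - 2.0*dpsi/xi
    while xi < xi_max:
        k1 = rhs(xi, psi, dpsi)
        k2 = rhs(xi + dxi/2, psi + dxi/2*k1[0], dpsi + dxi/2*k1[1])
        k3 = rhs(xi + dxi/2, psi + dxi/2*k2[0], dpsi + dxi/2*k2[1])
        k4 = rhs(xi + dxi, psi + dxi*k3[0], dpsi + dxi*k3[1])
        psi += dxi/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dpsi += dxi/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += dxi
    return math.exp(-psi)

xi = 1000.0
rho = isothermal_density(xi)
print(rho * xi * xi)   # close to 2: the rho ~ 2/xi^2 tail
```

The product ρξ² oscillates around 2 with a slowly decaying amplitude, which is the well-known approach to the singular isothermal solution.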
TWO DIMENSIONAL GRAVITY AND N-BODY SIMULATIONS OF TWO DIMENSIONAL SGS

Two dimensional gravity

The two dimensional gravitational potential per unit mass φ₂(r) generated by a mass source ρ₂(r) satisfies the following Poisson equation in two dimensional space:

(1/r) (∂/∂r)[r ∂φ₂(r)/∂r] + (1/r²) ∂²φ₂(r)/∂θ² = 4πG′ρ₂(r),   (2)

where the subscript 2 denotes the two dimensional space and we write the two dimensional gravitational constant as G′ in order to distinguish it from the ordinary gravitational constant G. If the mass distribution is circularly symmetric, the above equation becomes

(1/r) (d/dr)[r dφ₂(r)/dr] = 4πG′ρ₂(r).   (3)

When the mass source is a point mass m at the origin, ρ₂(r) = mδ(r)/(2πr), and the Poisson equation becomes

(1/r) (d/dr)[r dφ₂(r)/dr] = 2G′m δ(r)/r.   (4)

Therefore, the potential is

φ₂(r) = 2G′m ln r + const.   (5)

Hence the interaction potential of a 2DSGS keeps particles bound (it grows logarithmically with separation), whereas that of a 3DSGS is proportional to −1/r and remains bounded there. Finally, we briefly explain why cylindrically symmetric filaments of molecular clouds can be regarded as 2DSGS: in terms of the mass density of molecular clouds ρ_mc(r) and the gravitational potential per unit mass φ_mc(r), where r is the distance from the axis, the Poisson equation becomes

(1/r) (d/dr)[r dφ_mc(r)/dr] = 4πG ρ_mc(r),   (6)

which is mathematically equivalent to the Poisson equation (3) for a 2DSGS.

N-body simulations

Because the equivalence is merely formal, we followed numerically the time evolution of a 10⁴-body system in two dimensional space interacting through Eq. (5), in order to investigate the QES of a 2DSGS. A polytrope solution with polytrope index n was adopted as the initial condition. The solution with n = ∞ corresponds to the thermal equilibrium state, so finite n represents the deviation from equilibrium. The solution in three dimensional space is well known [6]. Here, we have extended it to two dimensional space: the distribution function f(r, v) in phase space can be written as

f(r, v) ∝ E^(n−1) (E > 0),  f(r, v) = 0 (E ≤ 0),   (7)
Here, we have extended it to the two dimensional space: The distribution function f (r, v) in phase space can be shown as f (r, v) ∝ E n−1 (E < 0) 0 (E ≥ 0)(7) where E ≡ φ 2 (R) − 1 2 v 2 + φ 2 (r) and R means a radius of the system. Then, we run the N-body simulation by varying an initial virial ratio for 2DSGS V ′ which is different from a virial ratio for 3DSGS (See Refs. [14] and [15]). Note that the system is not enclosed into a circle. Specifically when V ′ = 0, it is found that the number density in QES has a universality and the density around the center of the system can be fitted well by Eq. (1). The results are shown in Fig. 1. It can be seen in Fig. 1(a) that the number density from n = 1 to 10 has the same profile. In addition, figure 1(b) denotes that κ which is the index in Eq. (1) ranges from 0.8 to 1.1. These results have a good consistency with the observations of molecular clouds [7,8]. FOKKER-PLANCK MODEL We have made sure of a fact that the number density of 2DSGS without boundary in QES can be represented well by Eq. (1) with κ ∼ 1. On the other hand, the number density of most globular clusters is well-known to be fitted by Eq. (1) with κ ∼ 3/2 [4][5][6]. Furthermore, we have reported that the same density profile is obtained through N-body simulations of 3DSGS [2,3]. Therefore, we can conclude that the SGSs without boundary have the universal density profile depicted by Eq. (1) in QES. Here, let us derive the density profile uniformly by a special Fokker-Planck equation in the µ dimensional space where µ is 2 or 3 [22]. Before constructing the Fokker-Planck equation, we shall model forces influencing an element of the system. That is, we shall begin by constructing a Langevin equation. 
We assume that a frictional force −mγṙ^μ(t) and a random noise of constant intensity, √(2D) ξ^μ(t), act on an element of the system; both are essential for a many-body system to reach the thermal equilibrium state [11]. Here m is the mass of the element, r^μ is its position vector, and ξ^μ(t) is a Gaussian white noise [23]; the index indicates that the vector lives in μ dimensional space. Postulating the system to be circularly (μ = 2) or spherically symmetric (μ = 3), one sees that the element is also influenced by a mean gravitational force −F^μ(r) along the radial direction of the system, obtained by differentiating mφ^μ(r): −F^μ(r) = −m ∂_r φ^μ(r), where φ^μ(r) is the mean gravitational potential in μ dimensional space and r ≡ |r^μ| (the index μ is omitted for simplicity). However, this is only the mean gravity. It is natural to consider that the element is actually influenced by a gravity fluctuating around this mean value: the number density producing φ^μ(r) through the Poisson equation is the mean, and the actual distribution of elements must fluctuate around it. This means that another noise, one which prevents the system from reaching the thermal equilibrium state, is added to the normal Langevin equation, so that the system goes to another stable state, i.e. the QES. We can therefore regard this noise as distinctive to SGS. Assuming its intensity to be constant, we obtain the following Langevin equation:

m r̈^μ(t) + mγ ṙ^μ(t) = −F^μ(r) [1 + √(2ε) η(t)] e^μ_r + √(2D) ξ^μ(t),   (8)

where e^μ_r is the unit vector along the radial direction in μ dimensional space. In the over-damped limit, equation (8) becomes
In the over-damped limit, equa-tion (8) becomes mγṙ µ (t) = −F µ (r) 1 + √ 2ǫη(t) e µ r − ∂ ∂r ǫ 2mγ F µ (r) 2 e µ r + √ 2Dξ µ (t) .(9) The second term on the right hand side of the above equation is a correction term in order to regard products as the Storatonovich product [18]D The corresponding Fokker-Planck equation is given by ∂ ∂t P µ (r, t) = D (mγ) 2 ∂ 2 ∂r 2 + µ − 1 r ∂ ∂r P µ (r, t) + 1 mγ 1 r ∂ ∂r rF µ (r)P µ (r, t) + ǫ (mγ) 2 ∂ 2 ∂r 2 + µ − 1 r ∂ ∂r + µ − 1 r µ−1 ∂ ∂r r µ−2 × F µ (r) 2 P µ (r, t) .(10) We are treating the system as a circular or a spherical symmetric one including N elements. So, PDF P µ is a function of the distance from the origin r. Note that a relation among PDF P µ , the number density N µ and the mass density ρ µ is as follows: P µ = N µ /N = ρ µ /(mN ). Let us use P µ (r, t) = J µ (r)P µ (r, t) instead of P µ , where J µ means Jacobian determinant, J 2 = 2πr and J 3 = 4πr 2 : It can be depicting by J µ = 2 µ−1 πr µ−1 . In doing so, we can obtain ∂ ∂t P µ (r, t) = D (mγ) 2 ∂ 2 ∂r 2 − ∂ ∂r µ − 1 r P µ (r, t) + 1 mγ ∂ ∂r F µ (r)P µ (r, t) + ǫ (mγ) 2 ∂ 2 ∂r 2 F µ (r) 2 P µ (r, t) .(11) When the system reaches QES, ∂ t P µ qe (r) = 0. Here, we integrate the Fokker-Planck equation by r. Owing to use P µ qe , the integration becomes easier: D (mγ) 2 + ǫF µ (r) 2 (mγ) 2 P µ qe ′ (r) − D (mγ) 2 µ − 1 r − 2ǫF µ (r)F µ′ (r) (mγ) 2 − F µ (r) mγ P µ qe (r) = const.(12) Since P qe (0) and P ′ qe (0) are bounded, P µ qe (0) = P µ qe ′ (0) = 0. Therefore, the constant of the right hand side of Eq. 
(12) becomes 0:

P̄^μ_qe′(r) = − [ r mγ F^μ(r) + 2εr F^μ(r) F^μ′(r) − (μ−1)D ] / [ r {D + εF^μ(r)²} ] P̄^μ_qe(r).   (13)

In terms of the number density in the QES, N^μ_qe (= N P̄^μ_qe/J^μ), we obtain

N^μ_qe′(r) = − [ r mγ F^μ(r) + 2εr F^μ(r) F^μ′(r) + (μ−1)ε F^μ(r)² ] / [ r {D + εF^μ(r)²} ] N^μ_qe(r).   (14)

By substituting the gravitational potential per unit mass φ^μ(r) (= (1/m) ∫ dr F^μ(r)) into the Poisson equation △φ^μ = 4πG_μ ρ^μ_qe = 4πG_μ m N^μ_qe(r), an equation governing F^μ is obtained:

F^μ′(r) + ((μ−1)/r) F^μ(r) = 4πG_μ m² N^μ_qe(r),   (15)

where ρ^μ_qe = m N^μ_qe = m N P̄^μ_qe/J^μ and G_μ denotes the constants introduced previously, G₂ = G′ and G₃ = G. Here, we nondimensionalize these equations by using the following units of length and force:

[length] = { μ²(μ+2)D / [ 2πG_μ m² N^μ_qe(0) { μ² mγ + 8π(μ²+4μ+2) ε G_μ m² N^μ_qe(0) } ] }^(1/2),   (16)

[force] = { 8π μ²(μ+2) D G_μ m² N^μ_qe(0) / [ μ² mγ + 8π(μ²+4μ+2) ε G_μ m² N^μ_qe(0) ] }^(1/2).   (17)

Then, equations (14) and (15) become

N̄^μ_qe′(r̄) = − [ 2μ(μ+2) r̄ F̄^μ(r̄) { 1 + 2μq F̄^μ′(r̄) + μ(μ−1)q F̄^μ(r̄)/r̄ } ] / [ r̄ { μ + 2(μ²+4μ+2)q + 2μ²(μ+2)q F̄^μ(r̄)² } ] N̄^μ_qe(r̄),   (18)

and

F̄^μ′(r̄) + ((μ−1)/r̄) F̄^μ(r̄) = N̄^μ_qe(r̄),   (19)

where q ≡ 4πεG_μ m² N^μ_qe(0)/(μ mγ) and variables with overbars are dimensionless. We solve these equations with the boundary condition N̄^μ_qe(0) = 1.

RESULTS

The numerical solutions for μ = 2 and 3 are shown in Fig. 2 for varying q. The curves with q = 0 in both panels correspond to the thermal equilibrium state. For comparison with observations, 1/(1+r̄²) and 1/(1+r̄²)^(3/2) are also plotted as dashed curves in Fig. 2(a) and (b), respectively; these are typical best-fit curves for the densities of molecular clouds and globular clusters. From Fig. 2(b), one can see that the numerical result of our model with q = 0.01 coincides completely with the typical number density. Fig.
2 (a) also shows good agreement between our model with q = 0.56 and observations and numerical simulations at small radius, although the deviation between the two curves grows as $\bar{r}$ increases. We can therefore conclude that the best-fit curves are reproduced by our model with an appropriate choice of q. Finally, we examine the range of the index κ. From Eqs. (18) and (19), the number density around the center of the system in QES can be described by
$$\bar{N}^{\mu}_{\rm qe}(\bar{r})\simeq\frac{1}{(1+\bar{r}^{2})^{\kappa(q)}},\qquad(20)$$
where κ is a function of q:
$$\kappa(q)=\frac{(\mu+2)\left\{1+(\mu+1)q\right\}}{\mu+2(\mu^{2}+4\mu+2)q}.\qquad(21)$$
Since q ≥ 0, this implies the range
$$\frac{(\mu+1)(\mu+2)}{2(\mu^{2}+4\mu+2)}<\kappa\le\frac{\mu+2}{\mu}.\qquad(22)$$
Thus 3/7 < κ ≤ 2 for µ = 2 and 10/23 < κ ≤ 5/3 for µ = 3. Both ranges include the observed indexes. The observed ranges of the index are, however, much narrower, which means that the value of q is restricted. The restricted q can be regarded as a new kind of fluctuation-dissipation relation [19] particular to SGS, because q contains the ratio of the intensity ε of the mean-gravity fluctuation to the friction coefficient γ. This phenomenological approach cannot, of course, answer the question of why a specific value of q is selected.

CONCLUDING REMARKS

In this paper, we showed numerically that 2DSGS without a boundary evolves to a QES whose density profile is described by Eq. (1) with κ ∼ 1. It is well known from observations of globular clusters and from N-body simulations that the QES of 3DSGS without a boundary is described by the same profile; that is, there is a universal QES for these SGSs. As mentioned before, the interaction potential of 2DSGS is qualitatively different from that of 3DSGS, and the isothermal equilibrium of SGS with a boundary also differs between 2D and 3D [10,11,20,21]. Nevertheless, this paper has shown that 2D and 3DSGS without a boundary share the same universality in QES.
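The coupled equations (18) and (19) can be integrated outward from the origin with a standard fourth-order Runge-Kutta stepper; the following is a sketch (not the authors' code), using the small-radius behaviour $\bar{F}\approx\bar{r}/\mu$, which follows from Eq. (19) with $\bar{N}(0)=1$, to start the integration, and comparing the fitted central exponent with κ(q) from Eq. (21):

```python
import math

def rhs(r, N, F, mu, q):
    """Right-hand sides of the nondimensional Eqs. (18) and (19)."""
    Fp = N - (mu - 1) * F / r                      # Eq. (19) solved for F'
    num = 2 * mu * (mu + 2) * (r * F * (1 + 2 * mu * q * Fp)
                               + mu * (mu - 1) * q * F**2)
    den = r * (mu + 2 * (mu**2 + 4 * mu + 2) * q
               + 2 * mu**2 * (mu + 2) * q * F**2)
    Np = -num / den * N                            # Eq. (18)
    return Np, Fp

def integrate(mu, q, r_max=0.1, h=1e-4):
    """RK4 integration from near the origin with N(0) = 1 and F ~ r/mu."""
    r, N, F = 1e-6, 1.0, 1e-6 / mu
    while r < r_max:
        k1 = rhs(r, N, F, mu, q)
        k2 = rhs(r + h/2, N + h/2*k1[0], F + h/2*k1[1], mu, q)
        k3 = rhs(r + h/2, N + h/2*k2[0], F + h/2*k2[1], mu, q)
        k4 = rhs(r + h, N + h*k3[0], F + h*k3[1], mu, q)
        N += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        F += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        r += h
    return r, N, F

def kappa(mu, q):
    """Central index kappa(q) of Eq. (21)."""
    return (mu + 2) * (1 + (mu + 1) * q) / (mu + 2 * (mu**2 + 4*mu + 2) * q)

for mu, q in [(3, 0.0), (3, 0.01), (2, 0.56)]:
    r, N, _ = integrate(mu, q)
    # exponent of the (1 + r^2)^(-kappa) profile fitted near the center
    k_fit = -math.log(N) / math.log(1 + r**2)
    print(mu, q, round(k_fit, 3), round(kappa(mu, q), 3))
```

For q = 0 this reduces to the isothermal case, where the fitted exponent should approach (µ+2)/µ, i.e. 5/3 for µ = 3 and 2 for µ = 2, consistent with the upper bound in Eq. (22).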
Furthermore, we developed a model that derives the universal density profile from a special Langevin equation including the distinctive noise of SGS: the solution of the corresponding Fokker-Planck equation in QES is indeed described by Eq. (1). In addition, the index κ in Eq. (1) can be expressed as a function of the intensity of this particular noise, the friction coefficient, and other quantities, which allowed us to derive the range of κ analytically for each dimension, in good agreement with observations and simulations. A theory justifying the frictional force and the random noise is indispensable for constructing the Langevin equation. As mentioned before, such theories exist for 3DSGS [16,17] but not for 2DSGS, and must therefore be developed.

FIG. 1: (color online) (a) Number densities at QES for several initial polytrope indexes, plotted as open circles. The curve passing through the circles is the fitting function, Eq. (1). For clarity, each density is shifted downward by two decades. (b) The optimum values of κ in Eq. (1) for fitting the number densities versus the initial polytrope index n.

FIG. 2: (color online) Numerical solutions of Eqs. (18) and (19) for µ = 2 (a) and µ = 3 (b). From left to right in (a), q increases from 0 to 0.28 in steps of 0.07; in (b), q increases from 0 to 0.05 in steps of 0.01. The (red) dashed curves in (a) and (b) show $1/(1+\bar{r}^{2})$ and $1/(1+\bar{r}^{2})^{3/2}$, respectively.

ACKNOWLEDGMENTS

We would like to thank Prof. Takayuki Tatekawa for N-body simulations and the members of the astrophysics laboratory at Ochanomizu University for extensive discussions. This work was supported by JSPS Grant-in-Aid for Challenging Exploratory Research 24654120.

[1] Y. Levin, R. Pakter, F. B. Rizzato, T. N. Teles and F. P. C. Benetti, Phys. Rep. 535, 1 (2014).
[2] T. Tashiro and T. Tatekawa, J. Phys. Soc. Jpn. 79, 063001 (2010).
[3] T. Tashiro and T. Tatekawa, in Numerical Simulations of Physical and Engineering Processes, INTECH, Croatia, 2011, p. 301.
[4] C. J. Peterson and I. R. King, Astron. J. 80, 427 (1975).
[5] S. C. Trager, I. R. King and S. Djorgovski, Astron. J. 109, 218 (1995).
[6] J. Binney and S. Tremaine, Galactic Dynamics (Second Edition), Princeton University Press, Princeton, NJ, 2008.
[7] D. Arzoumanian et al., A&A 529, L6 (2011).
[8] B. Stepnik et al., A&A 398, 551 (2003).
[9] J. Ostriker, ApJ 140, 1056 (1964).
[10] C. Sire and P.-H. Chavanis, Phys. Rev. E 66, 046133 (2002).
[11] P.-H. Chavanis, C. Rosier and C. Sire, Phys. Rev. E 66, 036105 (2002).
[12] V. A. Antonov, Vest. Leningr. Gos. Univ. 7, 135 (1962).
[13] D. Lynden-Bell and R. Wood, Mon. Not. R. Astron. Soc. 138, 495 (1968).
[14] P.-H. Chavanis and C. Sire, Phys. Rev. E 73, 066103 (2006).
[15] T. N. Teles, Y. Levin, R. Pakter and F. B. Rizzato, J. Stat. Mech. 2010, P05007 (2010).
[16] S. Chandrasekhar, ApJ 97, 255 (1943).
[17] L. Spitzer, Dynamical Evolution of Globular Clusters, Princeton University Press, Princeton, NJ, 1987.
[18] K. Sekimoto, J. Phys. Soc. Jpn. 68, 1448 (1999).
[19] R. Kubo, M. Toda and N. Hashitsume, Statistical Physics II: Nonequilibrium Statistical Mechanics, Springer-Verlag, Berlin, 1991.
[20] M. Kiessling, J. Stat. Phys. 55, 203 (1989).
[21] J. J. Aly, Phys. Rev. E 49, 3771 (1994).

A reason why we adopt the Fokker-Planck-equation approach rather than the Boltzmann equation is as follows: it is found numerically that 2D and 3DSGS reach the universal quasi-equilibrium state specifically when the initial virial ratio is 0. Such initial conditions cause a cold collapse, resulting in a high density around the center of the system. Individual collisions there cannot be distinguished, so the Boltzmann equation, which presumes that collisions are distinguishable, is not appropriate.

For µ = 3, the friction from the SGS and the time at which the orbit of an element becomes stochastic are known as the dynamical friction [16] and the two-body relaxation time [17], respectively. No corresponding theory has been developed for the two-dimensional case.
[ "A new method to calibrate ionospheric pulse dispersion for UHE cosmic ray and neutrino detection using the Lunar Cherenkov technique" ]
[ "R Mcfadden \nASTRON\n7990 AADwingelooThe Netherlands\n", "R Ekers \nCSIRO, Astronomy and Space Science\n1710EppingNSWAustralia\n", "P Roberts \nCSIRO, Astronomy and Space Science\n1710EppingNSWAustralia\n" ]
[ "ASTRON\n7990 AADwingelooThe Netherlands", "CSIRO, Astronomy and Space Science\n1710EppingNSWAustralia", "CSIRO, Astronomy and Space Science\n1710EppingNSWAustralia" ]
UHE particle detection using the lunar Cherenkov technique aims to detect nanosecond pulses of Cherenkov emission which are produced during UHE cosmic ray and neutrino interactions in the Moon's regolith. These pulses will reach Earth-based telescopes dispersed, and therefore reduced in amplitude, due to their propagation through the Earth's ionosphere. To maximise the received signal-to-noise ratio and subsequent chances of pulse detection, ionospheric dispersion must therefore be corrected, and since the high time resolution would require excessive data storage this correction must be made in real time. This requires an accurate knowledge of the dispersion characteristic, which is parameterised by the instantaneous Total Electron Content (TEC) of the ionosphere. A new method to calibrate the dispersive effect of the ionosphere on lunar Cherenkov pulses has been developed for the LUNASKA lunar Cherenkov experiments. This method exploits radial symmetries in the distribution of the Moon's polarised emission to make Faraday rotation measurements in the visibility domain of synthesis array data (i.e. instantaneously). Faraday rotation measurements are then combined with geomagnetic field models to estimate the ionospheric TEC.
10.1016/j.nima.2010.10.126
[ "https://arxiv.org/pdf/1102.5770v1.pdf" ]
119,273,273
1102.5770
050e8ad954f8dcec885ccb10ec4525c8d7fd52f8
A new method to calibrate ionospheric pulse dispersion for UHE cosmic ray and neutrino detection using the Lunar Cherenkov technique

28 Feb 2011

R. McFadden, ASTRON, 7990 AA Dwingeloo, The Netherlands
R. Ekers, CSIRO Astronomy and Space Science, Epping, NSW 1710, Australia
P. Roberts, CSIRO Astronomy and Space Science, Epping, NSW 1710, Australia

Keywords: UHE neutrino detection; Lunar Cherenkov technique; Detectors - telescopes; Ionosphere; Lunar polarisation

UHE particle detection using the lunar Cherenkov technique aims to detect nanosecond pulses of Cherenkov emission which are produced during UHE cosmic ray and neutrino interactions in the Moon's regolith. These pulses will reach Earth-based telescopes dispersed, and therefore reduced in amplitude, due to their propagation through the Earth's ionosphere. To maximise the received signal-to-noise ratio and subsequent chances of pulse detection, ionospheric dispersion must therefore be corrected, and since the high time resolution would require excessive data storage this correction must be made in real time. This requires an accurate knowledge of the dispersion characteristic, which is parameterised by the instantaneous Total Electron Content (TEC) of the ionosphere. A new method to calibrate the dispersive effect of the ionosphere on lunar Cherenkov pulses has been developed for the LUNASKA lunar Cherenkov experiments. This method exploits radial symmetries in the distribution of the Moon's polarised emission to make Faraday rotation measurements in the visibility domain of synthesis array data (i.e. instantaneously). Faraday rotation measurements are then combined with geomagnetic field models to estimate the ionospheric TEC.
This method of ionospheric calibration is particularly attractive for the lunar Cherenkov technique as it may be used in real time to estimate the ionospheric TEC along a line-of-sight to the Moon and using the same radio telescope.

Introduction

The lunar Cherenkov technique [1] provides a promising method of UHE neutrino detection since it utilises the lunar regolith as a detector, which has a far greater volume than current ground-based detectors. The technique makes use of Earth-based radio telescopes to detect the coherent Cherenkov radiation emitted when a UHE neutrino interacts in the outer layers of the Moon. It was first applied by Hankins, Ekers and O'Sullivan using the 64-m Parkes radio telescope [2], and significant limits have already been placed on the UHE neutrino flux by several collaborations [2-6]. Electromagnetic pulses originating in the lunar surface will be dispersed when they arrive at Earth-based receivers due to propagation through the ionosphere, which introduces a frequency-dependent time delay. This dispersion reduces the peak amplitude of a pulse; however, dedispersion techniques can be used to recover the full pulse amplitude and consequently increase the chances of detection. Accurate dedispersion requires an understanding of the ionospheric dispersion characteristic and its effect on radio-wave propagation.

Effects of Ionospheric Dispersion

The ionosphere is a weakly ionized plasma formed by ultraviolet ionizing radiation from the sun. Due to this relationship with the sun, the ionosphere's electron density experiences a strong diurnal cycle and also depends on the season of the year, the current phase of the 11-year solar cycle and the geometric latitude of observation.
The differential additive delay caused by pulse dispersion is parameterised by the ionospheric TEC (see Equation 1):

∆t = 0.00134 × STEC × (ν_lo^−2 − ν_hi^−2),  (1)

where ∆t is the duration of the dispersed pulse in seconds, STEC is the Slant Total Electron Content in electrons per cm², and ν_lo and ν_hi are the receiver band edges in Hz. Cherenkov emission produces a sub-nanosecond pulse, and detection therefore requires gigahertz bandwidths to achieve the high time resolution needed to optimise the signal to noise. Due to excessive data storage requirements, the only way to exploit such high data rates is to implement real-time dedispersion and detection algorithms and to store potential events at full bandwidth for later processing. This requires an accurate knowledge of the real-time ionospheric TEC. Ionospheric dispersion also offers some potential experimental advantages, particularly for single-dish experiments which cannot use array timing information to discriminate against RFI. A lunar pulse will travel a one-way path through the ionosphere and be dispersed according to the current ionospheric TEC. Conversely, terrestrial RFI will not be dispersed at all, and any Moon-bounce RFI will travel a two-way path through the ionosphere and be dispersed according to twice the current ionospheric TEC. Therefore, performing real-time ionospheric dedispersion will optimise detection for lunar pulses and provide discrimination against RFI. Dispersion may also be seen to offer an increase in dynamic range: if triggering is performed on a dedispersed data stream while the raw data is buffered, any pulse clipping that occurs in the triggering hardware can be recovered during offline processing by reconstructing the pulse from the raw, undispersed data.

Dedispersion Hardware

Pulse dispersion can be corrected using matched filtering techniques implemented in either analog or digital technology.
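As a quick numerical illustration of Equation 1 (a sketch, not the experiment's code): over the 1.2-1.8 GHz ATCA band, a slant TEC of 10 TEC units (10^13 electrons per cm²) smears a pulse by roughly 5 ns, consistent with the filter design figure quoted below.

```python
def dispersed_duration(stec_cm2, f_lo_hz, f_hi_hz):
    """Differential ionospheric delay across the band, Eq. (1).

    stec_cm2 : slant TEC in electrons per cm^2
    returns  : pulse smearing in seconds
    """
    return 0.00134 * stec_cm2 * (f_lo_hz**-2 - f_hi_hz**-2)

# 10 TEC units = 1e13 electrons/cm^2 over the ATCA 1.2-1.8 GHz band
dt = dispersed_duration(1e13, 1.2e9, 1.8e9)
print(f"{dt * 1e9:.2f} ns")  # -> 5.17 ns
```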
Early LUNASKA experiments made use of the Australia Telescope Compact Array (ATCA), which consists of six 22-m dish antennas. Three antennas were fitted with custom-designed hardware for the neutrino detection experiments, and pulse dedispersion was achieved through innovative new analog dedispersion filters that employ a method of planar microwave filter design based on inverse scattering [7]. As the microwave dedispersion filters have a fixed dedispersion characteristic, an estimate had to be obtained for the TEC which would minimise the errors introduced by temporal ionospheric fluctuations. The ATCA detection experiments were performed in 2007 and 2008, during solar minimum and therefore relatively stable ionospheric conditions. Initial observations were made during the nights of May 5, 6 and 7, 2007; these dates were chosen to ensure that the Moon was at high elevation (particularly during the night-time hours of ionospheric stability) and positioned such that the ATCA would be sensitive to UHE particles from the galactic center. The filter design was based on predictions made using dual-frequency GPS data and assumed a differential delay of 5 ns across the 1.2-1.8 GHz bandwidth. Data available post-experiment revealed that the average differential delay for these nights was actually 4.39 ns, with a standard deviation of 1.52 ns. The ionosphere experiences both temporal and spatial fluctuations in TEC, so some signal loss is expected with a fixed dedispersion filter. A promising digital solution to overcome these losses lies in the use of high-speed Field Programmable Gate Arrays (FPGAs). An FPGA implementation allows the dedispersion characteristic to be tuned in real time to reflect temporal changes in the ionospheric TEC. A fully coherent or predetection dedispersion method was pioneered by Hankins and Rickett [8,9] which completely eliminates the effect of dispersive smearing.
This is achieved by passing the predetected signal through an inverse ionosphere filter, which can be implemented either in the time domain, as an FIR filter, or in the frequency domain. In 2009, the LUNASKA collaboration started a series of UHE neutrino detection experiments using the 64-m Parkes radio telescope. For these experiments, dedispersion was achieved via a suite of FIR filters implemented on a Virtex-4 FPGA. As GPS TEC estimates are currently not available in real time, near-real-time TEC measurements were derived from foF2 ionosonde measurements. Ionosondes probe the peak transmission frequency (foF2) of the F2 layer of the ionosphere, the square of which is related to the ionospheric TEC. A comparison to GPS data available post-experiment revealed that the foF2-derived TEC data consistently underestimated the GPS TEC measurements. This is attributed to the ground-based ionosondes probing mainly the lower ionospheric layers and not properly measuring the TEC contribution from the plasmasphere [10].

Monitoring the Ionospheric TEC

Coherent pulse dedispersion requires an accurate knowledge of the ionospheric dispersion characteristic, which is parameterised by the instantaneous ionospheric TEC. TEC measurements can be derived from dual-frequency GPS signals and are available online from NASA's CDDIS [11]; however, these values are not available in real time. The CDDIS TEC data is sampled at two-hour intervals and is currently published after a delay of at least a few days. Estimates derived from foF2 ionosonde measurements are available hourly from the Australian Ionospheric Prediction Service [12]. However, as discussed, there are known inaccuracies in the derivation of the foF2-based TEC estimates. Both of these products are published as vertical TEC (VTEC) maps, which must be converted to slant TEC (STEC) estimates to obtain the true total electron content through the slant-angle line-of-sight to the Moon.
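The frequency-domain form of the inverse filter can be illustrated with a toy calculation (an illustrative sketch only, not the Parkes FIR implementation; the phase constant 2π·40.3/c follows from the standard ionospheric group delay 40.3·TEC/(c·f²) and is an assumption not stated in the text): each channel of a band-limited impulse acquires a dispersive phase φ(f) ∝ TEC/f, and subtracting the modelled phase restores the coherent sum at a fixed arrival time.

```python
import cmath
import math

# Dispersive ionospheric phase constant: phase = C * TEC / f  [rad],
# with TEC in electrons per m^2 and f in Hz (assumed standard plasma term).
C = 2 * math.pi * 40.3 / 2.998e8

def iono_phase(f_hz, tec_m2):
    """Dispersive ionospheric phase at frequency f."""
    return C * tec_m2 / f_hz

def coherence(phases):
    """|sum exp(i*phi)| / n: equals 1.0 for perfectly aligned channels."""
    s = sum(cmath.exp(1j * p) for p in phases)
    return abs(s) / len(phases)

freqs = [1.2e9 + k * 1e6 for k in range(601)]    # 1.2-1.8 GHz in 1 MHz channels
tec = 1e17                                       # 10 TEC units, in m^-2
disp = [iono_phase(f, tec) for f in freqs]

peak_dispersed = coherence(disp)
peak_dedispersed = coherence([p - iono_phase(f, tec)
                              for p, f in zip(disp, freqs)])
print(peak_dispersed, peak_dedispersed)   # dispersed << dedispersed == 1.0
```

An error in the assumed TEC leaves a residual phase across the band, which is why the experiments need TEC estimates that track the ionosphere in real time.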
To perform this conversion, the ionosphere can be modeled as a Single Layer Model (SLM) [13], which assumes all free electrons are concentrated in an infinitesimally thin shell and removes the need for integration through the ionosphere. Slant and vertical TEC are related via

STEC = F(z) VTEC,  (2)

where F(z) is a slant-angle factor defined as

F(z) = 1/cos(z′)  (3)
     = [1 − ((Re/(Re + H)) sin z)²]^(−1/2),  (4)

Re is the radius of the Earth, z is the zenith angle to the source and H is the height of the idealised layer above the Earth's surface (see Figure 1). The CDDIS also uses an SLM ionosphere for its GPS interpolation algorithms and assumes a mean ionospheric height of 450 km.

A New Method of Ionospheric Calibration

As the solar cycle enters a more active phase, accurate pulse dedispersion is becoming a more important experimental concern for the lunar Cherenkov technique. This requires methods of obtaining more accurate measurements of the ionospheric TEC. A new technique has been formulated to obtain TEC measurements that are both instantaneous and line-of-sight to the Moon. The ionospheric TEC can be deduced from Faraday rotation measurements of a polarised source combined with geomagnetic field models, which are more stable than ionospheric models (the CDDIS [11] states that ionospheric TEC values are accurate to ∼20%, while the IGRF [14] magnetic field values are accurate to better than 0.01%). Lunar thermal emission can be used as the polarised source, since Brewster-angle effects produce a net polarisation excess in the emission from the lunar limb [15]. This provides a method for calibrating the ionosphere directly along the line-of-sight to the Moon and makes the lunar Cherenkov technique extremely attractive for UHE cosmic ray and neutrino astronomy, as it allows the characteristic dispersion to be used as a powerful discriminant against terrestrial RFI whilst removing the need to search in dispersion-space.
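Equations (2)-(4) translate directly into a short helper (a sketch; Re = 6371 km is an assumed mean Earth radius, and H = 450 km is the CDDIS shell height quoted above):

```python
import math

R_E = 6371e3   # assumed mean Earth radius in metres
H = 450e3      # SLM shell height used by the CDDIS, in metres

def slant_factor(zenith_rad, h=H):
    """F(z) of Eq. (4) for a single-layer-model ionosphere."""
    s = (R_E / (R_E + h)) * math.sin(zenith_rad)
    return 1.0 / math.sqrt(1.0 - s * s)

def vtec_to_stec(vtec, zenith_rad):
    """Eq. (2): slant TEC along the line of sight to the source."""
    return slant_factor(zenith_rad) * vtec

print(slant_factor(0.0))                          # -> 1.0 at the zenith
print(round(slant_factor(math.radians(60)), 3))   # -> 1.701
```

At the zenith F(z) = 1, and the factor grows toward the horizon, which is why observations of the Moon at low elevation see a substantially larger electron column than the published VTEC maps suggest.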
The unique constraints of a UHE neutrino detection experiment using the lunar Cherenkov technique conflict with traditional methods of planetary synthesis imaging and polarimetry, which require a complete set of spacings and enough observing time for earth rotation. Therefore, to apply this method of ionospheric calibration to the ATCA detection experiments, innovations in the analysis of lunar polarisation observations were required. In particular, a method of obtaining lunar Faraday rotation estimates in the visibility domain (i.e. without Fourier inversion to the image plane) had to be developed. Working in the visibility domain removes both the imaging requirement of a compact array configuration, which would increase the amount of correlated lunar noise between receivers, and the need for earth rotation, allowing measurements to be obtained in real time. The technique makes use of the angular symmetry in the planetary polarisation distribution. The intrinsic thermal radiation of a planetary object appears increasingly polarised toward the limb when viewed from a distant point such as Earth [15,16]. The polarised emission is radially aligned and is due to the changing angle of the planetary surface toward the limb combined with Brewster-angle effects. The angular symmetry of this distribution can be exploited by an interferometer, so that an angular spatial filtering technique may be used to obtain real-time position angle measurements directly in the visibility domain. The measured position angles are uniquely related to the corresponding uv angle at the time of the observation. Comparison with the expected radial position angles, given the current uv angle of the observation, gives an estimate of the Faraday rotation induced on the Moon's polarised emission.
Faraday rotation estimates can be combined with geomagnetic field models to determine the associated ionospheric TEC and so provide a method of calibrating the current atmospheric effects on potential Cherenkov pulses. Observations of the Moon were taken using the 22-m telescopes of the Australia Telescope Compact Array with a center frequency of 1384 MHz. At this frequency the Moon is in the near field of the array; however, investigation of the Fresnel factor in polar coordinates showed that it has no dependence on the spatial parameter which determines the polarisation distribution of a planetary body. Using the angular spatial filtering technique, position angle estimates were calculated directly in the visibility domain of the lunar observational data. The Faraday rotation estimates were obtained by comparing these angles to the instantaneous uv angle, and the resultant estimates were averaged over small time increments to smooth out noise-like fluctuations. Since the polarised lunar emission received on each baseline varied in intensity over time, there were nulls during which the obtained position angle information was not meaningful. A threshold was applied to remove position angle measurements taken during these periods of low polarised intensity, and baseline averaging was considered necessary as the results on each baseline were slightly different and each was affected differently by intensity nulls. The Faraday rotation estimates were converted to estimates of ionospheric TEC via

Ω = 2.36 × 10⁴ ν⁻² ∫_path N(s) B(s) cos(θ) ds,

where Ω is the rotation angle in radians, ν is the signal frequency in Hz, N is the electron density in m⁻³, B is the geomagnetic field strength in T, θ is the angle between the direction of propagation and the magnetic field vector, and ds is a path element in m.
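Under the same thin-shell approximation used for the SLM, the integral above collapses to Ω ≈ 2.36 × 10⁴ ν⁻² B∥ × TEC, with B∥ the line-of-sight field component at the shell height taken from a geomagnetic model. The inversion is then a one-liner (a sketch with an assumed, illustrative B∥ value of 5 × 10⁻⁵ T, not a value from the experiment):

```python
def faraday_to_tec(omega_rad, freq_hz, b_par_tesla):
    """Invert the thin-shell Faraday rotation relation for TEC (m^-2)."""
    return omega_rad * freq_hz**2 / (2.36e4 * b_par_tesla)

def tec_to_faraday(tec_m2, freq_hz, b_par_tesla):
    """Forward thin-shell relation: rotation angle in radians."""
    return 2.36e4 * b_par_tesla * tec_m2 / freq_hz**2

# 10 TEC units (1e17 m^-2) with an assumed B_parallel of 5e-5 T at 1384 MHz
omega = tec_to_faraday(1e17, 1.384e9, 5e-5)
print(omega)  # only a few hundredths of a radian at L band
print(faraday_to_tec(omega, 1.384e9, 5e-5))  # recovers ~1e17 m^-2
```

The smallness of Ω at 1.4 GHz is why the position-angle measurements had to be averaged and thresholded before inversion.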
To evaluate the effectiveness of this new ionospheric calibration technique, the TEC results were compared against ionospheric TEC estimates derived from dual-frequency GPS data (Figure 2). Slant-angle factors were used to convert the GPS VTEC estimates to STEC toward the Moon for comparison with the ATCA data. Both data sets exhibited a similar general trend of symmetry around the Moon's transit. However, the ATCA data often underestimated the GPS data, particularly around 14:30-17:00 UT, where the STEC estimates may have been influenced by bad data from the shorter baselines or by a TEC contribution from the plasmasphere, which is not in the presence of a magnetic field [10]. These observations were taken when the TEC was very low, so the relative error in the TEC estimates is large.

Conclusions

As the sun enters a more active phase, accurate correction of ionospheric pulse dispersion is becoming a more important experimental concern for UHE neutrino detection using the lunar Cherenkov technique. Hardware dedispersion options rely on the accuracy of real-time ionospheric TEC measurements and, while there are a few options available for obtaining these measurements, they are not currently available in real time nor directly line-of-sight to the Moon. A new ionospheric calibration technique has been developed. This technique uses Faraday rotation measurements of the polarised thermal radio emission from the lunar limb, combined with geomagnetic field models, to obtain estimates of the ionospheric TEC which are both instantaneous and line-of-sight to the Moon. STEC estimates obtained using this technique have been compared to dual-frequency GPS data. Both data sets exhibited similar features which can be attributed to ionospheric events; however, more observations are required to investigate this technique further.

Figure 1: Parameters of the Ionospheric Single Layer Model.
Figure 2: Lunar Faraday rotation estimates converted to (left) ionospheric TEC values and (right) the differential delay across 1.2-1.8 GHz.

Acknowledgements

This research was supported as a Discovery Project by the Australian Research Council. The Compact Array and Parkes Observatory are part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.

[1] R. D. Dagkesamanskii and I. M. Zheleznykh, A radio astronomy method of detecting neutrinos and other superhigh-energy elementary particles, PAZh 50 (1989) 233.
[2] T. H. Hankins, R. D. Ekers and J. D. O'Sullivan, A search for lunar radio Cerenkov emission from high-energy neutrinos, MNRAS 283 (1996) 1027.
[3] P. W. Gorham, C. L. Hebert, K. M. Liewer, C. J. Naudet, D. Saltzberg and D. Williams, Experimental Limit on the Cosmic Diffuse Ultrahigh Energy Neutrino Flux, Phys. Rev. Lett. 93 (2004) 041101, arXiv:astro-ph/0310232.
[4] A. R. Beresnyak, R. D. Dagkesamanskii, I. M. Zheleznykh, A. V. Kovalenko and V. V. Oreshko, Limits on the Flux of Ultrahigh-Energy Neutrinos from Radio Astronomical Observations, Astronomy Reports 49 (2005) 127.
[5] C. W. James, R. J. Protheroe, R. D. Ekers, J. Alvarez-Muñiz, R. A. McFadden, C. J. Phillips, P. Roberts and J. D. Bray, LUNASKA experiment observational limits on UHE neutrinos from Centaurus A and the Galactic Centre, MNRAS (2010), arXiv:0906.3766.
[6] S. Buitink, O. Scholten, J. Bacelar, R. Braun, A. G. de Bruyn, H. Falcke, K. Singh, B. Stappers, R. G. Strom and R. a. Yahyaoui, Constraints on the flux of Ultra-High Energy neutrinos from WSRT observations, arXiv:1004.0274.
[7] P. Roberts and G. Town, Design of microwave filters by inverse scattering, IEEE Transactions on Microwave Theory and Techniques 43 (1995) 739.
[8] T. H. Hankins, Microsecond Intensity Variations in the Radio Emissions from CP 0950, ApJ 169 (1971) 487.
[9] T. H. Hankins and B. J. Rickett, Pulsar signal processing, in Methods in Computational Physics, Volume 14 - Radio astronomy, 1975, p. 55.
[10] J. E. Titheridge, Determination of ionospheric electron content from the Faraday rotation of geostationary satellite signals, Planet. Space Sci. 20 (1972) 353.
[11] NASA, Crustal Dynamics Data Information System, http://cddisa.gsfc.nasa.gov/gnss_datasum.html.
[12] Ionospheric Prediction Service, Australasia Total Electron Content, http://www.ips.gov.au/Satellite/2/1/1l.
[13] S. Todorova, T. Hobiger and H. Schuh, Using the Global Navigation Satellite System and satellite altimetry for combined Global Ionosphere Maps, Advances in Space Research 42 (2008) 727.
[14] British Geological Survey, The International Geomagnetic Reference Field Model, http://www.geomag.bgs.ac.uk/gifs/igrf.html.
[15] C. E. Heiles and F. D. Drake, On the polarization and intensity of thermal radiation from a planetary surface, Icarus 2 (1963) 281.
[16] S. Poppi, E. Carretti, S. Cortiglioni, V. D. Krotikov and E. N. Vinyajkin, The Moon as a calibrator of linearly polarized radio emission for the SPOrt project, in S. Cecchini, S. Cortiglioni, R. Sault and C. Sbarra (Eds.), Astrophysical Polarized Backgrounds, AIP Conference Series 609, 2002, p. 187.
[]
The Knee and the Second Knee of the Cosmic-Ray Energy Spectrum

T. Abu-Zayyad, D. Ivanov, C. C. H. Jui, J. H. Kim, J. N. Matthews, J. D. Smith, S. B. Thomas, G. B. Thomson, Z. Zundel
Department of Physics and Astronomy and High Energy Astrophysics Institute, University of Utah, Salt Lake City, Utah, USA
arXiv:1803.07052

Abstract: The cosmic ray flux measured by the Telescope Array Low Energy Extension (TALE) exhibits three spectral features: the knee, the dip in the 10^16 eV decade, and the second knee. Here the spectrum has been measured for the first time using fluorescence telescopes, which provide a calorimetric, model-independent result. The spectrum appears to be a rigidity-dependent cutoff sequence, where the knee is made by the hydrogen and helium portions of the composition, the dip comes from the reduction in composition from helium to metals, the rise to the second knee occurs due to intermediate range nuclei, and the second knee is the iron knee.
Introduction

The spectrum [1] recently measured by the TALE fluorescence detector of the Telescope Array experiment covers the energy range 10^15.3 eV < E < 10^18.3 eV, and represents the first time that fluorescence telescopes have observed this low in energy. In this energy range there are three spectral features. In the TALE spectrum the knee appears as a broad maximum centered at 10^15.6 eV, there is a broad dip centered at 10^16.2 eV, and the second knee occurs at 10^17.04 eV. The energy scale of TALE is the same as that of the entire TA experiment, and is consistent with the energy of the GZK cutoff [2,3], which is observed at 10^19.75 eV [4]. The energy resolution of the TALE fluorescence detector is about 15% and is constant as a function of energy. This represents the first time that this energy range has been observed calorimetrically, with excellent energy resolution.

The Kascade experiment [5] was the first to see the systematic change in composition as the energy increases above the knee of the spectrum. This observation was made using ground array detectors sensitive to the muonic and electromagnetic components of cosmic ray air showers. Although the specific changes in composition they described were model-dependent, the interpretation was widely accepted as representing a rigidity-dependent cutoff sequence at the end of the galactic cosmic ray spectrum. Their interpretation was that the hydrogen knee was at about 10^15.5 eV. For a rigidity-dependent cutoff sequence, this would put the iron knee at 10^16.9 eV. However the Kascade experiment could not make reliable measurements this high in energy.

Subsequent experiments have seen the three spectral features, with different energy scales. Figure 1a shows the spectra of TALE [1], Telescope Array (TA) [6], HiRes [7], Pierre Auger Observatory [8], IceTop [9], and Kascade-Grande [10]. Some of the differences in the spectra seem to be due to different energy scales of experiments.
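Such an energy-scale adjustment relabels each measured point: under E -> (1 + delta) E the differential flux J = dN/dE transforms as J -> J/(1 + delta), so the counts J dE in each bin are preserved. A minimal sketch of this bookkeeping, using hypothetical flux points rather than data from any of the experiments:

```python
# Sketch: apply a fractional energy-scale shift to a measured spectrum,
# e.g. +10.2% (Auger) or -9.2% (IceTop). Points (E [eV], J) are hypothetical.
def shift_energy_scale(points, delta):
    """Rescale E -> (1 + delta) * E; the differential flux J = dN/dE
    transforms as J / (1 + delta), preserving the counts J*dE per bin."""
    lam = 1.0 + delta
    return [(lam * e, j / lam) for (e, j) in points]

spectrum = [(1.0e17, 1.0e-30), (1.0e18, 1.0e-33)]  # hypothetical points
shifted = shift_energy_scale(spectrum, 0.102)       # raise scale by 10.2%
```

The product E*J is invariant under this relabeling, which is why a pure scale shift slides spectra along a fixed direction on an E^3 J plot.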
Figure 1b shows the same spectra, but with the energy scale of IceTop reduced by 9.2% and the Auger energy scale increased by 10.2%. Here the IceTop and TALE spectra agree up to about 10^17.5 eV, and the TA and Auger spectra agree up to about 10^19.5 eV. TALE, HiRes, and Kascade-Grande agree without energy scale adjustment. Thus there is a broad consensus that there exist features called the knee, the dip in the 10^16 eV decade, and the second knee, and that we have a good idea of the energies at which the features occur.

In what follows we argue that the second knee is the iron knee, and the spectrum is consistent with a rigidity-dependent cutoff sequence. First we argue on general grounds, then we present three models that illustrate fits to the spectrum.

The TALE Experiment

The TALE experiment consists of two sets of detectors of cosmic rays: a fluorescence detector (FD) consisting of 10 telescopes that look high in the sky (at elevations from 31° to 59°), and a surface detector consisting of 103 scintillation counters forming an infill array in front of the FD. The TALE spectrum in reference [1] was based on observations made with the FD alone. Events seen by the FD below about 10^17 eV were dominated by Cherenkov light from cosmic ray shower particles, but above 10^17.5 eV, fluorescence light was the largest contribution to the signal. In the intermediate energy range, a mixture of Cherenkov and fluorescence light pertained. The energy resolution of the TALE FD is about 15%, independent of energy, and the measured spectrum is independent of models.

The Rigidity-dependent Cutoff Sequence

The basic idea of a rigidity-dependent cutoff sequence is that in a cosmic ray accelerator a moving magnetic field accelerates particles. The maximum energy achieved by the accelerator depends upon the strength of the magnetic field, its speed of movement, and the time duration that particles are in contact with it.
In this situation the maximum energy of nuclei will be proportional to their charge; i.e., the cutoff rigidity (energy/charge) is the same for all nuclear species. Thus if the maximum energy of hydrogen nuclei is E, for helium it is 2 × E, for carbon 6 × E, etc. Practically speaking, the sequence ends with iron, at an energy of 26 × E.

If we start with the interpretation of the Kascade experiment as a guide, but using the TALE spectrum energy scale, the second knee can be identified as the iron knee, at 10^17.04 eV. Dividing by 26, the proton maximum of the spectrum would be 10^15.6 eV, and the helium maximum would be at 10^15.9 eV. The broad maximum of the knee is then identified as the result of H and He coming to their maximum energies. The abundance of metals is considerably lower than H and He, so a dip in the spectrum should occur at higher energies. In all cosmic scenarios, the abundance of Li, Be, and B is very low, which enhances the dip. Although the CNO group is more abundant, it is much less so than H and He. Thus, there should be a broad minimum in the spectrum of a rigidity-dependent cutoff sequence, with a rise near the carbon location of 10^16.4 eV. This is what is seen in the TALE spectrum. Due to intermediate weight nuclei the spectrum rises (on an E^3 J plot), then peaks at Fe.

The Extragalactic Contribution

One detail that must be taken into account is the low-energy end of the extragalactic cosmic ray flux. All experiments with fluorescence detectors that can measure the depth of shower maxima indicate that between 10^18.0 and 10^18.5 eV the composition is very light, and probably protonic [11,12,13,14]. If these cosmic rays originated within our galaxy, there would be considerable anisotropy in their arrival directions, but this is not seen in either the northern [15] or southern [16] hemispheres. Hence, one expects these protons to be of extragalactic origin. This is a general result.
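The cutoff arithmetic above can be sketched directly: taking the proton knee at E_p = 10^15.6 eV, the helium, carbon, and iron cutoffs land at the energies quoted in the text. The function and dictionary names below are ours, for illustration only:

```python
# Sketch of the rigidity-dependent cutoff: a nucleus of charge Z cuts off
# at Z * E_p, with the proton cutoff E_p = 10^15.6 eV inferred in the text.
import math

E_P = 10.0 ** 15.6  # proton cutoff energy, eV

def cutoff_energy(z):
    """Maximum acceleration energy (eV) for a nucleus of charge Z."""
    return z * E_P

knees = {"H": cutoff_energy(1), "He": cutoff_energy(2),
         "C": cutoff_energy(6), "Fe": cutoff_energy(26)}
# log10 of the iron cutoff is ~17.0, close to the observed second knee at
# 10^17.04 eV; helium lands at 10^15.9 eV and carbon near 10^16.4 eV.
log_fe = math.log10(knees["Fe"])
```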
Using galactic magnetic field models, one limit [15] has been set that (at the 95% confidence level) less than 1.6% of cosmic rays at these energies are of galactic origin. By extension, an extragalactic flux of light composition must extend down to energies lower than 10^18 eV. One can estimate the contribution of extragalactic protons in the 10^17 eV decade using measurements of Xmax, the depth of shower maximum. The HiRes-MIA [17] and Auger [18] measurements indicate that at 10^17 eV the mean of Xmax is midway between what is expected for hydrogen and iron. Perhaps half of cosmic rays at this energy are extragalactic protons.

Three Specific Models of the 10^15-10^18 eV Decades

Here we wish to present three models of the rigidity-dependent cutoff sequence in comparison with the TALE spectrum: the H4a model by T. Gaisser [19], a model by T. Gaisser, T. Stanev, and S. Tilav (GST) [20], and a model taken from direct measurements of composition in the 10^13 eV decade [21] and extrapolated to higher energies by 2.5 decades. The H4a and GST models include an extragalactic component, but in the direct-measurement model we have supplied an estimate of the extragalactic flux.

The H4a model assigns the features of the spectrum to three populations of cosmic rays, two of galactic origin and the third of extragalactic origin. The first population forms the knee, and the second population makes the second knee. The dip in the 10^16 eV decade seems to be poorly expressed in this model. Figure 2 shows the comparison of the H4a model to the TALE spectrum. Figure 3 shows the comparison of the GST model (which also uses three populations) to the TALE spectrum. The GST model might have a higher energy scale than the TALE data.

In the third model, we attempt to recreate the observed spectrum by extrapolating the measured composition at lower energies and applying a rigidity-dependent cutoff for the termination of the dominant galactic components.
We also assume a very simple phenomenological model for an extragalactic component at higher energies. Hence this is a model of two populations, one galactic and one extragalactic. The known lower-energy data is obtained by extrapolation from the Particle Data Group's compilation given in Figure 29.1 of the Particle Data Book [21]. Assuming the 11 nuclei displayed are the dominant species, we interpolated relative abundances at 10^13 eV. These values (normalized to one) are shown in Table 1. Other species are neglected in our model. In this energy regime, all species shown in Figure 29.1 appear to follow a common power law E^-α with an index of α = 2.8. We assume this value in our model.

We attempt to explain the occurrence of the knee and the second knee as the result of terminations in the acceleration of the 11 nuclei. The flux observed on Earth could be dominated by one local and recent galactic source, or a class of sources. In such a scenario, if protons are accelerated up to a cut-off energy E_p, then the cut-off energies for heavier species should be given by Z E_p. However, an abrupt, step-function cut-off is clearly unphysical. Cut-offs are often modeled as an exponential decay above the break, resulting, for instance, from Bohm diffusion. However, it is possible that there are additional, weaker sources that extend to higher energies still. We therefore model the break by a broken power law where the slope of the spectrum in each case changes from α = 2.8 to a larger (steeper) value β.

The extra-galactic component is assumed to follow a simple power law with a log-exponential cut-off represented by

    log10[J_XG(E)] = B + (1 - e^((c-x)/d)) - γx,    (1)

where x = log10(E), and the -γx term represents the extrapolation of the piece-wise power-law spectrum previously seen below the ankle feature. The low-energy cutoff for the extra-galactic component is assumed to occur at 10^c eV, smoothed by a broadening of the cut-off with width d.
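Equation (1), with the parameter values adopted later (γ = 3.24, c = 16.4, d = 0.55), can be sketched as follows. The exponent is read as (c - x)/d, and the normalization B is not quoted in this excerpt, so a placeholder value is used:

```python
# Sketch of the extra-galactic flux, Eq. (1):
#   log10 J_XG(E) = B + (1 - exp((c - x)/d)) - gamma*x,  x = log10(E/eV).
# gamma, c, d follow Table 2; B is a hypothetical normalization (not quoted).
import math

GAMMA, C_CUT, D_WIDTH = 3.24, 16.4, 0.55
B_NORM = 24.0  # placeholder normalization, not from the paper

def log10_flux_xg(x):
    """x = log10(E/eV); returns log10 of the extra-galactic flux."""
    return B_NORM + (1.0 - math.exp((C_CUT - x) / D_WIDTH)) - GAMMA * x

# Well above 10^c eV the suppression factor approaches 1 and a pure power
# law of index gamma remains; below 10^c eV the flux is strongly cut off.
```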
This form for the cut-off was chosen for its simplicity. Figure 4 shows the TALE energy spectrum overlaid with the simple model described in the previous paragraphs. Table 2 lists the parameter values that combine to give a total flux in good agreement with the shape of the TALE data. The group contributions of H+He, C+O, and Fe are shown separately in addition to the total galactic component.

Conclusions

The TALE spectrum shows three spectral features which resemble the effects of a rigidity-dependent cutoff sequence, which could occur at the high-energy end of the galactic cosmic ray spectrum. The general features of such a sequence are: a broad knee made from H and He, a drop in the spectrum (a dip) caused by the much lower abundance of metals, and a rise to an iron knee marking the very end of the galactic spectrum. Identifying the second knee seen in the TALE spectrum with the iron knee, the energies of the features seen correspond closely with this general picture. One can also compare specific models of the rigidity-dependent sequence to the TALE spectrum. The H4a model uses three populations to form the knee, the second knee, and an extragalactic population, but overshoots the dip. The GST model has a stronger dip. A model using the PDG compendium as a starting point can form an acceptable fit to the spectrum when one adds an extragalactic component. In all these cases we identify the second knee at 10^17.04 eV as the rigidity-dependent cutoff of iron.

Figure 1: (a) Cosmic ray spectrum measured by TALE, TA, Auger, HiRes, IceTop, and Kascade-Grande. The TALE and TA spectrum is obtained by combining the TALE [1] and TA surface detector [6] spectra. (b) The same plot, but with the energy scale of Auger raised by 10.2% and that of IceTop lowered by 9.2%.

Figure 2: Cosmic ray spectrum measured by TALE between 10^15.3 eV and 10^18.3 eV, overlaid with the H4a model described in this section.
Figure 3: Cosmic ray spectrum measured by TALE between 10^15.3 eV and 10^18.3 eV, overlaid with the GST model described in this section.

Figure 4: Cosmic ray spectrum measured by TALE between 10^15.3 eV and 10^18.3 eV, overlaid with the phenomenological model described in this section. The galactic component, extrapolated from Figure 29.1 of the Particle Data Book, is shown by the red dot-dashed curve. The proton acceleration cut-off is placed at 10^15.6 eV. Pre-break and post-break power-law indices of α = 2.80 and β = 4.20 are assumed, respectively, for all species. The extra-galactic component is described by a power law of index γ = 3.24, with a cut-off at 10^16.4 eV and width d = 0.55.

Table 1: Normalized abundances, at 10^13 eV, of the 11 nuclei from the Particle Data Book

  Element     Z    fraction at 10^13 eV
  hydrogen    1    0.3019
  helium      2    0.4104
  carbon      6    0.0388
  oxygen      8    0.0745
  neon        10   0.0153
  magnesium   12   0.0293
  silicon     14   0.0308
  sulfur      16   0.0082
  argon       18   0.0043
  calcium     20   0.0070
  iron        26   0.0800

Table 2: Model parameters used to calculate the TALE spectrum

  symbol   value          explanation
  α        2.80           galactic spectral power index below cut-off energy
  β        4.20           galactic spectral power index above cut-off energy
  E_p      10^15.60 eV    cut-off energy for protons
  γ        3.24           extra-galactic spectral index
  d        0.55           width of extra-galactic cut-off
  c        16.4           log-energy of cut-off of extra-galactic flux

Acknowledgements

The authors wish to thank the members of the University of Utah Cosmic Ray Group for many interesting discussions, and the U.S. National Science Foundation for its awards PHY-0601915, PHY-1404495, PHY-1404502, and PHY-1607727.

References

[1] R. U. Abbasi, et al., The Cosmic-Ray Energy Spectrum between 2 PeV and 2 EeV Observed with the TALE detector in monocular mode, submitted to Astropart. Phys., arXiv:1803.01288.
[2] K. Greisen, End of the cosmic-ray spectrum?, Phys. Rev. Lett. 16 (1966) 748.
[3] G. T. Zatsepin, V. A. Kuzmin, Upper limit of the spectrum of cosmic rays, JETP Lett. 4 (1966) 78-80.
[4] T. Abu-Zayyad, et al., The Cosmic Ray Energy Spectrum Observed with the Surface Detector of the Telescope Array Experiment, Astrophys. J. 768 (2013) L1, arXiv:1205.5067, doi:10.1088/2041-8205/768/1/L1.
[5] T. Antoni, et al., KASCADE measurements of energy spectra for elemental groups of cosmic rays: Results and open problems, Astropart. Phys. 24 (2005) 1-25.
[6] Y. Tsunesada, et al., Energy Spectrum of Ultra-High-Energy Cosmic Rays Measured by The Telescope Array, in: Proceedings of the 35th ICRC, PoS(ICRC2017)535, Busan, S. Korea, 2017.
[7] R. U. Abbasi, et al., Observation of the GZK cutoff by the HiRes experiment, Phys. Rev. Lett. 100 (2008) 101101, arXiv:astro-ph/0703099, doi:10.1103/PhysRevLett.100.101101.
[8] F. Fenu, The cosmic ray energy spectrum measured using the Pierre Auger Observatory, in: The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017), 2017, pp. 9-16. URL: http://inspirehep.net/record/1618413/files/1617990_9-16.pdf
[9] K. Rawlins, Cosmic ray spectrum and composition from three years of IceTop and IceCube, J. Phys. Conf. Ser. 718 (5) (2016) 052033, doi:10.1088/1742-6596/718/5/052033.
[10] W. D. Apel, et al., The spectrum of high-energy cosmic rays measured with KASCADE-Grande, Astropart. Phys. 36 (2012) 183-194, doi:10.1016/j.astropartphys.2012.05.023.
[11] R. U. Abbasi, et al., Indications of Proton-Dominated Cosmic Ray Composition above 1.6 EeV, Phys. Rev. Lett. 104 (2010) 161101.
[12] R. U. Abbasi, et al., Study of Ultra-High Energy Cosmic Ray Composition using Telescope Array's Middle Drum detector and surface array in hybrid mode, Astropart. Phys. 64 (2015) 49-62.
[13] R. U. Abbasi, et al., Depth of Ultra High Energy Cosmic Ray Induced Air Shower Maxima Measured by the Telescope Array Black Rock and Long Ridge FADC Fluorescence Detectors and Surface Array in Hybrid Mode, submitted to Ap. J., arXiv:1801.09784.
[14] A. Aab, et al., Depth of maximum of air-shower profiles at the Pierre Auger Observatory. I. Measurements at energies above 10^17.8 eV, Phys. Rev. D 90 (2014) 122005.
[15] R. U. Abbasi, et al., Search for EeV Protons of Galactic Origin, Astropart. Phys. 86 (2017) 21-26, arXiv:1608.06306, doi:10.1016/j.astropartphys.2016.11.001.
[16] P. Abreu, et al., Constraints on the origin of cosmic rays above 10^18 eV from large-scale anisotropy searches in data of the Pierre Auger Observatory, Ap. J. Lett. 762 (2013) L13.
[17] T. Abu-Zayyad, et al., Measurement of the cosmic ray energy spectrum and composition from 10^17 eV to 10^18.3 eV using a hybrid fluorescence technique, Astrophys. J. 557 (2001) 686-699, arXiv:astro-ph/0010652, doi:10.1086/322240.
[18] H. Bellido, et al., Depth of maximum of air-shower profiles at the Pierre Auger Observatory: Measurements above 10^17.2 eV and Composition Implications, in: Proceedings of the 35th ICRC, PoS(ICRC2017)506, Busan, S. Korea, 2017.
[19] T. K. Gaisser, Spectrum of cosmic-ray nucleons, kaon production, and the atmospheric muon charge ratio, Astropart. Phys. 35 (2012) 801-806, arXiv:1111.6675, doi:10.1016/j.astropartphys.2012.02.010.
[20] T. Gaisser, et al., Cosmic ray energy spectrum from measurements of air showers, Front. Phys. (Beijing) 8 (2013) 748-758.
[21] C. Patrignani, et al., Review of Particle Physics, Chin. Phys. C40 (10) (2016) 100001, doi:10.1088/1674-1137/40/10/100001.
Tuning the effective fine structure constant in graphene: opposing effects of dielectric screening on short- and long-range potential scattering

C. Jang, S. Adam, J.-H. Chen, E. D. Williams, S. Das Sarma, M. S. Fuhrer
Center for Nanophysics and Advanced Materials, Condensed Matter Theory Center, and Materials Research Science and Engineering Center, Department of Physics, University of Maryland, College Park, MD 20742-4111, USA
(Dated: September 8, 2008)
arXiv:0805.3780; doi:10.1103/PhysRevLett.101.146805

Abstract: We reduce the dimensionless interaction strength α in graphene by adding a water overlayer in ultra-high vacuum, thereby increasing dielectric screening. The mobility limited by long-range impurity scattering is increased over 30 percent, due to the background dielectric constant enhancement leading to reduced interaction of electrons with charged impurities. However, the carrier-density-independent conductivity due to short-range impurities is decreased by almost 40 percent, due to reduced screening of the impurity potential by conduction electrons. The minimum conductivity is nearly unchanged, due to canceling contributions from the electron/hole puddle density and long-range impurity mobility. Experimental data are compared with theoretical predictions with excellent agreement.
Most theoretical and experimental work on graphene has focused on its gapless, linear electronic energy dispersion E = ℏv_F k. One important consequence of this linear spectrum is that the dimensionless coupling constant α (or equivalently r_s, defined here as the ratio between the graphene Coulomb potential energy and kinetic energy) is a carrier-density-independent constant [1,2], and as a result the Coulomb potential of charged impurities in graphene is renormalized by screening, but strictly maintains its long-range character. Thus there is a clear dichotomy between long-range and short-range scattering in graphene, with the former giving rise to a conductivity linear [2,3] in carrier density (constant mobility), and the latter having a constant conductivity independent of carrier density. Charged impurity scattering necessarily dominates at low carrier density, and the minimum conductivity at charge neutrality is determined by the charged impurity scattering and the self-consistent electron and hole puddles of the screened impurity potential [3,4,5,6].

Apart from the linear spectrum, an additional striking aspect of graphene, setting it apart from all other two-dimensional electron systems, is that the electrons are confined to a plane of atomic thickness. This fact has a number of ramifications which are only beginning to be explored [7]. One such consequence is that graphene's properties may be tuned enormously by changing the surrounding environment. Here we provide a clear demonstration of this by reducing the dimensionless coupling constant α in graphene by more than 30 percent through the addition of a dielectric layer (ice) on top of the graphene sheet. Upon addition of the ice layer, the mobility limited by long-range scattering by charged impurities increases by 31 percent, while the conductivity limited by short-range scatterers decreases by 38 percent.
The minimum conductivity value remains nearly unchanged. The opposing effects of reducing α on short- and long-range scattering are easily understood theoretically. The major effect on long-range scattering is to reduce the Coulomb interaction of electrons with charged impurities, reducing the scattering [8]. In contrast, the dielectric does not modify the atomic-scale potential of short-range scatterers, and there the leading effect is the reduction of screening by the charge carriers, which increases scattering, resulting in lower high-density conductivity. Such screening of short-range potentials has been predicted theoretically [9], although in other 2D systems this effect is difficult to observe experimentally. The minimum conductivity is nearly unchanged due to competing effects of increased mobility and reduced carrier concentration in electron-hole puddles due to reduced screening [4,10].

Fig. 1 illustrates the effect of the dielectric environment on graphene. For graphene sandwiched between two dielectric slabs with dielectric constants κ_1 and κ_2,

    α = 2e^2 / [(κ_1 + κ_2) ℏ v_F],    (1)

where e is the electronic charge, ℏ is the reduced Planck constant, and v_F is the Fermi velocity, which we take to be 1.1 × 10^6 m/s [11,12,13]. Typically, graphene transport experiments [5,6,11,12] are performed on a SiO_2 substrate with κ_1 ≈ 3.9 and in air/vacuum with κ_2 ≈ 1, making graphene a weakly interacting electron system with α ≈ 0.8 (although very recently work on substrate-free graphene [14] explored the strong coupling regime with α ≈ 2). Here we deposit ice (κ_2 ≈ 3.2 [15]) on graphene on SiO_2, decreasing α from ≈ 0.81 to ≈ 0.56.

Graphene is obtained by mechanical exfoliation of Kish graphite on a SiO_2 (300 nm)/Si substrate [11]. Graphene monolayers are identified from the color contrast in an optical microscope image and confirmed by Raman spectroscopy [16].
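Equation (1) can be evaluated numerically; the sketch below works in SI units via e^2/(4πε_0) and reproduces the quoted values α ≈ 0.81 (SiO_2/vacuum) and α ≈ 0.56 (SiO_2/ice). The function and constant names are ours, not from the paper:

```python
# Sketch: graphene coupling constant alpha = 2 e^2 / ((k1 + k2) hbar vF),
# Eq. (1) in Gaussian units, evaluated in SI via e^2/(4 pi eps0).
import math

E_SQ = (1.602176634e-19) ** 2 / (4 * math.pi * 8.8541878128e-12)  # e^2/(4 pi eps0), J*m
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
V_F = 1.1e6             # Fermi velocity used in the text, m/s

def alpha_graphene(kappa1, kappa2, vf=V_F):
    """Dimensionless coupling for graphene between two dielectric slabs."""
    return 2.0 * E_SQ / ((kappa1 + kappa2) * HBAR * vf)

alpha_vac = alpha_graphene(3.9, 1.0)  # SiO2 below, vacuum above: ~0.81
alpha_ice = alpha_graphene(3.9, 3.2)  # SiO2 below, ice above:    ~0.56
```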
The final device (see Fig. 2 inset) was fabricated by patterning electrodes using electron beam lithography and thermally evaporated Cr/Au, followed by annealing in Ar/H₂ to remove resist residue (see Refs. [6,17] for details). The experiments are performed on a cryostat cold finger placed in an ultra-high-vacuum (UHV) chamber. In order to remove residual adsorbed gases on the device and the substrate, the sample was baked at 430 K overnight in UHV following a vacuum bakeout. The conductivity was measured using a conventional four-probe technique with an ac current of 50 nA at a base pressure (∼ 10⁻¹⁰ torr) and device temperature (∼ 77 K). Deionized nano-pure water was introduced through a leak valve attached to the chamber. The water gas pressure (determined by a residual gas analyzer) was 5 ± 3 × 10⁻⁸ torr. The amount of ice deposited was estimated by assuming a sticking coefficient of unity and the ice Iₕ layer density of 9.54 × 10¹⁴ cm⁻² [18,19].

Fig. 2 shows conductivity as a function of gate voltage for two different sample conditions, pristine graphene and ice-covered graphene. We observe several interesting effects of adding ice: (i) the offset gate voltage at which the conductivity is a minimum, V_g,min, remains unchanged; (ii) the minimum conductivity σ_min value remains unchanged; (iii) the maximum slope of σ(V_g) becomes steeper; and (iv) the curve σ(V_g) in the presence of ice is more non-linear and crosses that of the pristine sample at some large carrier density. All these features can be understood qualitatively from the physical picture described above, and we show below that they are in quantitative agreement with the predictions of Boltzmann transport theory including screening within the Random Phase Approximation (RPA).
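The quantitative comparison below rests on the screening functions F_l(α) and F_s(α) of Eqs. 3 and 4; evaluating them at α ≈ 0.81 (vacuum) and α ≈ 0.56 (ice) reproduces the theoretical ratios quoted later in Table I. A sketch, where the real-valued continuation of arccos[1/(2α)]/√(4α² − 1) to α < 1/2 is our reading of the remark following Eq. 3:

```python
import math

def _g(a):
    # arccos(1/(2a)) / sqrt(4a^2 - 1); for a < 1/2 both numerator and
    # denominator are purely imaginary, so the ratio stays real and positive
    # (as noted in the text). We use the equivalent real expression there.
    if a > 0.5:
        return math.acos(1.0 / (2 * a)) / math.sqrt(4 * a * a - 1)
    return math.acosh(1.0 / (2 * a)) / math.sqrt(1 - 4 * a * a)

def F_long(a):
    """Screening function for charged-impurity (long-range) scattering, Eq. 3."""
    return (math.pi * a**2 + 24 * a**3 * (1 - math.pi * a)
            + 16 * a**3 * (6 * a**2 - 1) * _g(a))

def F_short(a):
    """Screening function for short-range scattering, Eq. 4."""
    return (math.pi / 2 - 32 * a / 3 + 24 * math.pi * a**2
            + 320 * a**3 * (1 - math.pi * a)
            + 256 * a**3 * (5 * a**2 - 1) * _g(a))

alpha_vac, alpha_ice = 0.81, 0.56
print(F_long(alpha_vac) / F_long(alpha_ice))    # ~1.26: predicted mobility gain
print(F_short(alpha_vac) / F_short(alpha_ice))  # ~0.62: predicted drop in sigma_s
```

F_l(0.81)/F_l(0.56) ≈ 1.26 (the predicted long-range mobility gain) and F_s(0.81)/F_s(0.56) ≈ 0.62 (the predicted drop in the short-range conductivity), matching the Table I theory entries.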
In order to interpret the experimental results quantitatively [21], we fit the conductivity data to

σ⁻¹(V_g, α) = (n e μ_l)⁻¹ + (σ_s)⁻¹,    (2)

where n = c_g|V_g − V_g,min|, e is the electric charge, and c_g = 1.15 × 10⁻⁸ F/cm² is the gate capacitance per unit area for the 300 nm thick SiO₂. Since the transport curves are not symmetric about the minimum gate voltage, the fitting is performed separately for positive and negative carrier densities (i.e. electron and hole carriers), excluding data close to the Dirac point conductivity plateau (V_g,min ± 5 V). We report both the symmetric µ_sym (σ_sym) and antisymmetric µ_asym (σ_asym) contributions to the mobility (conductivity). Shown also in Fig. 2 is the result of the fit for pristine graphene and after deposition of 6 monolayers of ice.

Figure 3 shows µ_sym, σ_sym and σ_min as a function of the number of ice layers. The mobility (Fig. 3a) of pristine graphene is 9,000 cm²/Vs, which is typical for clean graphene devices on SiO₂ substrates at low temperature. As the number of water layers increases, the mobility increases, and saturates after about 3 layers of ice to about 12,000 cm²/Vs. In contrast, the conductivity due to short-range scatterers (Fig. 3b) decreases from 280 e²/h to 170 e²/h. The decrease in conductivity due to short-range scatterers shows a similar saturation behavior as the mobility, suggesting they have the same origin [20]. The absence of any sharp change in the conductivity or mobility at very low ice coverage rules out ice itself acting as a significant source of short- or long-range scattering. This is corroborated by the absence of a shift in the gate voltage of the minimum conductivity, consistent with physisorbed ice [18] not donating charge to graphene [4,5,6]. Fig. 3c shows that the minimum conductivity is essentially unchanged during the addition of ice.

We now analyze the experimental results within Boltzmann transport theory. The conductivity of graphene depends strongly on the coupling constant α. For screened long-range impurities within the RPA, we have [4]

σ_l = (2e²/h) (n/n_imp) (1/F_l(α)),
F_l(α) = πα² + 24α³(1 − πα) + 16α³(6α² − 1) arccos[1/(2α)] / √(4α² − 1),    (3)

where, in the last term, for α < 0.5 both arccos[(2α)⁻¹] in the numerator and √(4α² − 1) in the denominator are purely imaginary, so that F_l(α) is real and positive for all α. For screened short-range impurities, we have [22]

σ_s = σ₀ / F_s(α),
F_s(α) = π/2 − 32α/3 + 24πα² + 320α³(1 − πα) + 256α³(5α² − 1) arccos[1/(2α)] / √(4α² − 1),    (4)

where similarly F_s(α) is real and positive. Consistent with the physical picture outlined earlier, in the limit α → 0, σ_l ∼ α⁻², which describes the scaling of the Coulomb scattering matrix element, while for short-range scattering σ_s ≈ const × [1 + (64/(3π))α], where increased screening of the potential by the carriers gives the leading-order increase in conductivity. For the experimental values of α, the full functional forms of F_s and F_l should be used [23]. Dashed lines in Figs. 3a-b show the theoretical expectations for µ_sym and σ_sym for vacuum and ice on graphene, in quantitative agreement with experiment.

Regarding the magnitude of the minimum conductivity, it was recently proposed [4] that one can estimate σ_min by computing the Boltzmann conductivity of the residual density n* that is induced by the charged impurities. This residual density (i.e. the rms density of electron and hole puddles) has been seen directly in scanning probe experiments [24] and in numerical simulations [10]. We therefore use Eq. 3, but replace n with n* = ⟨V²_D⟩/[π(ħv_F)²] (where the angular brackets indicate ensemble averaging over configurations of the disorder potential V_D) to give [4]

σ_min = (2e²/h) (1/F_l(α)) (n*(α)/n_imp),
⟨V²_D⟩ = n_imp (ħ v_F α)² ∫ dq [e^{−qd} / (q ε(q))]²,    (5)

where ε(q) is the RPA dielectric function and d ≈ 1 nm is the typical impurity separation from the graphene sheet. The dominant contribution to both the disorder potential ⟨V²_D⟩ and F_l(α) is the Coulomb matrix element, giving n* ∼ n_imp α² and 1/F_l(α) ∼ 1/α², so that to leading order σ_min is unchanged by dielectric screening [25].

The experimental data also show a mobility asymmetry (between electrons and holes) of about 10 percent. Novikov [26] argued that for Coulomb impurities in graphene such an asymmetry is expected, since electrons are slightly repelled by the negative impurity centers compared to holes, resulting in slightly higher mobility for electrons (since V_g,min > 0, we determine that there are more negatively charged impurity centers; see also Ref. [6]); and that for unscreened Coulomb impurities µ_usc(±V_g) ∼ [C₂α² ± C₃α³ + C₄α⁴ + ⋯]⁻¹. From the magnitude of the asymmetry, we know that C₃α³ ≪ C₂α², but if we further assume that C₄α⁴ ≪ C₃α³ (although, in the current experiment, we cannot extract the value of C₄), then including the effects of screening gives µ_asym ∼ α/F_l(α). In Table I we show all the experimental fit parameters and compare them to theoretical predictions. The quantitative agreement for µ_sym, σ_min and σ_sym is already highlighted in Fig. 3, while we have only qualitative agreement for µ_asym, probably because the condition C₄α⁴ ≪ C₃α³ does not hold in our experiments. There is no theoretical expectation of asymmetry in σ_s; the experimental asymmetry (about 30 percent) could be explained by contact resistance [27], which we estimate to be a 20 percent correction to σ_s for our sample geometry.

In conclusion, we have observed the effect of the dielectric environment on the transport properties of graphene. The experiment highlights the difference between long-range and short-range potential scattering in graphene. The enhanced µ_l (i.e. the slope of σ against density) and reduced σ_s (i.e. the constant conductivity at high density) are attributed to the decreased interaction between charged carriers and impurities and the decreased screening by charge carriers, respectively, upon an increase in background dielectric constant with ice deposition in UHV. These variations quantitatively agree with theoretical expectations for the dependence of electron scattering on graphene's "fine structure constant" within the RPA approximation. This detailed knowledge of the scattering mechanisms in graphene is essential for the design of any useful graphene device; for example, use of a high-κ gate dielectric will increase the transconductance of graphene at the expense of linearity, an important consideration for analog applications. As demonstrated here, dielectric deposition only improved mobility by 30 percent; however, the use of high-κ dielectric overlayers could significantly enhance this result.

We thank E. Hwang and E. Rossi for fruitful discussions. This work is supported by US ONR, NRI-SWAN and NSF-UMD-MRSEC grant DMR 05-20471.

FIG. 1: Schematic illustrating dielectric screening in graphene. The dielectric environment controls the interaction strength parameterized by the coupling constant α.

FIG. 2: (Color online) Conductivity of the graphene device as a function of back-gate voltage for pristine graphene (circles) and after deposition of 6 monolayers of ice (triangles). Lines are fits to Eq. 2. Inset: Optical microscope image of the device.

FIG. 3: (Color online) µ_sym, σ_sym and σ_min as a function of number of ice layers. Dashed lines show the values for pristine graphene and corresponding theoretical expectations for the ice-covered device.

TABLE I: Summary of our results and corresponding theoretical predictions.

Quantity | Ref. | Theory | Experiment
Long-range (symmetric): µ^ice_sym/µ^vac_sym = F_l(α^vac)/F_l(α^ice) | [4] | 1.26 | 1.31
Short-range (symmetric): σ^ice_sym/σ^vac_sym = F_s(α^vac)/F_s(α^ice) | [22] | 0.62 | 0.62
Minimum conductivity: σ^ice_min/σ^vac_min = n*(α^ice)F_l(α^vac) / [n*(α^vac)F_l(α^ice)] | [4] | 0.99 | 1.00
Long-range (anti-symmetric): µ^ice_asym/µ^vac_asym = F_l(α^vac)α^ice / [F_l(α^ice)α^vac] | [26] | 0.87 | 0.17
Short-range (anti-symmetric): σ^ice_asym/σ^vac_asym | [27] | | 0.13

[1] N. M. R. Peres, F. Guinea, and A. H. Castro Neto, Phys. Rev. B 72, 174406 (2005).
[2] T. Ando, J. Phys. Soc. Jpn. 75, 074716 (2006); K. Nomura and A. H. MacDonald, Phys. Rev. Lett. 96, 256602 (2006).
[3] E. H. Hwang, S. Adam, and S. Das Sarma, Phys. Rev. Lett. 98, 186806 (2007).
[4] S. Adam, E. H. Hwang, V. M. Galitski, and S. Das Sarma, Proc. Natl. Acad. Sci. USA 104, 18392 (2007).
[5] Y.-W. Tan et al., Phys. Rev. Lett. 99, 246803 (2007).
[6] J. H. Chen, C. Jang, S. Adam, M. S. Fuhrer, E. D. Williams, and M. Ishigami, Nature Physics 4, 377 (2008).
[7] H. Min, R. Bistritzer, J. Su, and A. H. MacDonald, arXiv:0802.3462v1 (2008); Y. Zhang, V. W. Brar, F. Wang, C. Girit, Y. Yayon, M. Panlasigui, A. Zettl, and M. F. Crommie, arXiv:0802.4315v1 (2008).
[8] D. Jena et al., Phys. Rev. Lett. 98, 136805 (2007).
[9] T. Ando, A. B. Fowler, and F. Stern, Rev. Mod. Phys. 54, 437 (1982); S. Das Sarma and B. Vinter, Phys. Rev. B 24, 549 (1981).
[10] E. Rossi and S. Das Sarma, arXiv:0803.0963v1 (2008).
[11] K. S. Novoselov et al., Nature 438, 197 (2005).
[12] Y. Zhang et al., Nature 438, 201 (2005).
[13] Z. Jiang et al., Phys. Rev. Lett. 98, 197403 (2007).
[14] K. Bolotin et al., Solid State Commun. 146, 351 (2008); S. Adam and S. Das Sarma, ibid. 146, 356 (2008).
[15] V. F. Petrenko and R. W. Whitworth, The Physics of Ice (Oxford University Press, Oxford, U.K., 1999).
[16] A. C. Ferrari et al., Phys. Rev. Lett. 97, 187401 (2006).
[17] M. Ishigami et al., Nano Lett. 7, 1643 (2007).
[18] P. C. Sanfelix et al., Surface Science 532, 166 (2003).
[19] P. A. Thiel et al., Surface Science Reports 7, 211 (1987).
[20] The saturation behavior shown in Fig. 3 indicates that the ice film is continuous well before the formation of 6 full ice layers, and has reached a constant value of the dielectric constant. Bulk dielectric constant has been observed in ultrathin films of SiO₂, see K. Hirose et al., Phys. Rev. B 67, 195313 (2003), and it is reasonable to assume that these ultrathin ice layers have the bulk dielectric constant of ice.
[21] S. Morozov et al., Phys. Rev. Lett. 100, 016602 (2008).
[22] S. Adam et al., Physica E 40, 1022 (2008).
[23] Results beyond the RPA approximation have been examined in A. V. Shytov et al., Phys. Rev. Lett. 99, 236801 (2007); R. R. Biswas et al., Phys. Rev. B 76, 205122 (2007); V. M. Pereira et al., Phys. Rev. Lett. 99, 166802 (2007); I. S. Terekhov et al., Phys. Rev. Lett. 100, 076803 (2008); M. S. Foster et al., Phys. Rev. B 77, 195413 (2008); and M. Mueller et al., arXiv:0805.1413v1 (2008). We believe that these effects are unobservable in the current experiment. Also, M. Trushin et al., Europhys. Lett. 83, 17001 (2008) consider a phenomenological Yukawa potential. Generally one uses a model Yukawa potential in studying systems where the microscopic nature of the screened potential is unknown, which is not the case for graphene. For the Yukawa potential, we find F_y = πα² + 8α³ − πα√(1 + 4α²), which is qualitatively similar to Eq. 3.
[24] J. Martin et al., Nature Physics 4, 144 (2008).
[25] Estimating the charged impurity density n_imp ≈ 5.5 × 10¹⁰ cm⁻² (which is comparable to similar experiments [5,6]), we find [4] σ_min(ice)/σ_min(vac) ≈ 6.66/6.72 ≈ 0.99. The minimum conductivity (Fig. 3c) shows almost no variation with ice layers, in agreement with this theoretical expectation. We ignore quantum coherent effects such as localization (see e.g. I. Aleiner and K. Efetov, Phys. Rev. Lett. 97, 236801 (2006)) which are not expected to be important at 77 K, and are not experimentally observed [5,6,11,12,21] down to 30 mK (see: Y.-W. Tan et al., Eur. Phys. J. 148, 15 (2007)).
[26] D. S. Novikov, Appl. Phys. Lett. 91, 102102 (2007).
[27] B. Huard et al., arXiv:0804.2040v1 (2008).
Variable time amplitude amplification and a faster quantum algorithm for solving systems of linear equations

Andris Ambainis

arXiv:1010.4458 (https://arxiv.org/pdf/1010.4458v2.pdf)

Abstract: We present two new quantum algorithms. Our first algorithm is a generalization of amplitude amplification to the case when parts of the quantum algorithm that is being amplified stop at different times. Our second algorithm uses the first algorithm to improve the running time of the Harrow et al. algorithm for solving systems of linear equations from O(κ² log N) to O(κ log³ κ log N), where κ is the condition number of the system of equations.
14 Nov 2010

1 Introduction

Solving large systems of linear equations is a very common problem in scientific computing, with many applications. Until recently, it was thought that quantum algorithms cannot achieve a substantial speedup for this problem, because the coefficient matrix A is of size N² and it may be necessary to access all or most of the coefficients in A to compute x, which requires time Ω(N²).

Recently, Harrow, Hassidim and Lloyd [5] discovered a surprising quantum algorithm that allows solving systems of linear equations in time O(log N), in an unconventional sense. Namely, the algorithm of [5] generates the quantum state |x⟩ = Σ_{i=1}^N x_i|i⟩ with the coefficients x_i equal to the values of the variables in the solution x = (x₁, x₂, ..., x_N) of the system Ax = b. The Harrow-Hassidim-Lloyd algorithm is among the most interesting new results in quantum algorithms, because systems of linear equations have many applications in all fields of science. For example, this algorithm has been used to design quantum algorithms for solving differential equations [7,3].

Besides N, the running time of algorithms for systems of linear equations (both classical and quantum) depends on another parameter κ, the condition number of the matrix A.
The condition number is defined as the ratio between the largest and the smallest singular value of A: κ = max_{i,j} |µ_i|/|µ_j|, where the µ_i are the singular values of A. In the case of sparse matrices, the best classical algorithm runs in time O(√κ N) [8], while the HHL quantum algorithm runs in time O(κ² log N), with an exponentially better dependence on N but a worse-than-classical dependence on κ. In this paper, we present a better quantum algorithm, with running time O(κ log³ κ log N).

To construct our algorithm, we introduce a new tool, variable-time quantum amplitude amplification, which allows amplifying the success probability of quantum algorithms in which some branches of the computation stop earlier than other branches. The conventional amplitude amplification [4] would wait for all branches to stop, possibly resulting in a substantial inefficiency. Our new algorithm amplifies the success probability in multiple stages and takes advantage of the parts of the computation which stop earlier. We expect that this new method will be useful for building other quantum algorithms.

The dependence of our quantum algorithm for solving systems of linear equations on κ is almost optimal. Harrow et al. [5] show that, unless BQP = PSPACE, time Ω(κ^{1−o(1)}) is necessary for generating the state |x⟩ that describes the solution of the system.

2 Overview of main results

2.1 Variable time amplitude amplification

Informally, our result is as follows. Consider a quantum algorithm A which may stop at one of several times t₁, ..., t_m. (In the case of systems of linear equations, these times correspond to m runs of eigenvalue estimation with increasing precision and increasing number of steps.) To indicate the outcome, A has an extra register O with 3 possible values: 0, 1 and 2. 1 indicates the outcome that should be amplified. 0 indicates that the computation has stopped at this branch but did not produce the desired outcome 1.
2 indicates that the computation at this branch has not stopped yet.

Let p_i be the probability of the algorithm stopping at time t_i (with either the outcome 0 or the outcome 1). The average stopping time of A (the l₂ average) is

T_av = √(Σ_i p_i t_i²).

T_max denotes the maximum possible running time of the algorithm (which is equal to t_m). Let α_good|1⟩_O|ψ_good⟩ + α_bad|0⟩_O|ψ_bad⟩ be the algorithm's output state after all branches of the computation have stopped. Our goal is to obtain |ψ_good⟩ with a high probability. Let p_succ = |α_good|² be the probability of obtaining this state via algorithm A. Our main result is

Theorem 1. We can construct a quantum algorithm A′ invoking A several times, for total time

O( T_max √(log T_max) + (T_av/√p_succ) log^{1.5} T_max ),

that produces a state α|1⟩ ⊗ |ψ_good⟩ + β|0⟩ ⊗ |ψ′⟩ with probability |α|² ≥ 1/2 as the output.

In contrast, the usual amplitude amplification [4] would run for time O(T_max/√p_succ). Our algorithm A′ provides an improvement whenever T_av is substantially smaller than T_max. By repeating A′ O(log(1/ε)) times, we can obtain |ψ_good⟩ with probability at least 1 − ε.

Our algorithm A′ is optimal, up to the factor of log T_max. If the algorithm A has just one stopping time T = T_av = T_max, then amplitude amplification cannot be performed with fewer than O(T/√p_succ) steps. Thus, the term T_av/√p_succ is necessary. The term T_max is also necessary because, in some branch of the computation, A can run for T_max steps.

More details are given in Section 3. First, in subsection 3.1, we give a precise definition of how a quantum algorithm could stop at different times. Then, in subsections 3.2 and 3.3, we give a proof of Theorem 1.

2.2 Systems of linear equations

We consider solving a system of linear equations Ax = b, where A = (a_ij)_{i,j∈[N]}, x = (x_i)_{i∈[N]}, b = (b_i)_{i∈[N]}. We assume that A is Hermitian. As shown in [5], this assumption is without loss of generality.
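The transformation described next can be previewed classically in the eigenbasis of A. The sketch below uses a toy diagonal instance (all numbers are illustrative): the branch postselected on outcome 1 carries coefficients c_i/(κλ_i), proportional to those of A⁻¹b.

```python
import math

# Toy instance, written directly in the eigenbasis of A, with eigenvalues
# in [1/kappa, 1] as assumed in the text.
lam = [1.0, 0.5, 0.25]            # eigenvalues lambda_i of A
c   = [0.6, 0.64, 0.48]           # coefficients of |b> = sum_i c_i |v_i>
assert abs(sum(x * x for x in c) - 1.0) < 1e-12
kappa = 1.0 / min(lam)

# The rotation step attaches amplitude 1/(kappa*lambda_i) to |1> on branch i,
# so the unnormalized postselected state has coefficients c_i/(kappa*lambda_i).
unnorm = [ci / (kappa * li) for ci, li in zip(c, lam)]
p_succ = sum(u * u for u in unnorm)           # postselection probability
norm = math.sqrt(p_succ)
x_state = [u / norm for u in unnorm]

# The postselected state is proportional to A^{-1} b, i.e. to c_i/lambda_i.
target = [ci / li for ci, li in zip(c, lam)]
tnorm = math.sqrt(sum(t * t for t in target))
target = [t / tnorm for t in target]
print(x_state, target, p_succ)
```

The two normalized vectors coincide, and the success probability p_succ is what amplitude amplification must then boost; in the worst case it is of order 1/κ².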
Let |v_i⟩ be the eigenvectors of A and λ_i be their eigenvalues. Similarly to [5], we assume that all λ_i satisfy 1/κ ≤ λ_i ≤ 1 for some known κ. We can then transform the state |b⟩ = Σ_{i=1}^n b_i|i⟩ into |x⟩ = Σ_{i=1}^n x_i|i⟩ as follows:

1. If, in terms of the eigenvectors |v_i⟩ of A, we have |b⟩ = Σ_i c_i|v_i⟩, then |x⟩ = Σ_i (c_i/λ_i)|v_i⟩.

2. By eigenvalue estimation, we can create the state |b′⟩ = Σ_i c_i|v_i⟩|λ̃_i⟩, where the λ̃_i are the estimates of the true eigenvalues.

3. We then create the state

|b″⟩ = Σ_i c_i|v_i⟩|λ̃_i⟩ ( (1/(κλ̃_i)) |1⟩ + √(1 − 1/(κ²λ̃_i²)) |0⟩ ).    (1)

Conditional on the last bit being 1, the rest of the state is Σ_i (c_i/λ̃_i)|v_i⟩|λ̃_i⟩ (up to normalization), which can be turned into an approximation of |x⟩ by running eigenvalue estimation in reverse and uncomputing λ̃_i.

4. We then amplify the part of the state which has the last qubit equal to 1 (using amplitude amplification) and obtain a good approximation of |x⟩ with a high probability.

Theorem 2 [5]. Let C be such that the evolution of the Hamiltonian H for time T can be simulated in time C min(T, 1). Then, we can generate |ψ′⟩ satisfying ‖ψ − ψ′‖ ≤ ε in time O(Cκ²/ε).

The main term in the running time, κ², is generated as a product of two κ's. First, for ‖ψ − ψ′‖ ≤ ε, it suffices that the estimates λ̃_i satisfy |λ_i − λ̃_i| = O(ελ_i). Since λ_i = Ω(1/κ), this means |λ_i − λ̃_i| = O(ε/κ). To estimate λ_i within error O(ε/κ), we need to run H for time O(κ/ε). Second, for amplitude amplification, we may need to repeat the algorithm generating |b″⟩ O(κ) times, resulting in the total running time O(κ²/ε).

For eigenvalue estimation, the worst case is when all or most of the λ_i are small (of order Θ(1/κ)). Then |λ_i − λ̃_i| = Θ(ε/κ), and eigenvalue estimation with the right precision indeed requires time Θ(κ/ε). For amplitude amplification, the worst case is if most or all of the λ_i are large (constant). Then the coefficients 1/(κλ̃_i) can be of order Θ(1/κ) and Θ(κ) repetitions are required for amplitude amplification.
We now observe that the two Θ(κ)'s appear in opposite cases. One of them appears when λ_i is small (λ_i ≈ 1/κ) but the other appears when λ_i is large (λ_i ≈ 1). If all eigenvalues are of roughly similar magnitude (e.g., λ ∈ [a, 2a] for some a), the running time becomes O(κ/ε), because we can do eigenvalue estimation to error εa in time O(1/(aε)) and, for amplitude amplification, it suffices to repeat the generation of |b″⟩ O(κa) times (since the amplitude of 1 in the last qubit of |b″⟩ is at least 1/(2κa) for every v_i). Thus, the running time is

O(1/(aε)) · O(κa) = O(κ/ε).

The problem is to achieve a similar running time in the general case (when the eigenvalues λ_i can range from 1/κ to 1). To do that, we first design a version of eigenvalue estimation in which some branches of the computation (corresponding to eigenvectors with larger eigenvalues λ_i) terminate earlier than others. Namely, we start by running it for O(1) steps. If we see that the estimate λ̃_i of the eigenvalue is such that the allowed error O(ελ_i) is more than the expected error of the current run of eigenvalue estimation, we stop. Otherwise, we run eigenvalue estimation again, doubling its running time. This doubles the precision achieved by eigenvalue estimation. We continue this until the precision of the current estimate becomes better than the allowed error of O(ελ_i).

This gives a quantum algorithm in which different branches of the computation stop at different times. We apply our variable-time amplitude amplification to this quantum algorithm. This gives us

Theorem 3. Let C be such that the evolution of the Hamiltonian H for time T can be simulated in time C min(T, 1). Then, we can generate |ψ′⟩ satisfying ‖ψ − ψ′‖ ≤ ε in time

O( (Cκ/ε³) log³(κ/ε) log²(1/ε) ).

We give more details in Section 4.

3 Variable-time amplitude amplification

3.1 Model

How can a quantum algorithm have different branches of computation stopping at different times? We start by giving a precise definition of that.
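A sketch of the stopping-time schedule just described, with illustrative numbers: each branch doubles its eigenvalue-estimation time until the current precision 1/t beats the allowed error ελ, so large eigenvalues stop after O(1/ε) steps and only the amplitude sitting on λ ≈ 1/κ runs for Θ(κ/ε) steps. The resulting l₂-average stopping time T_av, rather than T_max, is what multiplies 1/√p_succ in Theorem 1 (log factors and constants are dropped below; the spectrum and weights are hypothetical):

```python
import math

def stopping_time(lam, eps):
    """Doubling schedule: run until the estimation error 1/t drops below eps*lam."""
    t = 1
    while 1.0 / t > eps * lam:
        t *= 2
    return t

eps, kappa = 0.1, 1024.0
# Toy spectrum: most amplitude on large eigenvalues, a little on the smallest.
weights = {1.0: 0.90, 0.25: 0.09, 1.0 / kappa: 0.01}   # lambda -> |c_i|^2

times = {lam: stopping_time(lam, eps) for lam in weights}
T_max = max(times.values())
T_av = math.sqrt(sum(w * times[lam] ** 2 for lam, w in weights.items()))

# Success probability of the postselection step: amplitude 1/(kappa*lam) per branch.
p_succ = sum(w / (kappa * lam) ** 2 for lam, w in weights.items())

standard = T_max / math.sqrt(p_succ)            # amplify the slowest branch everywhere
variable = T_max + T_av / math.sqrt(p_succ)     # Theorem 1 scaling, logs dropped
print(times)
print(f"T_av = {T_av:.0f}, standard ~ {standard:.0f}, variable ~ {variable:.0f}")
```

On this profile T_av is roughly 100 times smaller than T_max, and the variable-time cost is correspondingly far below the standard T_max/√p_succ.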
We require the state space of A to be of the form H = H_o ⊗ H_c, consisting of the 0-1-2 valued outcome register H_o and the rest of the Hilbert space H_c. Let |ψ₁⟩, ..., |ψ_m⟩ be the states of A at times t₁, ..., t_m. We insist on the following consistency requirements.

1. For each i ∈ {1, ..., m}, the description of the algorithm must define a subspace H_i of H_c in which the computation has stopped. Those subspaces must satisfy H₁ ⊆ H₂ ⊆ ... ⊆ H_m = H_c.

2. The state |ψ_i⟩ can be expressed as

|ψ_i⟩ = α_{i,0}|0⟩ ⊗ |ψ_{i,0}⟩ + α_{i,1}|1⟩ ⊗ |ψ_{i,1}⟩ + α_{i,2}|2⟩ ⊗ |ψ_{i,2}⟩,

with |ψ_{i,0}⟩ ∈ H_i, |ψ_{i,1}⟩ ∈ H_i and |ψ_{i,2}⟩ ∈ H_c ∩ (H_i)^⊥. (When i = m, we have |ψ_{m,0}⟩ = |ψ_bad⟩, |ψ_{m,1}⟩ = |ψ_good⟩, |ψ_{m,2}⟩ = 0⃗.)

3. We must have P_{H_i}|ψ_{i+1,0}⟩ = |ψ_{i,0}⟩ and P_{H_i}|ψ_{i+1,1}⟩ = |ψ_{i,1}⟩. That is, the part of the state where the computation stopped at time t_i should not change after that.

The success probability of A is p_succ = |α_{m,1}|². We also define p_succ,i = |α_{i,1}|², the probability of A succeeding before time t_i. The probability of A stopping at time t_i or earlier is p_stop,≤i = |α_{i,0}|² + |α_{i,1}|². The probability of A stopping at exactly time t_i is p_stop,i = p_stop,≤1 for i = 1 and p_stop,i = p_stop,≤i − p_stop,≤i−1 for i > 1. We will also use the probability of A stopping later than time t_i, defined as p_stop,>i = |α_{i,2}|² = 1 − p_stop,≤i.

The average stopping time of A (the l₂ average) is T_av = √(Σ_i p_stop,i t_i²). The maximum stopping time of A is T_max = t_m. Our goal is to amplify the success probability to Ω(1) by running A for time

O( T_max log^{0.5} T_max + (T_av/√p_succ) log^{1.5} T_max ).

3.2 Tools

Our variable-time amplitude amplification uses two subroutines. The first is a result by Aaronson and Ambainis [1], who gave a tighter analysis of the usual amplitude amplification algorithm [4].
We say that an algorithm A produces a quantum state |ψ⟩ with probability p if the following is true:

• The algorithm has two output registers R and S (and, possibly, some more auxiliary registers);
• Measuring R gives 1 with probability p and, conditional on this measurement result, the S register is in the state |ψ⟩.

Lemma 1 [1]. Let A be a quantum algorithm that outputs a state |ψ⟩ with probability² δ ≤ ε, where ε is known. Furthermore, let

m ≤ π/(4 arcsin √ε) − 1/2.    (2)

Then, there is an algorithm A′ which uses 2m + 1 calls to A and A⁻¹ and outputs the state |ψ⟩ with probability

δ_new ≥ (1 − ((2m + 1)²/3) δ) (2m + 1)² δ.    (3)

The algorithm A′ is just the standard amplitude amplification [4], but its analysis is tighter. According to the usual analysis, amplitude amplification increases the success probability from δ to Ω(1) in 2m + 1 = O(1/√δ) repetitions. In other words, 2m + 1 repetitions increase the success probability Ω((2m + 1)²) times. Lemma 1 achieves an increase of almost (2m + 1)² times, without the big-Ω factor. This is useful if we have an algorithm with k levels of amplitude amplification nested one inside another. Then, with the usual amplitude amplification, a big-Ω constant of c would result in a c^k factor in the running time. Using Lemma 1 avoids that.

Our second subroutine is a version of amplitude estimation from [2].

Theorem 4 [4,2]. There is a procedure Estimate(A, c, p, k) which, given a constant c, 0 < c ≤ 1, and a quantum algorithm A (with the promise that the probability ε that the algorithm A outputs 1 is either 0 or at least a given value p) outputs an estimate ε̃ of the probability ε such that, with probability at least 1 − 1/2^k, we have (i) |ε − ε̃| < cε if ε ≥ p; (ii) ε̃ = 0 if ε = 0. The procedure Estimate(A, c, p, k) uses an expected number of

Θ( k (1 + log log(1/p)) / √(max(ε, p)) )

evaluations of A.

² Ref. [1] requires the probability to be exactly ε, but the proof works without changes if the probability is less than the given ε.
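Lemma 1's bound (3) can be checked against the exact behavior of amplitude amplification, under which 2m + 1 calls rotate the success amplitude from sin θ, with θ = arcsin √δ, to sin((2m + 1)θ). A quick numerical check (δ = 0.01 is an arbitrary test value):

```python
import math

def amplified_prob(delta, m):
    """Exact success probability after 2m+1 calls of standard amplitude
    amplification, starting from success probability delta."""
    theta = math.asin(math.sqrt(delta))
    return math.sin((2 * m + 1) * theta) ** 2

def lemma1_lower_bound(delta, m):
    """Lower bound (3): (1 - (2m+1)^2 * delta / 3) * (2m+1)^2 * delta."""
    k = (2 * m + 1) ** 2
    return (1 - k * delta / 3) * k * delta

delta = 0.01
# Largest m allowed by condition (2).
m_max = int(math.pi / (4 * math.asin(math.sqrt(delta))) - 0.5)
for m in range(1, m_max + 1):
    assert amplified_prob(delta, m) >= lemma1_lower_bound(delta, m)
print(m_max, amplified_prob(delta, m_max))
```

For every admissible m the exact probability dominates the bound, and at m = m_max the success probability is already close to 1, with no hidden big-Ω constant.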
3.3 The state generation algorithm

We now describe our state generation algorithm. Without loss of generality, we assume that the stopping times of A are t_i = 2^i for i ∈ {0, ..., m}, for some m. We present a sequence of algorithms A_i, with the algorithm A_i generating an approximation of the state

|ψ′_i⟩ = (α_{i,1} |1⟩ ⊗ |ψ_{i,1}⟩ + α_{i,2} |2⟩ ⊗ |ψ_{i,2}⟩) / √(|α_{i,1}|² + |α_{i,2}|²),

in the following sense: the algorithm A_i outputs a state

|ψ″_i⟩ = √r_i |ψ′_i⟩ + √(1 − r_i) |0⟩ ⊗ |φ_i⟩    (4)

for some |φ_i⟩ and some r_i satisfying r_i ≥ 1/(9m). (To avoid the problem with nested amplitude amplification described in Section 3.2, we only require r_i ≥ 1/(9m) instead of r_i = Ω(1).)

The algorithm A_i uses A_{i−1} as a subroutine. It is defined in two steps. First, we define an auxiliary algorithm B_i.

Algorithm 1 (Algorithm B_i):
1. If i = 0, B_i runs A for 1 step and outputs the output state of A. If i > 0, B_i runs A_{i−1}, which outputs |ψ″_{i−1}⟩. B_i then executes A for time steps from 2^{i−1} to 2^i on the parts of the state |ψ″_{i−1}⟩ where the outcome register is 2 (the computation is not finished).

Let p_i = Estimate(B_i, c, 1/κ, log m + 5). Then, A_i is as follows.

Algorithm 2 (Algorithm A_i):
1. If p_i > 1/(9m), A_i = B_i.
2. If p_i ≤ 1/(9m), A_i = Amplify(B_i, k) for the smallest k satisfying 1/(9m) ≤ (2k + 1)² p_i ≤ 1/m.

The overall algorithm A′ is given as Algorithm 3.

Algorithm 3 (Algorithm A′):
1. Run Estimate to obtain p₀ = Estimate(B₀, c, 1/κ, log m + 5).
2. For each i = 1, 2, ..., m:
   (a) Use p_{i−1} to define A_i and B_i.
   (b) If i < m, run Estimate to obtain p_i = Estimate(B_i, c, 1/κ, log m + 5).
3. Amplify A_m to a success probability of at least 1/2 and output the output state of the amplified A_m.

We now analyze the running times of the algorithms A_i. Let T_i denote the running time of A_i. Let r_i be as defined in equation (4) and let r′_i be a similar quantity for the output state of B_i.
Then, we have Lemma 2 T i ≤ 1 + 1 3m − 1 √ r i r ′ i T i−1 + 2 i−1 .(5) Proof: The running time of B i is T i−1 + 2 i−1 . If A i = B i , then the running time of A i is the same and, also r i = r ′ i (because the two algorithms output the same state). If A i is an amplified version of B i , then: 1. The running time of A i is (2k + 1)(T i−1 + 2 i−1 ). 2. By Lemma 1, we have r i ≥ (1 − 1 3m )(2k + 1) 2 r ′ i which implies (2k + 1) ≤ (1 + 1 3m − 1 ) √ r i r ′ i . Applying (5) recursively, we get T m ≤ 1 + 1 3m − 1 m m i=1   m j=i √ r i r ′ i   2 i−1 .(6) The first multiplier, 1 + 1 3m−1 m can be upper-bounded by a constant. We now bound the product m j=i √ ri √ r ′ i . Lemma 3 m j=i √ r i r ′ i ≤ 3 1 + p stop,>i p succ . Proof: We consider the quantities o j = | 1 ⊗ ψ i,1 |ψ ′′ i | 2 for j = i, i + 1, . . . , m. For j = i, we have o i = r i | 1 ⊗ ψ i,1 |ψ ′ i | 2 = r i |α i,1 | 2 |α i,1 | 2 + |α i,2 | 2 = r i p succ,i p succ,i + p stop,>i .(7) For j > i, we have o j = o j−1 ri r ′ i because amplification increases the probability of the "good" part of the state (which includes |1 ⊗ ψ i,1 ) ri r ′ i times. Finally, we have o m = r m p succ,i p succ which follows similarly to (7). Putting all of this together, we have m j=i r i r ′ i = o m o i = r m r i · p succ,i + p stop,>i p succ,i . By taking the square roots from both sides and observing that rm ri is at most 9 (because r m ≤ 1 m and r i ≥ 1 9m ), we get m j=i √ r i r ′ i ≤ 3 1 + p stop,>i p succ . The Lemma follows by using √ 1 + x ≤ 1 + √ x. By applying Lemma 3 to each term in (6), we get T m ≤ C m i=1 1 + p stop,>i p succ 2 i−1 = C m i=1 2 i−1 + C m i=1 2 i−1 √ p stop,>i √ p succ . The first sum can be upper bounded by 2 i = O(T max ). For the second sum, in its numerator, we have m i=1 2 i−1 √ p stop,>i = m i=1 2 2i−2 p stop,>i ≤ mT av = T av log T max where the inequality follows because each term 2 2i−2 p stop,>i is at most T av . Thus, the algorithm A m runs in time O T max + T av √ p succ log T max . 
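The remark that the first multiplier in (6) can be upper-bounded by a constant is easy to verify numerically (a small sketch of ours):

```python
def prefactor(m):
    """The multiplier (1 + 1/(3m - 1))^m appearing in the bound (6)."""
    return (1 + 1 / (3 * m - 1)) ** m

# maximized at m = 1 (value 1.5) and decreasing toward e^{1/3} ~ 1.396,
# so this term costs only a constant factor in (6)
vals = [prefactor(m) for m in range(1, 200)]
assert max(vals) == vals[0] <= 1.5
```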
The algorithm A′ amplifies A_m from a success probability of r_m ≥ 1/9m to a success probability Ω(1). This increases the running time by a factor of O(√m) = O(√(log T_max)).

4 Faster algorithm for solving systems of linear equations

4.1 Unique-answer eigenvalue estimation

For our algorithm, we need a version of eigenvalue estimation that is guaranteed to output exactly the same estimate with a high probability. The standard version of eigenvalue estimation [6, p. 118] runs U = e^{−iH} up to 2ⁿ times and, if the input is an eigenstate |ψ⟩: H|ψ⟩ = λ|ψ⟩, outputs x ∈ {0, π/2ⁿ, 2π/2ⁿ, …, (2ⁿ − 1)π/2ⁿ} with probability

  p(x) = (1/2^{2n}) · sin²(2ⁿ(λ − x)) / sin²(λ − x)   (8)

(equation (7.1.30) from [6]). We now consider an algorithm that runs the standard eigenvalue estimation k_uniq times and takes the most frequent answer x_maj.

Lemma 4 For k_uniq = O((1/ǫ²) log(1/ǫ)), we have:

1. If |λ − x| ≤ (1 − ǫ)/2^{n+1}, then Pr[x_maj = x] ≥ 1 − ǫ.
2. If λ ∈ [x + (1 − ǫ)/2^{n+1}, x + (1 + ǫ)/2^{n+1}], then Pr[x_maj ∈ {x, x + 1}] ≥ 1 − ǫ.

Proof: In the first case, (8) is at least (1 + ǫ) · 4/π² for the correct x and less than 4/π² for any other x. Repeating eigenvalue estimation O(1/ǫ²) times and taking the majority allows to distinguish the correct x with a fixed probability (say 3/4), and repeating it O((1/ǫ²) log(1/ǫ)) times allows to determine the correct x with a probability at least 1 − ǫ.

In the second case, the two values x and x + 1 are output with probability at least (1 − ǫ) · 4/π² each. In contrast, for any other y = mπ/2ⁿ, m ∈ {0, 1, …, 2ⁿ − 1}, we have

  |y − λ| ≥ ((1 − ǫ)/2^{n+1})π + (1/2ⁿ)π = ((3 − ǫ)/2^{n+1})π.

This implies

  p(y) ≤ (1/2^{2n}) · 1/sin²(((3 − ǫ)/2^{n+1})π) = (1 + o(1)) · 4/((3 − ǫ)²π²).

Thus, there is a constant gap between p(x) or p(x + 1) and p(y) for any other y. In this case, taking the majority of O(log(1/ǫ)) runs of eigenvalue estimation is sufficient to produce x or x + 1 with a probability at least 1 − ǫ.

We refer to this algorithm as UniqueEst(H, 2ⁿ, ǫ).
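The behavior Lemma 4 relies on can be seen directly from (8). A small numeric sketch (ours, not from the paper): the distribution normalizes over the 2ⁿ grid points, and the most likely estimate is a grid point nearest to λ.

```python
import math

def phase_est_prob(lam, x, n):
    """Probability (8) that n-bit eigenvalue estimation outputs grid point x
    for eigenvalue lam (both in [0, pi); x on the grid {j*pi/2^n})."""
    N = 2 ** n
    d = lam - x
    if abs(math.sin(d)) < 1e-15:          # x == lam: the limit of (8) is 1
        return 1.0
    return math.sin(N * d) ** 2 / (N ** 2 * math.sin(d) ** 2)

n, lam = 5, 0.77                          # lam chosen off the grid
N = 2 ** n
grid = [j * math.pi / N for j in range(N)]
probs = [phase_est_prob(lam, x, n) for x in grid]
assert abs(sum(probs) - 1) < 1e-9         # (8) is a probability distribution
best = grid[max(range(N), key=lambda j: probs[j])]
assert abs(best - lam) <= math.pi / N     # peak at a nearest grid point
```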
When we use UniqueEst as a subroutine in algorithm 5, we need the answer to be unique (as in the first case) and not one of two high-probability answers (as in the second case). To deal with that, we will replace H with H + δπ 2 n I for a randomly chosen δ ∈ [0, 1]. The eigenvalue becomes λ ′ = λ + δπ 2 n and, with probability 1 − ǫ, λ ′ ∈ x − 1−ǫ 2 2 n π, x + 1−ǫ 2 2 n π for some integer x. This allows to achieve the first case for all eigenvalues, except a small random fraction of them. Main algorithm We now show that Theorem 1 implies our main result, Theorem 3. We start by describing a variable running time Algorithm 4. This algorithm uses the following registers: • The input register I which holds the input state |x (and is also used for the output state); • The outcome register O, with basis states |0 , |1 and |2 (as described in the setup for variabletime amplitude amplification); • The step register S, with basis states |1 , |2 , . . ., |2m (to prevent interference between various branches of computation). • The estimation register E, which is used for eigenvalue estimation (which is a subroutine for our algorithm). H I , H O , H S and H E denote the Hilbert spaces of the respective registers. From now on, we refer to ǫ appearing in Theorem 3 as ǫ f inal . ǫ without a subscript is an error parameter for subroutines of algorithm 4 (which we will choose at the end of the proof so that the overall error in the output state is at most ǫ f inal ). Our main algorithm is Algorithm 5 which consists of applying variable-time amplitude amplification to Algorithm 4. We claim that, conditional on the output register being |1 O , the output state of Algorithm 4 is close to |ψ ideal = i α i |v i I ⊗ 1 κλ i |1 O ⊗ |2j i S .(10) Variable-time amplitude amplification then generates a state that is close to |ψ ideal ψ ideal . Fourier transform in the last step of algorithm 5 then effectively erases the S register. 
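To see what the ideal state (10) encodes, here is a toy numeric sketch (ours; H is taken diagonal in its eigenbasis for simplicity): conditional on the outcome register being |1⟩, the coefficients α_i/(κλ_i) are, up to normalization, those of H⁻¹ applied to the input state.

```python
# Toy numeric sketch (ours; H diagonal in its eigenbasis): conditional on
# the outcome register being |1>, the ideal state (10) has coefficients
# alpha_i/(kappa*lambda_i), i.e. H^{-1} applied to the input, normalized.
lams = [0.5, 0.25, 1.0]                  # eigenvalues lambda_i of H
alphas = [0.6, 0.8, 0.0]                 # input coefficients alpha_i
kappa = 1 / min(lams)                    # eigenvalues lie in [1/kappa, 1]
coeffs = [a / (kappa * l) for a, l in zip(alphas, lams)]
norm = sum(c * c for c in coeffs) ** 0.5
out = [c / norm for c in coeffs]         # normalized output direction
h_out = [l * c for l, c in zip(lams, out)]
scale = h_out[0] / alphas[0]
# applying H to the output reproduces the input direction:
assert all(abs(h - scale * a) < 1e-12 for h, a in zip(h_out, alphas))
```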
Conditional on S being in |0 S after the Fourier transform, the algorithm's output state is close to our desired output state 3. Apply a transformation mapping |2j S → |j S to the S register. After that, apply Fourier transform F m to the S register and measure. If the result is 0, output the state in the I register. Otherwise, stop without outputting a quantum state. λ = λ ′ − xj π 2 j . (b) If ǫλ > 1 2 j+1 , perform the transformation |2 O ⊗ |1 S → 1 κλ |1 O ⊗ |2j S + 1 − 1 (κλ) 2 |0 O ⊗ |2j S .(9) Algorithm 5: Main algorithm Approximation guarantees. We now give a formal proof that the output state of Algorithm 4 is close to the desired output state (10). Let |v i be an eigenvector and λ i be an eigenvalue. For each j, the unique-value eigenvalue estimation either outputs one estimateλ i,j or one of two estimatesλ i,j andλ i,j − 1 2 j with a high probability (at least 1 − ǫ). Let j i be the smallest j for which the estimateλ =λ i,j satisfies the condition ǫλ ≥ 1 2 j+1 in step 3b. We call v i and λ i good if, for j = j i the unique-value eigenvalue estimation outputs one estimateλ i,j with a high probability. Otherwise, we call λ i bad. For both good and bad λ i , we denoteλ i =λ i,ji . We claim that the part of final state Algorithm 4 that has |1 in the output register O is close to |ψ ′ = i α i |v i I ⊗ 1 κλ i |1 O ⊗ |2j i S and |ψ ′ is, in turn, close to the state |ψ ideal defined by equation (10). The next two lemmas quantify these claims. Let δ = i:λi bad |α i | 2 quantify the size of the part of the state |ψ ′ that consists of bad eigenvectors. Lemma 5 Let |ψ be the output state of Algorithm 4 and let P 1 be the projection to the subspace where the outcome register O is in the state |1 . Then, we have P 1 |ψ − |ψ ′ ≤ ((2m + 37)ǫ + 30δ) ψ ′ . Proof: In section 4.3. Lemma 6 |ψ ′ − |ψ ideal ≤ 2ǫ 1 + 2ǫ ψ ideal . Proof where k uniq is the quantity from Lemma 4. Proof: In section 4.4. 
Lemma 8 p succ , the success probability of Algorithm 4, is Ω i |α i | 2 ǫ 2 2 2ji κ 2 .(12) Proof: In section 4.4. By dividing the two expressions above one by another, we get Corollary 1 T av √ p succ = O κ ǫ k uniq . By Theorem 1, the running time of algorithm 5 is O T max log T max + T av √ p succ log 1.5 T max . Since T max = O(2 m ) = O( κ ǫ ), we have T max ≤ Tav √ psucc and the running time is O T av √ p succ log 1.5 T max = O κ ǫ k uniq log 1.5 κ ǫ = O mκ ǫ f inal k uniq log 1.5 κ ǫ , with the 2nd equality following from ǫ = Θ(ǫ f inal /m). Since algorithm 5 needs to be repeated O( √ m log 1 ǫ f inal ) times, the overall running time is O m 1.5 κ ǫ f inal k uniq log 1.5 κ ǫ log 1 ǫ f inal = O κ log 3 κ ǫ ǫ 3 f inal log 2 1 ǫ f inal , with the equality following from m = O(log κ ǫ ). Proofs of Lemmas about the quality of output state Proof: [of Lemma 5] Let |v i be an eigenstate of A. Then, the eigenvalue estimation leaves |v i unchanged (and produces an estimate for the eigenvalue λ i in the E register). This means that the algorithm above maps |x = i α i |v i to i α i |v i I ⊗ |φ i O,S,E where |φ i O,S,E = |1 O ⊗ |φ ′ i S,E + |0 O ⊗ |φ ′′ i S,E . We will show: • If |v i is good, then |φ ′ i S,E is close to 1 κλi |2j i S ⊗ |0 E . • If |v i is bad, then φ ′ i does not become too large (and, therefore, does not make too big contribution to P 1 |ψ − |ψ ′′ ). These two statements are quantified by two claims below: Claim 2 and Claim 5. The Lemma follows by combining these two claims and the fact that the sum of |α i | 2 over all bad i is equal to δ. Before proving Claims 2 and 5, we prove a claim that boundsλ i (and will be used in the proofs of both Claim 2 and Claim 5). Claim 1 Let j = j i . Then 1 ǫ2 j+1 ≤λ i ≤ 1 ǫ + 3 2 1 2 j . Proof: The first inequality follows immediately. For the second inequality, since j > j i − 1, we havẽ λ i,j−1 ≤ 1 ǫ2 j . 
This means that the actual eigenvalue λ satisfies λ ≤ (1 + ǫ) 1 ǫ2 j = 1 ǫ2 j + 1 2 j andλ i,j ≤ (1 + ǫ)λ ≤ 1 ǫ2 j + 1 2 j + 1 2 j+1 . As a consequence to this claim, we have 1 λ i ≥ 2 2 + 3ǫ ǫ2 j . Claim 2 If |v i is good, |φ ′ i − 1 κλ i |1 O ⊗ |2j i S ⊗ |0 E 2 ≤ (2m + 37)ǫC where C = ( 1 κλi ) 2 . Proof: We express |φ ′ i = j |2j S ⊗ |φ i,j E . Furthermore, we group the terms of |φ ′ i in a following way: |φ ′ i = |φ < + |φ = + |φ > where |φ < = j<ji |2j S ⊗ |φ i,j E , |φ = = ⊗|2j i S ⊗ |φ i,ji E , |φ > = j>ji |2j S ⊗ |φ i,j E . We have |φ ′ i − 1 κλ i |2j i S ⊗ |0 E 2 = φ < 2 + |φ = − 1 κλ i |2j i S ⊗ |0 E 2 + φ > 2 . We first show that φ < and φ > are not too large. For j < j i , the eigenvalue estimation outputs an answer that is more thanλ i,j with probability at most ǫ. Therefore, the probability of step (3b) being executed is at most ǫ. Moreover, if this step is executed, the estimate λ ′ for the eigenvalue is at least 1 ǫ2 j . Therefore, the coefficient of |1 O in (9) is 1 κλ ′ ≤ 2 j+1 ǫ κ . By summing over all j < j i , we get φ < 2 = j<ji φ ′ i,j 2 = j<ji 2 j+1 ǫ κ 2 ǫ ≤ 1 3 2 ji+1 ǫ κ 2 ǫ, with the inequality following from the formula for the sum of a geometric progression. By using the right hand side of Claim 1, we get φ < 2 ≤ 4ǫ 3 1 + 3ǫ 2 2 C where C = ( 1 κλi ) 2 . If ǫ < 0.1, we can upper-bound this by 1.6ǫC. For j > j i , we have φ i,j 2 ≤ ǫ j−ji . (We only reach stage j if, in every previous stage k, eigenvalue estimation outputs an estimate that is smaller thanλ i . For each k ∈ {j i , j i + 1, . . . , j − 1}, this happens with probability at most ǫ.) Therefore, φ > 2 = j>ji φ ′ i,j 2 ≤ j>ji 2 j+1 ǫ κ 2 ǫ j−ji ≤ 4 1 + 3ǫ 2 2 C ∞ j=1 (4ǫ) j = 16 1 + 3ǫ 2 2 ǫ 1 − 4ǫ C where the 2nd inequality follows from the right hand side of Claim 1 and the last equality follows from the formula for the sum of a geometric progression. If ǫ < 0.1, we can upper bound this by 36ǫC. Thus, both φ < 2 and φ > 2 are small enough. 
For |φ = , we first estimate the probability that algorithm reaches stage j i . Claim 3 Algorithm 4 reaches stage j i with probability at least 1 − 2(m − 1)ǫ. Proof: For each j < j i , the eigenvalue estimation may produce an incorrect answer with probability at most ǫ. This may lead to transformation (9) being executed with probability at most ǫ. Moreover, this causes some disturbance for the next step, when eigenvalue estimation is uncomputed. Let |ψ be the output of the eigenvalue estimation. We can split |ψ = |ψ ′ + |ψ ′′ where |ψ ′ consists of estimates λ which are smaller than the one in the condition of step 3b and |ψ ′′ consists of estimates that are greater than or equal to the one in the condition. Then, ψ ′′ 2 ≤ ǫ and, conditional on outcome register being |2 , the estimation register is in the state |ψ ′ . If the estimation register was in the state |ψ , uncomputing the eigenvalue estimation would lead to the correct initial state |0 . If it is in the state |ψ ′ , then, after uncomputing the eigenvalue estimation, E can be in a basis state different from |0 with probability at most ψ − ψ ′ 2 = ψ ′′ 2 ≤ ǫ. Thus, the probability of the computation terminating for a fixed j < j i is at most 2ǫ. The probability of that happening for some j < j i is at most 2(j i − 1)ǫ < 2(m − 1)ǫ. We now assume that the algorithm is started from stage j i . Claim 4 If Algorithm 4 is started from stage j i (instead of stage 1), then |φ i,ji E − 1 κλ |0 E 2 ≤ ǫ 1 + 3ǫ 2 C. Proof: Let |ψ = λ α λ |λ be the output of the eigenvalue estimation in stage j i . Then, |αλ i | 2 ≥ 1 − ǫ and |ψ − αλ i |λ i 2 ≤ ǫ. Conditional on O being mapped to |1 , the estimation register E is in the state |ψ ′ = λ β λ |λ where β λ = α λ κλ when λ ≥ 1 ǫ2 j+1 and β λ = 0 otherwise. By Claim 1, we have 1 λ ∈ [0, ǫ2 j+1 ] ⊆ 0, 2 + 3ǫ 2 1 λ . When λ ≥ 1 ǫ2 j+1 , this implies β λ − α λ κλ = α λ κλ − α λ κλ ≤ 1 + 3ǫ 2 α λ κλ . When λ < 1 ǫ2 j+1 , we have β λ = 0 and β λ − α λ κλ = α λ κλ . 
By summing over all λ =λ, we get ψ ′ − 1 κλ ψ 2 ≤ 1 + 3ǫ 2 C λ:λ =λ |α λ | 2 ≤ 1 + 3ǫ 2 ǫC. Therefore, (conditional on the outcome register being |1 ) uncomputing UniqueEst leads to a state |ϕ E with ϕ − 1 κλ |0 2 ≤ ǫ 1 + 3ǫ 2 C. Since the algorithm might not reach stage j i with probability at most 2(m − 1)ǫ, we have to combine the error bounds from Claims 3 and 4. This gives us |φ i,ji E − 1 κλ |0 E ≤ ǫ 2m − 1 + 3ǫ 2 C. Combining this with bounds of 1.6ǫC and 36ǫC on ψ < and ψ > completes the proof of Claim 2. Claim 5 If |v i is bad, φ ′ i 2 ≤ 30C where C = ( 1 κλ i ) 2 . Proof: We express |φ ′ i = |φ ≤ + |φ > where |φ ≤ = j≤ji+1 |2j S ⊗ |φ i,j E , |φ > = j>ji+1 |2j S ⊗ |φ i,j E . We have φ ≤ 2 ≤ 1 κǫ2 ji+2 2 ≤ 16 1 + 3ǫ 2 2 C.(13) Here, the first inequality follows from the amplitude of |1 in (9) being 1 κλ , λ ≥ 1 ǫ2 j+1 and j ≤ j i + 1. The second inequality follows from Claim 1. Starting from stage j + 1, the probability of algorithm obtaining λ < 1 2 j+1 ǫ is at most ǫ at each stage. Therefore (similarly to the proof of Claim 2), φ > 2 = j>ji +1 φ ′ i,j 2 ≤ j>ji +1 2 j+1 ǫ κ 2 ǫ j−ji−1 ≤ 2 ji+2 ǫ κ 2 ∞ j=1 (4ǫ) j ≤ 16 1 + 3ǫ 2 2 C ∞ j=1 (4ǫ) j = 16 1 + 3ǫ 2 2 4ǫ 1 − 4ǫ C.(14) The claim follows by putting equations (13) and (14) together and using ǫ < 0.01. Proof: [of Lemma 6] We have |λ i −λ i | ≤ 1 + ǫ 2 j+1 ≤ (1 + ǫ)ǫλ i , with the first inequality following from the correctness of the unique-output eigenvalue estimation and the second inequality following from the definition ofλ i . Let δ = (1 + ǫ)ǫ. If |λ i −λ i | ≤ δλ i , then 1 λ i − 1 λ i ≤ δ 1 − δ 1 λ i . Therefore, we have |ψ ′ − |ψ ideal ≤ δ 1−δ |ψ ideal and Let j ≥ j i + 1. The probability that, in the j th run of eigenvalue estimation, the algorithm does not stop is at most ǫ. 
Therefore, p ji+k ≤ ǫ k−1 and the expression in (15) is at most k 2 uniq times 2 2(ji+1) + ∞ j=ji+2 ǫ j−ji−1 2 2j < 2 2(ji+1) + 2 2(ji+1) As shown in the proof of Claim 2, the probability of the algorithm stopping before stage j i is at most 2(j i − 1)ǫ ≤ 2(m − 1)ǫ. Therefore, the algorithm stops at stage j i or j i + 1 with a probability that is at least a constant. The probability of algorithm stopping succesfully (i.e., producing |1 in an outcome register) is 1 κ 2 λ 2 . By Claim 1, we have λ = O( 1 ǫ2 j i ). This implies that the probability of the algorithm stopping successfully is O( ǫ 2 2 2j i κ 2 ). δ 1 − δ = (1 + ǫ)ǫ 1 − (1 + ǫ)ǫ < 2ǫ 1 − 2ǫ . |v i I . Finally, performing Fourier transform and measuring produces |0 S with probability 1/m. Because of that, the success probability of algorithm 5 needs to be amplified. This adds a factor of O( √ m) to the running time, if we would like to obtain the result state with probability Ω(1) and a factor of O( √ m log 1 ǫ ) if we would like to obtain it with probability at least 1 − ǫ.Input: parameters x 1 , . . . , x m ∈ [0, 1], Hamiltonian H. 1. Initialize O to |2 , S to |1 and E to |0 . Set j I. Using the registers I and S, run bf UniqueEst(H ′ , 2 j , ǫ). Let λ ′ be the estimate output by UniqueEst and let : In section 4.3. When x 1 , . . . , x m ∈ [0, 1] are chosen uniformly at random, the probability of any given v i being bad is of order O(ǫ). Thus, E[δ] = O(ǫ) and E P 1 |ψ − |ψ ideal = O(mǫ ψ ideal ) with the expectation taken over the random choice of x 1 , . . . , x m ∈ [0, 1].To achieve an error of at most ǫ f inal , we choose ǫ = Θ(ǫ f inal /m). Running time. We now bound the running time of Algorithm 4. We start with two lemmas bounding the average running time T av and success probability p av .Lemma 7 T av , the l 2 -average running time of Algorithm 4, is of the order 4. 
4 4Proofs of Lemmas about the running time of Algorithm 4Proof: [ofLemma 7] We first consider the case when the input state |x is an eigenstate |v i of H. Let p stop,j be the probability that Algorithm 4 stops after stage j. Then, the square of l 2 average running time of Algorithm 4 is of the order in first j stages we use amplitude amplification for time k uniq (2 + 2 2 + . . . + 2 j ) = k uniq (2 j+1 − 2) = O(k uniq 2 j ). ∞ j=1 ( j=14ǫ) j = O(2 2ji ). If |x = i α i |v i , the square of l 2 -average of the number of steps is of the order O i |α i | 2 2 2ji k 2 uniq because, each subspace of the form |v i ⊗ H A ⊗ H S ⊗ H E stays invariant throughout the algorithm and, thus, can be treated separately. Taking square root finishes the proof. Proof: [of Lemma 8] Again, we can treat each subspace of the form |v i ⊗ H A ⊗ H S ⊗ H E separately. ( c ) cRun UniqueEst in reverse, to erase the intermediate information. (d) Check if the register E is in the correct initial state |0 E . If not, apply |2 O ⊗ |1 S → |0 O ⊗ |2j + 1 S on the outcome register O.(e) If the outcome register O is in the state |2 , increase j by 1 and go to step 2.Algorithm 4: State generation algorithm Input: Hamiltonian H. 1. Generate uniformly random x 1 , . . . , x m ∈ [0, 1]. 2. Apply variable-time amplitude amplification to Algorithm 4, with H and x 1 , . . . , x m as the input. The first bit of the output state indicates whether we have the desired state |ψ good or not. Since |α| 2 ≥ 1/2, we get |ψ good with probability at least 1/2. Quantum search of spatial regions. S Aaronson, A Ambainis, quant-ph/0303041Theory of Computing. 1S. Aaronson, A. Ambainis, Quantum search of spatial regions. Theory of Computing, 1:47-79, 2005. Also quant-ph/0303041. Quantum search with variable times. Theory of Computing Systems. A Ambainis, quant-ph/0609188Earlier versions in STACS'08 and. 47A. Ambainis. Quantum search with variable times. Theory of Computing Systems, 47(3): 786- 807, 2010. 
Earlier versions in STACS'08 and quant-ph/0609188.

D. Berry. Quantum algorithms for solving linear differential equations. arXiv:1010.2745.

G. Brassard, P. Høyer, M. Mosca, A. Tapp. Quantum amplitude amplification and estimation. In Quantum Computation and Quantum Information Science, AMS Contemporary Mathematics Series, 305:53-74, 2002. Also quant-ph/0005055.

A. Harrow, A. Hassidim, S. Lloyd. Quantum algorithm for linear systems of equations. Physical Review Letters, 103(15):150502, 2009. Also arXiv:0811.3171.

P. Kaye, R. Laflamme, M. Mosca. An Introduction to Quantum Computing. Cambridge University Press, 2007.

S. K. Leyton, T. J. Osborne. A quantum algorithm to solve nonlinear differential equations. arXiv:0812.4423.

J. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. Technical Report CMU-CS-94-125, School of Computer Science, Carnegie Mellon University, 1994.
DOI: 10.1016/s0375-9474(99)00673-9
arXiv: nucl-ex/9908002 (https://arxiv.org/pdf/nucl-ex/9908002v1.pdf)
The Charge Form Factor of the Neutron from ²H(e⃗, e′n)p

Aug 1999 (arXiv:nucl-ex/9908002v1)

I. Passchier (1), L.D. van Buuren (1,2), D. Szczerba (3), R. Alarcon (4), Th.S. Bauer (1,5), D. Boersma (1,5), J.F.J. van den Brand (1,2), H.J. Bulten (1,2), M. Ferro-Luzzi (1,2), D.W. Higinbotham (6), C.W. de Jager (1,7), S. Klous (1,2), H. Kolster (1,2), J. Lang (3), D. Nikolenko (8), G.J. Nooren (1), B.E. Norum (6), H.R. Poolman (1,2), I. Rachek (8), M.C. Simani (1,2), E. Six (4), H. de Vries (1), K. Wang (6), Z.-L. Zhou (9)

(1) NIKHEF, NL-1009 DB Amsterdam, The Netherlands
(2) Dept. of Physics and Astronomy, VU, NL-1081 HV Amsterdam, The Netherlands
(3) Institut für Teilchenphysik, ETH, CH-8093 Zürich, Switzerland
(4) Dept. of Physics and Astronomy, ASU, Tempe, AZ 85287, USA
(5) Physics Dept., UU, 3508 TA Utrecht, The Netherlands
(6) Dept. of Physics, UVa, Charlottesville, VA 22901, USA
(7) TJNAF, Newport News, VA 23606, USA
(8) BINP, 630090 Novosibirsk, Russian Federation
(9) Laboratory for Nuclear Science, MIT, Cambridge, MA 02139, USA
Polarized electrons were injected into an electron storage ring at a beam energy of 720 MeV. A Siberian snake was employed to preserve longitudinal polarization at the interaction point. Vector-polarized deuterium was produced by an atomic beam source and injected into an open-ended cylindrical cell, internal to the electron storage ring. The spin correlation parameter A V ed was measured for the reaction 2 H( e, e ′ n)p at a four-momentum transfer squared of 0.21 (GeV/c) 2 from which a value for the charge form factor of the neutron was extracted. Motivation The charge distribution of the neutron is described by the charge form factor G n E , which is related to the Fourier transform of the distribution and is generally expressed as a function of Q 2 , the square of the four-momentum transfer. Data on G n E are important for our understanding of the nucleon and are essential for the interpretation of electromagnetic multipoles of nuclei, e.g. the deuteron. Since a practical target of free neutrons is not available, experimentalists mostly resorted to (quasi)elastic scattering of electrons from unpolarized deuterium [1,2] to determine this form factor. The shape of G n E as function of Q 2 is relatively well known from high precision elastic electron-deuteron scattering [2], but the absolute scale still contains a systematic uncertainty of about 50%. The slope of G n E at Q 2 = 0 (GeV/c) 2 is known from measurements where thermal neutrons are scattered from atomic electrons [3]. The systematic uncertainties can be significantly reduced through the measurement of electronuclear spin observables. 
The scattering cross section for the 2H(e,e′N) reaction, with both longitudinally polarized electrons and a polarized target, can be written as [4]

S = S_0 [ 1 + P_1^d A_d^V + P_2^d A_d^T + h ( A_e + P_1^d A_ed^V + P_2^d A_ed^T ) ],   (1)

where S_0 is the unpolarized cross section, h the polarization of the electrons, and P_1^d (P_2^d) the vector (tensor) polarization of the target. The target analyzing powers and spin-correlation parameters (A_i) depend on the orientation of the nuclear spin. The polarization direction of the deuteron is defined by the angles Θ_d and Φ_d in the frame where the z-axis is along the direction of the three-momentum transfer (q) and the y-axis is defined by the vector product of the incoming and outgoing electron momenta. The observable A_ed^V(Θ_d = 90°, Φ_d = 0°) contains an interference term, in which the effect of the small charge form factor is amplified by the dominant magnetic form factor (see e.g. Refs. [4,5]). In the present paper we describe a measurement performed at NIKHEF (Amsterdam), which uses a stored polarized electron beam and a vector-polarized deuterium target to determine G_E^n via a measurement of A_ed^V(90°, 0°).

Experimental setup

The experiment was performed with a polarized gas target internal to the AmPS electron storage ring. An atomic beam source (ABS) [6,7] was used to inject a flux of 4.6×10^16 deuterium atoms/s into a cooled storage cell. Polarized electrons were produced by photo-emission from a strained-layer semiconductor cathode (InGaAsP) [8]. After linear acceleration to 720 MeV the electrons were injected and stacked in the AmPS storage ring. Every 5 minutes the ring was refilled, after reversal of the electron polarization at the source. The polarization of the stored electrons was maintained by setting the spin tune to 0.5 with a strong solenoidal field (using the Siberian snake principle [9]). Scattered electrons were detected in the large-acceptance magnetic spectrometer [10].
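As an illustration of how Eq. (1) is used, the following sketch (not code from the experiment; all numerical values below are made-up placeholders) evaluates the polarized cross section and the resulting beam-helicity asymmetry:

```python
def cross_section(S0, h, Pd1, Pd2, AVd, ATd, Ae, AVed, ATed):
    """Polarized cross section of Eq. (1); symbol names follow the text."""
    return S0 * (1 + Pd1 * AVd + Pd2 * ATd + h * (Ae + Pd1 * AVed + Pd2 * ATed))

def helicity_asymmetry(**kw):
    # Asymmetry between the two beam-helicity states h = +1 and h = -1.
    s_plus = cross_section(h=+1, **kw)
    s_minus = cross_section(h=-1, **kw)
    return (s_plus - s_minus) / (s_plus + s_minus)
```

For a purely vector-polarized target and negligible A_e (P_2^d = A_e = 0), the asymmetry reduces to P_1^d A_ed^V / (1 + P_1^d A_d^V), which is why A_ed^V is directly accessible in this configuration.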
The electron detector was positioned at a central angle of 40°, resulting in a central value of Q^2 = 0.21 (GeV/c)^2. Neutrons and protons were detected in a time-of-flight (TOF) system made of two subsequent and identical scintillator arrays. Each of the four bars in an array was preceded by two plastic scintillators used to identify and/or veto charged particles. By simultaneously detecting protons and neutrons in the same detector, one can construct asymmetry ratios for the two reaction channels 2H(e,e′p)n and 2H(e,e′n)p, in this way minimizing systematic uncertainties associated with the deuteron ground-state wave function, absolute beam and target polarizations, and possible dilution by cell-wall background events.

Results

An experimental asymmetry A_exp = (N^+ − N^−)/(N^+ + N^−) can be constructed, where N^± is the number of events that pass the selection criteria with hP_1^d either positive or negative. A_exp for the 2H(e,e′p)n channel, integrated up to a missing momentum of 200 MeV/c, was used to determine the effective product of beam and target polarization by comparison to the predictions of the model of Arenhövel et al. [4]. This advanced, non-relativistic model has been shown to provide good descriptions of quasifree proton knockout from tensor-polarized deuterium [11]. Finite-acceptance effects were taken into account with a Monte Carlo code. The spin-correlation parameter for the neutron events was obtained from the experimental asymmetry by correcting for the contribution of protons misidentified as neutrons (less than 1%, as determined from a calibration with the reaction 1H(e,e′p)), and for the product of beam and target polarization, as determined from the 2H(e,e′p)n channel.

[Figure 2 caption, continued: The shaded area indicates the systematic uncertainty from Ref. [2]. The dotted curve shows the results of Ref. [14], while the solid and dashed curves represent the predictions of the VMD model of Ref. [15].]
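The count-rate asymmetry and its polarization correction can be sketched as follows (illustrative counts and polarization value only; the actual analysis also corrects for misidentified protons and detector acceptance):

```python
def experimental_asymmetry(n_plus, n_minus):
    # A_exp = (N+ - N-)/(N+ + N-) from event counts in the two combined
    # beam/target spin states.
    return (n_plus - n_minus) / (n_plus + n_minus)

def corrected_asymmetry(a_exp, beam_times_target_pol):
    # Dividing out the effective product of beam and target polarization,
    # as done when extracting the spin-correlation parameter.
    return a_exp / beam_times_target_pol
```

With, say, 5200 and 4800 selected events in the two states and an effective polarization product of 0.5, the raw asymmetry of 0.04 corrects to 0.08.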
Figure 1 shows the spin-correlation parameter for the 2H(e,e′n)p channel as a function of missing momentum. The data are compared to the predictions of the full model of Arenhövel et al. [4], assuming the dipole parameterization for the magnetic form factor of the neutron, folded over the detector acceptance with our Monte Carlo code for various values of G_E^n. Full model calculations are required for a reliable extraction of G_E^n. We extract G_E^n(Q^2 = 0.21 (GeV/c)^2) = 0.066 ± 0.015 ± 0.004, where the first (second) error indicates the statistical (systematic) uncertainty. In Fig. 2 we compare our experimental result to other data obtained with spin-dependent electron scattering. The figure also shows the results from Ref. [2]. It is seen that our result favors their extraction of G_E^n which uses the Nijmegen potential.

Conclusions

In summary, we presented the first measurement of the sideways spin-correlation parameter A_ed^V(90°, 0°) in quasifree electron-deuteron scattering, from which we extract the neutron charge form factor at Q^2 = 0.21 (GeV/c)^2. When combined with the known value and slope [3] at Q^2 = 0 (GeV/c)^2 and the elastic electron-deuteron scattering data from Ref. [2], this result puts strong constraints on G_E^n up to Q^2 = 0.7 (GeV/c)^2.

We would like to thank the NIKHEF and Vrije Universiteit technical groups for their outstanding support and Prof. H. Arenhövel for providing the calculations. This work was supported in part by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), the National Science Foundation under Grant No. PHY-9504847 (Arizona State Univ.), the US Department of Energy under Grant No. DE-FG02-97ER41025 (Univ. of Virginia), and the Swiss National Foundation.
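The paper quotes the statistical and systematic uncertainties separately; when a single error bar is needed, a common convention is to combine them in quadrature (an illustration, not something the paper itself does):

```python
import math

# Quoted result: G_E^n(Q^2 = 0.21 (GeV/c)^2) = 0.066 +/- 0.015 (stat) +/- 0.004 (syst).
g_en, stat, syst = 0.066, 0.015, 0.004

# Quadrature sum of independent uncertainties.
total_err = math.hypot(stat, syst)
```

The systematic contribution is small here, so the combined uncertainty (~0.0155) is dominated by statistics.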
Figure 1. Data and predictions for the sideways asymmetry A_ed^V(90°, 0°) versus missing momentum for the 2H(e,e′n)p reaction. The curves represent the results of the full model calculations of Arenhövel et al. assuming various values for G_E^n.

Figure 2. Data and theoretical predictions for G_E^n. The solid circle shows our result; the cross, open circle, and square show the results from double-polarization measurements on 3He [12], and the triangle on 2H [13].

References

[1] A. Lung et al., Phys. Rev. Lett. 70, 718 (1993).
[2] S. Platchkov et al., Nucl. Phys. A510, 740 (1990).
[3] S. Kopecki et al., Phys. Rev. Lett. 74, 2427 (1995); Phys. Rev. C56, 2229 (1997).
[4] H. Arenhövel, W. Leidemann, and E. L. Tomusiak, Z. Phys. A331, 123 (1988); Erratum, Z. Phys. A334, 363 (1989).
[5] T. W. Donnelly and A. S. Raskin, Ann. Phys. 169, 247 (1986).
[6] M. Ferro-Luzzi et al., Nucl. Instr. Meth. A364, 44 (1995).
[7] L. D. van Buuren et al., these proceedings.
[8] Y. B. Bolkhovityanov et al., Proc. of the 12th Int. Symposium on High Energy Spin Physics, edited by C. W. de Jager et al., p. 730-732, World Scientific, 1997.
[9] Ya. S. Derbenev and A. M. Kondratenko, Sov. Phys.-JETP 37, 968 (1973); Sov. Phys.-Dokl. 20, 830 (1975).
[10] D. J. J. de Lange et al., Nucl. Instr. Meth. A412, 254 (1998); Nucl. Instr. Meth. A406, 182 (1998).
[11] Z.-L. Zhou et al., Phys. Rev. Lett. 82, 687 (1999).
[12] M. Meyerhoff et al., Phys. Lett. B327, 201 (1994); C. E. Jones et al., Phys. Rev. C44, R571 (1991); A. K. Thompson et al., Phys. Rev. Lett. 68, 2901 (1992).
[13] T. Eden et al., Phys. Rev. C50, R1749 (1994).
[14] S. Galster et al., Nucl. Phys. B32, 221 (1971).
[15] M. F. Gari and W. Krümpelmann, Phys. Lett. B274, 159 (1992).
arXiv:1802.09149 · doi: 10.2142/biophysico.16.0_9 · corpus ID 64299291
Biophysics and Physicobiology 16 (2019), doi: 10.2142/biophysico.16.0_9. Regular Article. Received May 9, 2018; accepted November 15, 2018.

Dynamic and static analyses of glass-like properties of three-dimensional tissues

Hironobu Nogucci ([email protected]), Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan

Keywords: tissues and organs; epithelial-mesenchymal transition; jamming transition; glass

The mechanical properties of tissues are influenced by those of constituent cells in various ways. For instance, it has been theoretically demonstrated that two-dimensional confluent tissues comprising mechanically uniform cells can undergo density-independent rigidity transitions, and analysis of the dynamical behavior of tissues near the critical point revealed that the transitions are geometrically controlled by the so-called cell shape parameter. To investigate whether three-dimensional tissues behave similarly to two-dimensional ones, we herein extend the previously developed model to three dimensions both dynamically and statically, demonstrating that two mechanical states similar to those of glassy materials exist in the three-dimensional case. Scaling analysis is applied to the static model focused from the rearrangement viewpoint. The obtained results suggest that the upper critical dimension of tissues equals two and is therefore the same as that of the jamming transition.

… tissues and plays important roles during developmental processes, wound healing, and cancer metastasis [1]. The above processes have many common features and are therefore categorized as epithelial-mesenchymal transitions (EMTs).
Notably, epithelial cells are tightly packed, i.e., located in close proximity to each other, while the packing of mesenchymal cells is more loose. During EMT, the mechanical properties of a tissue change from those of the epithelial state to those of the mesenchymal state, while the opposite phenomenon is denoted as MET. These macroscopic states reflect individual cell mechanical properties such as cortical tension and intercellular adhesion that are regulated by gene expression. From the viewpoint of physics, EMT can be described as being analogous to two well-known phase transitions. For instance, the mechanical rigidity of epithelial tissues and the fluid-like nature of mesenchymal tissues suggest that their interconversion can be modeled by a solid-to-liquid phase transition. However, epithelial cells feature an irregular configuration and behave similarly to glassy materials, most of which undergo a jamming transition upon packing rate (ρ) change; yet confluent tissues experience EMT even when the packing rate is kept constant (ρ = 1) [2]. Alternatively, EMT can be modeled by phase transitions induced by the collective motions of active matter components [3,4]. In particular, active matter comprises particles that move individually by consuming energy supplied from outside, with notable multi-scale examples being flocks of birds, groups of cells, and intracellular components, and the cooperative motion of these particles results in the formation of dynamic collective orders. However, active matter phase transitions are accompanied by density changes, which is not the case for EMT. EMT has been extensively experimentally studied, particularly in two-dimensional systems. For instance, a Bayesian force inference was proposed to measure the distribution of forces on the edges between two contacting cells in developmental processes [5], and the above method was employed to show that mechanical anisotropy promotes cell packing with hexagonal ordering in the Drosophila pupal wing [6]. Additionally, structural reconstructions of laser-ablation-produced holes in epithelial sheets were investigated in the context of wound healing [7], while breast cancer cells were demonstrated to undergo individual pulsating migrations in epithelial tissues due to mechanical property mismatch, which provided insights into tumor progression [8,9].

Significance: Changes in the mechanical properties of individual cells comprising three-dimensional tissues are shown to induce a transition affording a glass-like state. In view of the fact that the above properties (e.g., elasticity, contractility, and surface tension) are strongly dependent on single-cell shape, tissue stiffness can be estimated by imaging of constituent cells. Moreover, since a similar relationship between cell shape and tissue stiffness has previously been reported for a two-dimensional case, the upper critical dimension of tissues may equal two, which is the same as that of the material jamming transition.

where V_i and V_0 represent cell volume and its optimal value in a single-cell system, respectively, while K_v represents the cell's elastic constant in three-dimensional systems [13].

In this study, collective cell behavior in three-dimensional confluent tissues was investigated using dynamical and static models. The settings used to model dynamical cell motions are explained, and the corresponding results demonstrate that the phase transition of collective motions is similar to a glass transition and is influenced by a shape parameter introduced later in the text. To investigate static behavior around the transition, cell rearrangement energy was measured and analyzed by the scaling method described. Finally, conclusions and discussion are presented.

Methods

Model equations

The Voronoi cell model used in a previous work [12] was employed to describe collective cellular motion considering cell shape. In this model, the position and orientation of cell i are represented as x_i (= (x_i, y_i, z_i)) and p_i (= (p_xi, p_yi, p_zi)), respectively. Specifically, cell position is denoted by the cell center, and cell orientation is denoted by a unit vector pointing from the cell tail to the cell head. Cell shape is approximated through a graphic representation generated by a three-dimensional Voronoi tessellation of {x_i} (although neighboring cells may not be in contact with each other), and gaps between cells are filled with a viscous fluid. The above approximation implies that (i) cell shape must be represented by a convex polyhedron and (ii) there should be no vacant space in the system. Throughout this paper, the boundary condition is set to be periodic, and Euler's polyhedron formula leads to the following relation: #f − #e + #v = 2, where #f, #e, and #v denote the total number of faces, edges, and vertices of a single cell, respectively. A vertex connects three edges if degeneracy is ignored, which results in the relation 2#e = 3#v. These relations reveal that #e and #v can be directly derived if #f is known.

We assume that (i) the forces acting on cell i are described as F_i = −∇_i E and have the same form as that described previously [12], and (ii) the magnitude of cellular self-propulsion is constant and equals v_0. In this model, neighboring cells are not necessarily in contact with each other, while their shapes are approximated by assuming cell-to-cell contacts. The action-reaction principle is balanced between a floating cell and the surrounding fluid; however, the fluid force is ignored. Based on the overdamped equation of motion, cell position x_i evolves as

ẋ_i = μF_i + v_0 p_i.   (3)

Cell orientation is assumed to be randomly perturbed within the plane perpendicular to the orientation:

ṗ_i = ν η_i(t) × p_i,   (4)
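The face-edge-vertex bookkeeping above (Euler's formula combined with the three-edges-per-vertex condition) pins down #e and #v once #f is known; a minimal check, not taken from the paper's code:

```python
def edges_and_vertices(n_faces):
    # From Euler's formula #f - #e + #v = 2 and 2#e = 3#v:
    # #e = 3(#f - 2) and #v = 2(#f - 2).
    e = 3 * (n_faces - 2)
    v = 2 * (n_faces - 2)
    assert n_faces - e + v == 2  # Euler's polyhedron formula
    assert 2 * e == 3 * v        # three edges meet at every vertex
    return e, v
```

For the truncated octahedron (#f = 14), this gives 36 edges and 24 vertices, matching the known counts for that solid.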
Moreover, tracking of individual cells and their glassy behavior, such as caging and dynamic heterogeneity, was employed to analyze collective cell motions within tissues [10]. Recently, Bi et al. observed a new type of phase transition in two-dimensional tissues comprising mechanically uniform cells [11,12], employing a model with a phenomenological energy functional (E) originating from cellular shape. In particular, E can be viewed as the sum of individual cell energies E_i, which in turn can be expressed as

E_i = K_A (A_i − A_0)^2 + ξP_i^2 + γP_i,   (1)

where A_i, P_i, K_A, ξ, γ, and A_0 are the mean area and perimeter of cell i, the cell's elastic constant in two-dimensional systems, the active contractility driven by the cytoskeleton present in cells, the interfacial tension between contacting cells, and the optimal area of an isolated cell, respectively, while P = −γ/(2ξ), introduced as a "shape parameter," is the optimal cell perimeter in the energy ground state. The energy cost of cell rearrangement vanishes at P > P_0, and the rigidity transition was found to occur at P_0 ≈ 3.81, at which value the optimal cell shape corresponds to a regular pentagon. Furthermore, collective cell motions drastically change at P ≈ P_0. While individual cells move diffusively if P and the magnitude of the self-propelling velocity v_0 are large, some are caged by their surrounding cells, and the collective motion is heterogeneous elsewhere. Despite the abovementioned insights, the behavior of three-dimensional tissues has not been investigated to date, and the existence of a rigidity transition in three-dimensional cases remains to be elucidated.
Herein, we extend the energy functional E to a three-dimensional system and model the individual cell energy E_i as

E_i = K_v (V_i − V_0)^2 + K_A (A_i − A_0)^2,   (2)

Results and Discussion

Dynamical cell behavior

Diffusive and sub-diffusive collective motion

Two distinct types of collective motion are observed on changing the parameter values v_0 and Ã_0. If both v_0 and Ã_0 are small, cell rearrangement is hardly observed, and collective motion is as slow as that in glass, while fast and fluid-like collective motion is observed when v_0 and Ã_0 are large. To quantitatively characterize these motions, we measured the mean squared displacement (MSD) of cell trajectories (Fig. 1). To investigate the origin of the time-scale-dependent behavior of MSD(t), we first divide the two time scales by the length scale of MSD(t). For cell rearrangement, cells must move as far as ~0.01 V_0^{1/3}, which is comparable to the edge length of a single cell. Self-propulsion is dominant before cells move further away, whereas shape force or diffusion is dominant after cell rearrangement. MSD(t) is defined as

MSD(t − t′) = (1/N) Σ_{i=1}^{N} |x_i(t) − x_i(t′)|^2,  t′ = 2000,

Second, we characterize long-term collective motions and introduce D_0 = v_0^2/(3D) as a unit of self-diffusivity, with the magnitude of self-diffusivity D_s measured as D_s = lim_{t→∞} MSD(t)/(6t). Practically, D_s can be measured by averaging MSD(t)/(6t) over values of t that satisfy MSD(t) > 0.01. If D_s/D_0 is larger than the noise-floor-derived threshold, the collective motion is regarded as diffusive; otherwise, it is considered to be sub-diffusive. The threshold value is set to 0.05. Figure 2 shows the phase diagram of collective motions obtained as a result of setting the threshold. Around the threshold, the value of D_s/D_0 sharply increases with increasing Ã_0.
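The MSD and long-time self-diffusivity estimates just defined can be sketched as follows (a minimal illustration with assumed helper names, not the paper's code; periodic-boundary wrapping is omitted for brevity):

```python
def msd(trajs, t_ref=0):
    # Mean squared displacement over cells, relative to the reference time
    # t_ref; trajs[i][t] is the (x, y, z) position of cell i at step t.
    n_cells = len(trajs)
    curve = []
    for t in range(t_ref, len(trajs[0])):
        s = sum(sum((a - b) ** 2 for a, b in zip(tr[t], tr[t_ref]))
                for tr in trajs)
        curve.append(s / n_cells)
    return curve

def self_diffusivity(curve, dt=1.0, threshold=0.01):
    # D_s = lim_{t->inf} MSD(t)/(6t), estimated by averaging MSD(t)/(6t)
    # over times where MSD(t) exceeds the threshold, as described in the text.
    vals = [m / (6 * k * dt) for k, m in enumerate(curve)
            if k > 0 and m > threshold]
    return sum(vals) / len(vals) if vals else 0.0
```

For a purely ballistic trajectory the estimator grows linearly with the averaging window, while for a diffusive trajectory it converges, which is exactly the distinction the D_s/D_0 threshold exploits.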
Long-term diffusive collective motions are observed when the magnitude of self-propulsion (v_0) and the shape parameter (Ã_0) are large, whereas sub-diffusive motions are observed if both of these parameters are small. The critical point Ã_0* in the limit v_0 → 0 exceeds 5.4, although the critical point for the regular packing of truncated octahedra equals 5.31. This point is of particular interest because cell dynamics above this value is purely dominated by the force originating from the shape energy functional, except for the noise effect, and cells can freely rearrange. The exact critical point and the behavior near this point are discussed later in the manuscript.

Similarities with glassy materials

Next, we consider the properties of sub-diffusive motions (d < 1) and address the question of how these motions are

where μ is the mass of each cell divided by the drag coefficient, and ν denotes the moment of inertia divided by the rotational drag coefficient. The random vector η_i(t) = (η_i^x(t), η_i^y(t), η_i^z(t)) obeys the following statistics:

〈η_i^k(t) η_j^{k′}(t′)〉 = 2D δ_ij δ_kk′ δ(t − t′),   (5)

where δ_ij and δ_kk′ are the Kronecker deltas on the cell indices and on the component indices of the vector, respectively, while δ(t − t′) is the Dirac delta function of the time variables, and D is the magnitude of the orientational change of cell motion. The developed model of collective cellular motion is denoted as the self-propelled Voronoi (SPV) model.

Rescaling and parameter settings

E_i can be expressed as

E_i = K_v V_0^2 (Ṽ_i − 1)^2 + K_A V_0^{4/3} (Ã_i − Ã_0)^2,   (6)

where Ṽ_i = V_i/V_0 and Ã_i = A_i/V_0^{2/3} are the rescaled volume and surface area, respectively. Ã_0 depends on cell mechanical properties and denotes the optimal surface area per unit volume that minimizes the shape energy functional if the shapes of all cells are confined to be equal, convex, and isotropic (Table 1). In this paper, this parameter is referred to as the "shape parameter."
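The dimensionless shape parameter Ã = A/V^{2/3} of Eq. (6) can be evaluated for a few reference shapes; the truncated-octahedron value reproduces the 5.31 packing threshold quoted above (a sketch, not the paper's code):

```python
import math

def shape_parameter(area, volume):
    # Dimensionless surface area per unit volume, A~ = A / V^(2/3).
    return area / volume ** (2.0 / 3.0)

# Sphere of radius r: A = 4*pi*r^2, V = (4/3)*pi*r^3  ->  (36*pi)^(1/3) ~ 4.836.
sphere = shape_parameter(4 * math.pi, 4 * math.pi / 3)
# Cube of edge a: A = 6a^2, V = a^3  ->  exactly 6.
cube = shape_parameter(6.0, 1.0)
# Truncated octahedron of edge a: A = (6 + 12*sqrt(3)) a^2, V = 8*sqrt(2) a^3.
trunc_oct = shape_parameter(6 + 12 * math.sqrt(3), 8 * math.sqrt(2))
```

The sphere gives the absolute lower bound (~4.836), and the truncated octahedron (~5.315) sits between the sphere and the cube, consistent with it being the most "compact" space-filling cell shape in the Kelvin problem.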
For simplification, we select the unit length of the system as V_0^{1/3} and set V_0 = 1. The system size L is set to L = 6V_0^{1/3}, and the total number of cells N is set to N = 6^3, so that the packing ratio NV_0/L^3 equals unity. The energy ratio, defined as r = K_A/(K_v V_0^{1/3}), is fixed at r = 1. Some parameters determining cell properties are also fixed, i.e., μ = 1, ν = 1, and D = 0.1. The cell positions are randomly partitioned for the initial configuration. Numerical simulations are performed until t = 22000 (time unit = 1/(μK_v V_0)) at a fixed step size of Δt = 0.1. The statistical values for all parameter regions are calculated by averaging the results obtained for five different samples under different initial conditions. From the geometrical viewpoint, Ã_0 is related to the Kelvin problem, which poses the question of how space can be partitioned into cells of equal volume with the least surface area. If all cells are assumed to have identical shapes, the least surface area is observed for a truncated octahedron [14].

similar to that of glassy materials. The first common example could be "caging," a phenomenon that is observed when particles cannot move because they are surrounded by their nearest neighbors for a long period of time [2], and it is represented by the self-intermediate scattering function F_s(k, t). The above function is defined as F_s(k, t) = 〈e^{ik·Δr(t)}〉, where Δr(t) denotes the difference between cell positions at the start time t_0 and the measured time (t_0 = 2000), while 〈·〉 represents the average over all cells. Figure 3 shows the values of F_s(k, t) after averaging the angles of k for different parameters. The magnitude of k is fixed so that F_s(k, 0) ≡ 1 (k = π/r_0), where r_0 is the averaged nearest distance of contacting cells for each cell. If caging occurs, the value of F_s(k, t) is maintained near unity for a long period of time. For a fixed value of v_0 = 0.004 and Ã_0 < 5.3, F_s(k, t) still has a high value at the end-time of the numerical simulations. Elsewhere, the function approaches zero with increasing time, and the coefficients of the curves have almost the same order (~exp(−10^{−4}t)), which indicates that the tissue structure is completely relaxed within the end-time of the numerical simulations. The second common example corresponds to dynamic heterogeneity [2]. Figure 4 depicts cell migration vectors, which are defined as vectors pointing from the starting position to the finishing position determined on a 10^3 time scale (t = 21000-22000). In the parameter region that allows diffusive collective motions but is near the sub-diffusive region boundary, some cells move for a long distance, while others stay in small domains for a long time. This indicates that dynamical heterogeneity is also detected near the diffusive-sub-diffusive transition boundary line, and it suggests that the dynamics of three-dimensional tissues is similar to that of glass.

Analysis of individual cell shape

To understand the relationship between individual cell shape and collective motions, we measured the distributions of cell shapes. In the adopted model system, cell shape is approximated by convex polyhedra. Figure 5 shows the effect of Ã_0 on the number of cell faces at fixed v_0 (=0.004), revealing that in the parameter region where collective motion is sub-diffusive, the average number of faces equals 14. At higher Ã_0, when diffusive collective motion is observed, the average number of faces exceeds 15 and features a variance larger than that observed for Ã_0 < 5.3. Common phenomena have been observed for two-dimensional tissues, in which case an increase of the shape index triggered the "sub-diffusive"-to-"diffusive" transition [12]. The lattices of regular truncated octahedra are the global solutions of the Kelvin problem. These polyhedra have 14 faces and therefore probably represent the average shape of individual cells in the sub-diffusive collective motion regime. In two-dimensional tissues, however, the critical point corresponds to a shape-parameter value representing a regular pentagon, although the global solution for the energy minimum states is represented by a regular hexagon. Therefore, we investigated actual cell shapes in three-dimensional tissues to determine whether these shapes typically correspond to regular truncated octahedra. These polyhedra feature six regular squares and eight regular hexagons as faces, and the areas corresponding to unit volume equal 0.1984 and 0.5155 for square and hexagonal faces, respectively. Figure 6 shows the joint distribution for faces that belong to an n-faced polyhedron and have a specific area value. The distribution for the 14-faced polyhedra showing sub-diffusive motion features a single peak at an area value of ~0.4 (Fig. 6(a)), which indicates that cell shapes are not similar to regular truncated octahedra. Although the lattice of a regular truncated polyhedron is the global solution of the Kelvin problem, such polyhedra do not have an isotropic shape. Both the model equations and the periodic boundary cube lack a mechanism to break the symmetry of isotropy for a single cell. Moreover, the area-0 peak is always found for all faces in the case of diffusive motion (Fig. 6(b)), i.e., cell rearrangement occurs for any cell shape. Diffusive collective motions originate from free cell rearrangements.

Critical point and scaling behavior of cell rearrangement energy

In the previous section, we investigated the transition from sub-diffusive collective motions to diffusive collective motions on a large time scale; however, the value of the critical point Ã_0* in the limit v_0 → 0 remains unknown. Therefore, we aimed to determine the above value and study the critical behavior in the proximity of this value by ignoring self-propulsion. To determine the value of Ã_0*, one should measure the energies of cell rearrangements. Hereafter, we explain the employed measurement method, focusing on parameter dependency only for the shape parameter Ã_0 and the energy ratio r.

Measurement method

To investigate the energetic properties of cell rearrangement, we introduce a static model. In addition to the dynamical model mentioned previously, Equation (6) is adopted to describe the energy functional originating from the cell-shape constraint, and one of the states corresponding to local energy minima is reached from a random initial configuration of cell positions using the gradient descent method. Using the final configuration that reaches the local energy minimum, we measure the rearrangement energy for contacting pairs of cells. Figure 7 schematically expresses the difference of the adopted approach from that used in ref. [11]. For simplicity, we consider two-dimensional systems. In ref. [11], a cellular vertex model (CVM) was adopted, and the variables represented the positions of vertices where three cells contact each other; therefore, rearrangement was generated by shortening the edge length of the contacting cells to zero. Herein, on the contrary, we adopt the SPV model, where the variables represent the positions of cell centers, and the length of the edge cannot therefore be controlled. Instead of inducing length changes, cell centers are moved in the opposite direction to each other, and cell pairs become disconnected after several iterations of this operation. In the case of the three-dimensional system, contacting edges are replaced with contacting faces, while both the process and its efficiency remain unchanged. The detailed procedures are explained in Supplementary Text S1.

Critical point and scaling behavior

The energy of a cellular rearrangement is denoted as ΔE. Figure 8 shows the distributions of ΔE rescaled with respect to its average over the sample faces for many parameter sets, revealing that good fits can be obtained using the k-gamma distribution p(x) = k^k x^{k−1} e^{−kx}/(k−1)!, with k = 1.38 ± 0.01. This distribution reflects the maximization of entropy at a constant packing ratio, and it is observed for many types of disordered systems [15,16]. Figure 9(a) shows the average ΔE for different parameter sets. Depending on the magnitude of the energy ratio r, the finite values of the average ΔE differ in the regions where Ã_0 is small. After rescaling via multiplication by r, three curves with similar forms are obtained (Fig. 9(b)). As measured here, the critical point Ã_0* exists within the range 5.4-5.5. The rescaled rearrangement energy can possibly be used as a parameter to classify the two phases. To examine the hypothesis that the phase transition occurs and the rescaled rearrangement energy vanishes at the critical point Ã_0*, the measured data are scaled following a previously employed method [11]. In the Ising model for ferromagnetism, magnetization m is expressed as a function of the magnetic field h and the temperature difference T − T_c, where T_c is the critical temperature. Analogously, the rescaled rearrangement energy obeys the crossover scaling form

rΔE = |Ã_0 − Ã_0*|^β f_±( r / |Ã_0 − Ã_0*|^Δ ),   (7)

where Δ is the crossover scaling critical exponent, and f_+ and f_− are the two branches of the crossover function, whose subscript sign corresponds to that of (Ã_0 − Ã_0*). Additionally, z is defined as z = r/|Ã_0 − Ã_0*|^Δ and represents the crossover scaling variable. The exponent β can be derived from the following relation in the limit z → 0: rΔE ∝ |Ã_0 − Ã_0*|^β. At the critical point, the two branches are smoothly connected for
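The k-gamma fit quoted above can be written down directly; note that (k−1)! is to be read as Γ(k) for non-integer k, so the distribution is a gamma distribution with shape k and rate k (mean fixed at 1). A sketch, with the numerical-integration parameters being arbitrary choices:

```python
import math

def k_gamma_pdf(x, k):
    # k-gamma distribution: p(x) = k^k x^(k-1) e^(-kx) / Gamma(k).
    return k ** k * x ** (k - 1) * math.exp(-k * x) / math.gamma(k)

def _integrate(f, a, b, n=100000):
    # Plain midpoint rule; accurate enough for this smooth, fast-decaying pdf.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

k = 1.38  # fit value reported in the text
norm = _integrate(lambda x: k_gamma_pdf(x, k), 0.0, 50.0)
mean = _integrate(lambda x: x * k_gamma_pdf(x, k), 0.0, 50.0)
```

Both the normalization and the mean come out as 1, which is the point of rescaling ΔE by its average before fitting.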
First, EMT occurs in three-dimensional tissues, and its understanding is expected to shed light on various biological phenomena. Furthermore, some tissues such as skin and trachea can be regarded as pseudo-two-dimensional because they are only several cells thick in one direction. If the upper critical dimension of the tissue is two, these tissues are similarly expected to exhibit glassy properties, because both two- and three-dimensional tissues show these properties [11,12]. We did not consider the dynamics of single-cell mechanical properties such as the elastic coefficients K_V and K_A, or other types of interactions coupling ν₀ and p_i. Consideration of these dynamics could lead to another type of collective motion, which may have a relation with chemotaxis and planar cell polarity. We assumed that tissues comprise cells with uniform mechanical properties; however, breaking this assumption may also trigger interesting phenomena. In view of the fact that the shape index ã₀ can be experimentally measured, the results of this study can be tested by assessing the mechanical properties of three-dimensional tissues. Furthermore, measurements of the shape index for all cells by three-dimensional imaging may unveil some anomalous events such as cancer metastasis.

Note

After the completion of this manuscript, we noted that a similar problem has been addressed in ref. [17], where it was determined that the critical point does not correspond to the global solution of tissue energy functionals. We, however, showed the scaling properties of cell rearrangement energies and of the phase transition, which are yet to be reported.

(ã₀*, β, Δ) ≈ (5.410, 1, 4) (Supplementary Text S2). With this set of values, the scaling function is obtained for z and collapses to a universal curve (Fig. 10). As seen in Section 3.1, the value of ã₀* is not equal to that of the regular truncated octahedron.
In the branch where cell rearrangement is highly suppressed by finite energy barriers, the barrier height scales as r⟨ΔE⟩ ∝ K_V V₀² |ã₀ − ã₀*|, while it is described by r⟨ΔE⟩ ∝ r^{β/Δ} in the limit z → ∞. The values of β and Δ are the same as those in two-dimensional systems, as shown in [11]. This correspondence between systems of different dimensions is also observed for many types of systems that exhibit the jamming transition [2].

Conclusion and Discussion

In this study, we extend the techniques established in previous research to probe the mechanical properties of three-dimensional tissues [11,12], showing that two types of collective motion, namely diffusive and sub-diffusive, emerge depending on the self-propulsion ν₀ and the shape parameter ã₀. Sub-diffusive collective motions, occurring in the cases of caged cells and dynamic heterogeneity, are observed in the parameter region near the transition boundary line, in direct analogy to the behavior of glassy materials. Statistical analysis of individual cell shapes reveals that the cell shape at the phase transition point does not correspond to a regular truncated octahedron but is more isotropic. To measure the critical value ã₀* in the SPV model, we calculated the rearrangement energy ΔE, whose average obeys the scaling relation (Equation (7)). The critical exponents were the same as those of two-dimensional tissues. Here |·| denotes the L2 norm with the periodic boundary. For all parameter regions, MSD(t) is proportional to t² when t is small, i.e., ballistic motion is dominant on this time scale. At large t, MSD(t) is proportional to t if both ν₀ and ã₀ are large; however, MSD(t) is proportional to t^d with d < 1 otherwise, so either diffusive or sub-diffusive motion is observed over a long period depending on the parameter values.

Figure 1. Time-dependent MSDs of cell trajectories determined for (a) variable ν₀ and fixed ã₀ (= 5.1) and (b) variable ã₀ and fixed ν₀ (= 0.004).
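The MSD with periodic boundaries used for Fig. 1 can be sketched in a few lines. This is our illustration, not the paper's code: per-step displacements are unwrapped with the minimum-image convention before being accumulated, and MSD(t) is averaged over cells; the function names, trajectory layout, and box size `L` are our assumptions.

```python
# Sketch: MSD(t) = < |r_i(t) - r_i(0)|^2 >_i with periodic boundaries.
# Per-step displacements are unwrapped via the minimum-image convention.

def min_image(d, L):
    """Map a displacement component into (-L/2, L/2]."""
    return d - L * round(d / L)

def msd(trajectory, L):
    """trajectory[t][i] = (x, y, z) of cell i at time t, in a periodic box of size L."""
    T, N = len(trajectory), len(trajectory[0])
    # Unwrapped displacement of each cell relative to t = 0.
    disp = [[0.0, 0.0, 0.0] for _ in range(N)]
    out = [0.0]
    for t in range(1, T):
        for i in range(N):
            for c in range(3):
                step = trajectory[t][i][c] - trajectory[t - 1][i][c]
                disp[i][c] += min_image(step, L)
        out.append(sum(dx * dx + dy * dy + dz * dz for dx, dy, dz in disp) / N)
    return out

# Ballistic check: a cell moving at constant speed v gives MSD(t) = (v t)^2,
# even when it wraps around the box.
L, v = 1.0, 0.3
traj = [[((0.1 + v * t) % L, 0.5, 0.5)] for t in range(10)]
for t, m in enumerate(msd(traj, L)):
    assert abs(m - (v * t) ** 2) < 1e-9
```

The minimum-image step assumes no cell moves more than half a box length per saved frame, which also holds for the small self-propulsion speeds quoted in the paper.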
Figure 2. Contour graph of D_s/D₀ as a function of ν₀ and ã₀. The black line represents the D_s/D₀ = 0.05 threshold. Sub-diffusive collective motions are observed in the gray-colored region, whereas diffusive motions are observed otherwise.

Figure 3. Self-intermediate scattering function F_s(k, t) (k = π/r₀, t₀ = 2000) for different ã₀ and fixed ν₀ (= 0.004).

Figure 4. Migration vectors of cells determined for three different parameter sets. The duration is set as t = 21000-22000. Snapshots in (a) are obtained using a diffusive parameter set near the transition boundary line (ã₀ = 5.4, ν₀ = 0.004), those in (b) using a sub-diffusive parameter set far from the boundary line (ã₀ = 5.1, ν₀ = 0.002), and those in (c) using a diffusive parameter set far from the boundary line (ã₀ = 5.6, ν₀ = 0.008).

Figure 5. Distributions of polyhedra for different parameters ã₀ at constant ν₀ = 0.004, averaged over t = 21000-22000.

Figure 6. Joint distributions of faces belonging to an n-faced polyhedron with a different area value, obtained for (a) a sub-diffusive parameter set (ã₀ = 5.1, ν₀ = 0.004) and (b) a diffusive parameter set (ã₀ = 5.6, ν₀ = 0.008).

Figure 7. Schematic representations of (a) the measurement method of the CVM model in [1] and (b) the measurement method of the SPV model used in the present work. Only a two-dimensional scenario is considered for simplicity.

In analogy with the relation in (m, h, T − T_c) for spin statistical physics, (r⟨ΔE⟩,

Figure 8. Distributions of rescaled rearrangement energy for different parameter combinations, with 200 faces selected for each simulation. The red line shows the k-gamma distribution with k = 1.38.

Figure 9. (a) Averaged energy of cell rearrangement ⟨ΔE⟩ and (b) rescaled averaged energy r⟨ΔE⟩ as functions of ã₀.

Figure 10. Scaling function determined for (ã₀*, β, Δ) = (5.410, 1, 4).
Three slope lines are drawn to compare the universal curves with the fitted values of the critical exponents.

Table 1. Shape parameter (ã₀) values of regular polyhedra.

shape: ã₀
sphere: 4.836
icosahedron: 5.148
dodecahedron: 5.312
truncated octahedron: 5.315
octahedron: 5.719
cube: 6

Acknowledgement
I would like to thank Prof. K. Kaneko for beneficial discussions and advice on some topics related to this paper.

Conflicts of Interest
The author declares no conflicts of interest.

Author Contribution
The author directed the research and wrote the manuscript.

Campbell, K. & Casanove, J. A common framework for EMT and collective cell migration. Development 143, 4291-4300 (2016).

At the critical point, the two branches merge as f₊ = f₋ = z^{β/Δ}. By changing the value of ã₀* and using it to fit (β, Δ), we obtain the best fit for the scaling relation (Equation (7)) by taking (ã₀*, β, Δ) = (5.410, 1, 4).

Bi, D., Lopez, J. H., Schwarz, J. M. & Manning, M. L. A density-independent rigidity transition in biological tissues. Nat. Phys. 11, 1074-1079 (2015).
Bi, D., Yang, X., Marchetti, M. C. & Manning, M. L. Motility-driven glass and jamming transitions in biological tissues. Phys. Rev. X 6, 021011 (2016).
Schwarz, U. S. & Safran, S. A. Physics of adherent cells. Rev. Mod. Phys. 85, 1327-1381 (2013).
Weaire, D. & Phelan, R. A counter-example to Kelvin's conjecture on minimal surfaces. Philos. Mag. Lett.
69, 107-110 (1994).
Aste, T. & Di Matteo, T. Emergence of Gamma distributions in granular materials and packing models. Phys. Rev. E 77, 021309 (2008).
Newhall, K. A., Jorjadze, I., Vanden-Eijnden, E. & Brujic, J. A statistical mechanics framework captures the packing of monodisperse particles. Soft Matter 7, 11518-11525 (2011).
Merkel, M. & Manning, M. L. A geometrically controlled rigidity transition in a model for confluent 3D tissues. New J. Phys. 20, 022002 (2018).
Berthier, L. & Biroli, G. Theoretical perspective on the glass transition and amorphous materials. Rev. Mod. Phys. 83, 587-645 (2011).
Vicsek, T. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75, 4 (1995).
Marchetti, M. C., Joanny, J. F., Ramaswamy, S., Liverpool, T. B., Prost, J., Rao, M., et al. Hydrodynamics of soft active matter. Rev. Mod. Phys. 85, 1143-1189 (2013).
Ishihara, S. & Sugimura, K. Bayesian inference of force dynamics during morphogenesis. J. Theor. Biol. 313, 201-211 (2012).
Sugimura, K. & Ishihara, S.
The mechanical anisotropy in a tissue promotes ordering in hexagonal cell packing. Development 140, 4091-4101 (2013).
Cochet-Escartin, O., Ranft, J., Silberzan, P. & Marcq, P. Border forces and friction control epithelial closure dynamics. Biophys. J. 106, 65-73 (2014).
Lee, M. H., Wu, P. H., Staunton, J. R., Ros, R., Longmore, G. D. & Wirtz, D. Mismatch in mechanical and adhesive properties induces pulsating cancer cell migration in epithelial monolayer. Biophys. J. 102, 2731-2741 (2012).
Palmieri, B., Bresler, Y., Wirtz, D. & Grant, M. Multiple scale model for cell migration in monolayers: Elastic mismatch between cells enhances motility. Sci. Rep. 5, 11745 (2015).
Schotz, E. M., Lanio, M., Talbot, J. A. & Manning, M. L. Glassy dynamics in three-dimensional embryonic tissues. J. R. Soc. Interface 10, 20130726 (2013).
This article is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/.
Title: Studying strangeness and baryon production in small systems through Ξ−hadron correlations using the ALICE detector
Author: Jonatan Adolfsson (Lund University, Lund, Sweden), for the ALICE Collaboration
Abstract: These proceedings summarise recent measurements of angular correlations between the Ξ baryon and identified hadrons in pp collisions at √s = 13 TeV using the ALICE detector. The results are compared with both string-based (PYTHIA8 with extensions) and core-corona (EPOS-LHC) models, to improve our understanding of strangeness and baryon production in small systems. The results favour baryon production through string junctions over diquark breaking, but the PYTHIA models fail at describing the relatively wide Ξ−strangeness jet peak, indicating stronger diffusion of strange quarks in data. On the other hand, EPOS-LHC is missing local conservation of quantum numbers, making it difficult to draw any conclusion about the core-corona model.
DOI: 10.1051/epjconf/202225913015
arXiv: 2108.11504 (https://arxiv.org/pdf/2108.11504v1.pdf)
Studying strangeness and baryon production in small systems through Ξ−hadron correlations using the ALICE detector
Jonatan Adolfsson (Lund University, Lund, Sweden), for the ALICE Collaboration

Introduction

Today, it is widely accepted that a quark-gluon plasma (QGP) is formed in heavy-ion collisions. What is not yet understood, however, is that several QGP key signatures have also been observed in small systems such as high-multiplicity proton-proton (pp) collisions, where the QGP is not expected to form. One such signature is the enhanced yields of multi-strange baryons, which seem to scale smoothly with system size [1]. Several phenomenological models are being developed to try to understand the origin of this. Two different approaches are being explored. In one approach, microscopic models based on colour strings are extended with new mechanisms for strange-baryon formation and other features to mimic collective behaviour. This approach is used in PYTHIA [2], where baryons in the standard configuration (Monash tune) are formed through diquark breaking.
This is extended by allowing string junctions [3] and, as a further extension, rope hadronisation [4], where the latter provides strangeness enhancement. The other approach is the core-corona model, which is based on a two-phase state with a QGP-like core and a string-like corona. This approach is used in EPOS [5]. The strangeness and baryon production mechanisms predicted by these models are tested through angular correlations between the Ξ baryon (quark content dss) and other hadrons, and in particular those with opposite charge or baryon number. In the string models, strangeness is mostly formed at hadronisation, which results in short-ranged Ξ−strangeness correlations, as opposed to the core-corona model, where strange quarks are largely formed in the core and can diffuse prior to hadronisation. The quantity measured here is the per-trigger yield,

Y(Δy, Δφ) = (1/N_trig) d²N_pairs/(dΔy dΔφ),

which is the distribution of particle pairs in relative rapidity (Δy) and azimuthal angle (Δφ) space, divided by the number of triggers N_trig. To extract the part that is due to balancing charges, same-charge (or baryon number) correlations were subtracted from those of opposite charge. This was measured for identified Ξ−π, Ξ−K, Ξ−p, Ξ−Λ, and Ξ−Ξ pairs using approximately 8·10⁸ minimum-bias pp events at √s = 13 TeV recorded by the ALICE detector [6]. Pions, kaons, and protons were identified via the specific energy loss in the Time Projection Chamber and the velocity in the Time-Of-Flight detector (pseudorapidity |η| < 0.8). The Ξ and Λ baryons were reconstructed from their decay products (e.g. Ξ− → π− + Λ, Λ → π− + p), making use of their invariant masses and various topological cuts (similar to what is done in Ref. [7]).
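The per-trigger yield defined above can be written down compactly. The sketch below is our illustration, not ALICE code: it fills a (Δy, Δφ) histogram from trigger–associate pairs and normalises by the trigger count and bin area; the bin counts, the Δy acceptance, and the folding of Δφ into [−π/2, 3π/2) are conventions we chose.

```python
import math

def per_trigger_yield(triggers, associates, ny, nphi, ymax=2.0):
    """Y(dy, dphi) = (1/N_trig) d2N_pairs/(ddy ddphi) on an ny x nphi grid.
    Particles are (y, phi) tuples; dphi is folded into [-pi/2, 3pi/2)."""
    hist = [[0.0] * nphi for _ in range(ny)]
    wy = 2.0 * ymax / ny            # dy bin width, dy in [-ymax, ymax)
    wphi = 2.0 * math.pi / nphi     # dphi bin width
    for yt, pt in triggers:
        for ya, pa in associates:
            dy = ya - yt
            dphi = (pa - pt + math.pi / 2) % (2.0 * math.pi) - math.pi / 2
            if -ymax <= dy < ymax:
                iy = int((dy + ymax) / wy)
                ip = int((dphi + math.pi / 2) / wphi)
                hist[iy][ip] += 1.0
    ntrig = len(triggers)
    return [[c / (ntrig * wy * wphi) for c in row] for row in hist]

# One trigger and one associate separated by dy = 0.5, dphi = pi:
Y = per_trigger_yield([(0.0, 0.0)], [(0.5, math.pi)], ny=4, nphi=4)
# The single pair lands in one bin with weight 1/(N_trig * bin area),
# so summing Y times the bin area recovers one pair per trigger.
assert abs(sum(map(sum, Y)) * 1.0 * (math.pi / 2) - 1.0) < 1e-12
```

The OS−SS observable of the text is then simply the bin-by-bin difference of two such histograms, one filled with opposite-sign pairs and one with same-sign pairs.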
Finally, the multiplicity dependence of the correlation function was measured by dividing the events into multiplicity classes (where the lowest percentiles correspond to the highest multiplicities) based on the response from the V0 counters in the forward regions (−3.7 < η < −1.7 and 2.8 < η < 5.1). Figure 1 shows angular Ξ − π correlations projected on ∆ϕ and ∆y (for the near side, |∆ϕ| < 3π/10), for both ALICE data and models. Here, the same-sign correlations are reasonably well described by PYTHIA8 Monash, which can be attributed to good tuning of the singleparticle yields, whereas there is a slight offset for the other models. Similar observations have also been made for Ξ − K and Ξ − p correlations. On the other hand, PYTHIA8 Monash underestimates the difference between opposite-and same-sign (OS-SS) correlations, which is better described by the other models or tunes. This means that these models more accurately predict the fraction of the charge from the Ξ baryons that is balanced by pions. For the other minimum-bias results, only the near-side projections of the OS-SS difference are shown here, which are summarised in Fig. 2. Here the predictions from the different models differ significantly, with EPOS-LHC predicting a nearly flat behaviour, which is not observed in data. The reason is that local conservation of quantum numbers is not implemented in EPOS, and this is not necessarily a feature of core-corona models in general. The PYTHIA models, on the other hand, predict a stronger and narrower peak than what is observed in data, except for Ξ − p correlations, where no direct correlations are expected from the diquark breaking mechanism. The junction and rope extensions are better at predicting Ξ−baryon correlations, but fail at describing Ξ − K correlations. This favours the additional baryon production mechanism from the junction model, but more development is needed to accurately describe all observations. 
Multiplicity dependent Ξ−π, Ξ−K, and Ξ−Λ correlations are shown in Fig. 3 (similar trends are seen also in Ξ−p and Ξ−Ξ correlations). The shift observed in the baseline with increasing multiplicity is simply due to the increased particle yields, but a small enhancement is also observed for the OS−SS difference, along with some narrowing. A similar behaviour is predicted by PYTHIA, where this can be attributed to radial flow from colour reconnection, so this effect is likely present also in data (although possibly from a different origin). However, the relative difference between multiplicity classes for Ξ−Λ correlations is weaker in data than in PYTHIA (not shown), pointing towards a competing mechanism. This could mean that the strangeness production mechanism changes with multiplicity, but more quantitative comparisons with models are needed to conclude in which way.

Conclusions

While the same-sign Ξ−hadron yields predicted by PYTHIA8 Monash largely describe what is observed in data, which is likely due to good tuning of the single-particle yields, the same cannot be said about the OS−SS difference. For this observable, EPOS-LHC predicts an almost flat behaviour, because local conservation of quantum numbers is not implemented in this model, and thus it is difficult to conclude anything about the accuracy of the core-corona approach. On the other hand, the PYTHIA models predict a too strong and narrow OS−SS peak for Ξ−strangeness correlations, which means that the strange quarks seem to have diffused somewhat in data prior to hadronisation. Finally, for Ξ−baryon correlations the junction extension describes the data better than the Monash tune, favouring this mechanism for producing baryons. The multiplicity dependence seems to be dominated by radial flow, but some observations indicate a competing mechanism, possibly related to changes in Ξ production. However, more work is required to conclude how.
Figure 1. Per-trigger yields of Ξ−π correlations compared with predictions from PYTHIA8 (including two extensions) and EPOS-LHC, projected onto (left) Δφ and (right) Δy on the near side. The lower panels show the difference between opposite- and same-sign correlations.

Figure 2. Differences between opposite- and same-sign (or baryon number) correlations projected onto Δy on the near side, for (top left) Ξ−K pairs, (top right) Ξ−p pairs, (bottom left) Ξ−Λ pairs, and (bottom right) Ξ−Ξ pairs.

Figure 3. Per-trigger yields of (left) Ξ−π, (middle) Ξ−K, and (right) Ξ−Λ correlations, measured for three different multiplicity classes and projected onto Δy on the near side. The lower panels show the differences between opposite- and same-sign correlations.

[1] ALICE Collaboration, Nature Phys. 13, 535-539 (2017)
[2] T. Sjöstrand et al., Comput. Phys. Commun. 191, 159-177 (2015)
[3] J.R. Christiansen & P.Z. Skands, J. High Energ. Phys. 2015, 3 (2015)
[4] C. Bierlich et al., J. High Energ. Phys. 2015, 148 (2015)
[5] T. Pierog et al., Phys. Rev. C92, 034906 (2015)
[6] ALICE Collaboration, JINST 3, S08002 (2008)
[7] ALICE Collaboration, Eur. Phys. J. C71, 1594 (2011)
Title: Complete three photon Hong-Ou-Mandel interference at a three port device
Authors: S. Mährlein and J. von Zanthier (Institut für Optik, Information und Photonik, and Erlangen Graduate School in Advanced Optical Technologies (SAOT), Universität Erlangen-Nürnberg, Erlangen, Germany); G. S. Agarwal (Department of Physics, Oklahoma State University, Stillwater, Oklahoma, USA)
Abstract: We report the possibility of completely destructive interference of three indistinguishable photons on a three port device, providing a generalisation of the well known Hong-Ou-Mandel interference of two indistinguishable photons on a two port device. Our analysis is based on the underlying mathematical framework of SU(3) transformations rather than SU(2) transformations. We show the completely destructive three photon interference for a large range of parameters of the three port device. As each output port can deliver zero to three photons, the device generates higher dimensional entanglement. In particular, different forms of entangled states of qudits can be generated, depending again on the device parameters. Our system is different from a symmetric three port beam splitter, which does not exhibit a three photon Hong-Ou-Mandel interference.
DOI: 10.1364/oe.23.015833
arXiv: 1502.02934 (https://arxiv.org/pdf/1502.02934v3.pdf)
Complete three photon Hong-Ou-Mandel interference at a three port device S Mährlein Institut für Optik Information und Photonik Universität Erlangen-Nürnberg 91058ErlangenGermany Erlangen Graduate School in Advanced Optical Technologies (SAOT) Universität Erlangen-Nürnberg 91052ErlangenGermany J Von Zanthier Institut für Optik Information und Photonik Universität Erlangen-Nürnberg 91058ErlangenGermany Erlangen Graduate School in Advanced Optical Technologies (SAOT) Universität Erlangen-Nürnberg 91052ErlangenGermany G S Agarwal Department of Physics Oklahoma State University 74078StillwaterOklahomaUSA Complete three photon Hong-Ou-Mandel interference at a three port device (Dated: March 20, 2015) We report the possibility of completely destructive interference of three indistinguishable photons on a three port device providing a generalisation of the well known Hong-Ou-Mandel interference of two indistinguishable photons on a two port device. Our analysis is based on the underlying mathematical framework of SU(3) transformations rather than SU(2) transformations. We show the completely destructive three photon interference for a large range of parameters of the three port device. As each output port can deliver zero to three photons the device generates higher dimensional entanglement. In particular, different forms of entangled states of qudits can be generated depending again on the device parameters. Our system is different from a symmetric three port beam splitter which does not exhibit a three photon Hong-Ou-Mandel interference. INTRODUCTION The Hong-Ou-Mandel (HOM) effect [1,2], i.e., the completely destructive interference of two independent but indistinguishable photons, brought a paradigm shift to the field of quantum optics. Until the demonstration of the HOM effect the interference of independent photons was considered to be impossible. Such an interference effect manifests itself in the study of photon correlations rather than in intensity measurements. 
More specifically, if two single photons are sent from two different ports of a 50/50 beam splitter then the number of coincidence events at the two output ports vanishes. This follows from the fact that, if two photons are indistinguishable with respect to their wavelength and polarisation and their wave packages overlap in time, then the two different quantum paths interfere so that the two photons will never leave the beam splitter at different ports. If one of these parameters is changed, the photons become distinguishable and the dip in the observed coincidence rate starts to disappear. The effect is quite versatile and has been observed in a very wide class of systems. Besides beam splitters it has been studied in other optical elements such as in integrated devices like evanescently coupled waveguides [3,4] and coupled plasmonic systems [5][6][7][8]. It has also been studied in the radiation from two trapped ions [9], atoms [10][11][12] , quantum dots [13,14] and for two different kinds of sources [15][16][17]. Since the original work of HOM one also has examined the kind of interference that can take place if two single photons are replaced by say two photons on each port or say by one at one port and two at the other port. Hereby one has found very interesting quantum interference effects depending on the beam splitter reflectivity [18][19][20]. Another interesting possibility occurs if n photons arrive at each port of a 50/50 beam splitter -in this case the output ports never have odd numbers of photons [21]. In this letter we report a three photon interference effect which is in the original spirit of the HOM effect -we examine the completely destructive interference of three indistinguishable photons on a three port device. We thus shift the focus from a two port device to a three port device. This brings a key change to the underlying mathematical framework as we work with SU(3) transformations rather than SU(2) transformations. 
We specifically examine a three port integrated device consisting of a small array of three single mode evanescently coupled waveguides as these are relatively easy to fabricate [22]. Although we tailor our discussion to coupled waveguide systems the results will be applicable to a class of wide bosonic systems described by the Hamiltonian (1). For the three port device we have found an analytical expression for the completely destructive three photon interference. Thereby we produce a variety of two and three photon entanglement at the output ports. Our three port network is different from the symmetric mulitport beam splitter which has been extensively studied for the HOM like interferences [23,24]. However for the three port system, such a splitter does not exhibit a perfect three photon HOM interference. On the other hand Campos [25] using the idea of Reck and Zeilinger [26] constructed a SU(3) transformation involving beam splitters and phase shifters which leads to three photon HOM interference. Further Tan et al. [27] showed how the SU(3) transformation involving beam splitters and phase shifters can lead to perfect photon interference depending on specific values of the parameters of SU (2) transformations. Note also that experimental studies of two photon interference in three port and four port devices have been reported in [28][29][30]. The study of multiport systems is also important in the context of Bosonic sampling [31][32][33], where the main goal is to use coincidence data to reconstruct the unitar transformation matrix between input and output ports. While our investigated setup is similar to the experi-arXiv:1502.02934v2 [quant-ph] 19 Mar 2015 g 1 g 2 input modesâ † j (0) output modesâ † j (t) FIG. 1. 
Scheme of a 3×3 waveguide with continuous coupling between the inner mode and the outer modes ment studied in [34], which relies on a 3D geometry in order to couple all three modes to each other, we will use a more simple 2D structure, where the outer modes are coupled to the inner mode but not to each other. With the setup of [34] it is possible to suppress states [24] of the form |2, 1, 0 , which contain two, one and zero photons in the different output modes, but the output state will still contain a |1, 1, 1 -term corresponding to the coincidence detection of all three photon in the three different output modes. We will show that for a whole range of parameters of the waveguide our system can suppress this coincidence event, which corresponds to the original Hong-Ou-Mandel effect [1,2] extended to three interfering photons. MODEL AND TIME EVOLUTION The investigated system consists of a 3 × 3 waveguide (three input modes and three output modes) with con-tinuous evanescent coupling between the inner mode and the outer modes (see Fig. 1). The coupling strength is given by the coupling coefficients g 1 (between the first and second mode) and g 2 (between the second and third mode) and is basically determined by the distance between the guides. Note that there is no direct coupling between the two outer modes. The interaction Hamiltonian for this system readŝ H int = g 1â1â † 2 + g 2â2â † 3 + g * 1â2â † 1 + g * 2â3â † 2 . (1) Each part of Eq. (1) stands for the annihilation of a photon in a certain mode and the creation of a photon in a neighbouring mode associated with the corresponding coupling strength g 1/2 . In order to analyse the time dependent evolution of the system we switch to the Heisenberg picture. To simplify the calculation we define a vector a = â † 1 ,â † 2 ,â † 3 T so that the time evolution is governed by d dt a(t) = i   0 g 1 0 g * 1 0 g 2 0 g * 2 0   M a(t),(2) where the interaction time t is determined by the length of the waveguide. 
The equation can easily be solved using the exponential ansatz V = e − i M t , which allows to rewrite the creation operatorsâ † j (0) at time t = 0 in terms of creation operatorsâ † j (t) at time t via a(0) = V · a(t). The solution of this equation yields the form of the ma- trix V V =   cos 2 (θ) cos(G) + sin 2 (θ) − i cos(θ) sin(G) e i ψ cos(θ) sin(θ)e i(ϕ+ψ) (cos(G) − 1) − i cos(θ) sin(G)e − i ψ cos(G) − i sin(θ) sin(G)e i ϕ cos(θ) sin(θ)e − i(ϕ+ψ) (cos(G) − 1) − i sin(θ) sin(G)e − i ϕ sin 2 (θ) cos(G) + cos 2 (θ)   .(3) where g 1 · t = G cos(θ) e i ψ and g 2 · t = G sin(θ) e i ϕ . From this expression it is easy to see that the evolution given by V is periodic with respect to G and θ as these variables only appear as arguments of sine or cosine functions. Note that, as expected, the transformation V is unitary -this is because we are considering a lossless device. THREE SINGLE PHOTON INPUT Next we focus on an input state where at each input port a single photon is coupled into the waveguide at t = 0. The wave function of this state is given by |ψ in = a † 1 (0)â † 2 (0)â † 3 (0) |0 . Via the transformation matrix V we can easily calculate the general form of the output state |ψ out = 3 l=1 V 1lâ † l (t) · 3 m=1 V 2mâ † m (t) · 3 n=1 V 3nâ † n (t) |0 ,(4) where V mn is the matrix element in the mth row and the nth column of V in Eq. +c 210 |2, 1, 0 + . . . + c 111 |1, 1, 1 ,(5) where corresponding coefficients can be calculated by explicitly expanding Eq. (4) or alternatively using a formalism for linear optical networks involving permanents [35]. THREE PHOTON HONG-OU-MANDEL INTERFERENCE A permanent of a matrix is equal to its determinant, but without the sign of the permutation taken into account. For a N × N matrix A it is given by Perm A = σ N j=1 A jσ(j) ,(6) where the sum runs over all possible permutations σ of the set {1, 2, . . . , N }. The coefficients c klm of Eq. 
(5) can be expressed by permanents of matrices V_{k,l,m}, where k, l and m are the numbers of photons in the three output modes, so that k + l + m = 3. Hereby V_{k,l,m} is a 3 × 3 matrix and is constructed via the transformation matrix in Eq. (3): it consists of k copies of the first column of V, l copies of the second column of V and m copies of the third column of V [35]. Dividing the permanent of V_{k,l,m} by a normalisation factor yields the final expression for the coefficients

    c_klm = Perm V_{k,l,m} / √(k! l! m!).    (7)

One can show that the absolute value of these coefficients depends only on G and θ, but not on the phases ψ and φ of g_{1/2}. The last two variables are linked to the phases of the coupling coefficients g_{1/2} and will only have an impact on the phases of the coefficients c_klm. Note that Eq. (7) only holds true for the input state |1, 1, 1⟩. However, as shown in [35], the formalism involving permanents can be extended to arbitrary initial states by additional consideration of the rows of the transformation matrix corresponding to the input state. In the following we analyse a particular type of states displaying a three-photon Hong-Ou-Mandel (HOM) interference. In analogy to the original two-photon Hong-Ou-Mandel experiment [1, 2], where two photons are never detected simultaneously at the two different output modes of a 50/50 beam splitter, the probability for all three photons leaving the waveguide at different output ports vanishes if

    c_111 = 0.    (8)

To analyse the conditions for the three-photon HOM interference, we have to calculate c_111 explicitly. With Eqs. (3) and (7) we find

    c_111 = V_11 V_22 V_33 + V_12 V_23 V_31 + V_13 V_21 V_32 + V_11 V_23 V_32 + V_12 V_21 V_33 + V_13 V_22 V_31,    (9)

where each summand corresponds to a different three-photon quantum path leading to the same final state |1, 1, 1⟩.
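Eqs. (6) and (7) are easy to check numerically. The brute-force sketch below is ours (it uses a generic random 3×3 unitary standing in for V): it builds V_{k,l,m} from column copies and verifies that the resulting amplitudes are normalised, Σ |c_klm|² = 1.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(A):
    """Permanent of a square matrix, Eq. (6): the determinant expansion
    without the signs of the permutations."""
    n = A.shape[0]
    return sum(np.prod([A[j, sigma[j]] for j in range(n)])
               for sigma in permutations(range(n)))

def coefficient(V, k, l, m):
    """c_klm of Eq. (7) for the input |1,1,1>: the permanent of the matrix
    made of k, l, m copies of the first, second, third column of V,
    divided by the normalisation factor sqrt(k! l! m!)."""
    Vklm = V[:, [0] * k + [1] * l + [2] * m]
    return permanent(Vklm) / np.sqrt(factorial(k) * factorial(l) * factorial(m))
```

Summing |c_klm|² over all occupations with k + l + m = 3 returns 1 for any unitary V, confirming that Eq. (7) produces a properly normalised output state.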
For example, the first term corresponds to the case where all three photons exit the waveguide in the same mode they came in, the second term corresponds to the case where the photon in the first/second/third mode switches to the second/third/first mode, etc. As the photons are indistinguishable, the coefficient c_111 is a coherent superposition of the amplitudes of all these quantum paths. By inserting the expressions for the various matrix elements V_mn and solving Eq. (8), one obtains the analytical expression for the HOM contour given in Eq. (10). As can be seen from Eq. (10), completely destructive three-photon HOM interference can take place for a large range of the parameters g_1 and g_2. Note that for some values of G these equations would result in a complex-valued θ and are therefore not considered as a solution. Fig. 2 shows a plot of this HOM contour, which is (2π-)periodic in G and θ.

INTERESTING STATES ON THE HONG-OU-MANDEL CONTOUR

Finally we investigate the states determined by the HOM contour. In the original HOM experiment a maximally entangled state of the form ∝ |2, 0⟩ − |0, 2⟩ is produced at the output [1, 2]. Similar states can be found in the case of a three-photon interference. In addition to the condition c_111 = 0 we find that at certain points some coefficients c_klm of Eq. (5) vanish as well, so that further terms are suppressed. Other coefficients will have the same absolute value, so that the states can be written in a compact form. We found three different kinds of states fulfilling this condition, which display entanglement between two and possibly three output modes. In Fig. 3 (a), where the HOM contour is shown, some coordinates are marked by points at which one can find maximally bipartite entangled states. A closer investigation yields that for all these coordinates we find states of the form

    |ψ_out⟩ = (1/√2) (|2_j, 0_k⟩ + |0_j, 2_k⟩) |1_l⟩    (11)

with j = 1, k = 3 and l = 2 at the red crosses, with j = 2, k = 3 and l = 1 at the green dots, and with j = 1, k = 2 and l = 3 at the blue diamonds.
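The coincidence amplitude itself is quick to evaluate. In the sketch below (our check, relying on the transcription of Eq. (3) above) one sees that along the cut G = π the amplitude reduces to cos 4θ, so that θ = π/8 lies on the HOM contour; one can also confirm numerically that |c_111| is independent of the phases ψ and φ:

```python
import numpy as np

def c111(G, theta, psi=0.0, phi=0.0):
    """Coincidence amplitude of Eq. (9): the permanent of V from Eq. (3)."""
    c, s = np.cos(theta), np.sin(theta)
    CG, SG = np.cos(G), np.sin(G)
    V = np.array([
        [c**2 * CG + s**2, -1j * c * SG * np.exp(1j * psi),
         c * s * np.exp(1j * (phi + psi)) * (CG - 1)],
        [-1j * c * SG * np.exp(-1j * psi), CG, -1j * s * SG * np.exp(1j * phi)],
        [c * s * np.exp(-1j * (phi + psi)) * (CG - 1),
         -1j * s * SG * np.exp(-1j * phi), s**2 * CG + c**2],
    ])
    # the six three-photon quantum paths of Eq. (9)
    return (V[0, 0] * V[1, 1] * V[2, 2] + V[0, 1] * V[1, 2] * V[2, 0]
            + V[0, 2] * V[1, 0] * V[2, 1] + V[0, 0] * V[1, 2] * V[2, 1]
            + V[0, 1] * V[1, 0] * V[2, 2] + V[0, 2] * V[1, 1] * V[2, 0])
```

The phase independence is visible analytically as well: in every summand of Eq. (9) the factors e^{±iψ} and e^{±iφ} cancel exactly.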
Note that for simplicity we neglected phase factors in front of each state (see appendix for the exact analytical expression for each coefficient and the coordinates). All three states have a similar form, where one mode containing one photon is separable while the remaining two modes are in a maximally entangled state. Depending on the phase ψ/φ of the coupling coefficients g_{1/2}, the relative phase between the non-separable states can be varied. Note that these states are created in a deterministic way, so that no post-selection is necessary. In Fig. 3 (b) and (c) the coordinates of possible tripartite entangled states along the HOM contour are displayed. Note that we have three modes and each mode has four possible states, corresponding to the occupation of 0, 1, 2 and 3 photons. We are thus dealing with higher-dimensional entanglement of three qudits (d = 4). A complete classification of the classes of entangled states for qudits (d = 4) does not exist. However, the structure of the tripartite states generated in the cases of Figs. 3 (b) and (c) suggests three-qudit (d = 4) entanglement. The form of the states of Fig. 3 (b) and their coordinates read

    |ψ_out⟩ = (√3/4) (|3_j, 0_k⟩ + |0_j, 3_k⟩) |0_l⟩ + (1/4) (|2_j, 1_k⟩ + |1_j, 2_k⟩) |0_l⟩ + (1/2) (|1_j, 0_k⟩ + |0_j, 1_k⟩) |2_l⟩    (12)

with j = 2, k = 3 and l = 1 at the red crosses, with j = 1, k = 3 and l = 2 at the green dots, and with j = 1, k = 2 and l = 3 at the blue diamonds. The form of the wave function of the states of Fig. 3 (c) and their coordinates are given by

    |ψ_out⟩ = (1/(3√2)) · 2 (|3_j, 0_k, 0_l⟩ + |0_j, 3_k, 0_l⟩ + |0_j, 0_k, 3_l⟩) + (1/√6) (|1_j, 0_l⟩ + |0_j, 1_l⟩) |2_k⟩ + (1/√6) (|1_j, 0_k⟩ + |0_j, 1_k⟩) |2_l⟩    (13)

with j = 1, k = 2 and l = 3 at the red crosses, with j = 2, k = 1 and l = 3 at the green dots, and with j = 3, k = 1 and l = 2 at the blue diamonds.
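At the contour point (G, θ) = (π, π/8) with ψ = φ = 0, the full output state can be expanded by brute force. The sketch below is our own check, under the same transcription assumptions as above; it recovers exactly the form of Eq. (11): equal-weight amplitudes on |2, 1, 0⟩ and |0, 1, 2⟩ with opposite signs, and a vanishing |1, 1, 1⟩ amplitude.

```python
import numpy as np
from itertools import product
from math import factorial, sqrt

def output_amplitudes(V):
    """Brute-force expansion of Eq. (4): apply prod_j (sum_l V[j,l] a_l^dag)
    to |0> and collect the Fock amplitudes c_klm of Eq. (5)."""
    mono = {}
    for l1, l2, l3 in product(range(3), repeat=3):
        occ = [0, 0, 0]
        for l in (l1, l2, l3):
            occ[l] += 1
        mono[tuple(occ)] = mono.get(tuple(occ), 0) + V[0, l1] * V[1, l2] * V[2, l3]
    # (a^dag)^n |0> = sqrt(n!) |n>, so each monomial coefficient
    # picks up a factor sqrt(k! l! m!)
    return {occ: a * sqrt(factorial(occ[0]) * factorial(occ[1]) * factorial(occ[2]))
            for occ, a in mono.items()}
```

Up to a global phase, the resulting state is (|2, 1, 0⟩ − |0, 1, 2⟩)/√2, i.e. the single photon in the middle mode factors out while the two outer modes are maximally entangled.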
As before, we neglected phase factors in front of each state for simplicity (see appendix for tables containing all coefficients and the analytical expressions for the coordinates of all states). We have written the states in a way that a certain mode is always factored out in each term, so that the entanglement between the two remaining modes is clearly visible; this suggests that the states are not separable and are good candidates for tripartite entanglement.

CONCLUSION

In conclusion, we investigated the dynamics of a 3×3 waveguide where the outer modes are coupled to the inner mode by evanescent coupling but not to each other. Beginning with three indistinguishable single photons at the three input ports, we showed that for a wide range of waveguide parameters this leads to completely destructive three-photon interference, i.e. for these parameters the photons will never leave the waveguide in three separate ports. This is a generalisation of the well-known Hong-Ou-Mandel effect from two to three photons. Additionally, the produced output states, consisting of three qudits (d = 4), exhibit highly interesting structures displaying bipartite or possibly tripartite entanglement.

ACKNOWLEDGMENTS

The authors gratefully acknowledge funding by the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (DFG)

APPENDIX

For certain waveguide parameters G and θ we found three interesting sets of states on the HOM contour, on which the coincidence event is suppressed and therefore c_111 vanishes. However, only the general form of these states was discussed before, but not the explicit values of the coefficients. Here we present their analytical values: Tables I to VII contain the exact analytical expressions for the coefficients of Eqs. (11) to (13) of the paper. Note that in Eqs. (12) and (13) of the main paper some signs depend on the value of n mod 4.
Therefore in Tables III and VI, n is replaced by 4ñ + 0, 4ñ + 1, 4ñ + 2 or 4ñ + 3, in which case all possible cases are considered.

TABLE I. Table of coefficients for Eq. (11), at the coordinates G = π(2m + 1) with θ = (π/8)(2n + 1); G = (π/4)(2m + 1) with θ = πn; and G = (π/4)(2m + 1) with θ = (π/2)(2n + 1). Coefficients which are not listed are equal to 0.

TABLES II-VII. Tables of coefficients for Eqs. (12) and (13) at the corresponding coordinates (G, θ); coefficients which are not listed are equal to 0.

FIG. 2. Plot of the three-photon coincidence probability |c_111|² and the HOM contour, where the condition |c_111|² = 0 is fulfilled.

As the transformation matrix V of Eq. (3) only acts on the creation operators of the three different modes in Eq. (4), a vacuum state is not transformed by the time evolution governed by the Hamiltonian of Eq. (1). The general output state is a superposition of all possible distributions of the three photons among the three output modes, with respective coefficients which depend on the explicit form of V. The general form of the output state reads |ψ_out⟩ = c_300 |3, 0, 0⟩ + c_030 |0, 3, 0⟩ + . . .

From Eq. (8) we can find an analytical expression for the HOM contour in the variable space (G, θ), where all states |ψ_out(G, θ)⟩ have a vanishing c_111 coefficient:

    θ(G) = nπ ± arcsec ⁴√( 8 ± √( 2 csc⁴(G/2) sin⁴(G/2) (20 cos G + 3 (8 cos 2G + 4 cos 3G + 3 cos 4G + 5)) ) / (3 cos G + 2) ).    (10)

FIG. 3. Plots of the HOM contour and the coordinates of the investigated states (red crosses, green dots and blue diamonds), which display bipartite entanglement (a) or possibly tripartite entanglement (b)/(c).

in the framework of the German excellence initiative. We thank M. Tichy, T. Meany, H. de Guise for bringing their works to our attention. S. M. gratefully acknowledges the hospitality at the Oklahoma State University. G. S. A. thanks Ian Walmsley and Paolo Mataloni for discussions on HOM interference in integrated devices.
Two-photon quantum interference in plasmonics: theory and applications. S D Gupta, G S Agarwal, Opt. Lett. 39390S. D. Gupta and G. S. Agarwal, "Two-photon quantum interference in plasmonics: theory and applications", Opt. Lett. 39, 390 (2014). Observation of quantum interference in the plasmonic Hong-Ou-Mandel effect. G Di Martino, Y Sonnefraud, M S Tame, S Kéna-Cohen, F Dieleman, S K Özdemir, M S Kim, S A Maier, Phys. Rev. Applied. 134004G. Di Martino, Y. Sonnefraud, M. S. Tame, S. Kéna- Cohen, F. Dieleman, S. K.Özdemir, M. S. Kim and S. A. Maier, "Observation of quantum interference in the plasmonic Hong-Ou-Mandel effect", Phys. Rev. Applied 1, 034004 (2014). Two-plasmon quantum interference. J S Fakonas, H Lee, Y A Kelaita, H A Atwater, Nature Photonics. 8317J. S. Fakonas, H. Lee, Y. A. Kelaita, and H. A. Atwater, "Two-plasmon quantum interference", Nature Photonics 8, 317 (2014). Quantum interference of photon pairs from two remote trapped atomic ions. P Maunz, D L Moehring, S Olmschenk, K C Younge, D N Matsukevich, C Monroe, Nature Physics. 3538P. Maunz, D. L. Moehring, S. Olmschenk, K. C. Younge, D. N. Matsukevich and C. Monroe, "Quantum interfer- ence of photon pairs from two remote trapped atomic ions", Nature Physics 3, 538 (2007). Tunable entanglement, antibunching, and saturation effects in dipole blockade. J Gillet, G S Agarwal, T Bastin, Phys. Rev. A. 8113837J. Gillet, G. S. Agarwal, T. Bastin, "Tunable entan- glement, antibunching, and saturation effects in dipole blockade", Phys. Rev. A 81, 013837 (2010). Creating path entanglement and violating Bell inequalities by independent photon sources. R Wiegner, C Thiel, J Zanthier, G S Agarwal, Phys. Rev. A. 3743405R. Wiegner, C. Thiel, J. von Zanthier and G. S. Agarwal, "Creating path entanglement and violating Bell inequali- ties by independent photon sources", Phys. Rev. A 374, 3405 (2010). Heralded Entanglement Between Widely Separated Atoms. 
J Hofmann, M Krug, N Ortegel, L Gerard, M Weber, W Rosenfeld, H Weinfurter, Science. 33772J. Hofmann, M. Krug, N. Ortegel, L. Gerard, M. Weber, W. Rosenfeld, and H. Weinfurter, "Heralded Entangle- ment Between Widely Separated Atoms", Science 337, 72 (2012). Two-photon interference of the emission from electrically tunable remote quantum dots. R B Patel, A J Bennett, I Farrer, C A Nicoll, D A Ritchie, A J Shields, Nature Photonics. 4632R. B. Patel, A. J. Bennett, I. Farrer, C. A. Nicoll, D. A. Ritchie and A. J. Shields, "Two-photon interference of the emission from electrically tunable remote quantum dots", Nature Photonics 4, 632 (2010). Two-photon interference from remote quantum dots with inhomogeneously broadened linewidths. P Gold, A Thoma, S Maier, S Reitzenstein, C Schneider, S Höfling, M Kamp, Phys. Rev. B. 8935313P. Gold, A. Thoma, S. Maier, S. Reitzenstein, C. Schnei- der, S. Höfling and M. Kamp, "Two-photon interference from remote quantum dots with inhomogeneously broad- ened linewidths", Phys. Rev. B 89, 035313 (2014). Observation of quantum interference between a single-photon state and a thermal state generated in optical fibers. X Li, L Yang, L Cui, Z Y Ou, D Yu, Opt. Express. 161712505X. Li, L. Yang, L. Cui, Z. Y. Ou and D. Yu, "Observation of quantum interference between a single-photon state and a thermal state generated in optical fibers", Opt. Express 16(17), 12505 (2008). Producing high fidelity single photons with optimal brightness via waveguided parametric down-conversion. K Laiho, K N Cassemiro, Ch Silberhorn, Opt. Express. 172522823K. Laiho, K. N. Cassemiro, Ch. Silberhorn, "Producing high fidelity single photons with optimal brightness via waveguided parametric down-conversion", Opt. Express 17(25), 22823 (2009). Quantum interference and non-locality of independent photons from disparate sources. R Wiegner, J Zanthier, G S Agarwal, J. Phys. B. 4455501R. Wiegner, J. von Zanthier, and G. S. 
Agarwal, "Quan- tum interference and non-locality of independent photons from disparate sources", J. Phys. B 44, 055501 (2011). Observation of Four-Photon Interference with a Beam Splitter by Pulsed Parametric Down-Conversion. Z Y Ou, J.-K Rhee, L J Wang, Phys. Rev. Lett. 83959Z. Y. Ou, J.-K. Rhee and L. J. Wang, "Observation of Four-Photon Interference with a Beam Splitter by Pulsed Parametric Down-Conversion", Phys. Rev. Lett. 83, 959 (1999). Phase measurement at the Heisenberg limit with three photons. H Wang, T Kobayashi, Phys. Rev. A. 7121802H. Wang and T. Kobayashi', "Phase measurement at the Heisenberg limit with three photons", Phys. Rev. A 71, 021802 (2005). Four-photon interference with asymmetric beam splitters. B H Liu, F W Sun, Y X Gong, Y F Huang, G C Guo, Z Y Ou, Opt. Lett. 32101320B. H. Liu, F. W. Sun, Y. X. Gong, Y. F. Huang, G. C. Guo and Z. Y. Ou, "Four-photon interference with asymmetric beam splitters", Opt. Lett. 32(10), 1320 (2007). G S Agarwal, Quantum Optics. SecCambridge University Press7G. S. Agarwal, Quantum Optics, (Cambridge University Press, 2012), Sec. (5.7). On the genesis and evolution of Integrated Quantum Optics. S Tanzilli, A Martin, F Kaiser, M P De Micheli, O Alibart, D B Ostrowsky, Laser & PhotonS. Tanzilli, A. Martin, F. Kaiser, M. P. De Micheli, O. Alibart and D.B. Ostrowsky, "On the genesis and evo- lution of Integrated Quantum Optics", Laser & Photon. . Rev. 6115Rev. 6, 115 (2012). Generalized Hong-Ou-Mandel experiments with bosons and fermions. Y L Lim, A Beige, New J. Phys. 7155Y. L. Lim and A. Beige, "Generalized Hong-Ou-Mandel experiments with bosons and fermions", New J. Phys. 7, 155 (2005). Zero-transmission law for multiport beam splitters. M C Tichy, M Tiersch, F De Melo, F Mintert, A Buchleitner, Phys. Rev. Lett. 104220405M. C. Tichy, M. Tiersch, F. De Melo, F. Mintert and A. Buchleitner, "Zero-transmission law for multiport beam splitters", Phys. Rev. Lett. 104, 220405 (2010). 
Three-photon Hong-Ou-Mandel interference at a multiport mixer. R Campos, Phys. Rev. A. 6213809R. Campos, "Three-photon Hong-Ou-Mandel interfer- ence at a multiport mixer", Phys. Rev. A 62, 013809 (2000). Experimental realization of any discrete unitary operator. M Reck, A Zeilinger, Phys. Rev. Lett. 7358M. Reck and A. Zeilinger, "Experimental realization of any discrete unitary operator", Phys. Rev. Lett. 73, 58 (1994). SU(3) quantum interferometry with single-photon input pulses. S.-H Tan, Y Y Gao, H De Guise, B C Sanders, Phys. Rev. Lett. 110113603S.-H. Tan, Y. Y. Gao, H. De Guise and B. C. Sanders, "SU(3) quantum interferometry with single-photon input pulses", Phys. Rev. Lett. 110, 113603 (2013). Two-photon interference in optical fiber multiports. G Weihs, M Reck, H Weinfurter, A Zeilinger, Phys. Rev. A. 54893G. Weihs, M. Reck, H. Weinfurter and A. Zeilinger, "Two-photon interference in optical fiber multiports", Phys. Rev. A 54, 893 (1996). Non-classical interference in integrated 3D multiports. T Meany, M Delanty, S Gross, G D Marshall, M J Steel, M J Withford, Opt. Express. 202426895T. Meany, M. Delanty, S. Gross, G. D. Marshall, M. J. Steel and M. J. Withford, "Non-classical interference in integrated 3D multiports", Opt. Express 20(24), 26895 (2012). Tuneable quantum interference in a 3D integrated circuit. Z Chaboyer, T Meany, L G Helt, M J Withford, M J Steel, arXiv:1409.4908quant-phZ. Chaboyer, T. Meany, L. G. Helt, M. J. Withford and M. J. Steel, "Tuneable quantum interference in a 3D in- tegrated circuit", arXiv:1409.4908 [quant-ph] (2014). Photonic boson sampling in a tunable circuit. M A Broome, A Fedrizzi, S Rahimi-Keshari, J Dove, S Aaronson, T C Ralph, A G White, Science. 339794M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph and A. G. White, "Photonic boson sampling in a tunable circuit", Science 339, 794 (2013). . J B Spring, B J Metcalf, P C Humphreys, W S Kolthammer, X.-M Jin, M Barbieri, A Datta, N , J. B. 
Spring, B. J. Metcalf, P. C. Humphreys, W. S. Kolthammer, X.-M. Jin, M. Barbieri, A. Datta, N. Boson sampling on a photonic chip. N K Thomas-Peter, D Langford, J C Kundys, B J Gates, P G R Smith, I A Smith, Walmsley, Science. 339798Thomas-Peter, N. K. Langford, D. Kundys, J. C. Gates, B. J. Smith, P. G. R. Smith and I. A. Walmsley, "Boson sampling on a photonic chip", Science 339, 798 (2013). Experimental boson sampling. M Tillmann, B Dakić, R Heilmann, S Nolte, A Szameit, P Walther, Nature Photonics. 7540M. Tillmann, B. Dakić, R. Heilmann, S. Nolte, A. Sza- meit and P. Walther, "Experimental boson sampling", Nature Photonics 7, 540 (2013). Threephoton bosonic coalescence in an integrated tritter. N Spagnolo, C Vitelli, L Aparo, P Mataloni, F Sciarrino, A Crespi, R Ramponi, R Osellame, Nature communications. 41606N. Spagnolo, C. Vitelli, L. Aparo, P. Mataloni, F. Scia- rrino, A. Crespi, R. Ramponi and R. Osellame, "Three- photon bosonic coalescence in an integrated tritter", Na- ture communications 4, 1606 (2013). Permanents in linear optical networks. S Scheel, arXiv:quant-ph/0406127S. Scheel, "Permanents in linear optical networks", arXiv:quant-ph/0406127 (2004).
COMPACTLY GENERATED T-STRUCTURES IN THE DERIVED CATEGORY OF A COMMUTATIVE RING

Michal Hrbek

Abstract. We classify all compactly generated t-structures in the unbounded derived category of an arbitrary commutative ring, generalizing the result of [ATLJS10] for noetherian rings. More specifically, we establish a bijective correspondence between the compactly generated t-structures and infinite filtrations of the Zariski spectrum by Thomason subsets. Moreover, we show that in the case of a commutative noetherian ring, any bounded below homotopically smashing t-structure is compactly generated. As a consequence, all cosilting complexes are classified up to equivalence.

DOI: 10.1007/s00209-019-02349-y
arXiv: 1806.00078 [math.AC]
https://arxiv.org/pdf/1806.00078v3.pdf
arXiv:1806.00078v3 [math.AC], 6 Aug 2018

Introduction

There is a large supply of classification results for various subcategories of the unbounded derived category D(R) of a commutative noetherian ring R. Since the work of Hopkins [Hop87] and Neeman [NB92], we know that the localizing subcategories of D(R) are parametrized by data of a geometrical nature: the subsets of the Zariski spectrum Spec(R). As a consequence, the famous telescope conjecture, stating that any smashing localizing subcategory is generated by a set of compact objects, holds for the category D(R). These compactly generated localizations then correspond to those subsets of Spec(R) which are specialization closed, that is, upper subsets in the poset (Spec(R), ⊆). A more recent work [ATLJS10] provides a "semistable" version of the latter classification result. Specifically, it establishes a bijection between compactly generated t-structures and infinite decreasing sequences of specialization closed subsets of Spec(R) indexed by the integers.

(M. Hrbek was supported by the Czech Academy of Sciences Programme for research and mobility support of starting researchers, project MSM100191801.)

The concept of a t-structure in a triangulated category was
introduced by Beȋlinson, Bernstein, and Deligne [BBD82], and can be seen as a way of constructing abelian categories inside triangulated categories, together with cohomological functors. In the setting of the derived category of a ring, t-structures proved to be an indispensable tool in various general instances of tilting theory, replacing the role played by the ordinary torsion pairs in more traditional tilting frameworks (see e.g. [PV18], [NSZ18], [AHMV17], and [MV18]). Not all the mentioned results carry well to the generality of an arbitrary commutative ring R (that is, without the noetherian assumption). Namely, the classification of all localizing subcategories via geometrical invariants is hopeless (see [Nee00]), and the telescope conjecture does not hold in general; the first counterexample is due to Keller [Kel94]. However, when restricting to the subcategories induced by compact objects, the situation is far more optimistic. The prime example of this is the classification of compact localizations due to Thomason [Tho97]. Another piece of evidence is provided by the recent classification of n-tilting modules [HŠ17], extending the previous work [AHPŠT14] from noetherian rings to general commutative ones. In both cases, the statement of the general result is obtained straightforwardly, by replacing any occurrence of "a specialization closed subset of Spec(R)" by "a Thomason subset of Spec(R)". The methods of the proof, however, differ substantially, due to the fact that a lot of machinery available in the noetherian world simply does not work in the general situation. In the present paper, we continue in this path by proving the following two theorems:

(1) (Theorem 5.6) Let R be an arbitrary commutative ring. Then the compactly generated t-structures in D(R) are in a bijective correspondence with decreasing sequences ⋯ ⊇ X_{n−1} ⊇ X_n ⊇ X_{n+1} ⊇ ⋯ of Thomason subsets of the Zariski spectrum Spec(R).
(2) (Theorem 4.3) Let R be a commutative noetherian ring. Then any bounded below homotopically smashing t-structure is compactly generated.

We postpone the precise formulation of both statements, as well as the relevant definitions, to the body of the paper. The first of the two results is a generalization of [ATLJS10, Theorem 3.10] from commutative noetherian rings to arbitrary commutative rings. Since many of the methods used in [ATLJS10] are specific to the noetherian situation, we were forced to use different techniques. First, we characterize the compact generation of a bounded below t-structure using injective envelopes, see Lemma 3.10 and Theorem 3.11. This approach is also crucial in proving the second result (2), which can be seen as a version of the telescope conjecture for commutative noetherian rings adjusted for bounded below t-structures, as opposed to localizing pairs. As a corollary, any t-structure induced by a (bounded) cosilting complex is compactly generated (this is Corollary 6.2), extending the result proved in [AHPŠT14], which established the cofinite type of n-cotilting modules over a commutative noetherian ring. As a consequence, cosilting complexes over commutative noetherian rings are classified in Theorem 6.3, generalizing the result for cosilting modules [AHH16, Theorem 5.1]. Finally, in Theorem 5.6 we remove the bounded below assumption and classify all compactly generated t-structures over a commutative ring. Even though we are not able to show in general, unlike in the bounded below case, that the aisle of such a t-structure is determined by supports cohomology-wise, we show that these t-structures are always generated by suspensions of Koszul complexes, using similar techniques as in the "stable" version of this classification for localizing pairs from [KP17].

Basic notation. We work in the unbounded derived category D(R) := D(Mod-R) of the category Mod-R of all R-modules over a commutative ring R; we refer to [KS05, §13 and §14] for a sufficiently up-to-date exposition. Whenever talking about subcategories, we mean full subcategories. Given a subcategory C ⊆ D(R) and a set I ⊆ Z, we let

    C^{⊥_I} = {X ∈ D(R) | Hom_{D(R)}(C, X[i]) = 0 for all C ∈ C and all i ∈ I},

and we define ^{⊥_I}C symmetrically.
We work in the unbounded derived category D(R) := D(Mod-R) of the category Mod-R of all R-modules over a commutative ring R; we refer to [KS05,§13 and §14] for a sufficiently up-to-date exposition. Whenever talking about subcategories, we mean full subcategories. Given a subcategory C ⊆ D(R), and a set I ⊆ Z, we let Usually, the role of I will be played by symbols k, ≥ k, ≤ k, < k, > k for some integer k with their obvious interpretation as subsets of Z. The symbol D(R) c stands for the thick subcategory of all compact objects of D(R). Recall that the compact objects of D(R) are, up to a quasi-isomorphism, precisely the perfect complexes, that is, bounded complexes of finitely generated projective modules. Complexes are written using the cohomological notation, that is, a complex X has coordinates X n , with the degree n ∈ Z increasing along the differentials d n X : X n → X n+1 . We denote by B n (X), Z n (X), and H n (X) the n-th coboundary, cocycle, and cohomology of the complex X. For any complex X ∈ D(R), we define its cohomological infimum inf(X) = inf{n ∈ Z | H n (X) = 0}, and recall the usual notation D + (R) = {X ∈ D(R) | inf(X) ∈ Z} for the full subcategory of cohomologically bounded below complexes. The supremum sup(X) of a complex is defined dually. We will freely use the calculus of the total derived bifunctors RHom R (−, −) and −⊗ L R −, considered as functors D(R)×D(R) → D(R) and D(R)×D(R op ) → D(R), respectively. Finally, given a subcategory C of Mod-R we let K(C) denote the homotopy category of all complexes with coordinates in C. If C = Mod-R, we write just K(R). Also, we consider the bounded variants K # (C) of homotopy category, where # can be one of symbols {< 0, > 0, −, +, b} with their usual interpretation. 
Torsion pairs in the derived category

A pair (U, V) of subcategories of D(R) is called a torsion pair provided that (i) U = ^{⊥_0}V and V = U^{⊥_0}, (ii) both U and V are closed under direct summands, and (iii) for any X ∈ D(R) there is a triangle

    U → X → V → U[1]    (2.1)

with U ∈ U and V ∈ V. A torsion pair (U, V) in D(R) is called:
• a t-structure if U[1] ⊆ U,
• a co-t-structure if U[−1] ⊆ U,
• a localizing pair if U[±1] ⊆ U, that is, when both U and V are triangulated subcategories of D(R).
In this paper, we will be mostly interested in t-structures; in this setting we call the subcategory U (resp. V) the aisle (resp. the coaisle) of the t-structure (U, V). In the t-structure case, the approximation triangle (2.1) is unique up to a unique isomorphism, and it is of the form τ_U X → X → τ_V X → (τ_U X)[1], where τ_U (resp. τ_V) is the right (resp. left) adjoint to the inclusion of the aisle U ⊆ D(R) (resp. the coaisle V ⊆ D(R)).

2.1. Standard (co-)t-structures and truncations. The pair of subcategories (D^{≤0}, D^{>0}), where D^{≤0} = {X ∈ D(R) | sup(X) ≤ 0} and D^{>0} = {X ∈ D(R) | inf(X) > 0}, forms the so-called standard t-structure. The left and right approximations with respect to this t-structure are denoted simply by τ^{≤0} and τ^{>0}, and they can be identified with the standard "soft" truncations of complexes. Of course, we also adopt the notation D^{≤n}, D^{>n}, τ^{≤n}, τ^{>n} for the obvious versions of this t-structure shifted by n ∈ Z, and the associated soft truncation functors in degree n. On the side of co-t-structures, we follow [AHMV15, Example 2.9] and define a standard co-t-structure (K_p^{>0}, D^{≤0}), where K_p^{>0} is the subcategory of all complexes from D(R) quasi-isomorphic to a K-projective complex with zero nonpositive components. Since D(R) is equivalent to the homotopy category K_p of all K-projective complexes, restricting to K_p^{≤0} ≃ D^{≤0}, it follows that K_p^{>0} = ^{⊥_0}D^{≤0}.
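For the reader's convenience, the soft truncation functors mentioned above can be written out explicitly; this is a standard description, recalled here rather than taken from the text. For a complex X,

```latex
\tau^{\leq n} X \;=\; (\cdots \to X^{n-2} \to X^{n-1} \to Z^n(X) \to 0 \to \cdots),
\qquad
\tau^{> n} X \;=\; (\cdots \to 0 \to X^n/Z^n(X) \to X^{n+1} \to X^{n+2} \to \cdots),
```

so that H^k(τ^{≤n}X) = H^k(X) for k ≤ n and vanishes otherwise, and the short exact sequence of complexes 0 → τ^{≤n}X → X → τ^{>n}X → 0 induces the approximation triangle τ^{≤n}X → X → τ^{>n}X → (τ^{≤n}X)[1].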
For any complex X, one can obtain an approximation triangle with respect to this co-t-structure using the brutal truncations (also called the stupid truncations). Denote by σ >n and σ ≤n the right and left brutal truncations at degree n. Let P be a K-projective complex quasi-isomorphic to X; then there is a triangle σ >0 P → X → σ ≤0 P → (σ >0 P)[1], and it approximates X with respect to the standard co-t-structure. 2.2. Generation of torsion pairs. Let (U, V) be a torsion pair, and let S be a subclass of D(R). We say that the torsion pair (U, V) is generated by the class S if (U, V) = ( ⊥0 (S ⊥0 ), S ⊥0 ). If S consists of objects from D(R) c , we say that (U, V) is compactly generated. Dually, the torsion pair (U, V) is cogenerated by S if (U, V) = ( ⊥0 S, ( ⊥0 S) ⊥0 ). Given a general class S, we cannot always claim that the pair ( ⊥0 (S ⊥0 ), S ⊥0 ) is a torsion pair. But if S forms a set, we can always use it to generate a t-structure. In particular, since D(R) c is a skeletally small category, we can always generate a t-structure by a given family of compact objects. Theorem 2.1. ([ATLJSS03, Proposition 3.2]) If C is a set of objects of D(R), then ⊥0 (C ⊥≤0 ) is an aisle of a t-structure. If C is a set, we let aisle(C) denote the class ⊥0 (C ⊥≤0 ). By [ATLJS10, p. 6], aisle(C) coincides with the smallest subcategory of D(R) containing C and closed under suspensions, extensions, and coproducts. 2.3. Homotopy (co)limits. It is well-known that, in general, the categorical (co)limit constructions are of very limited use in triangulated categories, which can be illustrated by the fact that any monomorphism, as well as any epimorphism, is split. The way to circumvent this shortcoming is to use homotopy (co)limits instead. Especially the directed versions of homotopy colimits will be essential in our efforts.
Because in many sources the emphasis is put only on directed systems of shape ω, we feel it is necessary to recall the relevant facts on homotopy colimits of arbitrary shapes. We start with a construction which can be, up to quasi-isomorphism, seen as a special case of the general homotopy (co)limit construction. For details, we refer the reader to [BN93]. Let X 0 → X 1 → X 2 → · · · be a tower of morphisms f n : X n → X n+1 in D(R), indexed by the natural numbers. The Milnor colimit of this tower, denoted Mcolim n≥0 X n , is defined by the following triangle, in which the first map is 1 − (f n ): ⊕ n≥0 X n → ⊕ n≥0 X n → Mcolim n≥0 X n → (⊕ n≥0 X n )[1]. Dually, if · · · → X 2 → X 1 → X 0 is an inverse tower of morphisms f n : X n+1 → X n in D(R), the Milnor limit is defined as an object fitting into the triangle (∏ n≥0 X n )[−1] → Mlim n≥0 X n → ∏ n≥0 X n → ∏ n≥0 X n , in which the last map is 1 − (f n ). Directly from the definition, one can check the quasi-isomorphism (2.2) RHom R (Mcolim n≥0 X n , Y ) ≃ Mlim n≥0 RHom R (X n , Y ) for any Y ∈ D(R). Milnor limits and colimits allow one to reconstruct any complex X from its brutal truncations; that is, there are isomorphisms in D(R): (2.3) X ≃ Mcolim n≥0 σ >−n X and X ≃ Mlim n≥0 σ <n X, where the maps in the towers are the natural ones (see e.g. [KN12, Lemma 6.3, Theorem 10.2]). Now we address the more general notion (in a sense) of a homotopy colimit. The definition of a homotopy colimit can vary a lot depending on how general a class of categories one considers. In our case of the derived category of a ring, we can afford a very explicit definition using derived functors. To link this with the more general theory of Grothendieck derivators, we refer the reader to [Š14, §5] and the references therein. Let I be a small category, and let Mod-R I be the (Grothendieck) category of all I-shaped diagrams, that is, functors from I to Mod-R. There is a natural equivalence between C(Mod-R I ) and C(Mod-R) I , and therefore we can consider D(Mod-R I ) as the derived category of I-shaped diagrams of complexes over R.
Given such a diagram X i , i ∈ I, its homotopy colimit is defined as hocolim i∈I X i := L colim i∈I X i , where L colim i∈I : D(Mod-R I ) → D(R) is the left derived functor of the colimit functor. Note that if I is a directed small category, then colim i∈I = lim −→ i∈I is an exact functor, and therefore hocolim i∈I X i ≃ lim −→ i∈I X i (see [Š14, Proposition 6.6] for details). In this situation we talk about a directed homotopy colimit, and use the notation hocolim −→ i∈I X i . Also, the exactness of the direct limit functor yields H n (hocolim −→ i∈I X i ) ≃ lim −→ i∈I H n (X i ) for any n ∈ Z. To see how the two constructions are related, consider (X n | n ≥ 0) ∈ D(Mod-R) ω , a directed diagram of shape ω = (0 → 1 → 2 → · · · ) inside the derived category. By [KN12, Proposition 11.3 1)], there exists an object (Y n | n ≥ 0) inside D(Mod-R ω ) such that its natural interpretation in D(Mod-R) ω is the original diagram (X n | n ≥ 0) ∈ D(Mod-R) ω (the terminology here is that the incoherent diagram (X n | n ≥ 0) lifts to a coherent diagram (Y n | n ≥ 0)). Then hocolim −→ n≥0 Y n is quasi-isomorphic to Mcolim n≥0 X n , see [KN12, Proposition 11.3 3)]. In this way, Milnor colimits can be seen as particular cases of homotopy colimits, up to a (non-canonical) quasi-isomorphism. 2.4. Homotopically smashing t-structures. We call a t-structure (U, V) homotopically smashing if the coaisle V is closed under taking directed homotopy colimits, that is, if for any directed diagram (X i | i ∈ I) ∈ D(Mod-R I ) such that X i ∈ V for all i ∈ I we have hocolim −→ i∈I X i ∈ V. We record the following relation between compact objects and directed homotopy colimits: Proposition 2.2. Let S ∈ D(R) c be a compact object. Then Hom D(R) (S, hocolim −→ i∈I X i ) ≃ lim −→ i∈I Hom D(R) (S, X i ) for any directed diagram (X i | i ∈ I) ∈ D(Mod-R I ). In particular, any compactly generated t-structure is homotopically smashing. 2.5. Rigidity of aisles and coaisles. The following interplay between the rigid symmetric monoidal structure given by the derived tensor product on D(R) and t-structures will be essential in our endeavour.
Proposition 2.3. Let (U, V) be a t-structure in D(R). Let X be a complex such that X ∈ D ≤0 . Then the following hold: (i) for any U ∈ U we have X ⊗ L R U ∈ U, (ii) for any V ∈ V we have RHom R (X, V ) ∈ V. Proof. We prove just (ii), as (i) follows by the same (in fact, simpler) argument (and is essentially proved in [ATLJSS03, Corollary 5.2]). Fix a complex V ∈ V. First note that, since V is closed under cosuspensions, RHom R (R[i], V ) ≃ V [−i] ∈ V for any i ≥ 0. Next, let F ≃ R (κ) be a free R-module. Then RHom R (F [i], V ) ≃ RHom R (R[i], V ) κ ∈ V for any i ≥ 0, as V is closed under direct products. Because V is extension-closed, it follows that RHom R (X, V ) ∈ V whenever X is a bounded complex of free R-modules concentrated in non-positive degrees. Finally, let X be any complex from D ≤0 . By quasi-isomorphic replacement, we can without loss of generality assume that X is a complex of free R-modules concentrated in non-positive degrees. Express X as a Milnor colimit of its brutal truncations from below, X = Mcolim n≥0 σ >−n X (see (2.3)). Since σ >−n X is a bounded complex of free modules concentrated in non-positive degrees, RHom R (σ >−n X, V ) ∈ V by the previous paragraph. We compute, using (2.2): RHom R (X, V ) ≃ RHom R (Mcolim n≥0 σ >−n X, V ) ≃ Mlim n≥0 RHom R (σ >−n X, V ), and infer that RHom R (X, V ) ∈ V, as V is closed under Milnor limits. We now recall the needed facts about torsion pairs in the module category Mod-R. A subcategory T of Mod-R is the torsion class of some torsion pair (T , F ) in Mod-R if and only if it is closed under coproducts, extensions, and epimorphic images, and we call such a class a torsion class. Dually, we call a subcategory F closed under products, extensions, and submodules a torsion-free class. We say that a torsion pair (T , F ) is hereditary if T is closed under submodules (or equivalently, F is closed under injective envelopes). As a shorthand, we say that T is a hereditary torsion class if it belongs as a torsion class to a hereditary torsion pair.
In this situation, the torsion pair is fully determined by the cyclic modules in T . A hereditary torsion pair (T , F ) is said to be of finite type if F is further closed under direct limits. This corresponds to the situation in which the torsion pair is determined by the finitely presented cyclic modules in T . This last statement can be made precise by tying such torsion pairs to certain data of geometrical flavour - the Thomason subsets of the Zariski spectrum. Let R be a commutative ring. A subset X of the Zariski spectrum Spec(R) is called Thomason (open) if there is a family I of finitely generated ideals of R such that X = ∪ I∈I V (I), where V (I) = {p ∈ Spec(R) | I ⊆ p} is the basic Zariski-closed set on I. The reason we used the parenthesized word "open" is that the Thomason sets are precisely the open subsets of the topological space Spec(R) endowed with the Hochster dual of the Zariski topology (for more details, see e.g. [HŠ17, §2]). Their name comes from Thomason's paper [Tho97], in which he proved a bijective correspondence between Thomason sets and thick subcategories of D c (R). The following result appeared in [GP08]: Theorem 2.4. ([GP08, Theorem 2.2]) Let R be a commutative ring. There is a 1-1 correspondence Thomason subsets X of Spec(R) ↔ Hereditary torsion pairs (T , F ) of finite type in Mod-R, given by the assignment X → (T X , F X ), where T X = {M ∈ Mod-R | Supp(M ) ⊆ X}.
3. Bounded below compactly generated t-structures
The goal of this section is to establish a bijective correspondence between the compactly generated t-structures over an arbitrary commutative ring R and infinite decreasing sequences of Thomason subsets of Spec(R), with the additional assumption that the t-structures are cohomologically bounded from below.
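Before proceeding, let us illustrate Theorem 2.4 in the simplest non-trivial case, R = Z; this example is ours and is only meant as an illustration.

```latex
% Spec(Z) = {(0)} ∪ {(p) : p prime}; every ideal of Z is principal, hence finitely generated.
V(n\mathbb{Z}) = \{\, (p) \mid p \text{ divides } n \,\} \ (n \neq 0), \qquad V(0) = \operatorname{Spec}(\mathbb{Z}).
% A Thomason set is a union of such sets, so the Thomason subsets of Spec(Z) are exactly
X = \operatorname{Spec}(\mathbb{Z}) \quad \text{or} \quad X \subseteq \{\, (p) \mid p \text{ prime} \,\} \text{ arbitrary}.
% For X = {(p)}, the corresponding hereditary torsion pair of finite type is
\mathcal{T}_X = \{\, p\text{-primary torsion groups} \,\}, \qquad
\mathcal{F}_X = \{\, M \mid \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}/p\mathbb{Z}, M) = 0 \,\}.
```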
As Theorem 2.4 suggests, an important role is played by hereditary torsion pairs - we show that the aisle of any bounded below compactly generated t-structure is determined cohomology-wise by a decreasing sequence of hereditary torsion classes. We start with simple, but key, observations about injective modules. Lemma 3.1. Let P be a projective module over a commutative ring R. (i) If E is an injective module, then Hom R (P, E) is also an injective R-module. (ii) If P is finitely generated, and ι M : M → E(M ) is an injective envelope of M , then Hom R (P, ι M ) is an injective envelope of Hom R (P, M ). Proof. (i) The Hom-⊗ adjunction yields a natural isomorphism Hom R (−, Hom R (P, E)) ≃ Hom R (− ⊗ R P, E). Since the latter functor is an exact functor Mod-R → Mod-R, we conclude that Hom R (P, E) is injective as an R-module. (ii) As Hom R (R (n) , ι M ) ≃ ι (n) M : M (n) → E(M ) (n) as maps of R-modules, the result is clear from [AF12, Proposition 6.16(2)] in the case when P is free. The rest follows by considering P as a direct summand of R (n) for some n > 0. Lemma 3.2. Let E be an injective R-module, and X a complex. Then there is an isomorphism ϕ : Hom D(R) (X, E[−n]) ≃ Hom R (H n (X), E) for any n ∈ Z, defined by the rule ϕ(f ) = H n (f ). Proof. Since ϕ is clearly an R-homomorphism, it is enough to show that ϕ is a bijection. If g ∈ Hom R (H n (X), E), then we can lift it first to a map g ′ : Z n (X) → E such that g ′ ↾ B n (X) = 0, and then to a map g ′′ : X n → E by injectivity of E. The map g ′′ clearly extends to a map of complexes g̃ : X → E[−n] such that H n (g̃) = g, proving that ϕ is surjective. Suppose that f, g ∈ Hom D(R) (X, E[−n]) are two maps inducing the same map on the n-th cohomology. Since E is injective, Hom D(R) (X, E[−n]) ≃ Hom K(R) (X, E[−n]). Put h = f − g ∈ Hom K(R) (X, E[−n]), and note that h n ↾ Z n (X) = 0. Then h n factors through the differential d n X .
Therefore, by injectivity of E, there is a homotopy map s : X n+1 → E such that h n = sd n X . It follows that h is zero in Hom K(R) (X, E[−n]), and therefore f = g in Hom D(R) (X, E[−n]), establishing that ϕ is injective. The following result is crucial to this section, as it will allow us to replace bounded below complexes in compactly generated coaisles by collections of stalks of injective R-modules. Let proj-R denote the subcategory of all finitely generated projective R-modules. Lemma 3.3. Let S ∈ K − (proj-R) and X ∈ D ≥k for some k ∈ Z. If X ∈ S ⊥≤0 , then the stalk complex E(H k (X))[−k] belongs to S ⊥≤0 . Proof. Let N be the largest integer such that the N -th coordinate of S is non-zero, S N ≠ 0. Since S is a bounded above complex of projectives, we may compute the homomorphisms starting in S in the homotopy category, that is, Hom D(R) (S, −) ≃ Hom K(R) (S, −). Therefore, if N < k, there is nothing to prove, so we may assume N ≥ k. Further, we proceed by backwards induction on n = N, N − 1, . . . , k and prove that Hom K(R) (S[n − k], E(H k (X))[−k]) = 0. To save some ink, we fix the following notation: M := H k (X) and E := E(H k (X)); for the complex S we write H n := H n (S), Z n := Z n (S), B n := B n (S), and S n for its n-th coordinate; and we set (−) E := Hom R (−, E), (−) M := Hom R (−, M ). By Lemma 3.2, Hom K(R) (S[n − k], E[−k]) ≃ Hom R (H n , E) = H E n for any n. Since E is injective, the obviously obtained sequence (3.1) 0 → B E n+1 → (S n /B n ) E → H E n → 0 is exact. Claim 1. The exact sequence (3.1) is split. Proof of Claim 1. Consider the exact sequences 0 → B E n+1+i → (S n+i /B n+i ) E → H E n+i → 0 and 0 → (S n+i /B n+i ) E → S E n+i → B E n+i → 0 for i > 0. By the induction hypothesis, H E n+i = 0 for all i > 0, and therefore B E n+1+i ≃ (S n+i /B n+i ) E for all i > 0. Note that since S N +1 = 0, we have B N +1 = 0. If n = N , this implies B E n+1 = 0, and there is nothing to prove, so we may further assume that N − n > 0. Next we proceed by backwards induction on i = N − n + 1, N − n,
. . . , 1 to prove that B E n+i is injective, and since the case i = N − n + 1 is already done, we can assume 0 < i ≤ N − n. Then the backwards induction hypothesis says that B E n+1+i ≃ (S n+i /B n+i ) E is injective, and therefore the second exact sequence above splits. This means that B E n+i is a direct summand of S E n+i , which is injective by Lemma 3.1, establishing the induction step. In particular, we showed that B E n+1 is injective, proving finally that (3.1) is indeed split. This proves Claim 1. Now we are ready to prove that H E n = 0. Towards a contradiction, suppose that H E n ≠ 0, and consider the commutative diagram (3.2) induced naturally by the Hom bifunctor. It consists of: the exact rows 0 → H M n → Z M n → S M n−1 and 0 → H E n −l→ Z E n → S E n−1 , obtained by applying (−) M and (−) E to the exact sequence S n−1 → Z n → H n → 0; the restriction maps g : S M n → Z M n and k : S E n → Z E n ; the maps S M n → S M n−1 and S E n → S E n−1 given by precomposition with d n−1 S ; the inclusion j : (S n /B n ) E → S E n and the split epimorphism π : (S n /B n ) E → H E n from (3.1), with a section i : H E n → (S n /B n ) E provided by Claim 1; the map h : Z M n → Z E n induced by the inclusion M ⊆ E; and the vertical maps ι n−1 : S M n−1 → S E n−1 and ι n : S M n → S E n induced by the injective envelope M → E. By the left exactness and naturality of the Hom-functors, all the rows of (3.2) are exact and all the squares commute. Since S n and S n−1 are finitely generated projective modules, and the ring R is commutative, the vertical maps ι n−1 and ι n are injective envelopes by Lemma 3.1(ii). Consider the submodule (ji)[H E n ] of S E n . By the essentiality of the map ι n , the intersection L = (ji)[H E n ] ∩ ι n [S M n ] is non-zero. Choose an element f ∈ S M n such that ι n (f ) is non-zero and belongs to L. Denote f̄ = g(f ) ∈ Z M n . We claim that f̄ is non-zero. Indeed, h(f̄) = hg(f ) = kι n (f ), and because ι n (f ) ∈ L ⊆ (ji)[H E n ], there is f ′ ∈ H E n such that ι n (f ) = ji(f ′ ). Thus, we can compute further that h(f̄) = kι n (f ) = kji(f ′ ) = lπi(f ′ ) = l(f ′ ) ≠ 0, using that i and j are monomorphisms (so that f ′ ≠ 0) and that l is a monomorphism. Therefore, f̄ is non-zero as claimed. Note that by the commutativity of the squares of the diagram, f is in the kernel of the map S M n → S M n−1 , since the map ι n−1 is injective. Therefore, f d n−1 S = 0.
Also, f̄ lies in the kernel of the map Z M n → S M n−1 , that is, f̄ is a non-zero element of H M n . We proved that there is a map f : S n → M which induces a non-zero map f̄ : H n → M on cohomology. Finally, we use this to infer that Hom K(R) (S[n − k], X) ≠ 0, which is the desired contradiction, as n − k ≥ 0 and X ∈ S ⊥≤0 . Indeed, let us define a map ϕ : S[n − k] → X by first letting ϕ k : S n → Z k (X) ⊆ X k be some map lifting the map f : S n → M = Z k (X)/B k (X), using the projectivity of S n . The other coordinates of ϕ are defined as follows: put ϕ j = 0 for all j > k, and define ϕ i for i < k inductively by the projectivity of the coordinates of S, using the exactness of X in degrees smaller than k and the fact that the image of the composition ϕ k d n−1 S is contained in B k (X), which in turn follows from f d n−1 S = 0. Therefore, we obtained a map of complexes ϕ : S[n − k] → X which is not null-homotopic, because H k (ϕ) = f̄ is non-zero. The only way around this contradiction is that 0 = H E n = Hom R (H n , E) ≃ Hom K(R) (S[n − k], E[−k]), establishing the induction step. We say that a t-structure (U, V) is bounded below if V ⊆ D + (R). Since V is closed under products, this is equivalent to V ⊆ D ≥m for some integer m. In the following lemma, we show that the condition from Lemma 3.3 can be used to replace a bounded below complex in the coaisle of a compactly generated t-structure by a well-chosen injective resolution - one such that each of its components, considered as a stalk complex in the appropriate degree, belongs to the coaisle. Lemma 3.4. Let (U, V) be a bounded below t-structure. Suppose that the following implication holds: X ∈ V ⇒ E(H inf(X) (X))[− inf(X)] ∈ V. Then there is a collection E of (shifts of) stalk complexes of injective modules such that (U, V) is cogenerated by E. Proof. Fix a complex X = X 0 ∈ V and let k = inf(X 0 ). Since V is bounded below, k ∈ Z, and we can denote E 0 = E(H k (X 0 )) ∈ Mod-R. By the assumption, the stalk complex E 0 [−k] belongs to V.
Also, by Lemma 3.2, there is a map of complexes ι 0 : X 0 → E 0 [−k] inducing the injective envelope H k (ι 0 ) : H k (X 0 ) → E 0 . Consider the induced triangle: X 0 −ι 0→ E 0 [−k] → Cone(ι 0 ) → X 0 [1]. Since V is closed under extensions and cosuspensions, we obtain that X 1 := Cone(ι 0 )[−1] ∈ V. By the mapping cone construction, X 1 can be represented by the complex · · · → X k−1 → X k → X k+1 ⊕ E 0 → X k+2 → · · · , in which the differential X k → X k+1 ⊕ E 0 has coordinates d k X and ι k 0 , and the differential X k+1 ⊕ E 0 → X k+2 is d k+1 X on the first summand and vanishes on E 0 . From this, one can see that H k (X 1 ) = 0, and in fact inf(X 1 ) > k. Continuing inductively, we obtain a sequence of complexes X n ∈ V and injective modules E n for each n ≥ 0 such that inf(X n ) ≥ k + n and E n [−k − n] ∈ V, together with triangles X n+1 −f n→ X n −ι n→ E n [−k − n] → X n+1 [1] for all n ≥ 0, where H k+n (ι n ) : H k+n (X n ) → E n is the injective envelope map. Note that f n : X n+1 → X n can be, up to quasi-isomorphism, represented by a degree-wise surjective map of complexes (3.3), which is the identity in all degrees except k + n + 1, where it is the canonical split epimorphism π k+n+1 : X k+n+1 ⊕ E n → X k+n+1 . Here the differential of X n+1 in degree k + n is given by the matrix with rows (d k+n X , 0) and (α n , ε n ), where α n and ε n are the coordinates of the map ι k+n n : X k+n n = X k+n ⊕ E n−1 → E n obtained from the inductive construction above. (We let E i = 0 for i < 0 and α j = ε j = 0 for j ≤ 0, so that (3.3) makes sense for any n ≥ 0.) We define a complex Z = lim ←− n≥0 X n , where the maps in the inverse system are as in (3.3). Then Z is a complex of the form · · · → X k+n ⊕ E n−1 → X k+n+1 ⊕ E n → X k+n+2 ⊕ E n+1 → · · · , with the differentials given by the same matrices. There is a triangle in D(R): E → Z −f→ X → E[1], where f : Z → X = X 0 is the limit map. From the construction, we see that those coordinates of f which are not just identities consist of the split epimorphisms f i = π i , i > k.
It follows that E = Cone(f )[−1] ≃ Ker(f ) is a complex of the form · · · → 0 → E 0 −ε 1→ E 1 −ε 2→ E 2 −ε 3→ · · · , where the coordinate E 0 is placed in degree k + 1. Since inf(X n ) ≥ k + n for any n ≥ 0, we infer that Z is exact, and thus X is quasi-isomorphic to E[1]. Let Y X = ( ⊥≤0 {E n [−n − k] | n ≥ 0}) ⊥0 be a subcategory of D(R). Because Y X is closed under extensions, cosuspensions, and products, and thus also under Milnor limits, we have by (2.3) that E[1] ∈ Y X , and thus X ∈ Y X . Repeating this for all complexes X ∈ V, we obtain classes Y X = ( ⊥≤0 E X ) ⊥0 , where E X is a collection of stalks of injective modules with E X ⊆ V, such that X ∈ Y X . Then also Y X ⊆ V for each X ∈ V. Put E = ∪ X∈V E X . Then we have E ⊆ V, and X ∈ Y X ⊆ ( ⊥≤0 E) ⊥0 for each X ∈ V, and thus V = ( ⊥≤0 E) ⊥0 . Because V is closed under cosuspensions, we may further close E under cosuspensions, and therefore V = ( ⊥0 E) ⊥0 . In other words, the t-structure (U, V) is cogenerated by E. Proposition 3.5. Let (U, V) be a bounded below t-structure in D(R). Then the following conditions are equivalent: (i) X ∈ V ⇒ E(H inf(X) (X))[− inf(X)] ∈ V, (ii) (U, V) is cogenerated by a collection E of stalk complexes of injective modules; for n ∈ Z we denote by E n the collection of those injective modules E with E[−n] ∈ E, (iii) there is a decreasing sequence (T n | n ∈ Z) of hereditary torsion classes in Mod-R such that U = {X ∈ D(R) | H n (X) ∈ T n ∀n ∈ Z}. Proof. (i) ⇒ (ii): This is Lemma 3.4. (ii) ⇒ (iii): By injectivity and Lemma 3.2, we can rewrite the class U = ⊥0 E as: U = {X ∈ D(R) | Hom R (H n (X), E) = 0 ∀E ∈ E n ∀n ∈ Z}. Therefore, U = {X ∈ D(R) | H n (X) ∈ T n ∀n ∈ Z}, where T n is the torsion class cogenerated by E n . Again by injectivity, it follows that T n is hereditary for each n ∈ Z. Since U is closed under suspensions, necessarily T n ⊇ T n+1 for each n ∈ Z. (iii) ⇒ (i): Towards a contradiction, suppose that there is X ∈ V such that E = E(H n (X))[−n] ∉ V, where n = inf(X). The injectivity of E(H n (X)) together with Lemma 3.2 implies that E(H n (X)) does not belong to F n , the torsion-free class of the hereditary torsion pair (T n , F n ). Therefore, H n (X) ∉ F n , whence there is a non-zero map f : T → H n (X) for some T ∈ T n . Since n = inf(X), we can extend f to a map of complexes f̄ : T [−n] → X such that H n (f̄) = f . Since T [−n] ∈ U, this is a contradiction with X ∈ V.
The next auxiliary lemma says that, even though the injective envelope is not a functorial construction, we can still apply it to well-ordered directed systems in a natural way. Lemma 3.6. Let (M α , f α,β | α < β < λ) be a well-ordered directed system in Mod-R. Then there is a directed system (E α , g α,β | α < β < λ) such that E α = E(M α ), and such that the natural embeddings M α ⊆ E α induce a homomorphism between the two directed systems. Proof. We construct the maps g α,β by induction on β < λ. Suppose that we have already constructed maps g α,β for all α < β < γ for some γ < λ, so that (E α , g α,β | α < β < γ) forms a directed system with the claimed properties. If γ is a successor, we first let g γ−1,γ : E γ−1 → E γ be any map extending f γ−1,γ , which exists by injectivity. For any α < γ we then put g α,γ = g γ−1,γ g α,γ−1 . This is easily seen to define a directed system, and g α,γ ↾ M α = g γ−1,γ g α,γ−1 ↾ M α = g γ−1,γ f α,γ−1 = f γ−1,γ f α,γ−1 = f α,γ . Suppose now that γ is a limit ordinal. By the inductive assumption, the system (E α , g α,β | α < β < γ) is a directed system, and we let L = lim −→ α<γ E α . Denote by h α : E α → L the limit maps. By the exactness of direct limits, lim −→ α<γ M α is naturally a subobject of L. Then there is a map l : L → E γ extending the universal map lim −→ α<γ M α → M γ ⊆ E γ , which exists by injectivity of E γ . We put g α,γ = lh α for any α < γ. For any α < β < γ, we have g β,γ g α,β = lh β g α,β = lh α = g α,γ . Therefore, (E α , g α,β | α < β < γ + 1) forms a directed system. By the construction, g α,β equals f α,β after restriction to M α for any α < β < γ + 1. This establishes the induction step, and therefore also the proof. Lemma 3.7. Let (U, V) be a bounded below homotopically smashing t-structure satisfying the equivalent conditions of Proposition 3.5, and let (T n , F n ), n ∈ Z, be the hereditary torsion pairs associated to it as in Proposition 3.5. Then (T n , F n ) is of finite type for each n ∈ Z. Proof. We proceed by contradiction. Suppose that there is n ∈ Z such that the torsion-free class F n is not closed under direct limits. Then by [GT12, Lemma 2.14], there is a well-ordered directed system (M α | α < λ) such that M α ∈ F n for each α < λ, but M = lim −→ α<λ M α ∉ F n .
Let (E α | α < λ) be the directed system provided by Lemma 3.6. Since F n is closed under injective envelopes, this is a directed system of injective modules from F n ; and since by Lemma 3.6 M embeds into L = lim −→ α<λ E α , and F n is closed under submodules, the direct limit L does not belong to F n . Because E α ∈ F n is injective, it is easy using Lemma 3.2 to check that E α [−n] ∈ V for all α < λ. Since L is not in F n , there is a non-zero map from some module from T n to L, and thus L[−n] ∉ V. On the other hand, L[−n] = hocolim −→ α<λ E α [−n] ∈ V, as (U, V) is homotopically smashing - a contradiction. Remark 3.8. We sketch here an alternative proof of Lemma 3.7, which relies on a deep result from [SŠV17], but is perhaps more conceptual. It is relatively straightforward to show that the torsion radical t n associated to the torsion pair (T n , F n ) can be expressed as t n (M ) = H n (τ U (M [−n])) for all M ∈ Mod-R. One can show using the result [SŠV17, Theorem 3.1] that τ U naturally preserves directed homotopy colimits. This implies that t n commutes with direct limits, which in turn implies that (T n , F n ) is of finite type. A Thomason filtration of Spec(R) is a decreasing map Φ : Z → (2 Spec(R) , ⊆) such that Φ(i) is a Thomason subset of Spec(R) for each i ∈ Z. To any Thomason filtration, assign the following class: U Φ = {X ∈ D(R) | Supp H n (X) ⊆ Φ(n) ∀n ∈ Z}. Given x ∈ R, the Koszul complex of x is the complex K(x) = · · · → 0 → R −·x→ R → 0 → · · · concentrated in degrees −1 and 0. If x̄ = (x 1 , x 2 , . . . , x n ) is a sequence of elements of R, we define the Koszul complex of x̄ by the tensor product: K(x 1 , x 2 , . . . , x n ) = ⊗ n i=1 K(x i ). Convention 3.9. (i) Let I be a finitely generated ideal, and let x̄ and ȳ be two finite sequences of generators of I. It is not true in general that K(x̄) and K(ȳ) are quasi-isomorphic - see [BH93, Proposition 1.6.21].
Nevertheless, we will for each finitely generated ideal I fix once and for all a finite sequence x̄ I of generators, and let K(I) := K(x̄ I ). Our results will not depend on the choice of the generating sequence. The reason behind this is that although the quasi-isomorphism class of the Koszul complex does depend on the choice of generators, the vanishing of the relative cohomology does not - see [BH93, Corollary 1.6.22 and Corollary 1.6.10(d)]. (ii) In the following, we will often need to enumerate over all finitely generated ideals I such that the basic Zariski-closed set V (I) = {p ∈ Spec(R) | I ⊆ p} is a subset of some Thomason set X. For brevity, we will use the shorthand quantifier "∀V (I) ⊆ X" instead. There is no risk of confusion, because even though the shorthand would make sense for infinitely generated ideals as well, these would either lead to undefined expressions K(I) or Č ∼ (I) with I infinitely generated, or to the inclusion of cyclic modules R/I with I infinitely generated, which would not endanger the validity of our results. We can gather our findings about bounded below compactly generated t-structures in this way: Lemma 3.10. Let R be a commutative ring and (U, V) a bounded below t-structure. Then the following are equivalent: (i) (U, V) is compactly generated, (ii) (U, V) is homotopically smashing, and X ∈ V ⇒ E(H inf(X) (X))[− inf(X)] ∈ V, (iii) there is a Thomason filtration Φ such that there is k ∈ Z with Φ(k) = Spec(R), and U = U Φ , (iv) there is a Thomason filtration Φ such that there is k ∈ Z with Φ(k) = Spec(R), and such that the t-structure (U, V) is generated by the set S Φ = {K(I)[−n] | ∀V (I) ⊆ Φ(n) ∀n ∈ Z}. Proof. (i) ⇒ (ii): First, by Proposition 2.2, any compactly generated t-structure is homotopically smashing. Let S ⊆ D c (R) be such that V = S ⊥0 . Because V is closed under cosuspensions, we also have V = S ⊥≤0 . By the assumption, there is k ∈ Z such that V ⊆ D ≥k .
Therefore, we can apply Lemma 3.3 to infer that for any X ∈ V, E(H inf(X) (X))[− inf(X)] ∈ S ⊥≤0 for all S ∈ S, and therefore E(H inf(X) (X))[− inf(X)] ∈ V. (ii) ⇒ (iii): Using Proposition 3.5 and Lemma 3.7, there is a sequence of hereditary torsion pairs (T n , F n ) of finite type such that U = {X ∈ D(R) | H n (X) ∈ T n ∀n ∈ Z}. For each n ∈ Z, there is a Thomason set Φ(n) such that T n = {M ∈ Mod-R | Supp(M ) ⊆ Φ(n)}, using Theorem 2.4. Because U is closed under suspensions, the map Φ : Z → 2 Spec(R) is a Thomason filtration. We conclude that U = U Φ . Finally, since (U, V) is bounded below, there is necessarily an integer k such that Φ(k) = Spec(R). (iii) ⇒ (iv): Let (U ′ , V ′ ) be the t-structure generated by the set S Φ . By [BH93, Proposition 1.6.5b], Supp H k (K(I)) ⊆ V (I) for any k ∈ Z, and thus K(I)[−n] ∈ U Φ for all n ∈ Z and any finitely generated ideal I such that V (I) ⊆ Φ(n). Therefore, U ′ ⊆ U = U Φ . For the converse inclusion, note that S Φ ⊆ D c (R). Then we can use the already proven implication (i) ⇒ (iii) to infer that there is a Thomason filtration Φ ′ such that U ′ = U Φ ′ . Since U ′ ⊆ U Φ , we have Φ ′ (n) ⊆ Φ(n) for each n ∈ Z. We call a Thomason filtration Φ bounded below if there is an integer k such that Φ(k) = Spec(R). Then we can formulate Lemma 3.10 in the form of a bijective correspondence: Theorem 3.11. Let R be a commutative ring. Then there is a 1-1 correspondence: Bounded below Thomason filtrations Φ of Spec(R) ↔ Bounded below compactly generated t-structures in D(R). The correspondence is given by the mutually inverse assignments Φ → (U Φ , U Φ ⊥0 ) and (U, V) → Φ U , where Φ U (n) = ∪ M ∈Mod-R, M [−n]∈U Supp(M ).
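To make the generating set S Φ of Lemma 3.10(iv) more concrete, recall the shape of a Koszul complex on a two-element generating sequence; the following is a routine computation, included only for illustration.

```latex
K(x, y) = K(x) \otimes_R K(y) \colon\qquad
0 \to R \xrightarrow{\;\binom{-y}{\ x}\;} R^2 \xrightarrow{\;(x,\ y)\;} R \to 0,
```

concentrated in degrees −2, −1, 0. One has H 0 (K(x, y)) ≃ R/(x, y), and Supp H k (K(x, y)) ⊆ V ((x, y)) for all k ∈ Z, in accordance with the support estimate used in the proof above.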
4. A version of the telescope conjecture for bounded below t-structures over commutative noetherian rings
The approach of the previous section can be used to extract more information about bounded below t-structures in the case of a commutative noetherian ring - for such t-structures, the homotopically smashing property is equivalent to compact generation. This can be seen as a "semistable" version of the telescope conjecture, with bounded below t-structures in place of localizing pairs. As an application, we will use this in Section 6 to establish a cofinite type result for cosilting complexes over commutative noetherian rings. Lemma 4.1. Let R be a commutative ring and (U, V) a homotopically smashing t-structure in D(R). Let p be a prime ideal and set U p = {X ⊗ R R p | X ∈ U}, V p = {X ⊗ R R p | X ∈ V}. Then the pair (U p , V p ) is a homotopically smashing t-structure in D(R p ). Proof. Because R p is a flat R-module, it can by [Laz69, Théorème 1.2] be written as a direct limit of finitely generated free modules, R p = lim −→ i∈I R n i . Since V is closed under directed homotopy colimits, X p := X ⊗ R R p = X ⊗ R lim −→ i∈I R n i ≃ lim −→ i∈I X n i ≃ hocolim −→ i∈I X n i ∈ V for any X ∈ V and any prime p, and thus V p ⊆ V. Similarly, U p ⊆ U, and U p is clearly closed under suspensions. It remains to show that (U p , V p ) provides approximation triangles in D(R p ). Let Y ∈ D(R p ); then there is an approximation triangle with respect to (U, V) in D(R): (4.1) τ U Y → Y → τ V Y → (τ U Y )[1]. Localizing this triangle at p yields the desired approximation triangle with respect to (U p , V p ) (and in fact, by the uniqueness of approximation triangles, the triangle (4.1) already lives in D(R p )). Finally, V p is clearly closed under directed homotopy colimits, as directed homotopy colimits commute with localization. As in Neeman's proof of the telescope conjecture for localizing pairs in [NB92], we will use Matlis' theory of injectives in a crucial way.
For this reason, we state explicitly the structural result for injective modules in the setting of a commutative noetherian ring: Theorem 4.2 (Matlis). Let R be a commutative noetherian ring and M an R-module. Then E(M ) is isomorphic to a direct sum of indecomposable injective modules of the form E(R/ p), where p runs over Ass(M ); moreover, each E(R/ p) is filtered by modules which are κ(p)-vector spaces. Theorem 4.3. Let R be a commutative noetherian ring and (U, V) a bounded below homotopically smashing t-structure in D(R). Then (U, V) is compactly generated. Proof. Since (U, V) is bounded below, it is by Lemma 3.10 enough to show that for any complex X ∈ V, we have (4.2) E(H l (X))[−l] ∈ V, where l = inf(X). Since V is closed under directed homotopy colimits, and thus in particular under direct sums and direct limits of stalk complexes of modules placed in a fixed cohomological degree, it is enough to show that κ(p)[−l] ∈ V whenever p ∈ Ass(H l (X)). Indeed, by Theorem 4.2 we know that E(H l (X)) is isomorphic to a direct sum of indecomposable injectives of the form E(R/ p), where p ∈ Ass(H l (X)), and each of these is filtered by κ(p)-modules. Since any κ(p)-module is isomorphic to a direct sum of copies of κ(p), we conclude that κ(p)[−l] ∈ V for each p ∈ Ass(H l (X)) implies (4.2). Fix p ∈ Ass(H l (X)). By Lemma 4.1, (U p , V p ) is a homotopically smashing t-structure, and obviously X p := X ⊗ R R p ∈ V p ⊆ V. We show that κ(p)[−l] ∈ V p . By Proposition 2.3, the complex Y = RHom R (κ(p), X p ) belongs to V p . Since R/ p embeds into H l (X), the residue field κ(p) embeds into H l (X p ) ≃ H l (X) p . Since l = inf(X p ), there is a map of complexes κ(p)[−l] → X p inducing the embedding κ(p) ⊆ H l (X p ) in the l-th cohomology. Using [Sta18, 15.68.0.2], it follows that 0 ≠ Hom D(Rp) (κ(p)[−l], X p ) ≃ Hom D(Rp) (κ(p), X p [l]) ≃ H l RHom Rp (κ(p), X p ) = H l (Y ). Because Y ∈ D(κ(p)) ⊆ D(R p ), Y is quasi-isomorphic in D(R p ) to a complex of κ(p)-modules. As κ(p) is a field, Y is - up to quasi-isomorphism - a split complex, and thus H l (Y )[−l] is a direct summand of Y . Therefore, H l (Y )[−l] ∈ V p . As H l (Y )[−l] ≠ 0 is a complex of vector spaces over κ(p), we conclude that κ(p)[−l] ∈ V p ⊆ V, as desired.
5. Compactly generated t-structures in D(R)
The purpose of this section is to extend Theorem 3.11 to all compactly generated t-structures in D(R), that is, without any assumption of a cohomological bound.
To do this, we need to adopt a different approach, which we briefly explain now. Following [ŠP16], compactly generated t-structures (and, by duality, also compactly generated co-t-structures) correspond to full subcategories of D^c(R) closed under extensions, direct summands, and suspensions. This indicates that, in order to classify compactly generated t-structures, it is enough to consider the generation of compact objects in the aisle. In doing so, we will to a large extent follow the approach of [KP17, §2], where the authors considered the stable case of localizing pairs. However, it is necessary to point out that the form of the classification obtained by this approach is in a sense weaker than Theorem 3.11 or [ATLJS10]. While we are able to show that any compactly generated t-structure corresponds to a Thomason filtration and is generated by suspensions of Koszul complexes, we will not show in general that the aisle is determined cohomology-wise by supports. Indeed, such a result seems to be unknown in the literature even for localizing pairs. For commutative noetherian rings, this stronger description is obtained in [ATLJS10]. Another known case is the localizing pair associated to a "basic" Thomason set P = V(I) based on a single finitely generated ideal I ([NB92], [DG02], [Pos16]). We provide a version of the latter result for aisles in 5.2, which will also allow for a description of the coaisle of a compactly generated t-structure via Čech cohomology.

Given an object X ∈ D(R), by a non-negative suspension we mean an object isomorphic in D(R) to X[i] for some i ≥ 0.

Lemma 5.1. Suppose that S is a set of compact objects, and let C be a compact object. If C ∈ aisle(S), then C is quasi-isomorphic to a direct summand of an object F, where F is an n-fold extension of finite direct sums of non-negative suspensions of objects of S for some n > 0.

Proof.
Using [KN12, Theorem 12.3], we know that C ≃ hocolim_{n≥0} X_n, where X_n is an n-fold extension of coproducts of non-negative suspensions of objects from S. This means that for each n > 0 there are a cardinal Λ_n and a sequence (S^n_λ | λ < Λ_n) of non-negative suspensions of objects from the set S such that there is a triangle

X_{n−1} → X_n → ⊕_{λ<Λ_n} S^n_λ → X_{n−1}[1].

Since C is compact, we can use [Rou08, Proposition 3.13] to infer that there are an integer n > 0 and an object F, obtained as an n-fold extension of objects of the form ⊕_{λ∈A_k} S^k_λ for k = 1, . . . , n, where A_k is a finite subset of Λ_k, such that C is quasi-isomorphic to a direct summand of F. This concludes the proof.

Recall that a complex X is finite if the R-module ⊕_{n∈Z} H^n(X) is finitely generated. The fact that, over a commutative noetherian ring, the generation of aisles by finite complexes is controlled by the supports of cohomologies follows from a result due to Kiessling in the following way.

Proposition 5.2. Let R be a commutative noetherian ring, and let X and Y be two finite complexes over R. Suppose that for every k ∈ Z we have Supp(H^k(Y)) ⊆ Supp(⊕_{i≥k} H^i(X)). Then aisle(Y) ⊆ aisle(X).

Proof. This follows from [Kie12, Corollary 7.4]. Indeed, [Kie12, Corollary 7.4] claims that the condition on supports above implies that Y is contained in the smallest subcategory of D(R) containing X and closed under extensions, cones, and coproducts. Then, a fortiori, we have Y ∈ aisle(X) as desired.

Lemma 5.3. Let R be a commutative noetherian ring, and S ∈ D^c(R). Then there are n > 0, ideals I_1, I_2, . . . , I_n of R and integers s_1, . . . , s_n such that aisle(S) = aisle(R/I_1[s_1], . . . , R/I_n[s_n]).

Proof. Because R is commutative noetherian, the complex S is finite, as is the split complex Y = ⊕_{k∈Z} R/Ann(H^k(S))[−k]. Since Supp(H^k(S)) = V(Ann H^k(S)), we can use Proposition 5.2 to infer that aisle(S) = aisle(Y).
Because S is compact, only finitely many of its cohomologies do not vanish, and therefore aisle(S) = aisle(R/I_1[s_1], . . . , R/I_n[s_n]) for suitable ideals I_1, . . . , I_n and appropriate integers s_1, . . . , s_n.

In preparation for the main result, we need to prove some auxiliary observations about aisles generated by cyclic modules and Koszul complexes over an arbitrary commutative ring.

Lemma 5.4. Let R be a commutative ring, and let I be a finitely generated ideal.

Proof. (i) First, for any ideal J, any R/J-module M is quasi-isomorphic to a complex of free R/J-modules concentrated in non-positive degrees. Using brutal truncations and Milnor colimits, we see that such an M belongs to aisle(R/J). In particular, R/I ∈ aisle(R/I^n). On the other hand, as R/I^n admits a finite filtration by R/I-modules, R/I^n ∈ aisle(R/I).

(ii) Since Supp(M) ⊆ V(I), the module M is in the hereditary torsion class corresponding to the Thomason set V(I). Therefore, there is an epimorphism ⊕_{n>0} (R/I^n)^{(κ_n)} → M for some cardinals κ_n. Also, the kernel K of this epimorphism satisfies Supp(K) ⊆ V(I) again. Inductively, we can construct a complex concentrated in non-positive degrees whose terms are direct sums of copies of the cyclic modules R/I^n, n > 0. By (i), and using Milnor colimits of brutal truncations again, this shows that M ∈ aisle(R/I).

(iii) By [BH93, Proposition 1.6.5b], Supp(H^n(K(I))) ⊆ V(I) for all n ∈ Z. Since K(I) is a bounded complex, K(I) can be obtained by a finite number of extensions from stalks of modules supported on V(I), concentrated in non-positive degrees. This shows that K(I) ∈ aisle(R/I) by (ii). We know by [Nor68, p. 360, 8.2.7] that H^0(K(I)) ≃ R/I. Using Proposition 2.3, K(I) ⊗_R R/I ∈ aisle(K(I)). Inspecting the definition of a Koszul complex, K(I) ⊗_R R/I is a split complex of R/I-modules, and H^0(K(I) ⊗_R R/I) ≃ R/I. Therefore, R/I ∈ aisle(K(I)).

The following proof is a version of the argument of [KP17, Proposition 2.1.13] adjusted for aisles.
Applying − ⊗^L_T R to F_i, we conclude that K(J_i)[s_i] ⊗_T R is a direct summand of F_i ⊗_T R, an n_i-fold extension of finite direct sums of non-negative suspensions of copies of S = S′ ⊗_T R (note that, by the compactness of all of the involved objects, we can compute the derived tensor product via the ordinary tensor product). One can see directly from the definition of a Koszul complex that, choosing the appropriate generating sets, K(J_i) ⊗_T R is isomorphic to a Koszul complex K(I_i), where I_i = J_i R for each i = 1, . . . , n. We have thus proved that K(I_i)[s_i] ∈ aisle(S) in D(R) for each i = 1, . . . , n. Repeating the same argument, using Lemma 5.1 with the roles of the two aisles reversed, shows that also S ∈ aisle(K(I_1)[s_1], . . . , K(I_n)[s_n]).

Theorem 5.6. Let R be a commutative ring. Then there is a 1-1 correspondence:

Thomason filtrations Φ of Spec(R) ↔ Aisles U of compactly generated t-structures in D(R).

Proof. Since both assignments A and B are clearly well-defined, it is enough to check that they are mutually inverse. First, let U be a compactly generated aisle. By Lemma 5.5, we know that U = aisle(S), where S is a set of non-negative suspensions of Koszul complexes. Let us prove that U = AB(U). Clearly, U ⊆ AB(U). To prove the converse inclusion, let J be a finitely generated ideal such that V(J) ⊆ Φ(n), where Φ = B(U). Since Φ(n) = ∪{V(I) | I a f.g. ideal such that R/I[−n] ∈ U}, this means that there are finitely generated ideals I_1, . . . , I_k such that I_1^{l_1} · · · I_k^{l_k} ⊆ J for some natural numbers k, l_1, . . . , l_k, and such that R/I_j[−n] ∈ U for all j = 1, . . . , k. Because the cyclic module R/I_1^{l_1} · · · I_k^{l_k} admits a finite filtration by modules from the hereditary torsion classes T_i = {M ∈ Mod-R | Supp(M) ⊆ V(I_i)}, i = 1, . . . , k, we infer that R/I_1^{l_1} · · · I_k^{l_k}[−n] belongs to U by Lemma 5.4(ii). By the same lemma, R/J[−n] ∈ U. Therefore, for any finitely generated ideal J with V(J) ⊆ Φ(n) we
The correspondence is given by the mutually inverse assignments

A : Φ ↦ U = aisle(K(I)[−n] | V(I) ⊆ Φ(n) ∀n ∈ Z)

and the assignment B displayed later in the text.

5.1. Compactly generated co-t-structures. Following [ŠP16, Theorem 4.10], we know that there is a 1-1 correspondence in D(R) between the compactly generated t-structures and the compactly generated co-t-structures in D(R). Explicitly, the correspondence can be described as follows. If S ∈ D^c(R) is a compact object, we define the compact-dual of S to be S* := RHom_R(S, R) ≃ Hom_R(S, R). Given a t-structure (U, V) generated by a set S of compact objects, define a torsion pair (X, Y) generated by the set S* = {S* | S ∈ S} of compact-duals of S. By [ŠP16], the pair (X, Y) is a co-t-structure, and this assignment is the desired correspondence.

Starting with a Thomason filtration Φ, we define the class

V_Φ = S_Φ^{⊥0} = {X ∈ D(R) | Hom_{D(R)}(K(I)[−i], X) = 0 ∀i ∈ Z ∀V(I) ⊆ Φ(i)}
 = {X ∈ D(R) | Hom_{D(R)}(K(I), X[k]) = 0 ∀k ≤ i ∈ Z ∀V(I) ⊆ Φ(i)}
 = {X ∈ D(R) | RHom_R(K(I), X) ∈ D^{>i} ∀i ∈ Z ∀V(I) ⊆ Φ(i)}.

If (X, Y_Φ) is the compactly generated co-t-structure corresponding to the t-structure (U, V_Φ) in the above-described manner, we claim that

(5.1)  Y_Φ = {X ∈ D(R) | K(I) ⊗^L_R X ∈ D^{<−i} ∀i ∈ Z ∀V(I) ⊆ Φ(i)}.

To establish (5.1), we proceed as follows. First, for any X ∈ D(R) we have:

X ∈ Y_Φ = (S*_Φ)^{⊥0} ⟺ Hom_{D(R)}((K(I)[−i])*, X) = 0 ∀i ∈ Z ∀V(I) ⊆ Φ(i).

Furthermore, (K(I)[−i])* ≃ K(I)*[i], whence, by a computation similar to the one above,

Y_Φ = {X ∈ D(R) | RHom_R(K(I)*, X) ∈ D^{<−i} ∀i ∈ Z ∀V(I) ⊆ Φ(i)}.

By [Sta18, Lemma 20.43.11], we can use the compactness of K(I) to infer that RHom_R(K(I)*, X) ≃ K(I)** ⊗^L_R X ≃ K(I) ⊗^L_R X. This proves (5.1). Altogether, this yields the following classification result for compactly generated co-t-structures:

Theorem 5.7. Let R be a commutative ring. Then there is a 1-1 correspondence:

Thomason filtrations Φ of Spec(R) ↔ Compactly generated co-t-structures in D(R).
The correspondence is given by the assignment Φ → (^{⊥0}Y_Φ, Y_Φ).

5.2. Compactly generated t-structures versus localizing pairs. The purpose of this subsection is twofold: to describe the coaisles of the compactly generated t-structures explicitly, and to make precise the relation between the classification of compactly generated localizing subcategories (via Thomason sets) and the present classification of compactly generated t-structures (via Z-filtrations by Thomason sets).

A triangulated subcategory L of D(R) is called localizing if it is closed under coproducts. If (L, L^{⊥0}) forms a localizing pair, that is, if the inclusion L ⊆ D(R) admits a right adjoint, we call L a strict localizing subcategory. If in this situation the class L^{⊥0} is also localizing, we call (L, L^{⊥0}) a smashing localizing pair, and the class L a smashing subcategory. A localizing pair is smashing if and only if it is homotopically smashing; this follows from [Mur06, §5.2] (and it is not true for t-structures in general, see [SŠV17, Example 6.2]).

The term "smashing" comes from algebraic topology and indicates that a smashing subcategory is determined by the symmetric monoidal product, which in our category D(R) means the derived tensor product. Indeed, that is the case. Let L be a smashing subcategory and consider the left (resp. right) approximation functor Γ (resp. L) with respect to the localizing pair (L, L^{⊥0}). Then the approximation triangle

Γ(R) → R → L(R) → Γ(R)[1]

is an idempotent triangle in the sense of [BF11], that is, Γ(R) ⊗^L_R L(R) = 0, and by [BF11, Theorem 2.13] we have:

L ≃ (L(R) ⊗^L_R −),  Γ ≃ (Γ(R) ⊗^L_R −).

In particular, L = Ker(L(R) ⊗^L_R −) and L^{⊥0} = Ker(Γ(R) ⊗^L_R −). It is clear that any compactly generated localizing pair is smashing.
The question whether the converse also holds is known as the telescope conjecture. It is not valid in general, but it is true, for example, for commutative noetherian rings [NB92], or for (even one-sided) hereditary rings [Kv10]. The classification of compactly generated localizing pairs is due to Thomason [Tho97], who generalized the result of Neeman and Hopkins from noetherian rings to arbitrary commutative rings. It also follows as a special case of Theorem 5.6, as compactly generated localizing pairs are, amongst compactly generated t-structures, precisely those whose corresponding Thomason filtration is constant. We recall that given a set S ⊆ D(R), Loc(S) denotes the smallest localizing subcategory of D(R) containing S. Equivalently, Loc(S) = aisle(S[n] | n < 0), and therefore Loc(S) is always a strict localizing subcategory.

If the Thomason set P is equal to V(I) for some finitely generated ideal I, then the induced idempotent triangle is given by a Čech complex, as we explain now. Given an element x ∈ R, we let

Č^∼(x) = (· · · → 0 → R →^ι R[x^{−1}] → 0 → · · ·),

a complex concentrated in degrees 0 and 1, where ι is the obvious natural map. For an n-tuple x̄ = (x_1, x_2, . . . , x_n) of elements of R we let Č^∼(x̄) = ⊗^n_{i=1} Č^∼(x_i). It follows from [Gre07, Corollary 3.12] that if x̄ and ȳ are two finite sequences of generators of an ideal I, then Č^∼(x̄) is quasi-isomorphic to Č^∼(ȳ). Therefore, given a finitely generated ideal I, we can use any finite generating sequence of I to define Č^∼(I) as an object of D(R), and call it the infinite Koszul complex^2 associated to I. Let I be generated by a finite sequence x̄. Note that, up to quasi-isomorphism, Č^∼(I) is a complex of the form

R → ⊕_{1≤i_1≤n} R[x_{i_1}^{−1}] → ⊕_{1≤i_1<i_2≤n} R[x_{i_1}^{−1}, x_{i_2}^{−1}] → · · · → R[x_1^{−1}, x_2^{−1}, . . . , x_n^{−1}].
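For concreteness, the smallest nontrivial case of this construction can be unfolded explicitly. The following display is a routine computation from the definition (an illustration only, with the usual caveat that sign conventions for tensor products of complexes vary): for a pair of elements x, y ∈ R, the complex Č^∼(x) ⊗ Č^∼(y) is the three-term complex concentrated in degrees 0, 1, 2

```latex
\check{C}^{\sim}(x,y):\qquad
R \xrightarrow{\ r \,\mapsto\, (r,\,r)\ }
R[x^{-1}] \oplus R[y^{-1}]
\xrightarrow{\ (f,\,g) \,\mapsto\, f - g\ }
R[(xy)^{-1}],
```

where each component map is the canonical localization map (up to sign). Its degree-0 cohomology is the I-torsion submodule {r ∈ R | x^n r = 0 = y^n r for n ≫ 0}, in line with Č^∼(I) computing local cohomology along I = (x, y).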
Taking the cone of the map Č^∼(I) → R given by the identity in degree 0 results in a triangle, displayed as (5.2) below (note that, as both Č(I) and Č^∼(I) are bounded complexes of flat R-modules, we can drop the left-derived symbol from the tensor product).

Finally, we explain how the stable Koszul complex is built from the (compact) Koszul complexes. One can see directly from the construction that for a single element x ∈ R, the stable Koszul complex Č^∼(x) is obtained as a direct limit of compact-duals of the Koszul complexes K(x^n), n > 0:

Č^∼(x) ≃ lim_{n>0} K(x^n)*.

If x_1, x_2, . . . , x_m is a sequence of generators of an ideal I, it is then easy to see that

Č^∼(I) ≃ ⊗^m_{i=1} Č^∼(x_i) ≃ ⊗^m_{i=1} lim_{n>0} K(x^n_i)* ≃ lim_{n>0} ⊗^m_{i=1} K(x^n_i)* ≃ lim_{n>0} K(x^n_1, . . . , x^n_m)*.

Lemma 5.9. Let I be a finitely generated ideal of a commutative ring R.

Proof. First, recall that both τ^{≤0} and Č^∼(I) ⊗_R − are the right adjoints to the inclusions of D^{≤0} and L_{V(I)} into D(R), respectively. Moreover, we have by [Pos16, Proposition 5.1] that L_{V(I)} = {X ∈ D(R) | Supp(H^n(X)) ⊆ V(I) ∀n ∈ Z}. Because the essential image of the composition τ^{≤0} ∘ (Č^∼(I) ⊗_R −) is precisely L_{V(I)} ∩ D^{≤0}, we infer that τ^{≤0} ∘ (Č^∼(I) ⊗_R −) is the right adjoint to the inclusion of L_{V(I)} ∩ D^{≤0} = {X ∈ D^{≤0} | Supp(H^n(X)) ⊆ V(I) ∀n ≤ 0} into D(R).

Next we show that L_{V(I)} ∩ D^{≤0} = aisle(K(I)). Since Supp(H^n(K(I))) ⊆ V(I) for all n ∈ Z, we clearly have aisle(K(I)) ⊆ L_{V(I)} ∩ D^{≤0}. In order to show the other inclusion, let Y be a complex in L_{V(I)} ∩ D^{≤0}, and consider the approximation triangle with respect to the t-structure (aisle(K(I)), aisle(K(I))^{⊥0}):

U → Y → X → U[1].

Since both U and Y are in L_{V(I)} ∩ D^{≤0}, so is the cone X. We will show that X = 0, and therefore Y ≃ U ∈ aisle(K(I)). Let x_1, . . . , x_m be a finite sequence of generators of I, and let K_n = K(x^n_1, . . . , x^n_m) denote the Koszul complex of the n-th powers of the generators for each n > 0.
Using Lemma 5.4, we see that aisle(K(I)) = aisle(K_n) for all n > 0, and therefore K_n ∈ aisle(K(I)) for all n > 0. We can then compute, using the discussion above and [Sta18, Lemma 20.43.11]:

X ≃ X ⊗_R Č^∼(I) ≃ X ⊗_R lim_{n>0} K*_n ≃ lim_{n>0} (X ⊗_R K*_n) ≃ hocolim_{n>0} RHom_R(K_n, X).

Because X ∈ aisle(K(I))^{⊥0}, we see that RHom_R(K_n, X) ∈ D^{>0} for all n > 0, and therefore also X ≃ hocolim_{n>0} RHom_R(K_n, X) ∈ D^{>0}. Since X ∈ D^{≤0}, we conclude that X = 0. We have proved that L_{V(I)} ∩ D^{≤0} = aisle(K(I)).

Since the left approximation functor associated to aisle(K(I)) is τ^{≤0} ∘ (Č^∼(I) ⊗_R −), it follows that the coaisle aisle(K(I))^{⊥0} is equal to

{Y ∈ D(R) | τ^{≤0}(Č^∼(I) ⊗_R Y) = 0} = {Y ∈ D(R) | Č^∼(I) ⊗_R Y ∈ D^{>0}}.

Proposition 5.10. Let R be a commutative ring and Φ a Thomason filtration. Consider the compactly generated t-structure associated to Φ via Theorem 5.6, that is, the t-structure (U, V_Φ) generated by the set S_Φ = {K(I)[−n] | V(I) ⊆ Φ(n), n ∈ Z}. Then the coaisle V_Φ can be described as follows:

V_Φ = {Y ∈ D(R) | Č^∼(I) ⊗_R Y ∈ D^{>n} ∀V(I) ⊆ Φ(n) ∀n ∈ Z}.

Proof. Clearly, V_Φ = S_Φ^{⊥0} is the intersection of the coaisles aisle(K(I)[−n])^{⊥0} over all n ∈ Z and all V(I) ⊆ Φ(n); this computation, together with the shifted form of Lemma 5.9, is displayed below. Combining it with (5.3) yields the desired description of V_Φ.

6. Intermediate t-structures and cosilting complexes

We finish the paper by discussing the consequences of Theorem 4.3 and Theorem 3.11 for the silting theory of a commutative noetherian ring. An object C ∈ D(R) is a (bounded) cosilting complex if the two following conditions hold: (1) C belongs to K^b(Inj-R), that is, C is quasi-isomorphic to a bounded complex of injective R-modules, and (2) the pair (^{⊥≤0}C, ^{⊥>0}C) forms a t-structure (note that this implies, in particular, that C ∈ ^{⊥>0}C).^3 The adjective "bounded" in this context refers to condition (1), and variants of cosilting objects not satisfying condition (1) are also discussed in the literature, especially in the setting of a general compactly generated triangulated category (we refer to [NSZ18], [PV18]).
In this paper, all cosilting complexes are bounded. The notion of a cosilting complex was introduced in [ZW17] as a dualization of the (big) silting complexes of [AHMV15]. Combining the recent works [MV18] and [Lak18] gives a useful characterization of the t-structures induced by cosilting complexes; the relevant definitions (intermediate t-structures and TTF triples) and the characterization, Theorem 6.1, are stated below. In the proof of Theorem 6.1, [MV18, Theorem 3.14] implies that (i) is equivalent to V being definable (see [Lak18] for the definition), which is in our situation equivalent to (U, V) being homotopically smashing by [Lak18, Theorem 4.6]. Using Theorem 4.3, we conclude with the following:

Corollary 6.2. If the ring R is commutative noetherian, the conditions of Theorem 6.1 are equivalent to
(iv) (U, V) is compactly generated.

Corollary 6.2 can be seen as a generalization of the cofinite type of n-cotilting modules over a commutative noetherian ring, proved in [AHPŠT14], to the cosilting setting. Also, in view of condition (ii) from Theorem 6.1, it provides a partial answer to [ŠP16, Question 3.12].

Let us call a Thomason filtration Φ bounded if there are integers m < n such that Φ(m) = Spec(R) and Φ(n) = ∅. We will also adopt the custom from cotilting theory and say that two cosilting complexes are equivalent if they induce the same cosilting t-structure. The cosilting t-structure (U_Φ, V_Φ) induced by the cosilting complex associated to a Thomason filtration Φ by this correspondence can be described as follows:

U_Φ = {X ∈ D(R) | Supp H^n(X) ⊆ Φ(n) ∀n ∈ Z},
V_Φ = {X ∈ D(R) | Č^∼(I) ⊗_R X ∈ D^{>n} ∀V(I) ⊆ Φ(n) ∀n ∈ Z}.

^3 The author is grateful to Rosanna Laking for pointing out the unnecessity of this condition in the definition of a cosilting object to him.

Proof of Theorem 6.3. It is easy to see that bounded Thomason filtrations correspond via Theorem 3.11 precisely to those compactly generated t-structures which are intermediate. The rest is a consequence of Corollary 6.2, Theorem 3.11, and Proposition 5.10.
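To make the input data of this correspondence concrete, the following toy Python sketch (an illustration only; the encoding of Spec(Z) and the particular filtration are ad hoc choices, not taken from the text) models a bounded Thomason filtration on Spec(Z). Away from the generic point, Thomason subsets of Spec(Z) are exactly the unions of the finite closed sets V(nZ), so each Φ(n) below is encoded as a set of prime numbers; the sketch checks that the filtration is decreasing and that it stabilizes at Spec(Z) on the left and at ∅ on the right.

```python
def v(n):
    """V(nZ) inside Spec(Z) for n != 0: the (finite) set of prime divisors of n."""
    n, primes, p = abs(n), set(), 2
    while p * p <= n:
        while n % p == 0:
            primes.add(p)
            n //= p
        p += 1
    if n > 1:
        primes.add(n)
    return frozenset(primes)

# Symbolic stand-in for the whole spectrum (the only non-proper
# Thomason subset this toy example needs).
SPEC = "Spec(Z)"

# A bounded Thomason filtration: constant Spec(Z) for n <= -1, proper
# Thomason subsets at n = 0 and n = 1, and empty from n = 2 onwards.
Phi = {-1: SPEC, 0: v(30) | v(7), 1: v(6), 2: frozenset()}

def contained(a, b):
    """Test a ⊆ b for the two kinds of encoded subsets."""
    return b == SPEC or (a != SPEC and a <= b)

# The defining property of a filtration: Phi(n+1) ⊆ Phi(n) for all n.
assert all(contained(Phi[n + 1], Phi[n]) for n in (-1, 0, 1))
print(sorted(Phi[0]), sorted(Phi[1]))  # → [2, 3, 5, 7] [2, 3]
```

Over Z this is exactly the shape of input that Theorem 6.3 pairs with a cosilting complex, though the code of course only verifies the combinatorial conditions on the filtration.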
We conclude the paper by remarking that Theorem 6.3 is a generalization of [AHH16, Theorem 5.1], which gives an equivalence between cosilting modules and Thomason sets over a commutative noetherian ring R. Cosilting modules were introduced in [AHMV15] as module-theoretic counterparts of 2-term cosilting complexes in D(R), and indeed, they correspond in a well-behaved manner precisely to 2-term cosilting complexes, up to a shift ([AHMV15, Theorem 4.11]). Theorem 6.3 extends this result from 2-term complexes to cosilting complexes of arbitrary length.

C^{⊥I} = {X ∈ D(R) | Hom_{D(R)}(C, X[n]) = 0 ∀C ∈ C, n ∈ I}, and
^{⊥I}C = {X ∈ D(R) | Hom_{D(R)}(X, C[n]) = 0 ∀C ∈ C, n ∈ I}.

(i) Any aisle of D(R) is closed under arbitrary homotopy colimits, and any coaisle of D(R) is closed under Milnor limits.
(ii) Any compactly generated t-structure is homotopically smashing.

Proof. (i) Any aisle is closed under homotopy colimits by [SŠV17, Proposition 4.2]. The closure of coaisles under Milnor limits is clear from their being closed under extensions, cosuspensions, and direct products.
(ii) This follows from [SŠV17, Proposition 5.4], which implies that the compact objects are precisely the objects S such that Hom_{D(R)}(S, −) commutes with directed homotopy colimits.

2.6. Hereditary torsion pairs and Thomason sets. We will also need to recall the notion of a torsion pair in the module category Mod-R. A pair of subcategories (T, F) of Mod-R is called a torsion pair if for any M ∈ Mod-R: (i) Hom_R(T, M) = 0 if and only if M ∈ F, and (ii) Hom_R(M, F) = 0 if and only if M ∈ T.

Proposition 3.5. For a bounded below t-structure (U, V), the following conditions are equivalent:
(i) for any X ∈ V, E(H^{inf(X)}(X))[−inf(X)] ∈ V,
(ii) (U, V) is cogenerated by stalks of injective modules,
(iii) there is a decreasing sequence · · · ⊇ T_n ⊇ T_{n+1} ⊇ T_{n+2} ⊇ · · · of hereditary torsion classes such that U = {X ∈ D(R) | H^n(X) ∈ T_n}.

Proof. (i) ⇒ (ii): Lemma 3.4.
(ii) ⇒ (iii): For each n ∈ Z, let E_n be a collection of injective modules such that U = {X ∈ D(R) | Hom_{D(R)}(X, E[−n]) = 0 ∀E ∈ E_n ∀n ∈ Z}.

Lemma 3.7. Let (U, V) be a bounded below t-structure satisfying the conditions of Proposition 3.5. If (U, V) is homotopically smashing, then the hereditary torsion pairs (T_n, F_n) of Proposition 3.5 are of finite type.

for all n ∈ Z. But since H^0(K(I)) ≃ R/I by [Nor68, p. 360, 8.2.7], we have Supp(H^n(K(I)[−n])) = V(I). Therefore, necessarily Φ = Φ′, and thus U′ = U_{Φ′} = U_Φ = U. (iv) ⇒ (i): Clear, since Koszul complexes are compact objects of D(R).

Theorem 4.2 ([Mat58]). Let R be a commutative noetherian ring. Then any injective module is isomorphic to a direct sum of indecomposable injective modules of the form E(R/p), where p ∈ Spec(R). Furthermore, each of the modules E(R/p) admits a filtration by κ(p)-modules.

Theorem 4.3. Let R be a commutative noetherian ring. If (U, V) is a bounded below homotopically smashing t-structure in D(R), then it is compactly generated.

(i) aisle(R/I) = aisle(R/I^n) for any n > 0,
(ii) M ∈ aisle(R/I) whenever M is an R-module such that Supp(M) ⊆ V(I),
(iii) aisle(K(I)) = aisle(R/I).

Lemma 5.5. Let R be an arbitrary commutative ring, and S ∈ D^c(R). Then there are n > 0, ideals I_1, I_2, . . . , I_n of R and integers s_1, . . . , s_n such that aisle(S) = aisle(K(I_1)[s_1], . . . , K(I_n)[s_n]).

Proof. First, by [NB92, Lemma A.2], there is a noetherian subring T of R and a compact object S′ ∈ D^c(T) such that S′ ⊗_T R = S. Using Lemma 5.3 together with Lemma 5.4(iii), there are ideals J_1, . . . , J_n and integers s_1, . . . , s_n such that aisle(S′) = aisle(K(J_1)[s_1], . . . , K(J_n)[s_n]) in D(T). By Lemma 5.1, K(J_i)[s_i] can be obtained as a direct summand of an object F_i ∈ D^c(T), where F_i is an n_i-fold extension of finite direct sums of non-negative suspensions of copies of S′, for each i = 1, . . .
, n and for some n_i > 0.

B : U ↦ Φ, where Φ(n) = ∪{V(I) | I a f.g. ideal such that R/I[−n] ∈ U}.

have aisle(K(J)[−n]) = aisle(R/J[−n]) ⊆ U. This shows that U = AB(U). Now let Φ be a Thomason filtration, and let us prove that BA(Φ) = Φ. Again, by Lemma 5.4(iii), we clearly have BA(Φ)(n) ⊇ Φ(n) for each n ∈ Z. To prove the other inclusion, let J be a finitely generated ideal such that R/J[−n] ∈ A(Φ). Because U_Φ = {X ∈ D(R) | Supp(H^j(X)) ⊆ Φ(j) ∀j ∈ Z} is a subcategory of D(R) closed under extensions, suspensions, and coproducts, and Supp(H^j(K(I))) ⊆ V(I) for any finitely generated ideal I and any j ∈ Z, we have A(Φ) ⊆ U_Φ. Then R/J[−n] ∈ U_Φ, and therefore V(J) ⊆ Φ(n).

S_Φ = {K(I)[−n] | V(I) ⊆ Φ(n), n ∈ Z} of compact objects. We can express the compactly generated coaisle V_Φ associated to Φ via Theorem 5.6 as follows, using [Sta18, 15.68.0.2]:

Theorem 5.8 ([Tho97], [KP17]). Let R be a commutative ring. There is a 1-1 correspondence

Thomason subsets P of Spec(R) ↔ Compactly generated localizing subcategories L of D(R),

given by the assignment P → L_P = Loc(K(I) | V(I) ⊆ P).

(5.2)  Č^∼(I) → R → Č(I) → Č^∼(I)[1].

We call the complex Č(I) the Čech complex associated to the finitely generated ideal I. It follows, e.g. from [Pos16, Lemma 1.1], that Č^∼(I) ∈ L_{V(I)}. Because L^{⊥0} is closed under extensions and under tensoring by arbitrary complexes ([KP17, Lemma 1.1.8]), and contains all the stalk complexes R[x_1^{−1}], . . . , R[x_n^{−1}] ([KP17, Theorem 2.2.4]), also Č(I) ∈ L_{V(I)}^{⊥0}. Then the triangle (5.2) is the approximation triangle of R with respect to the localizing pair (L_{V(I)}, L_{V(I)}^{⊥0}). In particular, L_{V(I)} = Ker(Č(I) ⊗_R −) and L_{V(I)}^{⊥0} = Ker(Č^∼(I) ⊗_R −).

Then aisle(K(I)) = L_{V(I)} ∩ D^{≤0} = {X ∈ D^{≤0} | Supp(H^n(X)) ⊆ V(I) ∀n ≤ 0}. Furthermore, the functor defined as the composition τ^{≤0} ∘ (Č^∼(I) ⊗_R −) is the right adjoint to the inclusion of aisle(K(I)) into D(R). In particular, aisle(K(I))^{⊥0} = {Y ∈ D(R) | Č^∼(I) ⊗_R Y ∈ D^{>0}}.
V_Φ = S_Φ^{⊥0} = ∩_{n∈Z, V(I)⊆Φ(n)} aisle(K(I)[−n])^{⊥0}.

Using Lemma 5.9 and shifting, we have

aisle(K(I)[−n])^{⊥0} = {Y ∈ D(R) | Č^∼(I) ⊗_R Y ∈ D^{>n}}.

Say that a t-structure (U, V) is intermediate if there are integers n ≤ m such that D^{≥m} ⊆ V ⊆ D^{≥n}. A TTF triple is a triple (U, V, W) of subcategories of D(R) such that both (U, V) and (V, W) are torsion pairs in D(R).

Theorem 6.1. Let R be a (not necessarily commutative) ring, and (U, V) an intermediate t-structure in D(R). Then the following conditions are equivalent:
(i) (U, V) is induced by a (bounded) cosilting complex,
(ii) there is a (cosuspended) TTF triple (U, V, W),
(iii) (U, V) is homotopically smashing.

Proof. The equivalence (i) ⇔ (ii) is precisely [MV18, Theorem 3.13]. Also, the equivalence with (iii) follows from [MV18, Theorem 3.14] together with [Lak18, Theorem 4.6], as explained earlier in this section.

Theorem 6.3. Let R be a commutative noetherian ring. There is a bijective correspondence:

Bounded Thomason filtrations Φ of Spec(R) ↔ Cosilting complexes in D(R) up to equivalence.

^1 These also appear under the name "weight structures" in the literature.
^2 Also known as the "stable Koszul complex" in the literature.

Acknowledgement. The paper was written during the author's stay at the Dipartimento di Matematica of Università degli Studi di Padova. I would like to express my gratitude to everybody at the department, and to Prof. Silvana Bazzoni in particular, for all the hospitality and all the stimulating discussions. Also, I am indebted to Jan Šťovíček for spotting an error in an earlier version of the manuscript.

References

Frank W. Anderson and Kent R. Fuller. Rings and categories of modules, volume 13. Springer Science & Business Media, 2012.
Lidia Angeleri Hügel and Michal Hrbek. Silting modules over commutative rings. International Mathematics Research Notices, 2017(13):4131-4151, 2016.
Lidia Angeleri Hügel, Frederik Marks, and Jorge Vitória. Silting modules. International Mathematics Research Notices, 2016(4):1251-1284, 2015.
Lidia Angeleri Hügel, Frederik Marks, and Jorge Vitória. Torsion pairs in silting theory. Pacific Journal of Mathematics, 291(2):257-278, 2017.
Lidia Angeleri Hügel, David Pospíšil, Jan Šťovíček, and Jan Trlifaj. Tilting, cotilting, and spectra of commutative noetherian rings. Transactions of the American Mathematical Society, 366(7):3487-3517, 2014.
Leovigildo Alonso Tarrío, Ana López Jeremías, and Manuel Saorín. Compactly generated t-structures on the derived category of a noetherian ring. Journal of Algebra, 324(3):313-346, 2010.
Leovigildo Alonso Tarrío, Ana López Jeremías, and María José Souto Salorio. Construction of t-structures and equivalences of derived categories. Transactions of the American Mathematical Society, 355(6):2523-2543, 2003.
Alexander A. Beilinson, Joseph Bernstein, and Pierre Deligne. Faisceaux pervers.
In Analysis and topology on singular spaces, I (Luminy, 1981), volume 100 of Astérisque, pages 5-171. Soc. Math. France, Paris, 1982.
Paul Balmer and Giordano Favi. Generalized tensor idempotents and the telescope conjecture. Proceedings of the London Mathematical Society, 102(6):1161-1185, 2011.
Winfried Bruns and Jürgen Herzog. Cohen-Macaulay rings, volume 39 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1993.
Marcel Bökstedt and Amnon Neeman. Homotopy limits in triangulated categories. Compositio Mathematica, 86(2):209-234, 1993.
William G. Dwyer and John Patrick Campbell Greenlees. Complete modules and torsion modules. American Journal of Mathematics, 124(1):199-220, 2002.
Grigory Garkusha and Mike Prest. Torsion classes of finite type and spectra. In K-theory and Noncommutative Geometry, pages 393-412, 2008.
J. P. C. Greenlees. First steps in brave new commutative algebra. In Interactions between homotopy theory and algebra, volume 436 of Contemp. Math., pages 239-275. Amer. Math. Soc., Providence, RI, 2007.
Rüdiger Göbel and Jan Trlifaj.
Approximations and Endomorphism Algebras of Modules: Volume 1-Approximations/Volume 2-Predictions, volume 41. Walter de Gruyter, 2012.
Michael J. Hopkins. Global methods in homotopy theory. In Proceedings of the 1985 LMS Symposium on Homotopy Theory, pages 73-96, 1987.
Michal Hrbek and Jan Šťovíček. Tilting classes over commutative rings. arXiv preprint arXiv:1701.05534, 2017.
Bernhard Keller. A remark on the generalized smashing conjecture. Manuscripta Mathematica, 84(1):193-198, 1994.
Jonas Kiessling. Properties of cellular classes of chain complexes. Israel Journal of Mathematics, 191(1):483-505, 2012.
Bernhard Keller and Pedro Nicolás. Weight structures and simple dg modules for positive dg algebras. International Mathematics Research Notices, 2013(5):1028-1078, 2012.
Joachim Kock and Wolfgang Pitsch. Hochster duality in derived categories and point-free reconstruction of schemes. Transactions of the American Mathematical Society, 369(1):223-261, 2017.
Masaki Kashiwara and Pierre Schapira. Categories and sheaves, volume 332. Springer Science & Business Media, 2005.
The telescope conjecture for hereditary rings via Ext-orthogonal pairs. Henning Krause, Janšťovíček , Advances in Mathematics. 2255Henning Krause and JanŠťovíček. The telescope conjecture for hereditary rings via Ext-orthogonal pairs. Advances in Mathematics, 225(5):2341-2364, 2010. Rosanna Laking, arXiv:1804.01326Purity in compactly generated derivators and t-structures with Grothendieck hearts. arXiv preprintRosanna Laking. Purity in compactly generated derivators and t-structures with Grothendieck hearts. arXiv preprint arXiv:1804.01326, 2018. Autour de la platitude. Daniel Lazard, Bull. Soc. Math. France. 9781128Daniel Lazard. Autour de la platitude. Bull. Soc. Math. France, 97(81):128, 1969. Injective modules over noetherian rings. Eben Matlis, Pacific Journal of Mathematics. 83Eben Matlis. Injective modules over noetherian rings. Pacific Journal of Mathemat- ics, 8(3):511-528, 1958. Derived categories part I. Daniel Murfet, Daniel Murfet. Derived categories part I. http://therisingsea.org/notes/DerivedCategories.pdf, 2006. Silting and cosilting classes in derived categories. Frederik Marks, Jorge Vitória, Journal of Algebra. 501Frederik Marks and Jorge Vitória. Silting and cosilting classes in derived categories. Journal of Algebra, 501:526-544, 2018. The chromatic tower for D(R). Amnon Neeman, Marcel Bökstedt, Topology. 313Amnon Neeman and Marcel Bökstedt. The chromatic tower for D(R). Topology, 31(3):519-532, 1992. Oddball Bousfield classes. Amnon Neeman, Topology. 395Amnon Neeman. Oddball Bousfield classes. Topology, 39(5):931-935, 2000. Lessons on rings, modules and multiplicities. Geoffrey Douglas, Northcott, Douglas Geoffrey Northcott. Lessons on rings, modules and multiplicities. 1968. Silting theory in triangulated categories with coproducts. Pedro Nicolás, Manuel Saorín, Alexandra Zvonareva, Journal of Pure and Applied Algebra. Pedro Nicolás, Manuel Saorín, and Alexandra Zvonareva. Silting theory in triangu- lated categories with coproducts. 
Journal of Pure and Applied Algebra, 2018. Dedualizing complexes and MGM duality. Leonid Positselski, Journal of Pure and Applied Algebra. 22012Leonid Positselski. Dedualizing complexes and MGM duality. Journal of Pure and Applied Algebra, 220(12):3866-3909, 2016. Realisation functors in tilting theory. Chrysostomos Psaroudakis, Jorge Vitória, Mathematische Zeitschrift. 2883-4Chrysostomos Psaroudakis and Jorge Vitória. Realisation functors in tilting theory. Mathematische Zeitschrift, 288(3-4):965-1028, 2018. Dimensions of triangulated categories. Raphaël Rouquier, Journal of K-theory. 12Raphaël Rouquier. Dimensions of triangulated categories. Journal of K-theory, 1(2):193-256, 2008. On compactly generated torsion pairs and the classification of co-t-structures for commutative noetherian rings. David Janšťovíček, Pospíšil, Transactions of the American Mathematical Society. 3689JanŠťovíček and David Pospíšil. On compactly generated torsion pairs and the classification of co-t-structures for commutative noetherian rings. Transactions of the American Mathematical Society, 368(9):6325-6361, 2016. Manuel Saorín, Janšťovíček , Simone Virili, arXiv:1708.07540t-structures on stable derivators and Grothendieck hearts. arXiv preprintManuel Saorín, JanŠťovíček, and Simone Virili. t-structures on stable derivators and Grothendieck hearts. arXiv preprint arXiv:1708.07540, 2017. The Stacks Project Authors. Stacks Project. The Stacks Project Authors. Stacks Project. http://stacks.math.columbia.edu, 2018. Derived equivalences induced by big cotilting modules. Janšťovíček, Advances in Mathematics. 263JanŠťovíček. Derived equivalences induced by big cotilting modules. Advances in Mathematics, 263:45-87, 2014. The classification of triangulated subcategories. Robert W Thomason, Compositio Mathematica. 1051Robert W. Thomason. The classification of triangulated subcategories. Compositio Mathematica, 105(1):1-27, 1997. Cosilting complexes and AIR-cotilting modules. 
Peiyu Zhang, Jiaqun Wei, Institute of Mathematics CAS. 49167ŽitnáPeiyu Zhang and Jiaqun Wei. Cosilting complexes and AIR-cotilting modules. Jour- nal of Algebra, 491:1-31, 2017. Institute of Mathematics CAS,Žitná 25, 115 67 Prague, Czech Republic E-mail address: [email protected]
[]
[ "The non-thermal superbubble in IC 10: the generation of cosmic ray electrons caught in the act", "The non-thermal superbubble in IC 10: the generation of cosmic ray electrons caught in the act" ]
[ "Volker Heesen \nSchool of Physics and Astronomy\nUniversity of Southampton\nSO17 1BJSouthamptonUK\n", "Elias Brinks \nCentre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK\n", "Martin G H Krause \nExcellence Cluster Universe\nTechnische Universität München\nBoltzmannstrasse 2D-85748GarchingGermany\n\nMax-Planck-Institut für extraterrestrische Physik\nGiessenbachstr. 1D-85741GarchingGermany\n", "Jeremy J Harwood \nCentre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK\n", "† Urvashi Rau \nNRAO, P.V.D. Science Operations Center\nNational Radio Astronomy Observatory\n1003 Lopezville Road87801SocorroNMUSA\n", "Michael P Rupen \nNRAO, P.V.D. Science Operations Center\nNational Radio Astronomy Observatory\n1003 Lopezville Road87801SocorroNMUSA\n", "Deidre A Hunter \nLowell Observatory\n1400 West Mars Hill Road86001FlagstaffAZUSA\n", "Krzysztof T Chyży \nObserwatorium Astronomiczne Uniwersytetu Jagiellońskiego\nul. Orla 17130-244KrakówPoland\n", "Ged Kitchener \nCentre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK\n" ]
[ "School of Physics and Astronomy\nUniversity of Southampton\nSO17 1BJSouthamptonUK", "Centre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK", "Excellence Cluster Universe\nTechnische Universität München\nBoltzmannstrasse 2D-85748GarchingGermany", "Max-Planck-Institut für extraterrestrische Physik\nGiessenbachstr. 1D-85741GarchingGermany", "Centre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK", "NRAO, P.V.D. Science Operations Center\nNational Radio Astronomy Observatory\n1003 Lopezville Road87801SocorroNMUSA", "NRAO, P.V.D. Science Operations Center\nNational Radio Astronomy Observatory\n1003 Lopezville Road87801SocorroNMUSA", "Lowell Observatory\n1400 West Mars Hill Road86001FlagstaffAZUSA", "Obserwatorium Astronomiczne Uniwersytetu Jagiellońskiego\nul. Orla 17130-244KrakówPoland", "Centre for Astrophysics Research\nUniversity of Hertfordshire\nAL10 9ABHatfieldUK" ]
[ "Mon. Not. R. Astron. Soc" ]
Superbubbles are crucial for stellar feedback, with supposedly high (of the order of 10 per cent) thermalization rates. We combined multiband radio continuum observations from the Very Large Array (VLA) with Effelsberg data to study the non-thermal superbubble (NSB) in IC 10, a starburst dwarf irregular galaxy in the Local Group. Thermal emission was subtracted using a combination of Balmer Hα and VLA 32 GHz continuum maps. The bubble's nonthermal spectrum between 1.5 and 8.8 GHz displays curvature and can be well fitted with a standard model of an ageing cosmic ray electron population. With a derived equipartition magnetic field strength of 44 ± 8 µG, and measuring the radiation energy density from Spitzer MIPS maps as 5±1×10 −11 erg cm −3 , we determine, based on the spectral curvature, a spectral age of the bubble of 1.0 ± 0.3 Myr. Analysis of the LITTLE THINGS H I data cube shows an expanding H I hole with 100 pc diameter and a dynamical age of 3.8 ± 0.3 Myr, centred to within 16 pc on IC 10 X-1, a massive stellar mass black hole (M > 23 M ⊙ ). The results are consistent with the expected evolution for a superbubble with a few massive stars, where a very energetic event like a Type Ic supernova/hypernova has taken place about 1 Myr ago. We discuss alternatives to this interpretation.
10.1093/mnrasl/slu168
[ "https://arxiv.org/pdf/1411.1604v2.pdf" ]
14,908,479
1411.1604
d9392143ed52c259a18a926281389412add90b61
The non-thermal superbubble in IC 10: the generation of cosmic ray electrons caught in the act
Volker Heesen (School of Physics and Astronomy, University of Southampton, SO17 1BJ, Southampton, UK), Elias Brinks (Centre for Astrophysics Research, University of Hertfordshire, AL10 9AB, Hatfield, UK), Martin G. H. Krause (Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching, Germany; Max-Planck-Institut für extraterrestrische Physik, Giessenbachstr. 1, D-85741 Garching, Germany), Jeremy J. Harwood (Centre for Astrophysics Research, University of Hertfordshire, AL10 9AB, Hatfield, UK), Urvashi Rau (NRAO, P.V.D. Science Operations Center, National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801, USA), Michael P. Rupen (NRAO, P.V.D. Science Operations Center, National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801, USA), Deidre A. Hunter (Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001, USA), Krzysztof T. Chyży (Obserwatorium Astronomiczne Uniwersytetu Jagiellońskiego, ul. Orla 171, 30-244 Kraków, Poland), Ged Kitchener (Centre for Astrophysics Research, University of Hertfordshire, AL10 9AB, Hatfield, UK)
Mon. Not. R. Astron. Soc. 000, 000-000. Accepted 2014 October 15. Received 2014 October 15; in original form 2014 September 22. Printed (MN LaTeX style file v2.2).
Key words: radiation mechanisms: non-thermal - cosmic rays - galaxies: individual: IC 10 - galaxies: irregular - galaxies: starburst - radio continuum: galaxies
Superbubbles are crucial for stellar feedback, with supposedly high (of the order of 10 per cent) thermalization rates. We combined multiband radio continuum observations from the Very Large Array (VLA) with Effelsberg data to study the non-thermal superbubble (NSB) in IC 10, a starburst dwarf irregular galaxy in the Local Group.
Thermal emission was subtracted using a combination of Balmer Hα and VLA 32 GHz continuum maps. The bubble's non-thermal spectrum between 1.5 and 8.8 GHz displays curvature and can be well fitted with a standard model of an ageing cosmic ray electron population. With a derived equipartition magnetic field strength of 44 ± 8 µG, and measuring the radiation energy density from Spitzer MIPS maps as 5 ± 1 × 10^-11 erg cm^-3, we determine, based on the spectral curvature, a spectral age of the bubble of 1.0 ± 0.3 Myr. Analysis of the LITTLE THINGS H I data cube shows an expanding H I hole with 100 pc diameter and a dynamical age of 3.8 ± 0.3 Myr, centred to within 16 pc on IC 10 X-1, a massive stellar mass black hole (M > 23 M_⊙). The results are consistent with the expected evolution for a superbubble with a few massive stars, where a very energetic event like a Type Ic supernova/hypernova took place about 1 Myr ago. We discuss alternatives to this interpretation. INTRODUCTION Stellar feedback is a fundamental process that regulates the formation and evolution of galaxies. Supernovae (SNe) inject energy into the interstellar medium (ISM), heating the gas to X-ray emitting temperatures and accelerating cosmic rays via shock waves. Galactic winds, hybridly driven by the hot gas and cosmic rays, remove mass and angular momentum (Everett et al. 2008; Strickland & Heckman 2009; Dorfi & Breitschwerdt 2012; Hanasz et al. 2013; Salem & Bryan 2014). Cosmological simulations without stellar feedback not only predict wrong global mass estimates, but also mass concentrations towards the centres of galaxies that are too high, leading to rotation curves that are steeper than observed (Scannapieco 2013). (⋆ E-mail: [email protected]; † now at ASTRON, Postbus 2, 7990 AA Dwingeloo, the Netherlands.)
The most abundant type of galaxies in the local Universe, dwarf galaxies, are particularly affected: their weak gravitational potentials make them susceptible to outflows and winds (Tremonti et al. 2004). In the paradigm of a ΛCDM Universe, the removal of baryons in the least massive dark matter haloes may resolve the long-standing 'missing satellites' problem (Moore et al. 1999). The loss of baryonic matter and associated angular momentum at early stages in their formation and evolution can affect the distribution of the non-baryonic matter as well, rendering the inner part of the rotation curves less steep (Governato et al. 2010; Oh et al. 2011a,b). Furthermore, outflows and winds in dwarf galaxies may be behind the magnetization of the early Universe (e.g. Pakmor, Marinacci & Springel 2014; Siejkowski et al. 2014). Massive stars are the agents of stellar feedback and they manifest themselves by carving bubbles (cavities of tenuous, hot gas) into the ISM. They usually form in groups, so that their bubbles start to overlap when expanding and subsequently merge, forming larger structures in excess of 100 pc, so-called superbubbles. The wind of massive stars, especially during their Wolf-Rayet (WR) phase, powers the early expansion of the bubble. Subsequent SNe create strong shocks in the bubble interior that are responsible for the thermal X-ray and the non-thermal synchrotron emitting gas. Stellar feedback in the form of SNe is more efficient for clustered SNe than for randomly distributed ones, as subsequent SNe explode in the tenuous gas of the bubble and their shock waves do not suffer from strong radiative cooling. Hence, the thermalization fraction of clustered SNe is higher (Krause et al. 2013).
An intriguing example of SN feedback is presented by what has become known as the non-thermal superbubble (NSB; Yang & Skillman 1993) in the nearby dwarf irregular galaxy IC 10, a member of the Local Group at a distance of 0.7 Mpc (1 arcsec = 3.4 pc; Hunter et al. 2012). It has several young star clusters containing massive stars (Hunter 2001) and an unusually high number of WR stars (Massey & Holmes 2002). IC 10 is a dwarf irregular galaxy that is currently undergoing a starburst phase. Close to the centre of the NSB is one of the heaviest stellar-mass black holes known, with a remnant mass of >23 M_⊙ (Silverman & Filippenko 2008), which, together with the massive WR star [MAC92] 17A, forms a highly variable, luminous X-ray binary known as IC 10 X-1 (J2000.0, RA 00h20m29s.09, Dec. 59°16′51″.95; Bauer & Brandt 2004; Barnard et al. 2014). It has been speculated that a core collapse of the IC 10 X-1 progenitor in a 'hypernova' could be responsible for the NSB, rather than a series of SNe (Lozinskaya & Moiseev 2007). In this Letter, we present multiband radio continuum observations with the NRAO Karl G. Jansky Very Large Array (VLA) to study the non-thermal radio continuum spectrum of the NSB. This project follows on from, and extends, preliminary results presented in Heesen et al. (2011). The data cover the frequency range between 1.4 and 32 GHz, at high spatial resolution. OBSERVATIONS We observed IC 10 with the VLA (project ID: AH1006). Observations were taken in D-array in 2010 August and September at L band (1.4-1.6 GHz), C band (4.5-5.4 and 6.9-7.8 GHz), X band (7.9-8.8 GHz) and Ka band (27-28 and 37-38 GHz) with ≈3 h on-source time each. In addition, we observed in C-array at L, C and X band with ≈3 h on-source time each in 2012 February to April (ID: 12A-288) and 2013 August (ID: 13A-328).
A flux calibrator (3C 48) was observed either at the beginning or the end of the observations, and scans of IC 10 were interleaved every 15 min with a 2 min scan of a nearby complex gain calibrator (J0102+5824). We incorporated L-band B-array data observed with the historical VLA in 1986 September (ID: AS0266) from Yang & Skillman (1993). We followed standard data reduction procedures, using the Common Astronomy Software Applications package (CASA), developed by NRAO, and utilizing the flux scale by Perley & Butler (2013). We self-calibrated the L-, C- and X-band data with two rounds of phase-only antenna-based gain corrections, using images from the C-array data as a model. In C and X bands, we self-calibrated in phase and amplitude, adding in the D-array data (self-calibrated in phase), checking that the amplitudes did not change by more than 1-2 per cent. For the imaging we used CASA's implementation of the Multi-Scale Multi-Frequency Synthesis (MS-MFS) algorithm (Rau & Cornwell 2011), which simultaneously solves for spatial and spectral structure during wide-band image reconstruction. A radio spectral index image was produced by MS-MFS as well, which we used to refine the self-calibration model. A post-deconvolution wide-band primary beam correction was applied to remove the effect of the frequency-dependent primary beam. For the spectral analysis, we imaged subsets ('spectral windows') of data with either 128 or 256 MHz bandwidth, varying Briggs' 'robust' parameter as a function of frequency to achieve a synthesized beam of a similar angular size. All data were convolved with a Gaussian kernel in AIPS to an identical resolution of 5.1 arcsec and regridded. (The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.)
Because the VLA cannot record baselines smaller than ≈30 m (elevation dependent), there is a limit to the largest angular scale that can be observed, resulting in flux densities that are too low compared with single-dish measurements; this is known as the 'missing zero-spacing flux'. Our VLA flux density at 1.5 GHz and those at 2.6 and 10.5 GHz, measured with the 100-m Effelsberg telescope (Chyży et al. 2003, 2011), of 343, 277 and 156 mJy, can be fitted with a constant spectral index of −0.41. We can interpolate them to estimate the missing zero-spacing flux in each spectral window. We found that at frequencies of 4-6 GHz, 10-20 per cent of the flux density was missed by the VLA, which increased to 30-40 per cent at frequencies of 7-9 GHz. In order to correct for this, we merged the VLA and Effelsberg data using IMERG in AIPS. We used the VLA 1.5 GHz map as a template for the large-scale emission at the lower end of our frequency range, which has the benefit of an improved angular resolution in comparison to the 2.6 GHz Effelsberg map. We hence interpolated the 1.5 GHz VLA and 10.5 GHz Effelsberg maps at an angular resolution of 78 arcsec, assuming a constant but spatially resolved spectral index, to have a template of the large-scale emission in each spectral window. We merged the data of each spectral window with the appropriate template, making sure that the integrated flux densities of several regions agreed to within 5-10 per cent and that the discrepancy between the total integrated flux densities was less than 4 per cent. This was achieved by adjusting the 'uvrange' parameter in IMERG, which prescribes the angular scale at which the single-dish image is scaled to the interferometric image; we used values within the range of 0.8-1.8 kλ. RESULTS In Fig. 1(a), we present a 6.2 GHz contour map from combined VLA and Effelsberg observations at 3.4 arcsec angular resolution, overlaid on an integrated H I map from LITTLE THINGS (Hunter et al. 2012).
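The zero-spacing interpolation described in the Observations section can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline: it fits a single power law to the three total flux densities quoted in the text (343 mJy at 1.5 GHz, 277 mJy at 2.6 GHz, 156 mJy at 10.5 GHz) and interpolates to an arbitrary frequency.

```python
import numpy as np

# Total flux densities quoted in the text: VLA at 1.5 GHz,
# Effelsberg at 2.6 and 10.5 GHz.
freq_ghz = np.array([1.5, 2.6, 10.5])
flux_mjy = np.array([343.0, 277.0, 156.0])

# Fit a single power law S ∝ ν^α in log-log space (unweighted least squares).
alpha, log_s0 = np.polyfit(np.log10(freq_ghz), np.log10(flux_mjy), 1)


def total_flux(nu_ghz):
    """Interpolated total (single-dish) flux density in mJy."""
    return 10.0 ** (log_s0 + alpha * np.log10(nu_ghz))


print(round(alpha, 2))  # ≈ -0.41, the constant spectral index quoted in the text
print(total_flux(6.2))  # expected total flux density near the centre of C band
```

Comparing the interpolated totals with the integrated VLA flux densities per spectral window is what yields the 10-40 per cent missing-flux estimates quoted above.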
The NSB is centred on RA 00h20m28s.85, Dec. 59°16′48″, which is 5 arcsec south-west of IC 10 X-1, and has a diameter of 54 arcsec or 184 pc. We created a map of the thermal (free-free) emission from the Balmer Hα emission map of Hunter & Elmegreen (2004) following standard conversion (e.g., Deeg, Duric & Brinks 1997, their equation 3, assuming T = 10^4 K), where we corrected for foreground absorption using E(B − V) = 0.75 mag (Burstein & Heiles 1984). This map was combined with our 32 GHz map of the south-eastern starburst region, which we use as an extinction-free measurement of the thermal radio continuum emission (Fig. 1c). A comparison between the two maps showed agreement to within 10-20 per cent in areas outside of the compact H II regions (I_th < 1.0 mJy beam^-1), indicating that our estimate of the optical foreground absorption is accurate. The main fraction of thermal radio continuum is located in the H II regions, north-west of the NSB. Whereas the NSB is prominent in the non-thermal radio continuum, there has thus far been little other evidence reported in the literature that the NSB constitutes a cavity in the ISM. Wilcots & Miller (1998) find an H I hole at its position, but do not report any signs of expansion. There is weak, diffuse Hα emission from ionized hydrogen and an increased line width, corresponding to a thermal velocity dispersion of 35 km s^-1, but nothing to suggest an expanding shell (Thurow & Wilcots 2005). Using the naturally weighted H I data cube from LITTLE THINGS (FWHM = 8.4 × 7.5 arcsec, PA = 37°; Hunter et al. 2012), we have created a position-velocity diagram of the NSB and its surroundings, presented in Fig. 2. We find a cavity of little prominence, centred on RA 00h20m29s.71, Dec. 59°16′51″.9, which is 4.8 arcsec east of IC 10 X-1.
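The quoted offset between the H I cavity centre and IC 10 X-1 can be verified from the two J2000 positions given in the text; at 3.4 pc per arcsec this also reproduces the "within 16 pc" figure used later. A small-angle sketch (not the authors' astrometry code):

```python
import math


def to_deg(h, m, s, is_ra=True):
    """Convert sexagesimal (hours or degrees) to decimal degrees."""
    val = h + m / 60.0 + s / 3600.0
    return val * 15.0 if is_ra else val


# Positions quoted in the text.
ra_hole = to_deg(0, 20, 29.71)                  # H I cavity centre
ra_x1 = to_deg(0, 20, 29.09)                    # IC 10 X-1
dec_hole = to_deg(59, 16, 51.9, is_ra=False)
dec_x1 = to_deg(59, 16, 51.95, is_ra=False)

# Small-angle separation, with the cos(Dec) foreshortening of the RA offset.
d_ra = (ra_hole - ra_x1) * math.cos(math.radians(dec_x1)) * 3600.0   # arcsec
d_dec = (dec_hole - dec_x1) * 3600.0                                 # arcsec
sep_arcsec = math.hypot(d_ra, d_dec)
sep_pc = sep_arcsec * 3.4   # 1 arcsec = 3.4 pc at 0.7 Mpc

print(round(sep_arcsec, 1))  # ≈ 4.8 arcsec, as stated in the text
print(round(sep_pc))         # ≈ 16 pc
```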
It is either a single H I hole with a diameter of 100 pc and an extent in velocity space of 30 km s^-1, or consists of two smaller H I holes with diameters of 76 pc and extents in velocity space of 18 km s^-1 each. For a single hole the expansion velocity is 15 km s^-1, leading to an estimate of the bubble's dynamical age of τ_dyn = 3.8 ± 0.5 Myr, with a similar age for the double-hole scenario. For further analysis, we fed the radio continuum data into the Broadband Radio Analysis ToolS (BRATS; Harwood et al. 2013). The spectrum of the NSB presented in Fig. 3 (flux densities are tabulated in Table 1) shows a conspicuous curvature, which can be well fitted by a Jaffe-Perola (JP; Jaffe & Perola 1973) model with an injection spectral index of α_inj = 0.6 ± 0.1. The JP model describes the evolution of radio continuum emission from a cosmic ray electron (CRe) population within a constant magnetic field strength following a single injection. There exist variations of the JP model, such as the KP (Kardashev 1962) and Tribble (Tribble 1993) models. Our data cannot differentiate between them, as any differences are only notable close to the break frequency. Assuming energy equipartition and using the revised equipartition formula by Beck & Krause (2005), we find a total magnetic field strength of 44 ± 8 µG (U_B = 7.7 ± 0.4 × 10^-11 erg cm^-3). The total infrared luminosity from Spitzer MIPS 24-160 µm maps from Dale & Helou (2002) leads to a radiation energy density of U_rad = U_star + U_IR = 5 ± 1 × 10^-11 erg cm^-3, where the contribution from stellar light is taken as U_star = 1.73 × U_IR, as measured in the solar neighbourhood (Draine 2011). The spatially resolved distribution of the spectral age is shown in Fig. 4, where we applied an S/N cutoff of 5 in each pixel. There is an east-west gradient, where the age in the eastern part is τ = 1.0 Myr.
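The quoted energy densities and ages can be sanity-checked with a few lines. This is a back-of-envelope sketch, not the BRATS/equipartition analysis itself: the magnetic energy density follows from U_B = B²/8π, the kinematic age from t ≈ R/v_exp (the published τ_dyn follows the Bagetakos et al. 2011 prescription, so the bare estimate only agrees to within the errors), and the spectral age uses the standard synchrotron ageing relation with an assumed illustrative break frequency near 11 GHz, a value not quoted in the text.

```python
import math

PC_CM = 3.086e18   # cm per parsec
MYR_S = 3.156e13   # seconds per Myr

# Magnetic energy density from the quoted equipartition field strength.
b_gauss = 44e-6                        # 44 µG
u_b = b_gauss**2 / (8.0 * math.pi)     # erg cm^-3
print(u_b)                             # ≈ 7.7e-11, matching U_B in the text

# Equivalent magnetic field of the radiation energy density: with
# b_rad ≈ 35 µG, inverse-Compton losses are comparable to synchrotron losses.
u_rad = 5e-11                                   # erg cm^-3 (Spitzer-based value)
b_rad_uG = math.sqrt(8.0 * math.pi * u_rad) * 1e6

# Kinematic age of the H I hole, t ≈ R / v_exp (bare estimate, ≈ 3.3 Myr).
radius_pc, v_exp_kms = 50.0, 15.0      # 100 pc diameter, 15 km/s expansion
t_kin_myr = radius_pc * PC_CM / (v_exp_kms * 1e5) / MYR_S

# Standard spectral-age relation:
# t[Myr] = 1590 sqrt(B) / ((B^2 + B_IC^2) sqrt(nu_b (1+z))), B in µG, nu_b in GHz.
# The break frequency below is an assumption for illustration only.
nu_break_ghz, z, b_uG = 11.0, 0.0, 44.0
t_spec_myr = 1590.0 * math.sqrt(b_uG) / (
    (b_uG**2 + b_rad_uG**2) * math.sqrt(nu_break_ghz * (1.0 + z)))
print(round(t_spec_myr, 1))            # ≈ 1.0 Myr for a break near 11 GHz
```

A break frequency just above the observed band is thus consistent with the quoted τ_spec ≈ 1 Myr, which is why the curvature is visible but the spectrum has not yet fully turned over by 8.8 GHz.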
The JP model fit to the spatially resolved data is better (χ^2_red = 0.6) than for the integrated spectrum (χ^2_red = 1.3), because a superposition of spectral ages cannot be described by a single JP model. This leads us to conclude that our best estimate of the spectral age is τ_spec = 1.0 ± 0.3 Myr. The error is a combination of the magnetic field error and the formal fit error of the JP model. Finally, in order to exclude a power-law spectrum we conducted a few more tests: first, we fitted a power law to the integrated spectrum and found χ^2_red = 4.5, far inferior to the JP model fit. Secondly, a power-law fit to data points >1.5 GHz predicts a flux density at 1.5 GHz that is 19 per cent above the actual value, about 15 times larger than its error of 1.3 per cent (made up of 1 per cent flux calibration error and the rms map noise), as shown in Fig. 3. Thus, we can exclude a spectrum without curvature. DISCUSSION We first review the parameters derived for the IC 10 NSB: the integrated current cosmic ray energy in the NSB is 1 × 10^51 erg, where we modelled the bubble as a sphere and used the assumption of energy equipartition (U_CR = U_B), injected approximately 1 Myr ago. Following the calculations of Bagetakos et al. (2011), we can derive the energy required to create the H I hole as 0.2-1 × 10^51 erg, with the upper value appropriate if two holes were formed, where the ambient density of the neutral, atomic gas is 0.8-2.2 cm^-3 (including helium). The contribution from turbulent gas within the NSB traced in Hα (Thurow & Wilcots 2005) is probably not significant when taking into account that the filling factor for emission-line gas is probably low. We can compare our findings with 3D simulations by Krause et al. (2013). They predict that superbubbles reach diameters of the order of 100 pc even before the first SN.
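The power-law exclusion test from the Results section can be reproduced numerically from the flux densities in Table 1. The sketch below assumes an unweighted least-squares fit in log-log space to the points above 1.5 GHz (the published fit may be weighted differently), extrapolates back to 1.52 GHz and compares with the measured 40.2 mJy.

```python
import numpy as np

# Flux densities of the NSB from Table 1 (ν in GHz, S in mJy).
data = [
    (1.52, 40.2),
    (4.55, 17.5), (4.68, 17.1), (4.81, 16.9), (4.93, 16.3), (5.06, 16.3),
    (5.19, 15.7), (5.32, 15.6), (5.45, 15.0), (6.95, 11.8), (7.08, 11.9),
    (7.21, 11.6), (7.33, 11.4), (7.46, 11.1), (7.59, 10.8), (7.72, 10.8),
    (7.85, 10.6), (8.01, 10.8), (8.27, 10.5), (8.53, 10.2), (8.78, 9.6),
]
nu = np.array([d[0] for d in data])
s = np.array([d[1] for d in data])

# Unweighted power-law fit to the points above 1.5 GHz only.
mask = nu > 1.6
slope, intercept = np.polyfit(np.log10(nu[mask]), np.log10(s[mask]), 1)

# Extrapolate back to 1.52 GHz and compare with the measured 40.2 mJy.
s_pred = 10.0 ** (intercept + slope * np.log10(1.52))
excess = s_pred / s[0] - 1.0
print(round(100 * excess))  # ≈ 19 per cent, as stated in the text
```

The ~19 per cent overshoot of the extrapolation is many times the 1.3 per cent flux-density error, which is the quantitative basis for rejecting a curvature-free spectrum.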
Each SN then first heats the bubble, accelerates the shell, and then dissipates the injected energy entirely at the leading radiative shock wave, and via radiative cooling in mixing regions at the location of the shell, on a time-scale of a few 10^5 yr. The shell slows down accordingly, resulting in a discrepancy between spectral and dynamical age. The rather low shell velocity of the IC 10 NSB (high-velocity superbubbles have a few times faster shells, compare e.g. Oey 1996) is indeed expected if the last embedded SN exploded about 1 Myr ago, as suggested by the non-thermal emission. The CRe would have been accelerated as the accompanying shock wave traversed the bubble. Using the method of Bagetakos et al. (2011) on the aforementioned 3D simulations at a similar time, we find 10^51 erg as the minimum energy to create the cavity, in agreement with the upper limit from the observations. The energy found in cosmic rays is, however, surprisingly large. Assuming an acceleration efficiency of 10 per cent (e.g. Rieger, de Oña-Wilhelmi & Aharonian 2013), at least 10^52 erg would have to have been released. Could this have happened in a single explosion? Highly energetic SNe are thought to be related to long-duration gamma-ray bursts (e.g. Mazzali et al. 2014, and references therein). The associated Type Ic SNe have energies of up to a few times 10^52 erg, adequate to account for our observations. We note that a higher energy than the standard 10^51 erg would also better explain the high shell velocities in some high-velocity superbubbles (Oey 1996). It is noteworthy that the NSB is centred to within 16 pc on IC 10 X-1, suggesting an association. This system contains at least one massive star, [MAC92] 17A, which has a mass larger than 17 M_⊙ and more likely 35 M_⊙ (Silverman & Filippenko 2008), also a possible progenitor for a Type Ic SN. Alternatively, multiple SNe may have exploded in the past 1 Myr.
We cannot rule this out from the spectral ageing analysis, because a constant CRe injection rate since approximately 1 Myr ago would still lead to a spectral downturn, caused by the oldest CRe. However, the position of the stellar clusters (Fig. 1a) and the distribution of the thermal radio continuum emission, and hence that of massive stars (Fig. 1c), argue against this scenario, because there is no spatial correlation. It is, however, conceivable that a less massive SN has exploded more recently, offset from IC 10 X-1, which could explain the east-west gradient in the spectral age distribution. Another way to explain the presence of non-thermal particles would perhaps be the energy release from IC 10 X-1. It is a debated possibility that the X-ray emission of black hole binaries partially originates from a jet in addition to that of the more conventional X-ray corona (Grinberg et al. 2014). If the current X-ray luminosity of IC 10 X-1, 10^39 erg s^-1 (Barnard et al. 2014), comes exclusively from the jet, an outburst length of 1 Myr would be sufficient to explain the cosmic ray energy, assuming a 10 per cent acceleration efficiency. One would then have to explain why this channel was so active in the past and by now has ceased almost entirely, with no compact radio source observed in the vicinity of IC 10 X-1. CONCLUSIONS In this Letter, we have presented a multiband radio continuum study of the NSB in the nearby starburst dwarf irregular galaxy IC 10. Conventional wisdom tells us that dwarf galaxies are weak in non-thermal (synchrotron) emission, being easily subjected to outflows and winds and not likely able to retain cosmic rays. IC 10 is no exception: it has a large thermal fraction of 50 per cent at 6 GHz and is underluminous in terms of its radio continuum emission compared to its star-formation rate (Heesen et al. 2011, 2014). However, high spatial resolution observations (10-20 pc) show complex cosmic ray and magnetic field distributions.
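The energy-budget arithmetic in the discussion above amounts to two order-of-magnitude checks: a single explosion must release at least E_CR divided by the acceleration efficiency, and the jet alternative compares the X-ray luminosity of IC 10 X-1 integrated over a 1 Myr outburst against the same requirement. An illustrative sketch (the 10 per cent efficiency is the assumption quoted in the text):

```python
MYR_S = 3.156e13   # seconds per Myr

e_cr = 1e51        # erg, cosmic ray energy inferred from equipartition
efficiency = 0.10  # assumed cosmic ray acceleration efficiency

# (1) A single explosion must release at least E_CR / efficiency,
# which lands in the range of Type Ic SNe/hypernovae (~1e52 erg).
e_explosion_min = e_cr / efficiency
print(e_explosion_min)

# (2) Jet alternative: the current X-ray luminosity of IC 10 X-1
# sustained for a 1 Myr outburst.
l_x = 1e39                 # erg/s (Barnard et al. 2014)
e_jet = l_x * 1.0 * MYR_S  # total energy released over 1 Myr
print(e_jet * efficiency >= e_cr)  # a 1 Myr outburst would indeed suffice
```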
The NSB stands out as the brightest non-thermal structure and its spectrum shows a conspicuous downturn towards higher frequencies, something that to date has rarely been observed in any nearby galaxy. We fit a JP spectral model to the data, which describes the radio continuum emission of an ageing population of CRe in a constant magnetic field. Estimating the magnetic field from equipartition and the radiation energy density from Spitzer MIPS maps, we find a spectral age of τ_spec = 1.0 ± 0.3 Myr (the uncertainty stems from the errors of the magnetic field and the spectral fitting errors). The bubble's dynamical age is τ_dyn = 3.8 ± 0.3 Myr, measured from the expansion speed of its corresponding 'H I hole'. Our results suggest that the NSB was generated by the wind of the progenitor of IC 10 X-1, a massive stellar mass black hole, during its main-sequence life and WR phase. Considering alternative explanations, we find that most likely a single energetic explosion of the progenitor of IC 10 X-1 released 10^52 erg, accelerating the non-thermal particles and the shell at the same time. The latter then slowed down via interaction with the ambient medium to its current velocity of 15 km s^-1. We are observing the NSB in the early stages of its evolution; over the next few 10-50 Myr it may become a superbubble of a few hundred parsec size, visible as a large H I hole. Figure 1. (a) Integrated H I emission line intensity as grey-scale at 5.5 × 5.9 arcsec (PA = 16°) resolution of an approximately 1 kpc^2 region to the south-east of the centre of IC 10. Contours show the 6.2 GHz radio continuum emission at 60, 120 and 800 µJy beam^-1, i.e. the superposition of thermal and non-thermal emission. The white line corresponds to the slice used to extract the PV diagram at an angle of −45°, centred on the H I hole (see the text for details). Green plus signs show the positions of stellar clusters (Hunter 2001).
(b) Non-thermal radio continuum at 6.2 GHz of the superbubble, where the dashed line indicates the region used for measuring the spectrum of the non-thermal superbubble. (c) Thermal radio continuum of the same region as in (b), constructed from a combination of Hα and 32 GHz emission. The dashed line indicates the 80 per cent attenuation level of the primary beam at 32 GHz. In panels (a)-(c), the magenta star indicates the position of IC 10 X-1 and the angular resolution of the radio data is 3.4 arcsec (equivalent to 12 pc).
Figure 2. Position-velocity (PV) diagram of the NSB and its surroundings, from the LITTLE THINGS H I data cube. The position of the slice is shown in Fig. 1(a). South-east is to the left, north-west to the right.
Figure 3. Non-thermal spectrum of the NSB between 1.5 and 8.8 GHz. The solid red line shows the Jaffe-Perola model fit to the data and the solid black line is a linear fit to data points >1.5 GHz.
Figure 4. Spectral age of the cosmic ray electrons in the NSB and its environment. The angular resolution is 5.1 arcsec (equivalent to 17 pc), as indicated by the boxed circle in the bottom-left corner. Contours show the non-thermal 1.5 GHz emission at 3, 5, 10, 20 and 40 × 27 µJy beam^-1 and the yellow star the position of IC 10 X-1.
Table 1. Non-thermal flux densities of the NSB.
ν (GHz)  S_ν (mJy) | ν (GHz)  S_ν (mJy) | ν (GHz)  S_ν (mJy)
1.52     40.2      | 5.32     15.6      | 7.59     10.8
4.55     17.5      | 5.45     15.0      | 7.72     10.8
4.68     17.1      | 6.95     11.8      | 7.85     10.6
4.81     16.9      | 7.08     11.9      | 8.01     10.8
4.93     16.3      | 7.21     11.6      | 8.27     10.5
5.06     16.3      | 7.33     11.4      | 8.53     10.2
5.19     15.7      | 7.46     11.1      | 8.78     9.6
AIPS, the Astronomical Image Processing Software, is free software available from the NRAO.
c 0000 RAS, MNRAS 000, 000-000
ACKNOWLEDGEMENTS
VH acknowledges support from the Science and Technology Facilities Council (STFC) under grant ST/J001600/1.
MK acknowledges support by the DFG cluster of excellence 'Origin and Structure of the Universe' and by the ISSI project 'Massive star clusters across the Hubble time'. JJH wishes to thank the University of Hertfordshire for generous financial support and STFC for a STEP award. We thank our referee, Biman Nath, for a constructive and thoughtful report.

REFERENCES

Bagetakos I., Brinks E., Walter F., de Blok W. J. G., Usero A., Leroy A. K., Rich J. W., Kennicutt R. C., Jr., 2011, AJ, 141, 23
Barnard R., Steiner J. F., Prestwich A. F., Stevens I. R., Clark J. S., Kolb U. C., 2014, ApJ, 792, 131
Bauer F. E., Brandt W. N., 2004, ApJ, 601, L67
Beck R., Krause M., 2005, Astron. Nachr., 326, 414
Burstein D., Heiles C., 1984, ApJS, 54, 33
Chyży K. T., Knapik J., Bomans D. J., Klein U., Beck R., Soida M., Urbanik M., 2003, A&A, 405, 513
Chyży K. T., Weżgowiec M., Beck R., Bomans D. J., 2011, A&A, 529, A94
Dale D. A., Helou G., 2002, ApJ, 576, 159
Deeg H.-J., Duric N., Brinks E., 1997, A&A, 323, 323
Dorfi E. A., Breitschwerdt D., 2012, A&A, 540, A77
Draine B. T., 2011, Physics of the Interstellar and Intergalactic Medium. Princeton University Press, Princeton, NJ
Everett J. E., Zweibel E. G., Benjamin R. A., McCammon D., Rocks L., Gallagher J. S., III, 2008, ApJ, 674, 258
Governato F., et al., 2010, Nature, 463, 203
Grinberg V., et al., 2014, A&A, 565, A1
Hanasz M., Lesch H., Naab T., Gawryszczak A., Kowalik K., Wóltański D., 2013, ApJ, 777, L38
Harwood J. J., Hardcastle M. J., Croston J. H., Goodger J. L., 2013, MNRAS, 435, 3353
Heesen V., Brinks E., Leroy A. K., Heald G., Braun R., Bigiel F., Beck R., 2014, AJ, 147, 103
Heesen V., Rau U., Rupen M. P., Brinks E., Hunter D. A., 2011, ApJ, 739, L23
Hunter D. A., 2001, ApJ, 559, 225
Hunter D. A., Elmegreen B. G., 2004, AJ, 128, 2170
Hunter D. A., et al., 2012, AJ, 144, 134
Jaffe W. J., Perola G. C., 1973, A&A, 26, 423
Kardashev N. S., 1962, SvA, 6, 317
Krause M., Diehl R., Böhringer H., Freyberg M., Lubos D., 2014, A&A, 566, A94
Krause M., Fierlinger K., Diehl R., Burkert A., Voss R., Ziegler U., 2013, A&A, 550, A49
Krause M. G. H., Diehl R., 2014, ApJ, 794, L21
Lozinskaya T. A., Moiseev A. V., 2007, MNRAS, 381, L26
Massey P., Holmes S., 2002, ApJ, 580, L35
Mazzali P. A., McFadyen A. I., Woosley S. E., Pian E., Tanaka M., 2014, MNRAS, 443, 67
Moore B., Ghigna S., Governato F., Lake G., Quinn T., Stadel J., Tozzi P., 1999, ApJ, 524, L19
Oey M. S., 1996, ApJ, 467, 666
Oh S.-H., de Blok W. J. G., Brinks E., Walter F., Kennicutt R. C., Jr., 2011a, AJ, 141, 193
Oh S.-H., Brook C., Governato F., Brinks E., Mayer L., de Blok W. J. G., Brooks A., Walter F., 2011b, AJ, 142, 24
Pakmor R., Marinacci F., Springel V., 2014, ApJ, 783, L20
Perley R. A., Butler B. J., 2013, ApJS, 204, 19
Rau U., Cornwell T. J., 2011, A&A, 532, A71
Rieger F. M., de Oña-Wilhelmi E., Aharonian F. A., 2013, Frontiers Phys., 8, 714
Salem M., Bryan G. L., 2014, MNRAS, 437, 3312
Scannapieco C., 2013, Astron. Nachr., 334, 499
Siejkowski H., Otmianowska-Mazur K., Soida M., Bomans D. J., Hanasz M., 2014, A&A, 562, A136
Silverman J. M., Filippenko A. V., 2008, ApJ, 678, L17
Strickland D. K., Heckman T. M., 2009, ApJ, 697, 2030
Thurow J. C., Wilcots E. M., 2005, AJ, 129, 745
Tremonti C. A., et al., 2004, ApJ, 613, 898
Tribble P. C., 1993, MNRAS, 261, 57
Wilcots E. M., Miller B. W., 1998, AJ, 116, 2363
Yang H., Skillman E. D., 1993, AJ, 106, 1448
[]
[ "Conditions for direct black hole seed collapse near a radio-loud quasar 1 Gyr after the Big Bang", "Conditions for direct black hole seed collapse near a radio-loud quasar 1 Gyr after the Big Bang" ]
[ "Roderik A Overzier \nObservatório Nacional/MCTIC\nRua General José Cristino\n7720921-400Rio de JaneiroSão Cristóvão, RJBrazil\n\nInstitute of Astronomy, Geophysics and Atmospheric Sciences\nUniversity of São Paulo\nRua do Matão\n1226, 05508-090São PauloSPBrazil\n" ]
[ "Observatório Nacional/MCTIC\nRua General José Cristino\n7720921-400Rio de JaneiroSão Cristóvão, RJBrazil", "Institute of Astronomy, Geophysics and Atmospheric Sciences\nUniversity of São Paulo\nRua do Matão\n1226, 05508-090São PauloSPBrazil" ]
[]
Observations of luminous quasars and their supermassive black holes at z ≳ 6 suggest that they formed at dense matter peaks in the early universe. However, few studies have found definitive evidence that the quasars lie at cosmic density peaks, in clear contrast with theory predictions. Here we present new evidence that the radio-loud quasar SDSS J0836+0054 at z = 5.8 could be part of a surprisingly rich structure of galaxies. This conclusion is reached by combining a number of findings previously reported in the literature. Bosman et al. (2020) obtained the redshifts of three companion galaxies, confirming an overdensity of i775-dropouts found by Zheng et al. (2006). By comparing this structure with those found near other quasars and large overdense regions in the field at z ∼ 6-7, we show that the SDSS J0836+0054 field is among the densest structures known at these redshifts. One of the spectroscopic companions is a very massive star-forming galaxy (log10(M*/M⊙) = 10.3 (+0.3/-0.2)) based on its unambiguous detection in a Spitzer 3.6 µm image. This suggests that the quasar field hosts not one, but at least two rare, massive dark matter halos (log10(Mh/M⊙) ≳ 12), corresponding to a galaxy overdensity of at least 20. We discuss the properties of the young radio source. We conclude that the environment of SDSS J0836+0054 resembles, at least qualitatively, the type of conditions that may have spurred the direct collapse of a massive black hole seed according to recent theory.
10.3847/1538-4357/ac448c
[ "https://arxiv.org/pdf/2111.05446v2.pdf" ]
243,938,504
2111.05446
9acf0549eac0c8a278c62048b2bc7155c7c2e0fb
Conditions for direct black hole seed collapse near a radio-loud quasar 1 Gyr after the Big Bang

Roderik A. Overzier

Observatório Nacional/MCTIC, Rua General José Cristino 77, 20921-400, São Cristóvão, Rio de Janeiro, RJ, Brazil
Institute of Astronomy, Geophysics and Atmospheric Sciences, University of São Paulo, Rua do Matão 1226, 05508-090, São Paulo, SP, Brazil

Draft version, December 20, 2021. Typeset using LaTeX twocolumn style in AASTeX631.

Keywords: Radio loud quasars (1349); Galaxy environments (2029); Lyman-alpha galaxies (978); High-redshift galaxies (734); Reionization (1383); Black holes (162)

ABSTRACT

Observations of luminous quasars and their supermassive black holes at z ≳ 6 suggest that they formed at dense matter peaks in the early universe. However, few studies have found definitive evidence that the quasars lie at cosmic density peaks, in clear contrast with theory predictions. Here we present new evidence that the radio-loud quasar SDSS J0836+0054 at z = 5.8 could be part of a surprisingly rich structure of galaxies. This conclusion is reached by combining a number of findings previously reported in the literature. Bosman et al. (2020) obtained the redshifts of three companion galaxies, confirming an overdensity of i775-dropouts found by Zheng et al. (2006). By comparing this structure with those found near other quasars and large overdense regions in the field at z ∼ 6-7, we show that the SDSS J0836+0054 field is among the densest structures known at these redshifts. One of the spectroscopic companions is a very massive star-forming galaxy (log10(M*/M⊙) = 10.3 (+0.3/-0.2)) based on its unambiguous detection in a Spitzer 3.6 µm image. This suggests that the quasar field hosts not one, but at least two rare, massive dark matter halos (log10(Mh/M⊙) ≳ 12), corresponding to a galaxy overdensity of at least 20. We discuss the properties of the young radio source.
We conclude that the environment of SDSS J0836+0054 resembles, at least qualitatively, the type of conditions that may have spurred the direct collapse of a massive black hole seed according to recent theory.

INTRODUCTION

Since their discovery, the study of luminous quasars within the first billion years of cosmic time has intersected with numerous areas in astrophysics, from black holes to cosmology (Fan et al. 2000). Although they are the most luminous hydrogen-ionizing photon producing sources at their epoch, their relatively low number density implies that they probably did not play a major role in driving the overall reionization of the universe (Bouwens et al. 2015). Ironically, however, their rest-frame UV spectra offer some of the most direct ways we currently have of assessing the rapidly evolving neutral fraction of the intergalactic medium (IGM) (e.g., Mortlock et al. 2011; Becker et al. 2015; Eilers et al. 2017; Davies et al. 2018; Wang et al. 2020; Yang et al. 2020; Bosman et al. 2020). Estimates of the masses of their supermassive black holes (SMBHs) using established techniques indicate that at least some of these objects were able to accumulate masses similar to the SMBH at the center of the local giant elliptical M87 within less than 1 Gyr of the Big Bang, defying the simplest maximum accretion scenarios, especially at the highest redshifts (e.g., Bañados et al. 2018b). One of the main uncertainties at the current moment relates to the population of "seed" black holes from which they originated (e.g., Bromm & Yoshida 2011; Greene 2012; Volonteri 2012; Mezcua 2017; Inayoshi et al. 2020). Although in principle a ∼10^9 M⊙ SMBH could form from a 100 M⊙ seed accreting at the Eddington rate for under 1 Gyr, it is not clear whether such high accretion rates could be sustained for such a long time (or shorter periods of super-Eddington accretion), especially in the presence of feedback in the forming galaxy (Inayoshi et al. 2016; Smith et al.
2018; Regan et al. 2019). One possible solution includes mergers between light black hole seeds in clusters of stellar remnants (Kroupa et al. 2020) or at the centers of merging (mini)halos, but long black hole merger time scales and the effect of merger recoil kicks could pose a problem for such a scenario (see Piana et al. 2021). An attractive, alternative scenario starts with the formation of a much more massive seed of ∼10^4-5 M⊙, a so-called direct collapse black hole (DCBH) seed (e.g., Umemura et al. 1993; Eisenstein & Loeb 1995; Koushiappas et al. 2004; Volonteri & Rees 2005; Lodato & Natarajan 2006). In order to accomplish this, models and simulations typically require some kind of mechanism that prevents cooling and fragmentation and stimulates the formation of the DCBH seed, through an enhanced local Lyman-Werner photon flux or through the dynamical heating of merging (mini)halos. These mechanisms, which could operate alone or in tandem, may prevent halos from forming stars at least until reaching the atomic cooling stage, at which point a runaway collapse to a massive seed occurs, either directly or by forming first a short-lived, supermassive star (e.g., Agarwal et al. 2016; Habouzit et al. 2016; Latif et al. 2018; Wise et al. 2019; Lupi et al. 2021; Piana et al. 2021).

Based on their low co-moving space density, high accretion power and large black hole masses, it is frequently argued that the first luminous quasars must have formed in very dense regions of the cosmic web (e.g., Fan et al. 2000; Springel et al. 2005; Li et al. 2007; Sijacki et al. 2009). It has thus been expected that these luminous quasars will be situated in or near large overdensities of dark matter, gas and galaxies that, in principle, we should be able to detect. Precisely this expectation has been tested for the last two decades in many different and complementary ways, but it has been challenging to interpret that work in a straightforward and clear-cut way (see, e.g., Overzier et al.
2009a; Overzier 2016; Mazzucchelli et al. 2017; Decarli et al. 2017; Mignoli et al. 2020, and references therein for a variety of observational findings related to the environments of quasars). These results are even more puzzling in the light of several significant large-scale overdensities that have been detected at similar redshifts with relative ease in fields not known to host luminous quasars (e.g., Ouchi et al. 2005; Toshikawa et al. 2012; Ota et al. 2018; Calvi et al. 2019). Although obscuration, variability or duty cycle can easily explain the real or apparent absence of luminous quasars in any of these structures, there are no obvious mechanisms that could temporarily hide any large-scale structure around the known quasars, if present. This is a rather strong, albeit indirect, argument against the explanation that the structures associated with the quasars are simply being missed because we lack observational sensitivity at these redshifts.

Another common explanation given for the apparent absence of galaxies clustered near quasars is that strong radiative feedback has raised the temperature floor of the surrounding IGM, thereby preventing the condensation of gas into galaxies (e.g., Utsumi et al. 2010; Bañados et al. 2013; Mazzucchelli et al. 2017). In this case, the quasars would still be surrounded by large matter overdensities as expected from theory, but those would be devoid of galaxies within the quasar ionization cone. In this scenario, however, it seems challenging to suppress any structure on scales out to many Mpc, uniformly around the quasar, and throughout its active and inactive phases, as the observations appear to require. Besides, galaxies have been found within the ionization cones of some quasars (e.g., Mignoli et al. 2020; Bosman et al. 2020). The apparent lack of clearly identifiable overdense environments around most quasars at z ∼ 6 studied to date is even more puzzling given current theories for massive seed formation.
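The seed-growth argument from the Introduction (a black hole accreting at the Eddington limit grows exponentially on the Salpeter e-folding time) can be checked in a few lines. The radiative efficiency ε = 0.1 below is an assumed, conventional value, not a number taken from this paper:

```python
import math

# Eddington-limited growth: M(t) = M0 * exp[(1 - eps)/eps * t/t_Edd],
# where t_Edd = sigma_T * c / (4 pi G m_p) ~ 450 Myr (the Salpeter time).
SIGMA_T = 6.6524e-29   # Thomson cross-section [m^2]
C       = 2.998e8      # speed of light [m/s]
G       = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
M_P     = 1.6726e-27   # proton mass [kg]
MYR     = 3.156e13     # seconds per Myr

t_edd = SIGMA_T * C / (4 * math.pi * G * M_P) / MYR   # ~450 Myr

def growth_time(m_seed, m_final, eps=0.1):
    """Time [Myr] to grow from m_seed to m_final at the Eddington rate."""
    return t_edd * eps / (1 - eps) * math.log(m_final / m_seed)

t_light = growth_time(1e2, 1e9)   # light seed: ~800 Myr, i.e. "under 1 Gyr"
t_dcbh  = growth_time(1e5, 1e9)   # DCBH seed: ~460 Myr, "a few hundred Myr"
```

This reproduces both statements in the text: a 100 M⊙ seed needs essentially the entire available cosmic time, while a 10^5 M⊙ DCBH seed reaches 10^9 M⊙ in a few hundred million years.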
As explained above, these theories require large matter overdensities that could (1) provide a large Lyman-Werner photon background, and (2) enhanced merger rates of small dark matter halos that stimulate the collapse of a massive gas cloud into a DCBH seed (e.g., Wise et al. 2019; Regan et al. 2020; Lupi et al. 2021). Once the massive seed of 10^3-10^5 M⊙ has formed, the high merger and accretion rates in the overdense environment would be sufficient to achieve the 4-6 orders of magnitude growth necessary to form the SMBH in just a few hundred million years. Without invoking the DCBH scenario, it is a real challenge to explain the very large SMBH masses in at least some of the quasars at z ≳ 6.

Quasar SDSS J0836+0054 at z = 5.8 (denoted J0836 throughout this paper) is among a handful of objects for which a large potential overdensity of galaxies was found in early observations using the HST (Zheng et al. 2006, Z06). However suggestive this result, direct spectroscopic evidence for an overdensity associated with J0836 was still lacking. Recently, however, three galaxies having redshifts near that of the quasar were found as part of the study of Bosman et al. (2020, B20), confirming close association with the quasar for two objects from the Z06 sample, plus one new source. As we show in this paper, one of the companion galaxies is very bright at 3.6 µm, indicative of a very high stellar mass. Based on these new results, we conclude that the reality of a structure of galaxies associated with J0836 has now been established. We compare the structure around J0836 with model expectations and recent literature results, including another recently confirmed large structure associated with quasar SDSS J1030+0524 at z = 6.3 (Mignoli et al. 2020) as well as several field overdensities at z ∼ 6-7. From these comparisons, we conclude that J0836 lies in a rather exceptional environment that at least qualitatively resembles the type of overdense environments required by the DCBH scenario.

We set the cosmological parameters to H0 = 69.6 km s^-1 Mpc^-1, ΩM = 0.286, ΩΛ = 0.714, which gives a spatial scale of 5.95 physical kpc arcsec^-1 and an age of 0.98 Gyr at z = 5.8 (Bennett et al. 2014). We will use the prefix 'p' to indicate physical scales (e.g., pkpc) and the prefix 'c' to indicate co-moving scales (e.g., cMpc). We use AB magnitudes.

Figure 1. HST/ACS (F775W, F850LP) false color image of the region around SDSS J0836+0054 at z = 5.8, using the i775 image for the blue channel, (i775+z850)/2 as the green channel and z850 as the red channel. The quasar is marked by the white polygon. Photometric dropouts from Z06 and spectroscopic Lyα emitting objects from B20 (B-B20, A-Z06/A-B20 and B-Z06/C-B20) are indicated by the green circles and cyan squares, respectively. Redshifts from B20 have been indicated. The large yellow circle referred to in this study has a radius of 1.26 arcmin (radius of 0.45 pMpc). Scale bars of length 1 arcmin and 1 cMpc are shown in the bottom-left.

The quasar J0836 was discovered as part of the Sloan Digital Sky Survey (SDSS) quasar survey (Fan et al. 2001), and is one of the brightest quasars at z > 5.7 in the optical (Bañados et al. 2016), radio and X-rays (Wolf et al. 2021). Various redshift determinations in the range z ≈ 5.77-5.83 exist (Fan et al. 2001; Stern et al. 2003; Freudling et al. 2003; Kurk et al. 2007; Jiang et al. 2007; Shen et al. 2019), but we follow B20 who determined z = 5.804 ± 0.002 based on the O I 1305 Å and C II 1335 Å emission lines. J0836 is also one of the most distant known radio-loud quasars: it has a flux density of 1.1 mJy at 1.4 GHz, a 5 GHz (rest-frame) luminosity of 1.1 × 10^25 W Hz^-1 sr^-1, a radio spectral index of -0.9 between 1.5 and 5 GHz, and is marginally resolved at VLBI resolutions of ∼10 mas (∼40 pc) (Petric et al. 2003; Frey et al. 2003, 2005). Wolf et al.
(2021) presented new observations at low radio frequency, showing that the radio spectral index flattens significantly below 1.4 GHz, consistent with a peaked radio spectrum. The compact radio size, steep spectral index and the evidence for a peaked spectrum may suggest that this is a young radio source in which the jets are still confined to (sub-)kpc scales. J0836 has an estimated black hole mass of log10(MBH/M⊙) = 9.48 ± 0.55 (see Sect. 3.2 below).

HST/ACS and Spitzer/IRAC images

We use images obtained with the HST/ACS in the filters F775W (i775; 4676 s) and F850LP (z850; 10,778 s). These data are described in detail in Z06. For the analysis in this paper, we retrieved the pipeline-reduced mosaics from the Hubble Legacy Archive, and performed new measurements using Source Extractor version 2.25.0 (Bertin & Arnouts 1996). We applied a Galactic extinction correction of 0.1 and 0.07 mag to the i775 and z850 magnitudes, respectively. We also use Spitzer/IRAC 3.6 µm observations with a total exposure time of 17 ks taken from Overzier et al. (2009b).

Galaxy samples and redshifts

The samples discussed in this paper are the following. We use the sample of photometrically selected i775-dropouts detected with HST/ACS from Z06. The dropout galaxies have z850 < 26.5 mag and colors 1.3 < i775-z850 < 2.0, consistent with z ∼ 5.8. There are seven objects in this sample (labeled A-G). Below, when referring to the Z06 objects we will use these IDs together with the suffix "Z06". One of the dropouts (C-Z06) has two close companions in projection (labeled C2 and C3) that are fainter than the imposed limit in z850. Ajiki et al. (2006) carried out broad- and narrowband observations using the Subaru Suprime-Cam with filters B, V, r', i', z' and the narrowband filter NB816, sensitive to Lyα at z = 5.65-5.75. They found one strong Lyα emitting candidate at z ≈ 5.7 about 1 arcsec (or 6 kpc) from object B-Z06. We will refer to this object as B'-A06.

Table 1 notes: (a) Objects A-Z06 to G-Z06 from Zheng et al. (2006) and object B'-A06 from Ajiki et al. (2006). (b) From Bosman et al. (2020). (†) During this work a mistake was found in the coordinates for object G used in Table 1 and Figure 1 of Z06; here we give the correct coordinates. (††) Redshift difference of spectroscopic galaxies and J0836, ∆z = z_obj - z_QSO.

The remainder of the Z06 objects were not detected in B, V, r', which is consistent with them being genuine i-dropouts and not foreground interlopers. Finally, and key to the results presented in this paper, we use three spectroscopically confirmed Lyα emitting objects from B20. Their selection was also based on the Subaru data from Ajiki et al. (2006). They used color selection criteria optimized to the redshift range 5.65 ≤ z ≤ 5.90, and supplemented spectroscopic targets with a photometric redshift selection. Of 19 candidates found in the region overlapping with the HST data, 3 corresponded to objects from Z06: A-Z06, B-Z06 and F-Z06. 11 of these were targeted with Keck/DEIMOS, including A-Z06 and B-Z06. When referring to objects from the spectroscopic sample, we will use the suffix "B20". Redshifts were obtained for three objects: "Aerith A" at z = 5.856 ± 0.003 (A-B20, which is identical to A-Z06), "Aerith B" at z = 5.793 ± 0.003 (B-B20, which has no counterpart in Z06 because of its relatively small i775-z850 color) and "Aerith C" at z = 5.726 ± 0.003 (C-B20, which is identical to B-Z06). Object A-Z06/A-B20 is also detected in the NB816 narrowband, which is interpreted as continuum emission blueward of Lyα. Object B-B20 is not detected in NB816. The redshift of B-Z06/C-B20 places the wavelength of its Lyα emission in the NB816. A relatively bright object (B'-A06) has indeed been detected, as first reported by Ajiki et al. (2006). However, as explained in B20, the redshift of B-Z06/C-B20 was obtained at the location of its z'-band continuum, which is offset from the NB816 source B'-A06 by about 1 arcsec.
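The Z06 dropout selection described above reduces to a magnitude cut plus a color cut; a minimal sketch (the catalog values below are purely illustrative, not measurements from the paper):

```python
# Z06 i775-dropout selection: z850 < 26.5 mag and 1.3 < i775 - z850 < 2.0.
def is_i_dropout(i775, z850):
    """True if a source passes the Z06 magnitude and color cuts."""
    return z850 < 26.5 and 1.3 < (i775 - z850) < 2.0

catalog = {                # name: (i775, z850); illustrative values only
    "obj1": (27.4, 25.5),  # i-z = 1.9 -> dropout
    "obj2": (27.5, 26.0),  # i-z = 1.5 -> dropout
    "obj3": (26.3, 25.8),  # i-z = 0.5 -> too blue, likely foreground
    "obj4": (28.9, 27.0),  # z850 too faint
}
dropouts = [n for n, (i, z) in catalog.items() if is_i_dropout(i, z)]
```

The red i775-z850 color traces the Lyα break redshifted between the two filters at z ∼ 5.8, which is why sources much bluer than the cut are treated as probable foreground interlopers.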
It is thus possible that there are two galaxies at this location, separated by about 6.3 kpc if at the same redshift. B20 report no other objects with redshifts close to that of J0836, while Meyer et al. (2020) report on a fourth source at z = 5.284 that is not relevant to this paper. Details on the samples are summarized in Table 1.

ANALYSIS AND RESULTS

Basic structure

In Fig. 1 we show a false color image of the archival HST/ACS i775 and z850 bands of the J0836 field. The quasar itself is marked by a white polygon. The i775-dropouts from Z06 are marked by green circles, and the spectroscopically confirmed objects from B20 by cyan squares. The work of B20 is significant because it has confirmed, for the first time, that 2 out of the 7 objects from Z06 have redshifts close to that of the quasar (much closer than one would expect for a randomly distributed sample of i775-dropouts, see below). Moreover, they also found an additional object (B-B20) that lies closer to the quasar redshift than the other two. The proper distances along and perpendicular to the line of sight (l.o.s.) to the quasar (R_par, R_perp) are (3.64, 0.28) pMpc for A-Z06/A-B20, (-0.67, 0.33) pMpc for B-B20 and (-5.03, 0.44) pMpc for B-Z06/C-B20, where a minus sign indicates objects in the foreground of the quasar. As further illustrated by Fig. 2, the three objects lie along a narrow cylinder (radius of 0.5 pMpc) oriented along the line of sight with a total length of about 8.7 pMpc (corresponding to the maximum redshift difference of ∆z = 0.13), with the quasar about mid-way. The length of this cylinder is on the larger side of the distribution of cosmic filament sizes found in cosmological simulations (Galárraga-Espinosa et al. 2020). However, we will refrain from calling this structure a filament because of the relatively small number of redshifts it is based on.

Figure 2. Locations of the spectroscopically identified galaxies with respect to the quasar in proper physical coordinates.
The Z-axis was chosen for the direction along the line of sight, and the X- and Y-axes were chosen to indicate the directions along Right Ascension and Declination, respectively. The structure can be approximated by a narrow cylinder elongated along the line of sight.

When this cylinder is seen face-on, as in the case of the HST observations shown in Fig. 1, the structure appears highly compact. Future data may show that the true three-dimensional structure around J0836 is very different from the simple 'filament' drawn in Fig. 2.

3.2. The quasar

Kurk et al. (2007) estimated a black hole mass of log10(MBH/M⊙) = 9.4 ± 0.1 based on the Mg ii line. We have updated their estimate using newer measurements and calibrations. We use the black hole mass calibration from Vestergaard & Osmer (2009) together with the FWHM(Mg ii) = 3600 ± 300 km s^-1 measurement of Kurk et al. (2007).

A dynamical mass estimate for the quasar host galaxy does not yet exist, but an order of magnitude estimate can be made by taking the results for a sample of z ∼ 6 quasars from Neeleman et al. (2021) (see also Venemans et al. 2016). They find that, on average, the quasar host galaxy dynamical masses are a factor of 7 lower than expected for their black hole mass, assuming the Kormendy & Ho (2013) MBH-M* relation for local bulges. With an estimated black hole mass of log10(MBH/M⊙) ∼ 9.5, this would imply a stellar (bulge) mass of log10(M*/M⊙) ∼ 10.8.

Figure 3. HST/ACS image stamps in i775 (left) and z850 (right) of the three sources confirmed by B20, with A-Z06/A-B20 in the top, B-B20 (no Z06 counterpart) in the middle, and B-Z06/C-B20 in the bottom panels. The boxes measure 2 arcsec × 2 arcsec. The image pixel size is 0.04 arcsec and the data are shown unsmoothed. In the bottom panels, the dashed circle marks the narrow-band excess object B'-A06 about 1 arcsec east of B-Z06/C-B20, detected in the Subaru NB816 image by Ajiki et al. (2006). See text for details.
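The black-hole-to-host chain of Sect. 3.2 can be sketched numerically. The Vestergaard & Osmer (2009) Mg ii calibration also needs a 3000 Å continuum luminosity, which is not quoted in the text; λLλ(3000 Å) = 10^47 erg s^-1 below is an assumed, illustrative value for a quasar this luminous, and the 5 × 10^-3 bulge ratio is a round approximation to the Kormendy & Ho (2013) relation:

```python
import math

def log_mbh_mgii(fwhm_kms, lamL3000):
    """Vestergaard & Osmer (2009) Mg ii calibration for log10(MBH/Msun)."""
    return (6.86 + 2.0 * math.log10(fwhm_kms / 1e3)
            + 0.5 * math.log10(lamL3000 / 1e44))

# FWHM(Mg ii) = 3600 km/s (Kurk et al. 2007); lamL3000 is an assumed value.
log_mbh = log_mbh_mgii(3600.0, 1.0e47)          # ~9.5, cf. the quoted 9.48 +/- 0.55

# Host bulge mass: local relation MBH ~ 5e-3 * Mbulge (Kormendy & Ho 2013),
# with z~6 hosts a factor ~7 less massive at fixed MBH (Neeleman et al. 2021).
log_mbulge_local = log_mbh - math.log10(5e-3)    # ~11.8 for a local bulge
log_mbulge_z6 = log_mbulge_local - math.log10(7.0)  # ~10.9, close to the quoted ~10.8
```

The small remaining offset from the quoted ∼10.8 simply reflects the rounded ratio and assumed luminosity used here.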
If we assume a typical stellar-to-dark-matter mass ratio of ∼30-50 × 10^-3, appropriate for the high-mass end of the halo mass function at z ∼ 6 (Stefanon et al. 2021), this would imply a quasar host halo mass of log10(Mh/M⊙) = 12-12.3. Although the uncertainty of this estimate is large, it is consistent with recent derivations based on [C ii] 158 µm velocity widths from Shimasaku & Izumi (2019) and proximity zone measurements from Chen et al. (2021).

Massive companion galaxy A

Overzier et al. (2009b) presented Spitzer/IRAC observations of the J0836 structure, and showed that object A-Z06/A-B20 is very bright at 3.6 µm, with m3.6 = 23.78 ± 0.09 mag. Assuming that its redshift was indeed z ∼ 6, they concluded that it is among the brightest and most massive i775-dropout galaxies. Here we will revisit the analysis of A-Z06/A-B20. The z850 and 3.6 µm images are shown in Fig. 4.

Using the redshift firmly established by B20, we can now make a much more secure determination of the properties of A-Z06/A-B20 based on the available photometry. Although the z850-[3.6] color is sensitive to age, dust, SFH and metallicity (and redshift), the 3.6 µm flux, probing rest-frame wavelengths of around 5000 Å, is an accurate (albeit relative) gauge of stellar mass at these redshifts (e.g., see Overzier et al. 2009b; Duncan et al. 2014; Bhatawdekar et al. 2019; Stefanon et al. 2021). We used the BagPipes code (Carnall et al. 2018) to fit the z850 and 3.6 µm photometry. The i775-band was excluded because it does not add any information about the intrinsic SED. The redshift was fixed to z = 5.856. We fit constant and exponential star formation histories with a Kroupa & Boily (2002) IMF. We set the maximum age to the age of the universe at z = 5.8. Because galaxies at z ∼ 6 are known to have a relatively strong contribution from emission lines (mainly [O iii] 4959, 5007 Å) to their 3.6 µm broadband flux, we used the option to include nebular emission with an ionization parameter U = 0.01.
Assuming a constant star formation history and attenuation by dust according to the Calzetti et al. (2000) law, we find a stellar mass log10(M*/M⊙) = 10.34 (+0.30/-0.14) with (without) the nebular emission, with little change to the other parameters. Below we will use our first estimate of log10(M*/M⊙) = 10.34 (+0.30/-0.16) for the stellar mass of A-Z06/A-B20, as the other estimates are consistent within the errors. This stellar mass is two orders of magnitude higher than the minimum halo mass that was inferred by B20.

We can compare these results with recent results for the stellar mass function and stellar-to-halo mass ratios of galaxies at z ∼ 5.8 in the field from Stefanon et al. (2021). The stellar mass of A-Z06/A-B20 places it in the highest mass bin considered in that paper (log10(M*/M⊙) = 10.4 ± 0.2), taking into account a division by a factor of 1.5 to convert from their Salpeter to our Kroupa IMF. If we use their best-fit log10(M*/M⊙)-MUV relation with MUV = -21.3 mag measured for A-Z06/A-B20, we would naively expect log10(M*/M⊙) ≈ 9.3, a full 1 dex below the actual mass measurement and thus indicative of the significant obscuration in this source.

The stellar mass estimate allows us to make an estimate of the mass of the typical dark matter halos that host objects like A-Z06/A-B20. Using an abundance-matching method, Stefanon et al. (2021) provide an estimate of the typical stellar-to-halo mass ratio for objects of this mass of ∼30-50 × 10^-3, implying that A-Z06/A-B20 is hosted by a dark matter halo of log10(Mh/M⊙) ∼ 12. The J0836 field thus hosts at least one other massive dark matter halo, besides the quasar itself. Halos of this mass correspond to a virial radius of r200 ∼ 50 kpc, so the halos are really two individual halos given that their separation is about 3.6 pMpc. It is worth pointing out that according to theoretical predictions (Piana et al. 2021), the black hole occupation fraction of halos of this mass is around 1.
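The abundance-matching step above is simple arithmetic; a sketch using the ∼0.03-0.05 stellar-to-halo mass ratio range quoted from Stefanon et al. (2021):

```python
import math

def log_halo_mass(log_mstar, ratio):
    """log10 halo mass from log10 stellar mass and a stellar-to-halo mass ratio."""
    return log_mstar - math.log10(ratio)

# Companion A-Z06/A-B20, log10(M*/Msun) = 10.34:
mh_A = [log_halo_mass(10.34, r) for r in (0.05, 0.03)]   # ~[11.6, 11.9]
# Quasar host, log10(M*/Msun) ~ 10.8 (inferred in Sect. 3.2):
mh_Q = [log_halo_mass(10.8, r) for r in (0.05, 0.03)]    # ~[12.1, 12.3]
```

For the companion the central mass gives ∼11.6-11.9; adding the +0.3 dex stellar-mass uncertainty extends this to ∼12.2, consistent with the quoted log10(Mh/M⊙) ∼ 12. The quasar host range reproduces the quoted 12-12.3.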
It is thus possible that A-Z06/A-B20 also hosts a SMBH, but there is no evidence that it is currently active. Although there are no structural measurements for the quasar host galaxy, for object A-Z06/A-B20 we measured a seeing-corrected half-light radius of 0.14 arcsec in the z850-band. This corresponds to a physical half-light radius of 0.83 kpc, giving a stellar mass surface density of Σ = M*/(2π r50^2) = 10^9.8 M⊙ kpc^-2. This should be considered an upper limit because the optical size is likely larger than that measured in the UV. These measurements place object A-Z06/A-B20 near the transition region between massive (compact) quiescent and star-forming galaxies at z ∼ 4 (Straatman et al. 2015), indicating that A-Z06/A-B20 may be near the end of its active star-forming phase.

Other companions

We checked the IRAC image for the other confirmed companions, but object B-B20 was not detected and object B-Z06/C-B20 (and thus also B'-A06) lies too close to a very bright foreground object to attempt a flux measurement. A direct stellar mass estimate can thus not be made. However, assuming that, unlike object A-Z06/A-B20, these objects do follow the log10(M*/M⊙)-MUV relation of Stefanon et al. (2021) (reddening by dust can be neglected at these magnitudes; Bouwens et al. 2007), the stellar masses would be a few times 10^9 M⊙ for B-B20 and B-Z06/C-B20 and a few times 10^8 M⊙ for B'-A06. The halo masses would be a few times 10^11 and 10^10 M⊙, respectively, based on the abundance matching results from Stefanon et al. (2021). The HST images of the objects are shown in Fig. 3 and basic measurements are given in Table 2. In Sect. 4 below we will analyze the J0836 structure in terms of the probabilities of finding objects like the massive companion A-Z06/A-B20 and the more typical star-forming galaxies B-Z06/C-B20 and B-B20.

OVERDENSITY ANALYSIS

Several past studies have evaluated the environment of J0836.
Z06 found 7 photometrically-selected i775-dropouts brighter than z850 ∼ 26.5 mag in the central 5 arcmin² region around J0836 (see Fig. 1). These 7 objects represent a factor 6 overdensity based on a comparison with the much larger HST Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004) that was observed to a similar depth over an area of 316 arcmin². Using a large i775-dropout sample extracted from GOODS by Bouwens et al. (2006), Z06 found that no single 5 arcmin² region (a circle with a radius of 1.26 arcmin) randomly drawn from GOODS contained as many as the 7 objects encountered in the J0836 field, with 4 being the highest. Furthermore, Overzier et al. (2009a) constructed a 4.4° × 4.4° mock survey (≈ 220× the area observed by GOODS) of galaxies at z ∼ 6 and showed that (surface) overdensities as found near J0836 are indeed very rare. However, in this large simulation even rarer regions were found due to extremely dense regions mostly associated with (forming) galaxy clusters. Nevertheless, a definitive conclusion about the nature of the J0836 overdensity was not possible due to the lack of spectroscopic redshifts. With the new spectroscopic data from B20, we can now quantify the nature of the overdensity associated with J0836 much more precisely. It is interesting that while only 2 of the 7 objects from Z06 were targeted spectroscopically by B20, they represent 2/3 of the spectroscopically confirmed objects near J0836. These two objects are also the brightest among the Z06 sample (z850-band magnitudes of 25.5 and 26 mag). The typical surface density and redshift distribution of i775-dropouts selected with HST is well known. According to Bouwens et al. (2007), the cumulative surface density is 0.328 ± 0.044 arcmin⁻² to z850 = 26.5 mag, and based on their redshift distribution the (random) probability of finding a dropout in the redshift range z = 5.73 − 5.86 (∆z = 0.13) is 0.16.
In this redshift range and in an area of 5 arcmin² we would thus naively expect to find 0.3 ± 0.04 objects. With two Z06 objects spectroscopically confirmed in this region, the three-dimensional overdensity thus appears significant as well (δ = n/n̄ − 1 ≈ 6). Here we did not take into account the third confirmed object from B20 because it did not pass the i775-dropout selection criteria used in Z06. In the analysis below, the terms dropout and Lyman Break Galaxy (LBG) are used interchangeably to indicate continuum-selected star-forming galaxies. We will use the term Lyα emitter (LAE) for the subset of star-forming galaxies with a prominent Lyα emission line having a minimum rest-frame equivalent width (EW) of ∼20 Å. Using this criterion, at least 2 of the 3 B20 objects would classify as LAEs. We assume the redshift distribution of Bouwens et al. (2007) for i775-dropouts and that of Inami et al. (2017) as presented by Mignoli et al. (2020) for LAEs. According to these distributions the probabilities of finding a single LBG and LAE in the redshift range z = 5.73 − 5.86 are 0.16 and 0.073, respectively. Of the 7 Z06 LBGs, two spectra were taken and both objects fell into this range, giving a probability of 0.026 based on the binomial distribution (2 out of 2 with P = 0.16). Among the spectroscopic targets of B20, four objects with Lyα were found, three of which fell in the redshift range ∆z = 0.13 near J0836. The probability of this occurrence at random is 0.0015, again using binomial statistics (3 out of 4 with P = 0.073). These small (but non-zero) probabilities suggest that the presence of the quasar J0836 has some effect on the clustering observed.

4.1. Clustering of two massive halos

In Section 3.3 we showed that one of the J0836 companions is an extremely massive galaxy, and the close clustering between the quasar and this object can provide additional strong constraints on the overdensity in this region.
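The counting statistics quoted above are easily reproduced; the expectation of ∼0.3 dropouts follows from the surface density, area, and redshift-range probability, and the confirmation probabilities are binomial.

```python
from math import comb

# Reproduction of the counting statistics quoted above: the naive expectation
# of ~0.3 dropouts in the 5 arcmin^2 region, and the binomial probabilities
# of the spectroscopic confirmations.
def binom_pmf(k, n, p):
    """Probability of exactly k successes out of n trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n_exp = 0.328 * 5 * 0.16          # surface density x area x P(z in range)
delta = 2 / n_exp - 1             # 3D overdensity for 2 confirmed objects
p_lbg = binom_pmf(2, 2, 0.16)     # 2 of 2 Z06 LBGs in dz = 0.13 -> 0.026
p_lae = binom_pmf(3, 4, 0.073)    # 3 of 4 B20 Lya objects -> 0.0015
print(n_exp, delta, p_lbg, p_lae)
```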
Above we showed that the massive companion A-Z06/A-B20 has a stellar mass of around log10(M*/M⊙) = 10.34^{+0.30}_{−0.16} and an inferred halo mass of log10(Mh/M⊙) ∼ 12. Based on the analysis of several hundreds of square arcminutes with deep HST and Spitzer/IRAC coverage, Stefanon et al. (2021) show how rare such massive objects are: the number density of i775-dropouts with log10(M*/M⊙) = 10.4 ± 0.2 is 0.06^{+0.14}_{−0.05} × 10⁻⁴ dex⁻¹ Mpc⁻³.

Figure: HST/ACS z850 and Spitzer/IRAC 3.6 µm images of the object A-Z06/A-B20 discussed in Sect. 3.3. The boxes measure 2″×2″ as in Fig. 3. The HST object is clearly detected at 3.6 µm. The (deblended) object flux was extracted by simultaneously fitting point sources at the locations of the four objects seen near the central part of the image (see Overzier et al. 2009b).

The volume of the J0836 structure is approximately 1724 cMpc³, estimated by taking a cylinder with a radius of 3.1 cMpc and a length of 58.5 cMpc (corresponding to ∆z = 0.13). In this volume we thus expect to find 0.002 − 0.035 of such massive i775-dropouts at random, while at least 1 was found (object A-Z06/A-B20), not counting the quasar. The discovery of such a rare massive object as part of the J0836 structure is thus additional evidence that the environment of the quasar is exceptional. We can try to estimate how exceptional the J0836 environment is based on the clustering statistics of quasars and galaxies.
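The random expectation of 0.002−0.035 massive dropouts can be checked directly. Note that the number density from Stefanon et al. (2021) is quoted per dex; following the numbers in the text, it is multiplied directly by the volume here, so this is an order-of-magnitude check rather than an exact reproduction.

```python
import math

# Random expectation for massive i775-dropouts in the J0836 volume. The
# number density from Stefanon et al. (2021) is quoted per dex; following
# the text we multiply it directly by the volume (order-of-magnitude check).
vol = math.pi * 3.1 ** 2 * 58.5       # cylinder volume, ~1.7e3 cMpc^3
n_lo, n_hi = 0.01e-4, 0.20e-4         # 1-sigma range, cMpc^-3 dex^-1
n_min, n_max = n_lo * vol, n_hi * vol
print(n_min, n_max)                   # ~0.002-0.035 objects expected
```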
The typical overdensity of galaxies found in a biased region depends on the amplitude of the quasar-galaxy cross-correlation function, ξ_QG(r) = (r/r_0^QG)^−γ, and the effective volume V_eff:

δ_g = (1/V_eff) ∫_V ξ_QG(r) dV = (1/V_eff) ∫_0^R 2πR dR ∫_0^Z dZ [ (R² + Z²)^{1/2} / r_0^QG ]^{−γ}    (1)

The quasar-galaxy cross-correlation length has not been measured directly for the redshift and samples of our interest, but assuming that the clustering of both the quasars and the galaxies is described by power laws with similar slopes, their cross-correlation length is given by r_0^QG = (r_0^GG r_0^QQ)^{1/2}. Using the results from galaxy clustering measurements together with a halo occupation distribution model, Harikane et al. (2021) find that dark matter halos with average masses in the range log10(Mh/M⊙) ≈ 11.8 − 12.2 at z ≈ 5.9 correspond to bias parameters of 7.7 − 10.1. A similar result is obtained using the Colossus cosmology framework (Diemer 2018), which gives a bias of 8.7 for halos of log10(Mh/M⊙) = 12 (peak height of 5.5). The auto-correlation length of z ∼ 6 galaxies with a similar bias parameter or halo mass was measured by Khostovan et al. (2019). They found r_0 = 15.56^{+2.51}_{−2.71} h⁻¹ Mpc (r_0 = 16.16^{+3.80}_{−3.52} h⁻¹ Mpc) for LAEs with halo mass log10(Mh/M⊙) ≈ 12.3 ± 0.2 (log10(Mh/M⊙) ≈ 12.4 ± 0.3) at z ≈ 5.79 (z ≈ 5.56). In the analysis below we will therefore take r_0^GG ≈ 15 h⁻¹ cMpc as an approximate value. With a quasar auto-correlation length of r_0^QQ ≈ 22.3 h⁻¹ cMpc (see García-Vergara et al. 2017), the quasar-galaxy cross-correlation length in Eq. 1 becomes r_0^QG ≈ 26.1 cMpc. If we assume a clustering power-law slope γ = 1.8, and a cylindrical volume with radius R = 3.1 cMpc and length Z = 58.5 cMpc, the overdensity of 10^12 M⊙ halos expected in the J0836 field (not counting the quasar) is δ_g = 23.8 (29.6 for γ = 2.0).
From this we can then estimate the absolute number of such galaxies expected:

N_g = (δ_g + 1) n̄_g V_eff    (2)

For an average field density of n̄_g = 1.1 − 3.7 × 10⁻⁵ cMpc⁻³ (appropriate for objects of halo mass of order 10^12 M⊙; Khostovan et al. 2019; Stefanon et al. 2021), we then expect to find 0.5 − 1.6 of such objects in our cylindrical volume around the quasar, matching the one object that was found. We can do a similar exercise for star-forming galaxies in more typical halos. The average co-moving abundance of dropouts with M_UV < −20.13 is n̄_g ≈ 4 × 10⁻⁴ cMpc⁻³. Assuming a typical bias of 5 and a correlation length of 5.5 h⁻¹ cMpc, appropriate for this population of relatively bright dropout galaxies in halos of around 10^11 M⊙ at z ∼ 6 (Overzier et al. 2006a; Barone-Nugent et al. 2014; Khostovan et al. 2019), the quasar-galaxy cross-correlation length in Eq. 1 would be 15.8 cMpc. In Fig. 6 we show the cumulative number of massive and typical companion galaxies expected in the J0836 field following these calculations with the red- and blue-shaded regions, respectively. We plot the number of objects as a function of the projected co-moving radius of a cylinder with a fixed length of 58.5 cMpc. The finding of 1 massive companion within 3.1 cMpc is consistent with the enhancement in clustering over the random expectation due to the presence of the massive quasar halo (upper red-shaded curve and large circle). Doing the same calculation for more typical star-forming galaxies in lower-mass halos, the number of 3 objects detected (small blue circle with arrow) is lower than that expected (upper blue-shaded region) by a factor of 2 − 3, but it is important to keep in mind that the spectroscopic sample is highly incomplete (for example, there are at least an additional 5 objects from Z06 that have not been targeted). This analysis shows that the clustering observed in the J0836 field is consistent with what we would expect from the presence of a massive quasar halo.
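Eqs. 1 and 2 can be evaluated numerically. The sketch below is not the paper's own computation: it assumes the quasar sits at the centre of the cylinder and integrates on a simple midpoint grid, and with these conventions it yields δ_g of order 10−15 rather than exactly the 23.8 quoted, illustrating how sensitive the result is to the adopted integration conventions.

```python
import numpy as np

# Midpoint-grid evaluation of Eq. 1, the mean of xi_QG = (r/r0)^-gamma over
# a cylinder centred on the quasar, followed by Eq. 2. Placing the quasar at
# the cylinder centre is an assumption not stated in the text.
def delta_g(r0=26.1, gamma=1.8, R=3.1, L=58.5, n=800):
    rr = np.linspace(0.0, R, n + 1)
    zz = np.linspace(-L / 2, L / 2, 2 * n + 1)
    Rm = 0.5 * (rr[1:] + rr[:-1])            # cell midpoints in R (> 0)
    Zm = 0.5 * (zz[1:] + zz[:-1])            # cell midpoints in Z (never 0)
    dR, dZ = rr[1] - rr[0], zz[1] - zz[0]
    Rg, Zg = np.meshgrid(Rm, Zm, indexing="ij")
    r = np.hypot(Rg, Zg)                     # distance from the quasar
    integral = np.sum(2 * np.pi * Rg * (r / r0) ** -gamma) * dR * dZ
    return integral / (np.pi * R ** 2 * L)   # divide by V_eff

dg = delta_g()
n_bar = 1.1e-5                               # cMpc^-3, massive halos
N_g = (dg + 1) * n_bar * np.pi * 3.1 ** 2 * 58.5   # Eq. 2
print(dg, N_g)
```

Even with this simpler convention, the implied number of massive companions (a few tenths) is of the same order as the 0.5−1.6 quoted in the text.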
Figure 6. Number of companion galaxies expected in a cylinder of length 58.5 cMpc as a function of its projected radius. The blue shaded regions correspond to typical galaxies in the case of no clustering (bottom) and in the case of strong cross-clustering between quasars and galaxies (top). The red hatched shaded regions correspond to massive companion galaxies in the case of no clustering (bottom) and in the case of strong cross-clustering between quasars and galaxies (top). Vertical dotted lines mark radii of 3.1 and 4.1 cMpc, corresponding to the fiducial cylinder used for the J0836 structure in this paper and the half-width of the HST/ACS field, respectively. The large red circle represents the massive companion object A-Z06/A-B20. The small blue circle represents the three spectroscopically confirmed objects from B20. The widths of the shaded regions reflect the uncertainties in the average number densities, and not the uncertainties in the correlation amplitudes. See Sect. 3.3 for details. This figure was inspired by a similar figure in Decarli et al. (2017).

In Section 4.2 below we will compare the J0836 structure to several other dense structures of galaxies discovered in recent years, both near quasars and in the general field at z ∼ 6. In this way we will be able to address the relative rarity of the J0836 structure in yet another way.

4.2. Comparison with other z ∼ 6 structures

In Sect. 3 above we presented statistical and theoretical evidence that the J0836 field appears to be much richer than the average cosmic region at z ∼ 6 based on (1) the photometric overdensity of Z06, (2) the spectroscopic overdensity of B20, and (3) the presence of a very massive companion galaxy. However, we cannot escape the fact that these calculations are a simplification, as they do not take into account, for example, the complicated selection functions of candidates and confirmed objects and the small number statistics. One way of addressing these issues has been to use numerical simulations to evaluate the impact of such effects, as selection effects are easily incorporated and simulations offer large volumes from which statistics can be derived (e.g., Overzier et al. 2009a). On the other hand, however, we should be careful with this approach as well, as many aspects of these simulations remain to be tested by the very observations we try to interpret, and one cannot escape some degree of circular reasoning when comparing the two. Therefore, in this section we will try yet another approach of quantifying the J0836 structure, by comparing with a variety of structures that have been found near other quasars and in the field at z ∼ 6 − 7.

4.2.1. SDSS J1030+0524

Among the luminous quasars at z ∼ 6, there is currently only one other object for which the observations show a certain resemblance to J0836. Quasar SDSS J1030+0524 (J1030) at z = 6.308 has been among a few, rare quasars with strong evidence for companion structure. The evidence consists of significant overdensities of (photometric) dropout galaxies on various scales (Stiavelli et al. 2005; Morselli et al. 2014; Balmaverde et al. 2017), and more recently spectroscopic evidence in the form of 4 LBGs and 2 LAEs within ∆z = 0.2 of the quasar redshift (Decarli et al. 2019; Mignoli et al. 2020). The confirmed LBGs lie ∼8−9′ away from the quasar, and at present we have no way of comparing this to the environment of J0836 on these scales. However, Mignoli et al. (2020) also found two LAEs much closer to J1030, LAE1 at (−4.98, 0.14) pMpc and LAE2 at (2.6, 0.13) pMpc using our cosmology. Although the proper distances along the l.o.s. are comparable, the projected separations are smaller by a factor of ∼2 in the case of the J1030 LAEs. However, it is important to note that they were selected in a very different way, using a blind spectroscopic survey over a 1′ × 1′ field centered on the quasar with VLT/MUSE.
The LAEs in the J0836 field were found by targeting photometric dropout objects with slit spectroscopy. The physical properties of the two populations of LAEs are quite different: the J1030 LAEs are fainter in absolute magnitude (M_UV ≈ −20.5 mag) compared to those in J0836 (≈ −20.8 mag). The EWs of the two LAEs near J1030 (EW_0 ≈ 11−27 Å) also appear somewhat lower than the three near J0836 (EW_0 ≈ 10−76 Å). The relatively high EWs and bright UV magnitudes of the J0836 LAEs are curious because typical LAEs show the opposite trend (e.g., De Barros et al. 2017), and could be related to the large quasar proximity zone in which they are situated. We have checked that in neither field are the LAEs affected by Lyα fluorescence due to the quasar's strong ionizing radiation by more than a few % (see also Bosman et al. 2020). Again, we note that the selection techniques used in the two studies were very different. In the case of J0836, the current data do not allow us to search for objects as faint as the two LAEs found near J1030. On the other hand, the fact that the J1030 integral field observations did not find any dropout objects with Lyα as bright as the three objects found near J0836 means that they do not exist near J1030. Summarizing, the comparison shows that both J0836 and J1030 have a large spectroscopic excess of objects relatively close to the quasar (in terms of the projected separation, objects near J1030 are about twice as close as those near J0836), but the physical properties of these objects appear somewhat different (objects near J0836 are brighter in the UV and Lyα than those near J1030).

4.2.2. VIKING J030516.92-315056.0 at z = 6.61

Ota et al. (2018) have detected a large-scale overdensity of star-forming galaxies around the quasar VIKING J030516.92-315056.0 at z = 6.61.
Although several overdense peaks of (candidate) LBGs are found ∼20−40 cMpc away from the quasar, just a single LBG candidate lies within the central 12 arcmin², equivalent to one HST/ACS pointing. There are also no candidate LAEs in this region down to a limiting rest-frame Lyα EW_0 of 15 Å and L_Lyα ≈ 10^42.4 erg s⁻¹, and the surface density of LAEs on larger scales appears to be underdense compared to a control field. There is thus no observational evidence that this particular quasar is surrounded by a structure of galaxies similar to that found near J0836, at least not on the relatively small scales probed by our study.

4.2.3. Other quasars at z ∼ 6

A very useful literature overview of searches for structures associated with quasars at z ≈ 5 − 7 is given in Table 2 of Mazzucchelli et al. (2017). If we limit this list only to quasars for which there exists spectroscopic evidence for associated galaxies on scales similar to those probed by our J0836 study, only 4 out of the original 14 quasars stand out with reported overdensities. Among the z ∼ 6 quasars, this includes only J0836 itself, and J1030 at z = 6.3, described in detail in Section 4.2.1 above. The two remaining objects are quasars at z ≈ 5 studied by Husband et al. (2013). In one of these, there are 3 objects within ∆z = 0.11 in an area similar in scale to the J0836 structure. In the other field, there are 6 confirmed LBGs within ∆z = 0.06 (plus an additional quasar), but except for one object these all lie on scales beyond those probed for J0836. Thus, it appears that J0836 and J1030 both represent quite remarkable environments among all the z ∼ 6 quasars studied to date. In addition, Decarli et al. (2017) derived unique information on the environments of quasars at z ≈ 6 based on the occurrence of companion objects identified through the [C II] 158 µm emission line (see also Trakhtenbrot et al. 2017, for similar findings at z ∼ 5).
In a survey of 25 quasars, four quasars had close companions (within a projected 600 kpc and 600 km s⁻¹). The companion galaxies are not detected in the rest-frame UV, and represent a population of dusty star-forming objects with dynamical masses similar to those of their companion quasars. These massive, close companion objects are consistent with the expected cross-clustering between quasars and star-forming galaxies measured at lower redshifts (García-Vergara et al. 2017). However, the fact that only 4/25 quasars showed [C II] 158 µm companions indicates that this type of environment is far from typical. It is important to note that these Atacama Large Millimeter Array (ALMA) observations probe a much smaller field of view than the other studies described in this paper (survey volume of 400 cMpc³), making any quantitative comparison difficult. Despite it being much further away from the quasar, the stellar mass of the massive companion of J0836 is similar to that of the companion objects identified by Decarli et al. (2017). However, no [C II] 158 µm observations exist for the former. Vito et al. (2021) also found evidence for a quasar at z = 6.5 involved in a close pair, and Yue et al. (2021) found the first example of a pair of quasars at z ≈ 5.7 (projected separation of <10 pkpc). The separations are much smaller than the virial radii of the likely dark matter halos, suggesting that merger-induced triggering could play a role in at least a small fraction of the quasars.

4.2.4. COSMOS AzTEC-3 structure at z = 5.3

Capak et al. (2011) found strong clustering near the source COSMOS AzTEC-3 at z = 5.299, consisting of a spectroscopically confirmed dropout object at z = 5.295 and several photometric dropouts within a 2 cMpc (0.3 pMpc) radius (see also Riechers et al. 2014). A quasar at z = 5.30 lies further away (at 13 cMpc).
Although a detailed quantitative comparison is difficult because of the different selections and spectroscopic completeness involved, there appear to be strong similarities between J0836 and this structure in terms of the photometric and spectroscopic overdensities.

4.2.5. SXDF protocluster at z = 5.7

The Subaru XMM-Newton Deep Field (SXDF) hosts one of the densest structures known at z ∼ 5.7 (Ouchi et al. 2005; Jiang et al. 2018; Shibuya et al. 2018; Higuchi et al. 2019; Harikane et al. 2019). The structure labeled z57OD corresponds to an overdensity of LAEs with overdensity δ = 11.5 and significance of 7.2σ when defined using a cylindrical region of 10 cMpc radius and 40 cMpc depth. The core of the structure is characterized by a narrow range in redshift (∆z ≈ 0.02), but can be seen in maps with ∆z ≈ 0.12. In Fig. 7 we show the z57OD structure using the spectroscopic data from Harikane et al. (2019). Objects having absolute rest-UV magnitudes brighter than −20.2 mag are indicated with large circles, and objects having Lyα luminosities larger than 10^42.6 erg s⁻¹ are marked by red squares. The large dotted circle marks a circular 5 arcmin² region that maximizes the number of known structure members, containing 4 members.

Figure 7. Map of the large-scale structure z57OD at z ≈ 5.7 in the SXDF from Harikane et al. (2019). The circle with a radius of 1.26 arcmin (radius of 0.45 pMpc) that maximizes the number of z57OD galaxies similar to those in the J0836 field has been indicated. There are two objects that would plausibly pass the same selection criteria as those in the J0836 field (large black circles). See Sect. 4.2.5 for details.

Because the SXDF and J0836 structures are, respectively, narrow-band and dropout selected, it is difficult to make a direct quantitative comparison. However, the 4 objects encircled in Fig. 7 all have Lyα luminosities at least as high as the 3 confirmed objects near J0836, and 2 of these have UV luminosities similar to those near J0836.
Thus, at least on these scales, the J0836 and z57OD structures appear quite similar.

4.2.6. HUDF structure at z ≈ 5.9

Malhotra et al. (2005) identified a structure of i775-dropout galaxies at z = 5.9 ± 0.2 in the HUDF. They followed up 29 (i775 − z850) ≥ 0.9 objects with the ACS grism, finding 23 z ∼ 6 objects of which 15 are part of an overdensity of at least a factor of 2 along the line of sight. In Fig. 8 we show the HUDF structure, indicating objects with (i775 − z850) ≥ 1.3 (small circles) and those brighter than z = 26.5 mag (large circles). Objects within the z = 5.9 ± 0.2 redshift spike are marked red. The large dotted circle marks a circular 5 arcmin² region that maximizes the number of structure members. In this region, there are 7 galaxies, with 3 being brighter than z = 26.5 mag. The grism redshifts have an accuracy of ∆z ≈ 0.15, so we will only be able to compare the two structures within ∆z = 0.2. The 3 objects thus appear to lie in a redshift interval that is at least as wide as that found for the confirmed objects around J0836.

Figure 8. Map of the large-scale structure at z ∼ 5.9 in the HUDF from Malhotra et al. (2005). The circle with a radius of 1.26 arcmin (radius of 0.45 pMpc) that maximizes the number of galaxies has been indicated. There are four objects that would plausibly pass the same selection criteria as those in the J0836 field (large circles), but they have a wider redshift distribution. See Sect. 4.2.6 for details.

More importantly, the spectroscopic completeness of the Malhotra et al. (2005) sample is much higher than for the J0836 sample (about 62% versus 29%). The J0836 structure thus appears significantly more overdense compared to the HUDF structure, at least at the magnitude limits considered.

4.2.7. SDF protocluster at z = 6.01

Among the most impressive large-scale structures known at z ∼ 6 is the protocluster discovered by Toshikawa et al. (2012, 2014) in the Subaru Deep Field (SDF).
Among 28 spectroscopically confirmed i-dropout objects at z ∼ 6 lies a structure of 10 objects at z = 5.984 − 6.047 (∆z = 0.06), measuring 20 × 20 cMpc on the sky. The selection of these objects is similar to the dropout selections of Z06 and B20, and the spectroscopic follow-up similar to that performed by B20 in the J0836 field. In Fig. 9 we show the SDF structure, indicating which objects have spectroscopic redshifts (red squares), which objects are brighter than z = 26.5 mag (large circles), and which objects lie within ∆z = 0.13 of the z = 6.01 redshift spike. The large dotted circle marks a circular 5 arcmin² region that maximizes the number of structure members. In this region, there are 5 member galaxies, with 3 being brighter than z = 26.5 mag.

Figure 9. Map of the large-scale structure at z ≈ 6.01 in the SDF from Toshikawa et al. (2012, 2014). The circle with a radius of 1.26 arcmin (radius of 0.45 pMpc) that maximizes the number of galaxies in the SDF structure similar to those in the J0836 field has been indicated. There are three objects that would plausibly pass the same selection criteria as those in the J0836 field (large red circles). See Sect. 4.2.7 for details.

They searched for >4σ surface overdensities in regions with a radius of 1 pMpc (2.9′), suitable for the identification of massive cluster progenitors according to numerical simulations and previous studies (Toshikawa et al. 2012, 2014, 2016). Five regions were found, and the two most significant overdensities (D1ID01 of 6.1σ and D3ID01 of 7.6σ) were targeted by spectroscopy to confirm the dropout candidates. Three redshifts were obtained in D1ID01, with two objects separated by ∆z = 0.08 and 53″ (∼0.32 pMpc), and a third object not physically associated at much higher redshift. Two redshifts were obtained in D3ID01, with the two objects separated by ∆z = 0.007 and 83″ (∼0.49 pMpc).
There are no quasars known to be associated with either structure. The three confirmed objects in the J0836 field lie within a projected 0.4 pMpc from the quasar (and about 0.1 pMpc from each other) and have ∆z = 0.13. Based on the available photometric and spectroscopic evidence, the J0836 structure thus appears at least as densely populated as the two CFHTLS regions, which themselves represent significant overdensities compared to the general field at z ∼ 6.

4.2.9. LAGER-z7OD1 protocluster at z ≈ 7

Hu et al. (2021) presented a large overdensity of LAEs at z ≈ 7.0, LAGER-z7OD1. The LAEs were narrow-band selected and have minimum Lyα luminosities comparable to the LAEs near J0836 (log10(L_Lyα/erg s⁻¹) ≳ 42.6). They find 16 spectroscopic LAEs clustered in a region of 66 × 30 × 26 cMpc³, where the last dimension corresponds to the redshift depth of ∆z = 0.072. Drawing again random circular regions with a radius of 1.26 arcmin, the maximum number of LAEs encountered is 3, centered around two peaks in the sky distribution of LAGER-z7OD1 (see Fig. 10). This shows that on these scales the structure found around J0836 is not too different from the densest peaks within this large-scale overdense region at z ≈ 7.

Figure 10. Map of the large-scale structure at z ≈ 7 associated with the LAGER-z7OD1 protocluster from Hu et al. (2021). Two circles with a radius of 1.26 arcmin that maximize the number of galaxies similar to those in the J0836 field have been indicated. There are two locations, each with three objects, that would plausibly pass the same selection criteria as those in the J0836 field. See Sect. 4.2.9 for details.

4.2.10. SC4K survey of LAEs at z ≈ 5.7

To assess the random chance of finding a given number of LAEs in the field, we used data from the ∼2 deg² SC4K survey (Sobral et al. 2018).
We select all narrow-band excess objects detected in the NB816 filter, which is sensitive to Lyα at z = 5.7 ± 0.05 (FWHM), and thus comparable to the redshift width of the objects in the J0836 field (∆z = 0.13). The contamination by foreground objects in this sample is estimated at about 15%. It is important to note that the J0836 sample was not narrow-band selected, and the comparison is thus somewhat skewed. However, the LAEs in the SC4K survey have redshifts, Lyα luminosities (≈10^42.6 erg s⁻¹) and rest-frame EWs (≈50 Å) comparable to the dropout objects confirmed in the J0836 field by B20. The SC4K survey can thus be used as a conservative reference field for estimating the clustering statistics of these objects (in other words, if we were to perform a survey like SC4K on the J0836 field, we would likely find even more objects than currently selected, and thus the SC4K reference gives the maximum expectation). In Fig. 11 we show the sky distribution of LAEs from the SC4K survey.

Figure 11. Map of the large-scale structure of LAEs at z ≈ 5.7 in the SC4K survey from Sobral et al. (2018). LAEs with rest-frame EW > 25 Å (>50 Å) are indicated by the small (large) circles. A circular region with a radius of 1.26 arcmin (0.45 pMpc), similar to the area studied in the J0836 field, has been indicated for reference (dashed circle). The area within the large box was used for a counts-in-cells analysis. The chance of encountering two (three) LAEs that could plausibly pass the same selection criteria as those in the J0836 field amounts to 1% (0.1%). See Sect. 4.2.10 for details.

We performed a counts-in-cells analysis using a circular region with a radius of 1.26 arcmin, similar to that used in the J0836 field. The chance of randomly finding 2 LAEs in such a small area is at most 1%, and the chance of finding 3 LAEs is 0.1% (note that these numbers are likely skewed high due to the contamination of about 15% and the fact that the J0836 objects were not narrow-band selected).
We can conclude from this analysis that the J0836 field contains a number of LAEs that is matched only by the densest location in the whole ∼2 deg² SC4K survey.

Summary of the comparisons

Based on the various comparisons with known z ∼ 6 − 7 structures given above, it appears justified to conclude that the J0836 field is comparably rich as the peaks in other rare large-scale structures found at these redshifts. J0836 shows some characteristics of other quasar fields, such as the overdensity of star-forming galaxies found near SDSS J1030+0524 at z = 6.3 (Mignoli et al. 2020) and the evidence of massive companion galaxies found near some quasars (Yue et al. 2021; Vito et al. 2021), although the scales are very different for the latter. Compared to the non-quasar fields, the J0836 field appears as rich as some of the densest regions known in the field to date. However, the latter include many spectroscopically confirmed surface overdensities extending over angular scales that are much larger than we can probe with the J0836 data, and it is not known whether the similarities between J0836 and these fields would hold on those scales as well.

SUMMARY AND DISCUSSION

Summary

We have shown that the luminous radio-loud quasar J0836 at z = 5.8 is likely part of a relatively rich structure of galaxies in the early universe. The evidence consists mainly of a photometric overdensity previously found by Z06, a spectroscopic overdensity identified by B20, and the presence of at least one very massive companion galaxy detected at 3.6 µm with Spitzer/IRAC by Overzier et al. (2009b), allowing a stellar mass estimate. Based on a cross-correlation analysis of galaxies and quasars, we showed that the presence of these companion objects can plausibly be explained by an overdense environment associated with the quasar.
We compared the global properties of the J0836 structure with those found near other quasars and in the field at similar redshifts, concluding that the structure resembles the densest peaks in the cosmic density field currently known at z ∼ 6 − 7, at least at depths and scales similar to those probed by our observations. This study is significant for a number of reasons. First, it provides new evidence that the first luminous quasars are associated with relatively dense peaks in the cosmic density field. Second, the J0836 structure appears relatively unique among the population of z ∼ 6 quasars studied to date, with one system showing a similarly rich population of clustered companion galaxies (Mignoli et al. 2020), and several other quasars showing evidence for direct interactions on much smaller scales (e.g., Decarli et al. 2017; Vito et al. 2021; Yue et al. 2021). Third, although the J0836 structure appears, in some aspects, similar to some of the most clustered regions of star-forming galaxies known at z ∼ 6 − 7, these regions are clearly not unique to quasars, given that overdensities of galaxies have been discovered in the field as well, sometimes larger and more significant than what has been found near any quasar to date (e.g., Toshikawa et al. 2014; Harikane et al. 2019; Hu et al. 2021). By combining these different clues, we will discuss a number of important open questions related to the formation of SMBHs, quasar environments and radio jets.

Insights into seed formation

One of the most important questions in the study of galaxies relates to the origin of the SMBHs. This question is particularly pertinent to the population of luminous quasars at z ≳ 6, given that the masses of their SMBHs managed to rival that of M87 (M_BH = (6.5 ± 0.2 ± 0.7) × 10^9 M⊙; Event Horizon Telescope Collaboration et al. 2019) within 1 Gyr of the Big Bang.
This notion has led to a substantial effort in theoretical modeling of possible seed black hole populations and their accretion histories, with stellar-mass and intermediate-mass BHs emerging as the main candidates for the seeds (for reviews see, e.g., Bromm & Yoshida 2011; Inayoshi et al. 2020; Volonteri et al. 2021). The slightly longer cosmic times available to quasars at z ∼ 6 make it significantly "easier" to go from a 100 M⊙ seed to a SMBH of a few times 10^9 M⊙ compared to the quasars at z ≳ 7 (e.g., Bañados et al. 2018b; Marinello et al. 2020; Pacucci & Loeb 2021). For example, constant accretion with Eddington ratios of 0.8−1.3 onto a seed formed at z_f = 20−10 is able to produce a SMBH like that in J0836 by z = 5.82, while the quasar ULAS J1342+0928 at z = 7.54 requires the existence of a >1000 M⊙ seed as early as z = 45 assuming standard Eddington-rate accretion (Bañados et al. 2018b). There are at least two problems with the latter scenario: (1) it is not clear whether such massive seeds could exist that early, and (2) it is not clear whether such an efficient accretion rate could be sustained over such a long cosmic time. In scenarios where the growth of SMBHs is allowed to start with an intermediate-mass BH (IMBH) seed, the constraints on formation redshift, accretion rates and growth times are significantly relaxed. The biggest uncertainty with these scenarios, however, is whether there exists a viable channel for IMBH formation at early times. Models and simulations have identified a number of ways in which IMBH seeds can form, for example through the merging of numerous stellar-mass seeds or through the collapse of massive gas clouds into a DCBH seed (see Sect. 1). The results presented in this paper cannot constrain any of these scenarios, but the association of J0836 with a relatively dense cosmic structure is very interesting in light of a number of recent model predictions for DCBH formation.
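The e-folding argument above can be checked with a few lines of arithmetic: compare the number of e-folds needed to grow a 100 M⊙ seed to the ∼10^9.5 M⊙ black hole of J0836 with the number available between z_f and z = 5.82. The sketch below is our own illustration, not part of the paper's analysis; the cosmology (flat ΛCDM with H0 = 70 km s^-1 Mpc^-1, Ωm = 0.3) and the radiative efficiency ε = 0.1 are assumed values.

```python
import math

def cosmic_age_gyr(z, h0=70.0, omega_m=0.3):
    """Age of a flat Lambda-CDM (matter + Lambda) universe at redshift z, in Gyr."""
    omega_l = 1.0 - omega_m
    h0_inv_gyr = h0 * 1.0227e-3  # convert km/s/Mpc to 1/Gyr
    return (2.0 / (3.0 * h0_inv_gyr * math.sqrt(omega_l))
            * math.asinh(math.sqrt(omega_l / omega_m) * (1.0 + z) ** -1.5))

def efolds_available(z_form, z_obs, edd_ratio=1.0, eps=0.1):
    """E-folds of Eddington-limited growth between z_form and z_obs.
    The e-folding (Salpeter) time is ~0.45 Gyr * eps / (1 - eps)."""
    t_salpeter = 0.45 * eps / (1.0 - eps)  # Gyr
    dt = cosmic_age_gyr(z_obs) - cosmic_age_gyr(z_form)
    return edd_ratio * dt / t_salpeter

# E-folds needed to grow a 100 Msun seed to M_BH ~ 10^9.48 Msun (J0836):
needed = math.log(10 ** 9.48 / 100.0)
# E-folds available to a seed formed at z_f = 15 accreting at the Eddington rate:
available = efolds_available(15.0, 5.82)
print(f"needed: {needed:.1f}, available: {available:.1f}")
```

With these assumptions a z_f = 15 seed accreting steadily at exactly the Eddington rate falls a few e-folds short, illustrating why Eddington ratios somewhat above unity, earlier formation, or a lower radiative efficiency are required; the precise numbers quoted in the text depend on the efficiency adopted.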
The various DCBH scenarios proposed all have in common that they require the quasar to form in an overdense region. The overdense region ensures that the primordial gas cloud is heated through an enhanced local Lyman-Werner photon flux and a high rate of merging (mini)halos (e.g., Wise et al. 2019; Lupi et al. 2021). After the collapse, the overdense environment can further ensure a steady accretion rate to form a SMBH and power a quasar. The simulations of Wise et al. (2019) specifically point to a scenario in which the presence of an overdense region of star-forming galaxies several tens of kpc away stimulates the formation of a DCBH seed. When combined with sufficient dynamical heating of atomic cooling halos (ACHs) that are rapidly growing, the nearby star-forming complex only needs to provide a fraction of the LW flux assumed by earlier models. They find a density of DCBHs in overdense regions at z = 15 of 10^-4 − 10^-3 cMpc^-3, with a global number density of 10^-7 − 10^-6 cMpc^-3 given the rarity of the overdense regions simulated by Wise et al. (2019). For normal and void regions of the universe, the number density of DCBH candidates is predicted to be lower by 3−6 orders of magnitude compared to the number density in overdense regions. It is important to point out that the overdense region chosen by Wise et al. (2019) corresponds to a dark matter halo of a few times 10^10 M⊙ at z = 6, which is still two orders of magnitude below the typical halo mass of the luminous quasars studied here. If overdense environments indeed enhance the number density of DCBHs by orders of magnitude, perhaps the proto-J0836 environment would make an ideal site. The galaxies detected as part of the J0836 structure are unlikely to be the same as the ones that provided the required LW background at early times. For instance, if we take our model fit results for object A-Z06/A-B20 from Sect.
3.3 (log10(M*/M⊙) = 10.34 +0.30/−0.16, with a mass-weighted age of 0.27 +0.28/−0.14 Gyr), we can see that even this massive galaxy likely was not around at z ≳ 10.5. Its distance from J0836 is also much too great to have provided a significant Lyman-Werner intensity near the inferred site of seed formation (several Mpc instead of tens of kpc). However, the J0836 structure suggests that an overdense region that predates the formation of the quasar was probably present from early times. It is difficult to be more quantitative at this point. Deep observations with ALMA and the James Webb Space Telescope (JWST) could be used to quantify quasar environments on smaller scales and down to fainter galaxies of much lower bias, and these could then be compared to tailored simulations of DCBH collapse as performed by Wise et al. (2019). Lupi et al. (2021) revisit the idea of synchronized pairs (SPs) of ACHs (Dijkstra et al. 2008; Visbal et al. 2014; Regan et al. 2017), in which two halos separated by <1 kpc, with star formation commencing in one halo <5 Myr prior to that in the other, provide a high LW flux to drive the formation of a DCBH. They compare this scenario to the dynamical heating (DH) driven collapse similar to that discussed by Wise et al. (2019), but focus specifically on the progenitor of a 3 × 10^12 M⊙ halo at z = 6. Although the dense environment stimulates the formation of SPs, the pristine halos of the pairs are also easily polluted by metals due to the enhanced supernova activity in the region. These polluted halos are assumed to cool and fragment, and are no longer considered candidates for DCBH formation. Comparing this with the DH scenario in the same overdense region shows that, because of the clustering of LW sources in the overdense region, several DH seeds may be formed for each SP seed, although the initial mass of the DH seeds may be lower than that of the SP seeds. The studies by Wise et al. (2019) and Lupi et al.
(2021) thus point in the same direction: overdense regions in the early universe may be a necessary ingredient for the formation of DCBHs. One could argue that in principle each quasar at z ≳ 6 must have formed in such an overdense environment given their high halo masses (Chen et al. 2021). However, the finding of relatively rich environments around quasars such as J0836 and J1030, compared to other quasars, must then imply that the conditions required for DCBH formation as suggested by theory may have been particularly well met in the progenitor regions of these sources.

Relevance of the radio jets

The fast growth rates of the SMBHs powering the z ∼ 6 quasars imply that we are seeing them at a time of potentially significant impact on the growth of the host galaxies through Active Galactic Nucleus (AGN) feedback. The (small) subset of quasars that are radio-loud are the first sources in which we could study, in principle, the interaction between powerful radio jets and the forming interstellar medium. Even though the fraction of quasars that is radio-loud is relatively small (∼10%, independent of redshift; see Bañados et al. 2015), new radio surveys at low frequencies detect a significant fraction of the population of radio-quiet quasars at z > 5 (Gloudemans et al. 2021). There is growing evidence that, at least at low redshift, faint radio structures present also in radio-quiet quasars may be disproportionately responsible for the feedback observed (e.g., Jarvis et al. 2021). At these redshifts, there are still relatively few radio sources known. J0836 appears as the 8th most distant radio-loud AGN in the overview of Bañados et al. (2021, their Table 6). While optically the most luminous, it has the lowest radio-loudness parameter (R_2500 = 16 ± 1). We determine a bolometric luminosity of L_bol = (5.3 ± 0.02) × 10^47 erg s^-1 using Richards et al.
(2006), and with a black hole mass log10(M_BH/M⊙) = 9.48 ± 0.55, we find an Eddington ratio of L_bol/L_Edd = 1.4 +3.5/−1.0. This range indicates that J0836 lies along the upper envelope of Eddington ratios found for high-redshift quasars compiled by Bañados et al. (2021). The compact radio size (Frey et al. 2005), steep spectral index and evidence for a peaked radio spectrum (Wolf et al. 2021) suggest that J0836 is not beamed and that it is a young radio source in which the jets are still confined to the inner kpc scales. With a (projected) size of 40 pc and assuming a typical hot-spot advancement speed of (0.2 ± 0.1) h^-1 c (Giroletti & Polatidis 2009), the kinematic age in the young-source scenario is in the range ≈300−1000 yr. If the jets have an inclination angle as small as 1° along the line of sight (still large enough that the source does not become a blazar), the radio age could be as high as ≈3 × 10^4 yr. This is much shorter than the most recent quasar phase of ∼2 × 10^(5−7) yr estimated by B20. Given these relatively short jet lifetimes compared to the black hole and quasar accretion time scales inferred for J0836, it appears unlikely that the (current) radio jets have had any meaningful impact on the growth of the SMBH. Alternatively, the radio jets could be recurrent, or the radio source is much older than inferred from its maximum linear size, as expected in the case of jets that are confined by a dense ISM (e.g., O'Dea & Saikia 2021). What does seem relevant, however, is the fact that J0836 represents yet another case of a high-redshift radio-loud AGN associated with a relatively dense environment (e.g., Overzier et al. 2006b; Venemans et al. 2007; Hatch et al. 2014; Overzier 2016). The ability of SMBHs to form strong radio jets is generally assumed to scale as a function of black hole mass, accretion rate and spin (Blandford & Znajek 1977; Fanidakis et al.
2011), and all three parameters may be expected to be enhanced in overdense regions due to the enhanced gas accretion rates and merger activity. On the other hand, radio jets may be the consequence of energy extracted from the accretion disk, in which case the SMBH grows much faster due to higher accretion rates (Blandford & Payne 1982). A recent discussion of this, relevant to the jets in radio-loud quasars at z ∼ 6, is given by Connor et al. (2021). Quasar J0836 is different from the radio-loud quasar PSO J352.4034-15.3373 at a very similar redshift (Bañados et al. 2018a; Rojas-Ruiz et al. 2021; Connor et al. 2021), mainly in the sense of its very compact linear size (∼40 pc versus ∼1.6 kpc; Frey et al. 2005; Momjian et al. 2018), radio power (rest-frame L_ν,1.4 ∼ 7 × 10^26 W Hz^-1 versus ∼5 × 10^27 W Hz^-1; Wolf et al. 2021; Bañados et al. 2018a), and radio-loudness parameter (16 ± 1 versus 1470 +110/−100; Bañados et al. 2021; Rojas-Ruiz et al. 2021). It also appears to be quite different from the distant radio galaxy TGSS J1530+1049 at z = 5.72 discovered by Saxena et al. (2018), with rest-frame L_ν,1.4 = 1.6 × 10^28 W Hz^-1 and a radio morphology consisting of two radio components with a linear extent of ∼2.5 kpc (Gabányi et al. 2018). However, all these sources are consistent with being relatively young radio sources in which the radio structures are comparable to or within the scale of the host galaxies. These sources thus offer an excellent opportunity for investigating the interaction between radio jets, galaxy formation and SMBH growth.

Concluding remarks

The answer to the question of what were the seeds of today's SMBHs is coming within our reach. Likely it will require the synthesis of data and clues from a wide variety of independent upcoming experiments. Gravitational wave astronomy has already begun to map the black hole mass gap (Abbott et al. 2020), and the proposed Laser Interferometer Space Antenna (LISA) mission (Amaro-Seoane et al.
2017) and Einstein Telescope (Punturo et al. 2010; Maggiore et al. 2020) will significantly widen the mass and distance range of detectable black hole mergers, thereby directly addressing the mass distribution of the population as a function of redshift. JWST could in principle detect massive DCBH seeds, if they exist, through their unique spectral signatures in the infrared, provided they are not too rare (Pacucci et al. 2016; Natarajan et al. 2017; Woods et al. 2019, 2021a). The Nancy Grace Roman Space Telescope, due to its much wider field of view, could perform a search employing gravitational lensing (Whalen et al. 2020). An upper limit on the number density of DCBH candidates could already be used to constrain the DCBH formation scenario, and perhaps shift the focus of modeling efforts to seeds originating from Population III stars instead. In any case, JWST will also map for the first time in detail the amount of black hole activity in typical galaxies during the first billion years, and the masses of SMBHs in (faint) quasars, thereby constraining the total black hole demographics and, indirectly, the seed population. Evidence for Population III stars, either at high redshift or in the Local Group, as well as advanced models could constrain the range of masses of this first generation of stars and thus the mass range that their end-products could realistically achieve (e.g., Hirano et al. 2014; Placco et al. 2021; Woods et al. 2021a,b). Searches for IMBHs in globular clusters and dwarf galaxies will provide yet another unique constraint on the seed population (Greene 2012; Mezcua 2017; Latimer et al. 2021). The search for distant obscured black holes will also greatly benefit from the new capacity in the X-rays provided by the Athena or Lynx missions (Pacucci et al. 2015; Amarantidis et al. 2019). Extremely Large Telescopes will then be needed to confirm and characterize the sources found.
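Two of the quantitative estimates used in the discussion above, the Eddington ratio of J0836 and the kinematic age of its compact radio source, reduce to one-line arithmetic. The sketch below is our own illustration, not from the paper; the Eddington luminosity coefficient of 1.26 × 10^38 erg s^-1 per M⊙ and h = 0.7 are standard assumed values.

```python
EDD_PER_MSUN = 1.26e38   # Eddington luminosity per solar mass [erg/s]
C_PC_PER_YR = 0.3066     # speed of light in parsecs per year

# Eddington ratio from L_bol = 5.3e47 erg/s and log10(M_BH/Msun) = 9.48:
l_bol = 5.3e47
m_bh = 10 ** 9.48
edd_ratio = l_bol / (EDD_PER_MSUN * m_bh)
print(f"L_bol/L_Edd = {edd_ratio:.1f}")

# Kinematic age of the ~40 pc radio source for hot-spot advance speeds of
# (0.2 +/- 0.1) h^-1 c; age ~ size / speed reproduces the quoted ~300-1000 yr range.
h = 0.7
size_pc = 40.0
for beta in (0.1, 0.2, 0.3):
    v_pc_per_yr = (beta / h) * C_PC_PER_YR
    age_yr = size_pc / v_pc_per_yr
    print(f"beta = {beta:.1f}/h: kinematic age ~ {age_yr:.0f} yr")
```

The treatment of the 40 pc size as the distance traveled at a single hot-spot speed is an assumption chosen to match the range quoted in the text; a two-sided expansion would halve these ages.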
Yet another window on the first stages of black hole formation will be provided at radio wavelengths. Deep radio surveys with the Square Kilometer Array may detect DCBHs through their core radio emission (Whalen et al. 2021), and also find the most distant objects in the universe capable of hosting radio jets, the smoking gun of massive, spinning black holes. In the more immediate future, it will be extremely interesting to see (1) to what extent the compact radio structures seen in the most distant radio galaxies and quasars interact with the ISM of their young, forming host galaxies, (2) if there is any evidence for environmental triggers of this radio activity, and (3) to establish definitively what are the typical environments of the radio-loud and radio-quiet AGN at this important epoch. Several programs have been scheduled on the HST and the JWST that aim to do just that within the next two years (e.g., HST programs 16258/16693 and JWST programs 1205/1554/1764/2028/2078).

Figure 2. Locations of the spectroscopically identified galaxies with respect to the quasar in proper physical coordinates. The Z-axis was chosen for the direction along the line of sight, and the X- and Y-axes were chosen to indicate the directions along Right Ascension and Declination, respectively. The structure can be approximated by a narrow cylinder elongated along the line of sight. When this cylinder is seen face-on, as in the case of the HST observations shown in Fig. 1, the structure appears highly compact.

… and the 3000 Å rest-frame luminosity of log10(L/[erg s^-1]) = 47.0 from Shen et al. (2019), finding log10(M_BH/M⊙) = 9.48 ± 0.55, consistent with the estimate of Kurk et al. (2007), but ∼0.4 dex below the estimate of Shen et al. (2019) based on the problematic C iv line.

… mass-weighted age of 0.27 +0.28/−0.14 Gyr (z_f ≈ 7.5), a metallicity log10(Z/Z⊙) = 0.86 +0.86/−0.45, and A_V = 0.7 +0.3/−0.1 mag (see Fig. 5).
The stellar mass and mass-weighted age returned for an exponential star formation history model are essentially the same. Without the nebular emission, the stellar mass is about 0.1 dex higher. Restricting the metallicity to ≤ 0.4 Z⊙ as in Stefanon et al. (2021), we find a stellar mass log10(M*/M⊙) = 10.2 +0.3/−0.2 (log(M*/M⊙) = 10.40 +0.23 …

Figure 4. HST/ACS z850 and Spitzer/IRAC 3.6 µm images of the object A-Z06/A-B20 discussed in Sect. 3.3. The boxes measure 2″ × 2″ as in Fig. 3. The HST object is clearly detected at 3.6 µm. The (deblended) object flux was extracted by simultaneously fitting point sources at the locations of the four objects seen near the central part of the image (see Overzier et al. 2009b).

Figure 5. Results of the constant star formation model fits to the z850 and 3.6 µm fluxes of object A-Z06/A-B20 at z = 5.856 described in Sect. 3.3. The top panel shows the best-fit spectral energy distribution (curve) together with the photometry and the sampled posterior probability distribution (data points with shaded regions). The bottom panels show the probability distributions of the SFR, mass-weighted age, stellar mass and specific SFR, with the 16, 50 and 84 percentiles indicated by dashed vertical lines.

… mag. This indicates that the J0836 field resembles the densest part of the Toshikawa et al. (2014) structure.

4.2.8. CFHTLS structures at z ∼ 6

Toshikawa et al. (2016) performed a color selection of i-dropouts in the CFHTLS Deep Fields to a depth of z ∼ 26.3−26.5 mag (about M*_UV,z=6 …

Table 1.
Objects in the field of J0836.

ID            α (J2000)     δ (J2000)     z_phot^a          z_spec^b       Δz^††    EW_0,Lyα^b (Å)   log(L_Lyα/[erg s^-1])^b
J0836         08:36:43.871  +00:54:53.15  -                 5.804 ± 0.002  0.0      -                -
B-B20         08:36:46.280  +00:54:10.55  -                 5.793 ± 0.003  -0.011   76 +55/−34       43.03 +0.03/−0.03
A-Z06/A-B20   08:36:45.248  +00:54:10.99  5.8 +1.4/−0.2     5.856 ± 0.003  +0.052   > 10.1           42.63 +0.05/−0.04
B-Z06/C-B20   08:36:47.053  +00:53:55.90  5.9 +1.0/−1.0     5.726 ± 0.003  -0.078   55 +8/−5         42.93 +0.15/−0.11
B -A06        08:36:47.127  +00:53:56.20  5.70 +0.03/−0.05  -              -        -                -
C-Z06         08:36:50.099  +00:55:31.16  5.9 +1.1/−0.5     -              -        -                -
C2-Z06        08:36:50.058  +00:55:30.54  5.9 +1.4/−1.5     -              -        -                -
C3-Z06        08:36:50.010  +00:55:30.27  7.0 +0.0/−0.7     -              -        -                -
D-Z06         08:36:48.211  +00:54:41.19  5.8 +1.2/−0.7     -              -        -                -
E-Z06         08:36:44.029  +00:54:32.79  5.2 +1.7/−0.7     -              -        -                -
F-Z06         08:36:42.666  +00:54:44.00  5.7 +1.2/−0.7     -              -        -                -
G-Z06^†       08:36:44.809  +00:55:04.41  5.7 +1.2/−0.8     -              -        -                -

Table 2. Magnitudes and sizes of the confirmed objects.

ID            Filter   r_50^†     r_90^†     m_AB
                       (arcsec)   (arcsec)   (mag)
A-Z06/A-B20   i775     0.20       0.37       26.84 ± 0.17
              z850     0.16       0.38       25.44 ± 0.05
              3.6 µm   -          -          23.78 ± 0.09
B-B20         i775     0.16       0.26       26.57 ± 0.12
              z850     0.12       0.29       26.07 ± 0.08
B-Z06/C-B20   i775     0.13       0.39       27.58 ± 0.36
              z850     0.18       0.38       25.89 ± 0.08
B -A06        i775     0.10       0.35       27.53 ± 0.20
              z850     0.10       0.16       27.14 ± 0.15

† The sizes quoted are the radii containing 50 and 90% of the total flux as returned by Source Extractor. They were not corrected for the HST seeing of about 0.″07 (FWHM).

https://hla.stsci.edu/
http://bitbucket.org/bdiemer/colossus

The author is grateful to Catarina Aydar, Sarah Bosman, Roberto Decarli, Benedikt Diemer, Pavel Kroupa, Marco Mignoli, Sofía Rojas, Aayush Saxena, Kazuhiro Shimasaku and Mauro Stefanon for comments or answering questions during the preparation of this manuscript. The author thanks the anonymous referee for insightful comments and recommendations.
The author was supported in this work by a productivity grant (302981/2019-5) from the National Council for Scientific and Technological Development (CNPq).

Facilities: HST (ACS), Spitzer (IRAC)

Software: astropy (Astropy Collaboration et al. 2013, 2018), CosmoCalc (Wright 2006), BagPipes (Carnall et al. 2018), Source Extractor (Bertin & Arnouts 1996), Colossus (Diemer 2018)

REFERENCES

Abbott, R., Abbott, T. D., Abraham, S., et al. 2020, PhRvL, 125, 101102, doi: 10.1103/PhysRevLett.125.101102
Agarwal, B., Smith, B., Glover, S., Natarajan, P., & Khochfar, S. 2016, MNRAS, 459, 4209, doi: 10.1093/mnras/stw929
Ajiki, M., Taniguchi, Y., Murayama, T., et al. 2006, PASJ, 58, 499, doi: 10.1093/pasj/58.3.499
Amarantidis, S., Afonso, J., Messias, H., et al. 2019, MNRAS, 485, 2694, doi: 10.1093/mnras/stz551
Amaro-Seoane, P., Audley, H., Babak, S., et al. 2017, arXiv e-prints, arXiv:1702.00786
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Bañados, E., Carilli, C., Walter, F., et al. 2018a, ApJL, 861, L14, doi: 10.3847/2041-8213/aac511
Bañados, E., Venemans, B., Walter, F., et al. 2013, ApJ, 773, 178, doi: 10.1088/0004-637X/773/2/178
Bañados, E., Venemans, B. P., Morganson, E., et al. 2015, ApJ, 804, 118, doi: 10.1088/0004-637X/804/2/118
Bañados, E., Venemans, B. P., Decarli, R., et al. 2016, ApJS, 227, 11, doi: 10.3847/0067-0049/227/1/11
Bañados, E., Venemans, B. P., Mazzucchelli, C., et al. 2018b, Nature, 553, 473, doi: 10.1038/nature25180
Bañados, E., Mazzucchelli, C., Momjian, E., et al. 2021, ApJ, 909, 80, doi: 10.3847/1538-4357/abe239
Balmaverde, B., Gilli, R., Mignoli, M., et al. 2017, A&A, 606, A23, doi: 10.1051/0004-6361/201730683
Barone-Nugent, R. L., Trenti, M., Wyithe, J. S. B., et al. 2014, ApJ, 793, 17, doi: 10.1088/0004-637X/793/1/17
Becker, G. D., Bolton, J. S., Madau, P., et al. 2015, MNRAS, 447, 3402, doi: 10.1093/mnras/stu2646
Bennett, C. L., Larson, D., Weiland, J. L., & Hinshaw, G. 2014, ApJ, 794, 135, doi: 10.1088/0004-637X/794/2/135
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Bhatawdekar, R., Conselice, C. J., Margalef-Bentabol, B., & Duncan, K. 2019, MNRAS, 486, 3805, doi: 10.1093/mnras/stz866
Blandford, R. D., & Payne, D. G.
1982, MNRAS, 199, 883, doi: 10.1093/mnras/199.4.883
Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433, doi: 10.1093/mnras/179.3.433
Bosman, S. E. I., Kakiichi, K., Meyer, R. A., et al. 2020, ApJ, 896, 49, doi: 10.3847/1538-4357/ab85cd
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., & Franx, M. 2006, ApJ, 653, 53, doi: 10.1086/498733
Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, ApJ, 670, 928, doi: 10.1086/521811
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, ApJ, 811, 140, doi: 10.1088/0004-637X/811/2/140
Bromm, V., & Yoshida, N. 2011, ARA&A, 49, 373, doi: 10.1146/annurev-astro-081710-102608
Calvi, R., Rodríguez Espinosa, J. M., Mas-Hesse, J. M., et al. 2019, MNRAS, 489, 3294, doi: 10.1093/mnras/stz2177
Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682, doi: 10.1086/308692
Capak, P. L., Riechers, D., Scoville, N. Z., et al. 2011, Nature, 470, 233, doi: 10.1038/nature09681
Carnall, A. C., McLure, R. J., Dunlop, J. S., & Davé, R. 2018, MNRAS, 480, 4379, doi: 10.1093/mnras/sty2169
Chen, H., Eilers, A.-C., Bosman, S. E. I., et al. 2021, arXiv e-prints, arXiv:2110.13917
Connor, T., Bañados, E., Stern, D., et al. 2021, ApJ, 911, 120, doi: 10.3847/1538-4357/abe710
Davies, F. B., Hennawi, J. F., Bañados, E., et al. 2018, ApJ, 864, 142, doi: 10.3847/1538-4357/aad6dc
De Barros, S., Pentericci, L., Vanzella, E., et al. 2017, A&A, 608, A123, doi: 10.1051/0004-6361/201731476
Decarli, R., Walter, F., Venemans, B. P., et al. 2017, Nature, 545, 457, doi: 10.1038/nature22358
Decarli, R., Mignoli, M., Gilli, R., et al. 2019, A&A, 631, L10, doi: 10.1051/0004-6361/201936813
Diemer, B. 2018, ApJS, 239, 35, doi: 10.3847/1538-4365/aaee8c
Dijkstra, M., Haiman, Z., Mesinger, A., & Wyithe, J. S. B. 2008, MNRAS, 391, 1961, doi: 10.1111/j.1365-2966.2008.14031.x
Duncan, K., Conselice, C. J., Mortlock, A., et al. 2014, MNRAS, 444, 2960, doi: 10.1093/mnras/stu1622
Eilers, A.-C., Davies, F. B., Hennawi, J. F., et al. 2017, ApJ, 840, 24, doi: 10.3847/1538-4357/aa6c60
Eisenstein, D. J., & Loeb, A. 1995, ApJ, 443, 11, doi: 10.1086/175498
Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019, ApJL, 875, L6, doi: 10.3847/2041-8213/ab1141
Fan, X., White, R. L., Davis, M., et al.
2000, AJ, 120, 1167, doi: 10.1086/301534
Fan, X., Narayanan, V. K., Lupton, R. H., et al. 2001, AJ, 122, 2833, doi: 10.1086/324111
Fanidakis, N., Baugh, C. M., Benson, A. J., et al. 2011, MNRAS, 410, 53, doi: 10.1111/j.1365-2966.2010.17427.x
Freudling, W., Corbin, M. R., & Korista, K. T. 2003, ApJL, 587, L67, doi: 10.1086/375338
Frey, S., Mosoni, L., Paragi, Z., & Gurvits, L. I. 2003, MNRAS, 343, L20, doi: 10.1046/j.1365-8711.2003.06869.x
Frey, S., Paragi, Z., Mosoni, L., & Gurvits, L. I. 2005, A&A, 436, L13, doi: 10.1051/0004-6361:200500112
Gabányi, K. É., Frey, S., Gurvits, L. I., Paragi, Z., & Perger, K. 2018, Research Notes of the American Astronomical Society, 2, 200, doi: 10.3847/2515-5172/aaec82
Galárraga-Espinosa, D., Aghanim, N., Langer, M., Gouin, C., & Malavasi, N. 2020, A&A, 641, A173, doi: 10.1051/0004-6361/202037986
García-Vergara, C., Hennawi, J. F., Barrientos, L. F., & Rix, H.-W. 2017, ApJ, 848, 7, doi: 10.3847/1538-4357/aa8b69
Giavalisco, M., Ferguson, H. C., Koekemoer, A. M., et al. 2004, ApJL, 600, L93, doi: 10.1086/379232
Giroletti, M., & Polatidis, A.
2009, Astronomische Nachrichten, 330, 193, doi: 10.1002/asna.200811154
Gloudemans, A. J., Duncan, K. J., Röttgering, H. J. A., et al. 2021, arXiv e-prints, arXiv:2110.06222
Greene, J. E. 2012, Nature Communications, 3, 1304, doi: 10.1038/ncomms2314
Habouzit, M., Volonteri, M., Latif, M., Dubois, Y., & Peirani, S. 2016, MNRAS, 463, 529, doi: 10.1093/mnras/stw1924
Harikane, Y., Ouchi, M., Ono, Y., et al. 2019, ApJ, 883, 142, doi: 10.3847/1538-4357/ab2cd5
Harikane, Y., Ono, Y., Ouchi, M., et al. 2021, arXiv e-prints, arXiv:2108.01090
Hatch, N. A., Wylezalek, D., Kurk, J. D., et al. 2014, MNRAS, 445, 280, doi: 10.1093/mnras/stu1725
Higuchi, R., Ouchi, M., Ono, Y., et al. 2019, ApJ, 879, 28, doi: 10.3847/1538-4357/ab2192
Hirano, S., Hosokawa, T., Yoshida, N., et al. 2014, ApJ, 781, 60, doi: 10.1088/0004-637X/781/2/60
Hu, W., Wang, J., Infante, L., et al. 2021, Nature Astronomy, 5, 485, doi: 10.1038/s41550-020-01291-y
Husband, K., Bremer, M. N., Stanway, E. R., et al. 2013, MNRAS, 432, 2869, doi: 10.1093/mnras/stt642
Inami, H., Bacon, R., Brinchmann, J., et al. 2017, A&A, 608, A2, doi: 10.1051/0004-6361/201731195
Inayoshi, K., Haiman, Z., & Ostriker, J. P. 2016, MNRAS, 459, 3738, doi: 10.1093/mnras/stw836
Inayoshi, K., Visbal, E., & Haiman, Z. 2020, ARA&A, 58, 27, doi: 10.1146/annurev-astro-120419-014455
Jarvis, M. E., Harrison, C. M., Mainieri, V., et al. 2021, MNRAS, 503, 1780, doi: 10.1093/mnras/stab549
Jiang, L., Fan, X., Vestergaard, M., et al. 2007, AJ, 134, 1150, doi: 10.1086/520811
Jiang, L., Wu, J., Bian, F., et al. 2018, Nature Astronomy, 2, 962, doi: 10.1038/s41550-018-0587-9
Khostovan, A. A., Sobral, D., Mobasher, B., et al. 2019, MNRAS, 489, 555, doi: 10.1093/mnras/stz2149
Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511, doi: 10.1146/annurev-astro-082708-101811
Koushiappas, S. M., Bullock, J. S., & Dekel, A. 2004, MNRAS, 354, 292, doi: 10.1111/j.1365-2966.2004.08190.x
Kroupa, P., & Boily, C. M. 2002, MNRAS, 336, 1188, doi: 10.1046/j.1365-8711.2002.05848.x
Kroupa, P., Subr, L., Jerabkova, T., & Wang, L. 2020, MNRAS, 498, 5652, doi: 10.1093/mnras/staa2276
Kurk, J. D., Walter, F., Fan, X., et al. 2007, ApJ, 669, 32, doi: 10.1086/521596
Latif, M. A., Volonteri, M., & Wise, J. H. 2018, MNRAS, 476, 5016, doi: 10.1093/mnras/sty622
C J Latimer, A E Reines, A Bogdan, R Kraft, 10.3847/2041-8213/ac3af6ApJL. 92240Latimer, C. J., Reines, A. E., Bogdan, A., & Kraft, R. 2021, ApJL, 922, L40, doi: 10.3847/2041-8213/ac3af6 . Y Li, L Hernquist, B Robertson, 10.1086/519297ApJ. 665187Li, Y., Hernquist, L., Robertson, B., et al. 2007, ApJ, 665, 187, doi: 10.1086/519297 . G Lodato, P Natarajan, 10.1111/j.1365-2966.2006.10801.xMNRAS. 3711813Lodato, G., & Natarajan, P. 2006, MNRAS, 371, 1813, doi: 10.1111/j.1365-2966.2006.10801.x . A Lupi, Z Haiman, M Volonteri, 10.1093/mnras/stab692MNRAS. 5035046Lupi, A., Haiman, Z., & Volonteri, M. 2021, MNRAS, 503, 5046, doi: 10.1093/mnras/stab692 . M Maggiore, C Van Den Broeck, N Bartolo, 10.1088/1475-7516/2020/03/050JCAP. 50Maggiore, M., Van Den Broeck, C., Bartolo, N., et al. 2020, JCAP, 2020, 050, doi: 10.1088/1475-7516/2020/03/050 . S Malhotra, J E Rhoads, N Pirzkal, 10.1086/430047ApJ. 626666Malhotra, S., Rhoads, J. E., Pirzkal, N., et al. 2005, ApJ, 626, 666, doi: 10.1086/430047 . M Marinello, R A Overzier, H J A Röttgering, 10.1093/mnras/stz3333MNRAS. 492Marinello, M., Overzier, R. A., Röttgering, H. J. A., et al. 2020, MNRAS, 492, 1991, doi: 10.1093/mnras/stz3333 . C Mazzucchelli, E Bañados, R Decarli, 10.3847/1538-4357/834/1/83ApJ. 83483Mazzucchelli, C., Bañados, E., Decarli, R., et al. 2017, ApJ, 834, 83, doi: 10.3847/1538-4357/834/1/83 . R A Meyer, K Kakiichi, S E I Bosman, 10.1093/mnras/staa746MNRAS. 4941560Meyer, R. A., Kakiichi, K., Bosman, S. E. I., et al. 2020, MNRAS, 494, 1560, doi: 10.1093/mnras/staa746 . M Mezcua, 10.1142/S021827181730021XInternational Journal of Modern Physics D. 261730021Mezcua, M. 2017, International Journal of Modern Physics D, 26, 1730021, doi: 10.1142/S021827181730021X . M Mignoli, R Gilli, R Decarli, 10.1051/0004-6361/202039045A&A. 6421Mignoli, M., Gilli, R., Decarli, R., et al. 2020, A&A, 642, L1, doi: 10.1051/0004-6361/202039045 . E Momjian, C L Carilli, E Bañados, F Walter, B Venemans, 10.3847/1538-4357/aac76fApJ. 
86186Momjian, E., Carilli, C. L., Bañados, E., Walter, F., & Venemans, B. P. 2018, ApJ, 861, 86, doi: 10.3847/1538-4357/aac76f . L Morselli, M Mignoli, R Gilli, 10.1051/0004-6361/201423853A&A. 568Morselli, L., Mignoli, M., Gilli, R., et al. 2014, A&A, 568, A1, doi: 10.1051/0004-6361/201423853 . D J Mortlock, S J Warren, B P Venemans, 10.1038/nature10159Nature. 474616Mortlock, D. J., Warren, S. J., Venemans, B. P., et al. 2011, Nature, 474, 616, doi: 10.1038/nature10159 . P Natarajan, F Pacucci, A Ferrara, 10.3847/1538-4357/aa6330ApJ. 838117Natarajan, P., Pacucci, F., Ferrara, A., et al. 2017, ApJ, 838, 117, doi: 10.3847/1538-4357/aa6330 . M Neeleman, M Novak, B P Venemans, 10.3847/1538-4357/abe70fApJ. 911141Neeleman, M., Novak, M., Venemans, B. P., et al. 2021, ApJ, 911, 141, doi: 10.3847/1538-4357/abe70f . C P O&apos;dea, D J Saikia, 10.1007/s00159-021-00131-wA&A Rv. 29O'Dea, C. P., & Saikia, D. J. 2021, A&A Rv, 29, 3, doi: 10.1007/s00159-021-00131-w . K Ota, B P Venemans, Y Taniguchi, 10.3847/1538-4357/aab35bApJ. 856109Ota, K., Venemans, B. P., Taniguchi, Y., et al. 2018, ApJ, 856, 109, doi: 10.3847/1538-4357/aab35b . M Ouchi, K Shimasaku, M Akiyama, 10.1086/428499ApJL. 6201Ouchi, M., Shimasaku, K., Akiyama, M., et al. 2005, ApJL, 620, L1, doi: 10.1086/428499 . R A Overzier, 10.1007/s00159-016-0100-3A&A Rv. 2414Overzier, R. A. 2016, A&A Rv, 24, 14, doi: 10.1007/s00159-016-0100-3 . R A Overzier, R J Bouwens, G D Illingworth, M Franx, 10.1086/507678ApJL. 6485Overzier, R. A., Bouwens, R. J., Illingworth, G. D., & Franx, M. 2006a, ApJL, 648, L5, doi: 10.1086/507678 . R A Overzier, Q Guo, G Kauffmann, 10.1111/j.1365-2966.2008.14264.xMNRAS. 394577Overzier, R. A., Guo, Q., Kauffmann, G., et al. 2009a, MNRAS, 394, 577, doi: 10.1111/j.1365-2966.2008.14264.x . R A Overzier, G K Miley, R J Bouwens, 10.1086/498234ApJ. 63758Overzier, R. A., Miley, G. K., Bouwens, R. J., et al. 2006b, ApJ, 637, 58, doi: 10.1086/498234 . 
R A Overzier, X Shu, W Zheng, 10.1088/0004-637X/704/1/548ApJ. 704548Overzier, R. A., Shu, X., Zheng, W., et al. 2009b, ApJ, 704, 548, doi: 10.1088/0004-637X/704/1/548 . F Pacucci, A Ferrara, A Grazian, 10.1093/mnras/stw725MNRAS. 4591432Pacucci, F., Ferrara, A., Grazian, A., et al. 2016, MNRAS, 459, 1432, doi: 10.1093/mnras/stw725 . F Pacucci, A Ferrara, M Volonteri, G Dubus, 10.1093/mnras/stv2196MNRAS. 3771Pacucci, F., Ferrara, A., Volonteri, M., & Dubus, G. 2015, MNRAS, 454, 3771, doi: 10.1093/mnras/stv2196 . F Pacucci, A Loeb, 10.1093/mnras/stab3071MNRAS. Pacucci, F., & Loeb, A. 2021, MNRAS, doi: 10.1093/mnras/stab3071 . A O Petric, C L Carilli, F Bertoldi, 10.1086/375645AJ. 12615Petric, A. O., Carilli, C. L., Bertoldi, F., et al. 2003, AJ, 126, 15, doi: 10.1086/375645 . O Piana, P Dayal, M Volonteri, T R Choudhury, 10.1093/mnras/staa3363MNRAS. 5002146Piana, O., Dayal, P., Volonteri, M., & Choudhury, T. R. 2021, MNRAS, 500, 2146, doi: 10.1093/mnras/staa3363 . V M Placco, I U Roederer, Y S Lee, 10.3847/2041-8213/abf93dApJL. 91232Placco, V. M., Roederer, I. U., Lee, Y. S., et al. 2021, ApJL, 912, L32, doi: 10.3847/2041-8213/abf93d . M Punturo, M Abernathy, F Acernese, 10.1088/0264-9381/27/19/194002Classical and Quantum Gravity. 27194002Punturo, M., Abernathy, M., Acernese, F., et al. 2010, Classical and Quantum Gravity, 27, 194002, doi: 10.1088/0264-9381/27/19/194002 . J A Regan, T P Downes, M Volonteri, 10.1093/mnras/stz1045MNRAS. 4863892Regan, J. A., Downes, T. P., Volonteri, M., et al. 2019, MNRAS, 486, 3892, doi: 10.1093/mnras/stz1045 . J A Regan, E Visbal, J H Wise, 10.1038/s41550-017-0075Nature Astronomy. 175Regan, J. A., Visbal, E., Wise, J. H., et al. 2017, Nature Astronomy, 1, 0075, doi: 10.1038/s41550-017-0075 . J A Regan, J H Wise, B W Shea, M L Norman, 10.1093/mnras/staa035MNRAS. 4923021Regan, J. A., Wise, J. H., O'Shea, B. W., & Norman, M. L. 2020, MNRAS, 492, 3021, doi: 10.1093/mnras/staa035 . 
G T Richards, M Lacy, L J Storrie-Lombardi, 10.1086/506525ApJS. 166470Richards, G. T., Lacy, M., Storrie-Lombardi, L. J., et al. 2006, ApJS, 166, 470, doi: 10.1086/506525 . D A Riechers, C L Carilli, P L Capak, 10.1088/0004-637X/796/2/84ApJ. 79684Riechers, D. A., Carilli, C. L., Capak, P. L., et al. 2014, ApJ, 796, 84, doi: 10.1088/0004-637X/796/2/84 . S Rojas-Ruiz, E Bañados, M Neeleman, 10.3847/1538-4357/ac1a13ApJ. 920150Rojas-Ruiz, S., Bañados, E., Neeleman, M., et al. 2021, ApJ, 920, 150, doi: 10.3847/1538-4357/ac1a13 . A Saxena, M Marinello, R A Overzier, 10.1093/mnras/sty1996MNRAS. 4802733Saxena, A., Marinello, M., Overzier, R. A., et al. 2018, MNRAS, 480, 2733, doi: 10.1093/mnras/sty1996 . Y Shen, J Wu, L Jiang, 10.3847/1538-4357/ab03d9ApJ. 87335Shen, Y., Wu, J., Jiang, L., et al. 2019, ApJ, 873, 35, doi: 10.3847/1538-4357/ab03d9 . T Shibuya, M Ouchi, A Konno, 10.1093/pasj/psx122PASJ. 7014Shibuya, T., Ouchi, M., Konno, A., et al. 2018, PASJ, 70, S14, doi: 10.1093/pasj/psx122 . K Shimasaku, T Izumi, 10.3847/2041-8213/ab053fApJL. 87229Shimasaku, K., & Izumi, T. 2019, ApJL, 872, L29, doi: 10.3847/2041-8213/ab053f . D Sijacki, V Springel, M G Haehnelt, 10.1111/j.1365-2966.2009.15452.xMNRAS. 400100Sijacki, D., Springel, V., & Haehnelt, M. G. 2009, MNRAS, 400, 100, doi: 10.1111/j.1365-2966.2009.15452.x . B D Smith, J A Regan, T P Downes, 10.1093/mnras/sty2103MNRAS. 4803762Smith, B. D., Regan, J. A., Downes, T. P., et al. 2018, MNRAS, 480, 3762, doi: 10.1093/mnras/sty2103 . D Sobral, S Santos, J Matthee, 10.1093/mnras/sty378MNRAS. 4764725Sobral, D., Santos, S., Matthee, J., et al. 2018, MNRAS, 476, 4725, doi: 10.1093/mnras/sty378 . V Springel, S D M White, A Jenkins, 10.1038/nature03597Nature. 435629Springel, V., White, S. D. M., Jenkins, A., et al. 2005, Nature, 435, 629, doi: 10.1038/nature03597 . M Stefanon, R J Bouwens, I Labbé, arXiv:2103.16571arXiv e-printsStefanon, M., Bouwens, R. J., Labbé, I., et al. 2021, arXiv e-prints, arXiv:2103.16571. 
https://arxiv.org/abs/2103.16571 . D Stern, P B Hall, L F Barrientos, 10.1086/379206ApJL. 59639Stern, D., Hall, P. B., Barrientos, L. F., et al. 2003, ApJL, 596, L39, doi: 10.1086/379206 . M Stiavelli, S G Djorgovski, C Pavlovsky, 10.1086/429406ApJL. 6221Stiavelli, M., Djorgovski, S. G., Pavlovsky, C., et al. 2005, ApJL, 622, L1, doi: 10.1086/429406 . C M S Straatman, I Labbé, L R Spitler, 10.1088/2041-8205/808/1/L29ApJL. 80829Straatman, C. M. S., Labbé, I., Spitler, L. R., et al. 2015, ApJL, 808, L29, doi: 10.1088/2041-8205/808/1/L29 . J Toshikawa, N Kashikawa, K Ota, 10.1088/0004-637X/750/2/137ApJ. 750137Toshikawa, J., Kashikawa, N., Ota, K., et al. 2012, ApJ, 750, 137, doi: 10.1088/0004-637X/750/2/137 . J Toshikawa, N Kashikawa, R Overzier, 10.1088/0004-637X/792/1/15ApJ. 79215Toshikawa, J., Kashikawa, N., Overzier, R., et al. 2014, ApJ, 792, 15, doi: 10.1088/0004-637X/792/1/15 . 10.3847/0004-637X/826/2/114ApJ. 826-. 2016, ApJ, 826, 114, doi: 10.3847/0004-637X/826/2/114 . B Trakhtenbrot, P Lira, H Netzer, 10.3389/fspas.2017.00049Frontiers in Astronomy and Space Sciences. 449Trakhtenbrot, B., Lira, P., Netzer, H., et al. 2017, Frontiers in Astronomy and Space Sciences, 4, 49, doi: 10.3389/fspas.2017.00049 . M Umemura, A Loeb, E L Turner, 10.1086/173499ApJ. 419459Umemura, M., Loeb, A., & Turner, E. L. 1993, ApJ, 419, 459, doi: 10.1086/173499 . Y Utsumi, T Goto, N Kashikawa, 10.1088/0004-637X/721/2/1680ApJ. 7211680Utsumi, Y., Goto, T., Kashikawa, N., et al. 2010, ApJ, 721, 1680, doi: 10.1088/0004-637X/721/2/1680 . B P Venemans, F Walter, L Zschaechner, 10.3847/0004-637X/816/1/37ApJ. 81637Venemans, B. P., Walter, F., Zschaechner, L., et al. 2016, ApJ, 816, 37, doi: 10.3847/0004-637X/816/1/37 . B P Venemans, H J A Röttgering, G K Miley, 10.1051/0004-6361:20053941A&A. 461Venemans, B. P., Röttgering, H. J. A., Miley, G. K., et al. 2007, A&A, 461, 823, doi: 10.1051/0004-6361:20053941 . M Vestergaard, P S Osmer, 10.1088/0004-637X/699/1/800ApJ. 
699800Vestergaard, M., & Osmer, P. S. 2009, ApJ, 699, 800, doi: 10.1088/0004-637X/699/1/800 . E Visbal, Z Haiman, G L Bryan, 10.1093/mnras/stu1794MNRAS. 4451056Visbal, E., Haiman, Z., & Bryan, G. L. 2014, MNRAS, 445, 1056, doi: 10.1093/mnras/stu1794 . F Vito, W N Brandt, F Ricci, 10.1051/0004-6361/202140399A&A. 649133Vito, F., Brandt, W. N., Ricci, F., et al. 2021, A&A, 649, A133, doi: 10.1051/0004-6361/202140399 . M Volonteri, 10.1126/science.1220843Science. 337544Volonteri, M. 2012, Science, 337, 544, doi: 10.1126/science.1220843 . M Volonteri, M Habouzit, M Colpi, 10.1038/s42254-021-00364-9Nature Reviews Physics. 3732Volonteri, M., Habouzit, M., & Colpi, M. 2021, Nature Reviews Physics, 3, 732, doi: 10.1038/s42254-021-00364-9 . M Volonteri, M J Rees, 10.1086/466521ApJ. 633624Volonteri, M., & Rees, M. J. 2005, ApJ, 633, 624, doi: 10.1086/466521 . F Wang, F B Davies, J Yang, 10.3847/1538-4357/ab8c45ApJ. 89623Wang, F., Davies, F. B., Yang, J., et al. 2020, ApJ, 896, 23, doi: 10.3847/1538-4357/ab8c45 . D J Whalen, M Mezcua, S J Patrick, A Meiksin, M A Latif, 10.3847/2041-8213/ac35e6ApJL. 92239Whalen, D. J., Mezcua, M., Patrick, S. J., Meiksin, A., & Latif, M. A. 2021, ApJL, 922, L39, doi: 10.3847/2041-8213/ac35e6 . D J Whalen, M Surace, C Bernhardt, 10.3847/2041-8213/ab9d29ApJL. 89716Whalen, D. J., Surace, M., Bernhardt, C., et al. 2020, ApJL, 897, L16, doi: 10.3847/2041-8213/ab9d29 . J H Wise, J A Regan, B W Shea, 10.1038/s41586-019-0873-4Nature. 56685Wise, J. H., Regan, J. A., O'Shea, B. W., et al. 2019, Nature, 566, 85, doi: 10.1038/s41586-019-0873-4 . J Wolf, K Nandra, M Salvato, 10.1051/0004-6361/202039724A&A. 6475Wolf, J., Nandra, K., Salvato, M., et al. 2021, A&A, 647, A5, doi: 10.1051/0004-6361/202039724 . T E Woods, S Patrick, J S Elford, D J Whalen, A Heger, 10.3847/1538-4357/abfaf9ApJ. 915110Woods, T. E., Patrick, S., Elford, J. S., Whalen, D. J., & Heger, A. 2021a, ApJ, 915, 110, doi: 10.3847/1538-4357/abfaf9 . 
T E Woods, C J Willott, J A Regan, 10.3847/2041-8213/ac2a45ApJL. 92022Woods, T. E., Willott, C. J., Regan, J. A., et al. 2021b, ApJL, 920, L22, doi: 10.3847/2041-8213/ac2a45 . T E Woods, B Agarwal, V Bromm, 10.1017/pasa.2019.14PASA. 3627Woods, T. E., Agarwal, B., Bromm, V., et al. 2019, PASA, 36, e027, doi: 10.1017/pasa.2019.14 . E L Wright, 10.1086/510102PASP. 1181711Wright, E. L. 2006, PASP, 118, 1711, doi: 10.1086/510102 . J Yang, F Wang, X Fan, 10.3847/1538-4357/abbc1bApJ. 90426Yang, J., Wang, F., Fan, X., et al. 2020, ApJ, 904, 26, doi: 10.3847/1538-4357/abbc1b . M Yue, X Fan, J Yang, F Wang, arXiv:2110.12315arXiv e-printsYue, M., Fan, X., Yang, J., & Wang, F. 2021, arXiv e-prints, arXiv:2110.12315. https://arxiv.org/abs/2110.12315 . W Zheng, R A Overzier, R J Bouwens, 10.1086/500167ApJ. 640574Zheng, W., Overzier, R. A., Bouwens, R. J., et al. 2006, ApJ, 640, 574, doi: 10.1086/500167
GENERALIZING THE NOTION OF KOSZUL ALGEBRA

Thomas Cassidy (Department of Mathematics, Bucknell University, Lewisburg, Pennsylvania 17837)
Brad Shelton (Department of Mathematics, University of Oregon, Eugene, Oregon 97403-1222)

Abstract. We introduce a generalization of the notion of a Koszul algebra, which includes graded algebras with relations in different degrees, and we establish some of the basic properties of these algebras. This class is closed under twists, twisted tensor products, regular central extensions and Ore extensions. We explore the monomial algebras in this class and we include some well-known examples of algebras that fall into this class.

doi: 10.1007/s00209-007-0263-8
arXiv: 0704.3752 [math.RA] (27 Apr 2007)
Introduction

Koszul algebras were originally defined by Priddy in 1970 [15] and have since revealed important applications in algebraic geometry, Lie theory, quantum groups, algebraic topology and, recently, combinatorics (cf. [9]). The rich structure and long history of Koszul algebras are clearly detailed in [13]. There exist numerous equivalent definitions of a Koszul algebra (see for example [2]). For our purposes, there are two particular equivalent definitions that motivate our discussion.

Let K be a field and let A be a connected graded K-algebra, finitely generated in degree 1. Let $E(A) = \bigoplus E^{n,m}(A) = \bigoplus \operatorname{Ext}^{n,m}_A(K,K)$ be the associated bigraded Yoneda algebra of A (where n is the cohomology degree and $-m$ is the internal degree inherited from the grading on A). Set $E^n(A) = \bigoplus_m E^{n,m}(A)$. Then A is said to be Koszul if it satisfies either of the following (equivalent) definitions:

(1) Diagonal purity: $E^{n,m}(A) = 0$ unless $n = m$.
(2) Low-degree generation: $E(A)$ is generated as an algebra by $E^{1,1}(A)$.

Stated this way, these are very strong conditions, and either of them immediately implies that a Koszul algebra must be quadratic. Recently, Berger introduced the class of N-Koszul algebras [3], for which each $E^n(A)$ is supported in a single internal degree.
Purity then becomes a powerful homological tool; see for example [4] or [8]. In [8] it was shown that N-Koszulity can also be rephrased in terms of low-degree generation. It is shown there, as we will reprove in 4.6, that A is N-Koszul if and only if:

(1) $E(A)$ is generated as an algebra by $E^1(A)$ and $E^2(A)$, and
(2) A has defining relations all of degree N.

Several works ([3], [8], [4], [7] among others) demonstrate similarities between N-Koszul algebras and Koszul algebras, and so provide evidence that the N-Koszul algebras should be included in a generalization of Koszulity. However, the restriction to N-homogeneity is somewhat artificial and poses serious problems. In particular, the class of N-Koszul algebras, for N > 2, is not closed with respect to many standard operations, such as graded Ore extensions, regular normal extensions, or tensor products.

The goal of this paper is to give an alternate generalization of the notion of Koszul with four good properties: it should allow for graded algebras with relations in more than one degree; it should, at minimum, be closed under tensor products and regular central extensions; it should include the N-Koszul algebras; and it should collapse to the definition of Koszul in the case of quadratic algebras. As we will see, the following simple definition does all this, and more.

Definition 1.1. The graded algebra A is said to be $K_2$ if $E(A)$ is generated as an algebra by $E^1(A)$ and $E^2(A)$.

It is clear that this is the next most restrictive definition one could make, following Koszul and N-Koszul, since for a non-Koszul algebra $E(A)$ could never be generated by anything less than $E^1(A)$ and $E^2(A)$. However, this definition sacrifices homological purity. Surprisingly, many statements that one would want in a generalization of Koszulity hold even without the convenience of homological purity. The same definition, under the name semi-Koszul, was used in [11] to study certain Hopf algebras over finite fields.
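For reference, the purity condition characterizing N-Koszul algebras can be written out explicitly; the following display is our addition, recording the standard jump function from the N-Koszul literature (cf. [3], [8]):

```latex
E^{n,m}(A) = 0 \quad \text{unless } m = \delta(n), \qquad
\delta(n) =
\begin{cases}
  \dfrac{nN}{2} & n \text{ even},\\[6pt]
  \dfrac{(n-1)N}{2} + 1 & n \text{ odd}.
\end{cases}
```

For N = 2 this reduces to $\delta(n) = n$, recovering the diagonal purity of ordinary Koszul algebras.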
Clearly this definition captures the Koszul and N-Koszul algebras. But there are many important examples of $K_2$ algebras which have defining relations in more than one degree and so cannot be N-Koszul. For example, the homogeneous coordinate ring of any projective complete intersection is $K_2$ (Corollary 9.2). Any Artin-Schelter regular algebra of global dimension four on three generators (refer to [1] or [10]) will have two quadratic relations and two cubic relations. We show in Theorem 4.7 that these algebras are all $K_2$.

The main results in this paper describe operations that preserve the class of $K_2$ algebras. For example, the class is closed under twists and twisted tensor products, as proved in section 3. Most of the results require technical hypotheses, and are therefore not stated explicitly in this introduction. Along the way we give an alternative description of $K_2$ algebras in terms of minimal projective resolutions (Theorem 4.4) and define $K_2$-modules. This is analogous to the standard Koszul theory of modules with linear projective resolutions. As a corollary to this result we get Theorem 5.3, a simple combinatorial algorithm for establishing which monomial algebras are $K_2$. The last five sections of the paper deal with change of rings theorems. Throughout the paper we include many examples of both $K_2$ and non-$K_2$ algebras that we hope are both interesting and illustrative.

Notation

Let K be a field, V a finite dimensional K-vector space and T(V) the usual N-graded tensor algebra over V. Throughout this paper, the phrase graded algebra will refer to a connected graded K-algebra of the form A = T(V)/I, where I is an ideal of T(V) that is generated by a (minimal) finite collection of homogeneous elements $\{r_1, \ldots, r_s\}$ of degree at least 2. We do not assume that the relations $r_i$ all have the same degree.
We put $I' = I \otimes V + V \otimes I$, so that $I'_n$ is the set of elements of I of degree n that are generated by elements of I of strictly smaller degree. We note that $I'_m = I_m$ for all m sufficiently large, and that $I'$ is a proper sub-ideal of I.

Definition 2.1. An element $r \in I$ is said to be essential (in I) if r is not in $I'$, or equivalently r does not vanish in $I/I'$.

For any Z-graded vector space W, we use W(n) to denote the same vector space with degree-shifted grading $W(n)_k = W_{k+n}$. It is sometimes convenient to write $W(n_1, n_2, \ldots, n_k)$ for $\bigoplus_{j=1}^{k} W(n_j)$. We denote the projective dimension of W as an A-module by $\mathrm{pd}_A(W)$.

Given a graded algebra A, as above, we identify the ground field K with the trivial (left or right) A-module $A/A_+$ and we write $\epsilon : A \to K$ for the corresponding augmentation. Our purpose is to study certain properties of the bigraded Yoneda algebra of A, which we denote $E(A)$. This is defined as $E(A) = \bigoplus_{n,k \ge 0} E^{n,k}(A)$, where
$$E^{n,k}(A) = \operatorname{Ext}^{n,k}_A(K,K) = \operatorname{Ext}^n_A(K,K)_{-k}.$$
In the bigrading $E^{n,k}(A)$, n is the cohomology degree and $-k$ is the internal degree (inherited from the grading on A). Note that $E(A)$ is supported in non-positive internal degrees. We also write $E^n(A)$ for $\bigoplus_k E^{n,k}(A)$. The cup product on the ring $E(A)$ is denoted ⋆. The Ext groups involved here are calculated in the category of Z-graded and locally finite A-modules with graded Hom spaces.

Given two graded algebras A and B and a degree zero K-algebra homomorphism $\sigma : A \to B$, we will denote the induced algebra homomorphism from $E(B)$ to $E(A)$ by $\sigma^* : E(B) \to E(A)$. In particular, if σ is a graded automorphism of A, then $\sigma^*$ is a bi-graded automorphism of $E(A)$.

Operations on the class $K_2$

In this section we show that the class of $K_2$ algebras is closed under twists by automorphisms and twisted tensor products. Let σ be an automorphism of a graded algebra A = T(V)/I. Then we have the notion of the twisted algebra $A^\sigma$, as studied extensively in [18].
This algebra has the same underlying graded vector space structure as A, with the product of homogeneous elements $a \cdot b = a\,\sigma^{|a|}(b)$. (Here $|a|$ denotes the degree of a.)

Theorem 3.1. Let A be an N-graded algebra and σ a graded automorphism of A. Then A is $K_2$ if and only if $A^\sigma$ is $K_2$.

Proof. The Theorem follows immediately from the claim that $E(A^\sigma) = E(A)^{(\sigma^{-1})^*}$, where the twisting is done with respect to the internal grading. We sketch a proof of this claim. The spaces in the bar complex computing $E(A)$ and $E(A^\sigma)$ are the same, namely $\Omega^n := \operatorname{Hom}_K(A_+^{\otimes n}, K)$. Let d be the differential on $\Omega^n$ which determines $E(A)$, and $d_\sigma$ the differential which determines $E(A^\sigma)$. Then there is an explicit homotopy $\mu^* : (\Omega^n, d) \to (\Omega^n, d_\sigma)$ between the two complexes. This homotopy is the dual of a vector space isomorphism $\mu \in \operatorname{Aut}_K(A_+^{\otimes n})$ with the following complicated formula. Let $\bar a = a_1 \otimes \cdots \otimes a_n$ be a multihomogeneous element of $(A_+)^{\otimes n}$. Define $|\bar a, i| = \sum_{j=i}^{n} |a_j|$ for $1 \le i \le n$. Then we define:
$$\mu(a_1 \otimes \cdots \otimes a_n) = \mu(\bar a) = \sigma^{|\bar a, 1|}(a_1) \otimes \sigma^{|\bar a, 2|}(a_2) \otimes \cdots \otimes \sigma^{|\bar a, n|}(a_n).$$
One checks that $d(f \circ \mu) = (d_\sigma f) \circ \mu$. This shows that $\mu^*$ induces a bigraded vector space isomorphism from $E(A)$ to $E(A^\sigma)$. Let ⋆ be the usual cup product in $E(A)$ and $\star_\sigma$ the cup product in $E(A^\sigma)$. One then calculates explicitly that for bigraded elements f and g of $E(A)$,
$$\mu^* f \star_\sigma \mu^* g = \mu^*\big((\sigma^*)^{-|g|}(f) \star g\big),$$
where $|g|$ is the internal degree of g. This shows that $E(A^\sigma)$ is isomorphic to $E(A)^{(\sigma^{-1})^*}$ and proves the claim.

Let A and B be graded algebras and let σ be a graded automorphism of A. Then we can define an associative multiplication on the K-tensor product $B \otimes A$ by the rule
$$(b_1 \otimes a_1) \cdot (b_2 \otimes a_2) = b_1 b_2 \otimes \sigma^{|b_2|}(a_1) a_2$$
whenever $a_1, a_2, b_1, b_2$ are homogeneous. We denote the graded algebra thus formed by $B \otimes_\sigma A$ and write $b \otimes_\sigma a$ for the element $b \otimes a$ in the algebra.

Theorem 3.2. With A, B and σ as above, $B \otimes_\sigma A$ is $K_2$ if and only if both A and B are $K_2$.

Proof.
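The twisted product is easy to experiment with computationally. The following sketch is our illustration, not part of the paper: for the simplest case A = K[x] with automorphism σ(x) = qx, the rule $a \cdot b = a\,\sigma^{|a|}(b)$ reduces to $x^m \cdot x^n = q^{mn}\, x^{m+n}$.

```python
from fractions import Fraction

def twisted_product(p, r, q):
    """Multiply two elements of the twist K[x]^sigma with sigma(x) = q*x.

    Elements are dicts {degree: coefficient}.  The twisted product is
    a . b = a * sigma^{|a|}(b), so on monomials x^m . x^n = q^(m*n) x^(m+n).
    """
    out = {}
    for m, cm in p.items():
        for n, cn in r.items():
            d = m + n
            out[d] = out.get(d, 0) + cm * cn * q ** (m * n)
    return {d: c for d, c in out.items() if c != 0}

x = {1: 1}
q = Fraction(2)
xx = twisted_product(x, x, q)   # x . x = q * x^2 in the twist
assert xx == {2: Fraction(2)}
```

The associativity of this product (e.g. $(x \cdot x) \cdot x = x \cdot (x \cdot x) = q^3 x^3$) reflects the general fact that a twist of an associative graded algebra is again associative.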
Arguing in a similar fashion to the previous Theorem, one proves the formula $E(B \otimes_\sigma A) = E(B) \otimes_{(\sigma^{-1})^*} E(A)$. (See also [13], 3.1.1.) The result follows immediately.

Conditions on Minimal Projective Resolutions

It is well known that a graded algebra A is Koszul if and only if the trivial module admits a linear free resolution; see for example [13]. A simple way to express this is to say that all of the matrix entries of the maps in any minimal projective resolution of the trivial A-module K have degree 1. In this section we prove a similar, but much more complicated, version of this statement for $K_2$ algebras, and then we use this to determine which 4-dimensional Artin-Schelter regular algebras are $K_2$.

We continue with the notation A = T(V)/I where I is a graded ideal generated (minimally) by homogeneous elements $r_1, r_2, \ldots, r_s$ of degrees $n_1, \ldots, n_s$ respectively (all greater than 1). Let R denote both the set $\{r_1, \ldots, r_s\}$ and the transpose of the $1 \times s$ matrix $(r_1\ r_2\ \cdots\ r_s)$. We fix a minimal projective resolution (Q, d) of the trivial left A-module K. It is clear that if A is a $K_2$ algebra then every $Q_n$ is finitely generated over A. Hence we will assume, unless otherwise stated, that every $Q_n$ is finitely generated. For each $n \ge 0$, we can define integers $t_n$ and $m_{1,n}, \ldots, m_{t_n,n}$ by $Q_n = A(m_{1,n}, \ldots, m_{t_n,n})$. (Note that $m_{k,n} \le -n$ for all k.)

We choose a homogeneous A-basis for each $Q_n$. With respect to these bases, we let $M_n$ be the matrix representing $d_n : Q_n \to Q_{n-1}$ and let $\hat M_n$ be a lift of $M_n$ to a matrix over T(V) with homogeneous entries. The entries of $\hat M_1$ give a basis for V, and we may choose our basis of $Q_2$, as well as the lift $\hat M_2$, so that $\hat M_2 \hat M_1 = R$. (When multiplying matrices defined over T(V) we suppress the tensor product notation.) The following Lemma is obvious.

Lemma 4.1. Let J be a $1 \times s$ matrix with homogeneous entries from T(V). Then JR is essential in I if and only if some entry of J is a unit (i.e. a nonzero scalar).

Before we state our theorem we recall the minimal projective resolution construction of the cup product on $E(A)$.
Recall that $E^n(A) = \operatorname{Hom}_A(Q_n, K)$, by minimality of the resolution. To multiply $f \in E^n(A)$ by $g \in E^k(A)$ we consider the following commutative diagram, where the existence of the downward maps $F_j$ is assured by projectivity:
$$\begin{array}{ccccccccc}
Q_{n+k} & \xrightarrow{\,M_{n+k}\,} & Q_{n+k-1} & \to \cdots \to & Q_{n+1} & \xrightarrow{\,M_{n+1}\,} & Q_n & & \\
{\scriptstyle F_k}\downarrow & & {\scriptstyle F_{k-1}}\downarrow & & {\scriptstyle F_1}\downarrow & & {\scriptstyle F_0}\downarrow & \searrow {\scriptstyle f} & \\
Q_k & \xrightarrow{\,M_k\,} & Q_{k-1} & \to \cdots \to & Q_1 & \xrightarrow{\,M_1\,} & Q_0 & \xrightarrow{\,\epsilon\,} & K \\
{\scriptstyle g}\downarrow & & & & & & & & \\
K & & & & & & & &
\end{array}$$
Then the product of g and f in $E(A)$ is $g \star f := g \circ F_k \in \operatorname{Hom}_A(Q_{n+k}, K)$.

It will be convenient to work with a graded basis of $E(A)$. Let $\{\varepsilon_{k,n}\}_{k=1}^{t_n}$, with $\varepsilon_{k,n} : Q_n \to K$, be the graded basis of $E^n(A)$ dual to a minimal set of homogeneous A-generators of $Q_n$, so that $\varepsilon_{k,n}$ has cohomology degree n and internal degree $m_{k,n}$ (since the corresponding A-generator has degree $-m_{k,n}$). For $n \ge 1$ let $U_n = E^2(A) \star E^{n-2}(A)$ and $V_n = E^1(A) \star E^{n-1}(A)$. We note that the algebra A is $K_2$ if and only if $E^n(A) = U_n + V_n$ for all $n \ge 3$. (This requires an inductive argument.)

We set some temporary notation to analyze the diagrams dictating the relationship of $E^n(A)$ to $U_n$ and to $V_n$. For every n and every $1 \le k \le t_n$, let $e_{k,n}$ be the k-th column of the $t_n \times t_n$ identity matrix. We can define matrices $G_{k,n}$ and $J_{k,n}$ with entries in A by commutativity of the following diagram:
$$\begin{array}{ccccccc}
Q_{n+2} & \xrightarrow{\,M_{n+2}\,} & Q_{n+1} & \xrightarrow{\,M_{n+1}\,} & Q_n & & \\
{\scriptstyle J_{k,n}}\downarrow & & {\scriptstyle G_{k,n}}\downarrow & & {\scriptstyle e_{k,n}}\downarrow & \searrow {\scriptstyle \varepsilon_{k,n}} & \\
Q_2 & \xrightarrow{\,M_2\,} & Q_1 & \xrightarrow{\,M_1\,} & A & \xrightarrow{\,\epsilon\,} & K
\end{array} \qquad (*)$$
The entries of $J_{k,n}$ and $G_{k,n}$ are all homogeneous, but the actual degrees will not be important. Since the matrix $e_{k,n}$ is scalar and the matrix $M_1$ is linear, we may lift the matrix $G_{k,n}$ to a (unique) homogeneous matrix $\hat G_{k,n}$ over T(V) which satisfies $\hat G_{k,n} \hat M_1 = \hat M_{n+1} e_{k,n}$. We also choose a homogeneous lift $\hat J_{k,n}$ of $J_{k,n}$ in T(V). Let $R_{k,n} = \hat M_{n+2} \hat G_{k,n} - \hat J_{k,n} \hat M_2$. We observe that $R_{k,n}$ is a matrix of homogeneous elements of I. With all of this notation in hand we can state two technical lemmas, the first of which concerns $U_n$.

Lemma 4.2. (1) Let $1 \le p \le t_2$ and $1 \le k \le t_n$, and define scalars $\lambda_j$ by $\varepsilon_{p,2} \star \varepsilon_{k,n} = \sum_{j=1}^{t_{n+2}} \lambda_j\, \varepsilon_{j,n+2}$.
Then for $1 \le i \le t_{n+2}$, $\lambda_i \ne 0$ if and only if the entry in row i and column p of $J_{k,n}$ is a unit.
(2) Let $1 \le i \le t_{n+2}$. Then there exist p and k as in (1) such that $\lambda_i \ne 0$ if and only if some entry in row i of $\hat M_{n+2} \hat M_{n+1}$ is essential in I.

Proof. The first statement is clear (and the required unit is $\lambda_i$). For the second statement, suppose first that the required p and k exist. Then the entry in row i and column p of $J_{k,n}$ is the unit $\lambda_i$. This must also be true of the matrix $\hat J_{k,n}$. Let $f_i$ be row i of the $t_{n+2} \times t_{n+2}$ identity matrix. Then $f_i \hat J_{k,n} R := r$ is essential in I by 4.1. But we have
$$r = f_i(\hat M_{n+2} \hat G_{k,n} - R_{k,n})\hat M_1 = f_i \hat M_{n+2} \hat M_{n+1} e_{k,n} - f_i R_{k,n} \hat M_1.$$
Since $R_{k,n}$ has entries from I, the element $f_i R_{k,n} \hat M_1$ cannot be essential. It follows that $f_i \hat M_{n+2} \hat M_{n+1} e_{k,n}$ is essential, as required.

Conversely, suppose that an element of row i of $\hat M_{n+2} \hat M_{n+1}$ is essential, say the element in column k. Call the element r. Then
$$r = f_i \hat M_{n+2} \hat M_{n+1} e_{k,n} = f_i(\hat J_{k,n} \hat M_2 + R_{k,n})\hat M_1.$$
Since no entry of $f_i R_{k,n} \hat M_1$ is essential, we see that $f_i \hat J_{k,n} \hat M_2 \hat M_1 = f_i J_{k,n} R$ is essential. From Lemma 4.1, some entry of $f_i J_{k,n}$ is thus a unit, as required.

Our second technical lemma concerns $V_n$; we leave its (easier) proof to the reader.

Lemma 4.3. (1) Let $1 \le p \le t_1$ and $1 \le k \le t_n$, and define scalars $\mu_j$ by $\varepsilon_{p,1} \star \varepsilon_{k,n} = \sum_{j=1}^{t_{n+1}} \mu_j\, \varepsilon_{j,n+1}$. Then for $1 \le i \le t_{n+1}$, $\mu_i \ne 0$ if and only if the entry in row i and column p of $G_{k,n}$ is a unit.
(2) Let $1 \le i \le t_{n+1}$. Then there exist p and k as in (1) such that $\mu_i \ne 0$ if and only if some entry in row i of $\hat M_{n+1}$ is (nonzero) linear.

To state our theorem we need just a bit more notation. For each $n \ge 2$, let $L_n$ be the image of $\hat M_n$ modulo the ideal $T(V)_{\ge 2}$, and let $E_n$ be the image of $\hat M_n \hat M_{n-1}$ modulo $I'$. We think of $L_n$ as the linear part of $\hat M_n$ and $E_n$ as the essential part of $\hat M_n \hat M_{n-1}$. Finally, let $[L_n : E_n]$ be the $t_n \times (t_{n-1} + t_{n-2})$ matrix obtained by concatenating the rows of $L_n$ and $E_n$.
(Note that the entries in any given column of this matrix are all in the same vector space, either V or $I/I'$.)

Theorem 4.4. Let A be a graded K-algebra as above, and $Q \to K \to 0$ a minimal graded projective resolution. Then the following are equivalent:
(1) A is $K_2$;
(2) for $2 < n \le \mathrm{pd}_A(K)$, $Q_n$ is finitely generated and the rows of the matrix $[L_n : E_n]$ are linearly independent over K.

Proof. Suppose first that for some specific n > 2 the rows of the matrix $[L_n : E_n]$ are linearly dependent. By changing basis (homogeneously) in the free module $Q_n = A(m_{1,n}, \ldots, m_{t_n,n})$, we may assume that the first row of the matrix $[L_n : E_n]$ is zero. By Lemmas 4.2 and 4.3, the coefficient of the cohomology class $\varepsilon_{1,n}$ in every element of $U_n$ or $V_n$ is then zero. Hence $\varepsilon_{1,n} \notin U_n + V_n$ and A is not $K_2$.

Conversely, suppose that A is not $K_2$ and fix the smallest n for which $E^n(A) \ne U_n + V_n$. Choose a homogeneous basis for $U_n + V_n$, say $\varepsilon_{1,n}, \ldots, \varepsilon_{k,n}$, $k < t_n$. Extend this to a full homogeneous basis of $E^n(A)$ by choosing $\varepsilon_{k+1,n}, \ldots, \varepsilon_{t_n,n}$. This basis then corresponds to a homogeneous A-basis for $Q_n$. We may assume the matrices representing the minimal resolution are calculated with respect to this new basis. But then, again by Lemmas 4.2 and 4.3, the last row of $[L_n : E_n]$ must be zero.

An algebra will often fail the $K_2$ hypothesis simply because of the existence of a cohomology class in the "wrong" internal degree, i.e. an internal degree that could not be generated by the internal degrees of elements of lower cohomological degree. For example, the algebra $B = K\langle x, y\rangle / \langle x^2 - xy,\ y^2\rangle$ is a well-known example of a non-Koszul quadratic (and hence non-$K_2$) algebra. Since the algebra is quadratic, the portion of $E(B)$ generated by $E^1(B)$ and $E^2(B)$ is exactly $\bigoplus_n E^{n,n}(B)$. But $E^{3,4}(B) \ne 0$, so we have cohomology in the "wrong" internal degree.
However, an algebra can also fail to be $K_2$ even when the errant cohomology class has an internal degree that could be generated by classes of lower cohomological degree. Theorem 4.4 gives a simple way to detect this, as exhibited by the following example.

Example 4.5. Let A be the algebra $K\langle x, y\rangle / \langle x^2 - xy,\ yx,\ y^3\rangle$. Using the fact that A has Hilbert series $1 + 2t + 2t^2$, one can calculate the first few terms of a minimal projective resolution of $_AK$ to obtain:
$$A(-3,-4,-4,-4,-4) \xrightarrow{\,M_3\,} A(-2,-2,-3) \xrightarrow{\,M_2\,} A(-1,-1) \xrightarrow{\,M_1\,} A \to K \to 0,$$
where
$$\hat M_3 = \begin{pmatrix} y & 0 & 0\\ 0 & xy & 0\\ 0 & y^2 & 0\\ 0 & 0 & x\\ 0 & 0 & y \end{pmatrix}, \qquad
\hat M_2 = \begin{pmatrix} x & -x\\ y & 0\\ 0 & y^2 \end{pmatrix}, \qquad
\hat M_1 = \begin{pmatrix} x\\ y \end{pmatrix}.$$
Since the tensor $xy^2$ is in the ideal of definition of A but is not essential ($xy^2 = -x(x^2 - xy) + (x^2 - xy)(x - y) + x(yx)$), we get
$$[L_3 : E_3] = \begin{pmatrix}
y & 0 & 0 & yx & -yx\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & y^3 & 0\\
0 & 0 & x & 0 & 0\\
0 & 0 & y & 0 & y^3
\end{pmatrix}.$$
By 4.4, the algebra A is not $K_2$ and $E^3(A) \ne U_3 + V_3$. Indeed, a more careful analysis of the matrix above shows $\dim_K((U_3 + V_3)_{-4}) = 3$, whereas we know $\dim_K(E^{3,4}(A)) = 4$.
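The one computation hiding in Example 4.5 is the claim that $xy^2$ is not essential, via the identity $xy^2 = -x(x^2 - xy) + (x^2 - xy)(x - y) + x(yx)$, each summand of which lies in $V \otimes I$ or $I \otimes V$. The identity can be checked mechanically by expanding free (noncommutative) products; the sketch below is our verification aid, not part of the paper, representing elements of $K\langle x, y\rangle$ as dictionaries from words to coefficients.

```python
def nc_mul(p, r):
    """Multiply noncommutative polynomials stored as {word-tuple: coeff}."""
    out = {}
    for w1, c1 in p.items():
        for w2, c2 in r.items():
            w = w1 + w2  # concatenation of words = product of monomials
            out[w] = out.get(w, 0) + c1 * c2
    return {w: c for w, c in out.items() if c}

def nc_add(*ps):
    """Sum of noncommutative polynomials."""
    out = {}
    for p in ps:
        for w, c in p.items():
            out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

x, y = {('x',): 1}, {('y',): 1}
r1 = nc_add(nc_mul(x, x), {w: -c for w, c in nc_mul(x, y).items()})  # x^2 - xy
r2 = nc_mul(y, x)                                                    # yx
x_minus_y = nc_add(x, {w: -c for w, c in y.items()})                 # x - y

lhs = nc_mul(x, nc_mul(y, y))                                        # xy^2
rhs = nc_add({w: -c for w, c in nc_mul(x, r1).items()},              # -x(x^2 - xy)
             nc_mul(r1, x_minus_y),                                  # (x^2 - xy)(x - y)
             nc_mul(x, r2))                                          # x(yx)
assert lhs == rhs  # the decomposition of xy^2 checks out
```

Since every summand on the right is a relation multiplied by a linear term on one side, the identity exhibits $xy^2$ as an element of $I'$, confirming that it is not essential.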
In this case the trivial module K has a minimal projective resolution of the form:

0 → A(−5) --Y--> A(−4)³ --N--> A(−2, −2, −3, −3) --M--> A(−1)³ --X--> A → K → 0    (4.1)

where the matrices Y, N, M, X have homogeneous entries from K⟨x₁, x₂, x₃⟩. From the Gorenstein condition, we may assume:

Y = [ x₁  x₂  x₃ ],      X = [ x₁ ]
                             [ x₂ ]
                             [ x₃ ]

In the proof of Theorem 4.7 we may assume that the first row of the matrix [L : E] is zero. Using this, let (Σᵢ xᵢ ⊗ aᵢ, Σᵢ xᵢ ⊗ bᵢ, 0, 0) be the first row of N, where the terms aᵢ and bᵢ are linear. The two quadratic entries in YN must span I₂, so from the first row of N we see that no element of I₂ can contain a term beginning with x₁, i.e. if Σᵢ xᵢ ⊗ lᵢ ∈ I₂, then l₁ = 0. Consequently, if Σᵢ xᵢ ⊗ qᵢ ∈ (I′)₃, then x₁ ⊗ q₁ ∈ (I′)₃ and q₁ ∈ I₂ (one can make no such statement about q₂ or q₃).

Let the first two rows of M form the matrix (d_{ij}), i = 1, 2 and j = 1, 2, 3, where the elements d_{ij} are all linear. Then the j-th entry of the first row of the matrix NM is Σᵢ xᵢ ⊗ (aᵢ ⊗ d_{1j} + bᵢ ⊗ d_{2j}). The statement that the first row of E is zero is the statement that these three 3-tensors are all in (I′)₃. As stated above, this implies that a₁ ⊗ d_{1j} + b₁ ⊗ d_{2j} is in I₂ for 1 ≤ j ≤ 3, so that (a₁ b₁ 0 0) is in the kernel of M. Since (4.1) is exact, (a₁ b₁ 0 0) must be in the image of N, which is impossible because the first two columns of N are made up of two-tensors, whereas a₁ and b₁ are linear.

5. Monomial K2 Algebras

In this section we explore in detail the K2 property for monomial algebras and, using Theorem 4.4, we present an algorithm for determining whether a given monomial algebra is K2. Let {x₁, ..., xₙ} be a fixed basis of the vector space V. We use this basis to identify T = T(V) with K⟨x₁, ..., xₙ⟩. (If ā ≠ 0 and b̄ ≠ 0 but ab = 0, let L(a, b) = a′; then there exists a minimal b′ so that b = b′b″ and a′b′ = 0, and the minimality of both a′ and b′ assures us that a′b′ ∈ R, and hence that a′ ∈ LE(R).) Finally, for b ∈ T_mon such that b̄ ≠ 0, define L(b) = {a ∈ LE(R) | L(a, b) = a}. The following Lemma is clear from the definitions of L(a, b), L(b) and LE(R).
From this Lemma it is easy to give a combinatorial description of the graded vector space structure of E(A). We do this by describing a minimal (monomial) projective resolution of the trivial module K of the form:

··· → A^{t_m} --M_m--> A^{t_{m−1}} → ··· → A^{t_1} --M_1--> A → K → 0.

To do this, it suffices to give an inductive definition of the matrices M̂_m, the homogeneous lifts of the matrices M_m to matrices over T. Let M̂₁ = (x₁, ..., xₙ)ᵗ, a column.

First we note that a monic monomial r in I is essential in I if and only if r ∈ R, so we may assume that E_m is a matrix with entries in the span of R (rather than in I/I′). Moreover, since every row of M̂_m M̂_{m−1} has a single nonzero entry, the nonzero rows of E_m must be identical to the nonzero rows of M̂_m M̂_{m−1}. It is now clear that if a set of rows of E_m is linearly dependent and none of the rows are zero, then two of the rows must be identical. We may assume that these are the first two rows and that the unique nonzero entry r ∈ R is in the first column. Now the multiplication M̂_n M̂_{n−1} must have the following form:

[ a  0  0  ··· ]   [ c  0  0  ··· ]   [ r  0  ··· ]
[ 0  b  0  ··· ] · [ d  0  0  ··· ] = [ r  0  ··· ]
[       ⋱      ]   [       ⋱      ]   [      ⋱    ]

Suppose now that some set of rows of [L_m : E_m] is minimally linearly dependent. By the above, the E_m portion of every one of those rows must be zero. We are left with a minimally linearly dependent set of rows of L_m. But as above, the nonzero rows of L_m are identical to the corresponding rows of M̂_m, which has linearly independent rows. Therefore at least one of the rows in our minimal dependence set is zero. This proves the required claim.

The monomial algebra A will fail to be K2 if and only if for some minimal i there exists some b ∈ S_i and a ∈ L(b) such that |a| > 1 and ab ∉ R. In this case we will say that the K2 property fails at level i. In Example 5.4, K2 does not fail at level 2, but K2 fails at level 3 because in S₃ we have y³, which is minimally annihilated by z², however z²y³ is not in R. Example 5.5.
Let A be generated by {x, y, z}. Let R = {x⁴, yx³, x³z}. Then S₁ = {x, y, z}, S₂ = {x³, yx²} and S₃ = ∅. We conclude that A is K2.

For any monomial algebra, if b ∈ S₁ and a ∈ L(b), then ab ∈ R, so the earliest K2 can fail is at level two. In the case of N-homogeneous algebras this is also the latest K2 can fail.

Corollary 5.6. Let A be an N-homogeneous algebra. S₃ will be empty if A is K2, and otherwise K2 will fail at level 2.

Proof. Assume that every monomial in R is of degree N. Since every quadratic monomial algebra is Koszul, and hence K2, we will assume N > 2. Let S₁ = {x₁, ..., xₙ} and let b be in S₂. Since A is N-homogeneous, |b| = N − 1. For any a ∈ L(b), if |a| > 1 then |ab| > N, which means ab ∉ R and K2 fails at level 2. If |a| = 1 for all a ∈ L(b) and all b ∈ S₂, then |ab| = N and ab = 0 in A, which means ab ∈ R. In this case K2 has not failed at level 2 and moreover, since |a| = 1 we have a ∈ S₁, so that S₃ = ∅. If S₃ = ∅ then S_m = ∅ for all m > 2 and K2 does not fail at any level.

This Corollary should be compared to the overlap condition for N-Koszul algebras given in [3], Proposition 3.8. The two conditions are logically equivalent, but the condition here requires fewer formal calculations.

6. Spectral sequence lemmas

This section contains some spectral sequence facts which will be used in the subsequent sections. The commutative diagram of Lemma 6.1 is:

E_r × Ext_B(′M, M) --d_r × 1--> E_r × Ext_B(′M, M)
        ↓ ⋆                             ↓ ⋆
       ′E_r        ----′d_r---->       ′E_r

Proof. We prove the lemma only for r = 2, the general case being similar. We recall the construction of the spectral sequences: E_r is the first spectral sequence of a double complex F^{p,q} = Hom_B(Q_p, J^q) with horizontal differential d_h and vertical differential d_v (chosen so that d_h d_v + d_v d_h = 0). Similarly we have ′F^{p,q} = Hom_B(′Q_p, J^q) with differentials ′d_h and ′d_v. We can use the usual trick (see for example [17]) of identifying E₂^{p,q} with the set of classes of elements (f, g) ∈ F^{p,q} × F^{p+1,q−1} for which d_v f = 0 = d_h f + d_v g, modulo elements of the form (0, g), (d_v f, d_h f) (for f ∈ F^{p,q−1}) or (d_h g, 0) (for g ∈ F^{p−1,q}).
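The level sets S_i and the K2 test of Theorem 5.3 are directly computable. The sketch below is our own Python transcription of the procedure (all names are ours): monomials are encoded as strings, and a monomial is zero in A exactly when some relation of R occurs in it as a subword. It reproduces the level sets reported in Examples 5.4 and 5.5.

```python
def is_zero(word, R):
    """A monic monomial is 0 in A = T/(R) iff some relation occurs in it as a subword."""
    return any(r in word for r in R)

def left_ends(R):
    """LE(R): proper left ends, i.e. a with ab in R for some b of positive length."""
    return {r[:i] for r in R for i in range(1, len(r))}

def L(a, b, R):
    """L(a, b): the minimal suffix a' of a with a'b = 0 in A ('' if ab is nonzero)."""
    for k in range(1, len(a) + 1):
        s = a[-k:]
        if is_zero(s + b, R):
            return s
    return ''

def L_set(b, R):
    """L(b) = { a in LE(R) : L(a, b) = a }: minimal left annihilators of b in LE(R)."""
    return {a for a in left_ends(R) if L(a, b, R) == a}

def k2_levels(generators, R):
    """Level sets S_1, S_2, ... of Theorem 5.3, plus the verdict of its K2 test."""
    levels, seen = [set(generators)], set(generators)
    while levels[-1]:
        new = set().union(*(L_set(b, R) for b in levels[-1])) - seen
        seen |= new
        levels.append(new)
    levels.pop()  # discard the terminating empty level
    is_k2 = all(len(a) == 1 or a + b in R
                for S_i in levels for b in S_i for a in L_set(b, R))
    return levels, is_k2

# Example 5.4: relations z^2y^2, y^3x^2, x^2w, zy^3x on generators w, x, y, z
S, ok = k2_levels('wxyz', {'zzyy', 'yyyxx', 'xxw', 'zyyyx'})
# Example 5.5: relations x^4, yx^3, x^3z on generators x, y, z
S2, ok2 = k2_levels('xyz', {'xxxx', 'yxxx', 'xxxz'})
```

On Example 5.4 this produces S₁ = {w, x, y, z}, S₂ = {z²y, y³x, x², zy³}, S₃ = {y³}, S₄ = {z²} and reports that the algebra is not K2; on Example 5.5 it produces S₂ = {x³, yx²} with S₃ = ∅ and reports K2, matching the text.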
This allows us to identify ′ Q k+p+2 → ′ Q k+p+1 → ′ Q k+p → · · · → ′ Q k+1 → ′ Q k ζ p+2 ↓ ζ p+1 ↓ ζ p ↓ · · · ζ 1 ↓ ζ 0 ↓ ζ ց Q p+2 → Q p+1 → Q p → · · · → Q 1 → Q 0 → M g ↓ f ↓ J q−1 → J q From this diagram it is clear that [(f, g)]⋆[ζ] = [(f ζ p , gζ p+1 )] . So we may cal- culate ′ d 2 ([(f, g)]⋆[ζ]) = [( ′ d h (gζ p+1 ), 0)] = [(d h (g)ζ p+2 , 0)] = [(d h g, 0)]⋆[ζ] = d 2 [(f, g)]⋆[ζ] , as required. The following lemma should also be well-known. N )), which converge to Ext A ( φ M , N ) and Ext A ( φ M , ′ N ) respectively. Then, the left cup product action of Ext A (N, ′ N ) from Ext A (B, N ) to Ext A (B, ′ N ), commutes with the natural action of B. Hence the Ext A (N, ′ N )-action lifts to an action (still denoted ⋆) from E r to ′ E r . Moreover, the differentials d r and ′ d r of these spectral sequences intertwine with this action. That is, we have the commutative diagram: Hom B (Q p , I J q ) with horizontal differential d h and vertical differential d v . Similarly let ′ F p,q = Hom B (Q p , Hom A (B, ′ I q )) = Hom B (Q p , ′ I J q ) with differentials ′ d h and ′ d v . As in the previous proof, we identify an element of E p,q 2 with a certain class of elements (f, g) ∈ F p,q × F p+1,q−1 . Let Ext A (N, ′ N ) × E r 1×dr −→ Ext A (N, ′ N ) × E r ↓ ⋆ ↓ ⋆ ′ E r ′ dr −→ ′ E r Proof . [ζ] ∈ Ext k A (N, ′ N ) be represented by ζ ∈ Hom A (N, ′ I k ). The following diagram defines, via injectivity, maps ζ j , j ≥ 0 so that the bottom row of boxes is commutative and the top box is anticommutative. Note that we abuse notation and consider f and g as maps into I q and I q−1 respectively. For the remainder of this section let A be a connected graded K-algebra and g ∈ A a homogeneous element of degree d that is normal and regular in A. Set B = A/gA and φ : A→ →B the associated epimorphism. 
Q p+1 → Q p g ↓ f ↓ 0 → N −→ I 0 → · · · → I q−1 → I q → I q+1 ζ ց ζ 0 ↓ ζ q−1 ↓ ζ q ↓ ↓ ′ I k → · · · → ′ I k+q−1 → ′ I k+q → ′ I k+q+1 For any graded A-module N , let N g be the B-module {n ∈ N |gn = 0}. Then the Cartan-Eilenberg spectral sequence for N/gN ) and E p,q 2 = 0 for q > 1. The edge homomorphism E p,0 2 → Ext p A ( φ M , N ) of this spectral sequence is the map induced by φ * and we denote it by φ * as well. Since the spectral sequence has only two nonzero rows, there is an associated long exact sequence: 2 and ′ E p,q 2 be as in Lemma 6.1. There is a well defined cup product: Ext A ( φ M , N ), E p,q 2 = Ext p B (M, Ext q A (B, N )), satisfies E p,0 2 = Ext p B (M, N g ), E p,1 2 = Ext p B (M,· · · → Ext n A ( φ M , N ) γ → Ext n−1 B (M, N g ) d 2 → Ext n+1 B (M, N/gN ) φ * → Ext n+1 A ( φ M , N ) γ → Ext n B (M, N g ) → · · · WeExt n A ( φ M, N ) × Ext m B ( ′ M, M ) ⋆ → Ext n+m A ( φ′ M , N ). Moreover, the maps γ : Ext p+1 A ( φ M , N ) → E p,1 2 and ′ γ : Ext p+1 A ( φ′ M , N ) → ′ E p,1 2 intertwine with the right action of Ext B ( ′ M , M ). That is we have a commutative diagram Ext p A ( φ M , N ) × Ext k B ( ′ M, M ) γ×1 −→ E p,1 2 × Ext k B ( ′ M, M ) ↓ ⋆ ↓ ⋆ Ext p+k A ( φ′ M , N ) ′ γ −→ ′ E p+k,1 2 Consider now the case N = M = K, the trivial module for either A or B (from now on we will suppress the notation φ K). The spectral sequence above then becomes E p,q 2 = 0 for q > 1, E p,1 2 = Ext p B (K, K(d)) = E p (B)(d) and E p,0 2 = Ext p B (K, K) = E p (B). The associated long exact sequence is therefore · · · → E p−2 (B)(d) d 2 → E p (B) φ * → E p (A) γ → E p−1 (B)(d) → · · · (6.1) The lemmas show that this can be interpreted as an exact triangle of right E(B) modules. There is a second Cartan-Eilenberg spectral sequence converging to E(A) given byĒ p,q 2 = Ext p (T or A q (B, K), K). 
This spectral sequence is also supported on p ≥ 0 and q = 0, 1 with nonzero termsĒ p,1 2 = Ext p B (K(−d), K) = E p (B)(d) andĒ p,0 2 = Ext p B (K, K) = E p (B) . The edge homomorphism of this spectral sequence is also φ * : E(B) → E(A) and we have a long exact sequence: · · · → E p−2 (B)(d)d 2 → E p (B) φ * → E p (A)γ → E p−1 (B)(d) → · · · (6.2) Exactly as above, we see that the differentiald 2 , as well as the induced map γ intertwine with the left cup product by elements of E(B). We therefore interpret (6.2) as an exact triangle of left E(B)-modules. Finally, we note that although the terms of the two exact triangles above are all the same, and the map φ * is common to them, the sequences are not, in general, the same. In particular, they are not generally triangles of E(B)-bimodules. K 2 modules and a change of rings theorem For Koszul algebras, Positselskii [14] establishes a powerful change of rings theorem, which states that if A is a quadratic algebra and B is a quadratic factor algebra of A which admits a linear free resolution as an A-module, then the Koszul property for B lifts to the Koszul property for A, (see also [12]). In this section we prove the K 2 analog of that theorem using the definition of a K 2 module given below. → Q i M i → Q i−1 → · · · → Q 0 → B → 0 N i ↓ N i−1 ↓ N 0 ↓ ·b ↓ → Q i M i → Q i−1 → · · · → Q 0 → B → 0 To prove the lemma, it suffices to prove the claim that there are no non-zero entries in N i of degree 0. We do this by induction on i. Since I has no elements of degree 1 the linear part of N 1 M 1 is the linear part of M 1 b, which is 0. N 0 1 L 1 is the linear part of N 1 M 1 , hence N 0 1 L 1 = 0. Since the rows of [L 1 : E 1 ] are linearly independent, N 0 1 = 0. Assume now that i > 1 and the claim is true for all k < i. Consider the ma- Write N i = N 0 i +N + i ,trix N 0 i [L i : E i ] = [N 0 i L i : N 0 i E i ] . First we note that N 0 i L i is the linear component of the matrix N i M i . 
Since I contains no elements of degree 1, this is also the linear component of M_i N_{i−1}. By the inductive hypothesis, every entry of this matrix has degree at least 2. Thus N⁰_i L_i = 0. Similarly, N⁰_i E_i = N⁰_i (M_i M_{i−1} mod I′) = (N⁰_i M_i M_{i−1} mod I′). By definition, (N⁺_i M_i M_{i−1} mod I′) = 0, since every entry of M_i M_{i−1} is already in I. Now N_i M_i = M_i N_{i−1} mod I, so N_i M_i M_{i−1} = M_i N_{i−1} M_{i−1} mod I′, and likewise M_i N_{i−1} M_{i−1} = M_i M_{i−1} N_{i−2} mod I′. Thus, again by induction, N⁰_i E_i = (N_i M_i M_{i−1} mod I′) = (M_i M_{i−1} N_{i−2} mod I′) = 0.

In the proof of Theorem 7.4: since d₂(E₂^{0,0}) = 0, we conclude that d₂ = 0. Similarly, for all r ≥ 2, d_r = 0 and E_r^{p,q} = E₂^{p,q}. Since our spectral sequence has collapsed, there is a filtration F• E(A) on E(A) for which F_q E^{p+q}(A)/F_{q−1} E^{p+q}(A) = E₂^{p,q}. E(A) is a D(A)-E(B)-bimodule.

Example 7.5. Let A and B be defined by

A = K⟨x, y, z⟩/⟨xz − zx, yz − zy, x³z, y⁴ + xz³⟩,      B = A/⟨z⟩ = K⟨x, y⟩/⟨y⁴⟩.

Note that A is not a regular central extension of B. Clearly B is a K2 algebra, and we claim B is also a K2 A-module. To see this it suffices to show that the following is a projective resolution of B as an A-module; we omit any details.

··· → A(−8) --x³--> A(−5) --z--> A(−4) --x³--> A(−1) --z--> A → B → 0.

We conclude that A is also a K2 algebra.

8. Normal, regular factor rings I, the degree 1 case

The converse of the previous theorem, as we will see, is not true. However, one extra hypothesis will give us a converse.

Proof. Assume A is K2. Consider the exact triangle of right E(B)-modules

Let B = A/gA. We make the following three non-obvious claims: (1) B is not K2. (2) The element g is central and regular in A. (3) A is K2.

Proof. Let T = K⟨x, y, z, w⟩. To prove the first claim, we apply the algorithm of Section 5 to get a minimal projective resolution of the trivial B-module.
This has the form

0 → B(−5) --M₃--> B(−3, −3, −4) --M₂--> B(−1, −1, −1, −1) --M₁--> B → K → 0

where

M₃ = [ 0  y²  0 ],    M₂ = [ 0   0   y²  0   ],    M₁ = [ x ]
                           [ zx  0   0   0   ]          [ y ]
                           [ 0   0   0   y²w ]          [ z ]
                                                        [ w ]

We calculate [L₃ : E₃] for this, as in Theorem 4.4, and see that its first row is zero. This proves (1).

To prove (2) we use the algorithm from [6] for checking the regularity of a central extending variable. We work over the algebra T[g]. Define:

f₂ = [ 0  ]      and      f₃ = [ 0  0  0  0 ]
     [ w² ]
     [ 0  ]

so that M₂M₁ + f₂g and M₃M₂ − f₃g are both identically 0 in A. Put

M̂_n = [ M_n              f_n     ]
      [ (−1)^{n−1} gI    M_{n−1} ]

for all n ≥ 1. The principal theorem of [6] states that g is regular in A if and only if the matrix M̂₃M̂₂ is zero in A. Furthermore, if g is regular in A, then the matrices M̂_n provide the maps in a minimal projective resolution of the trivial A-module. We have

M̂₃ = [ 0  y²  0  0   0  0   0   ]
     [ g  0   0  0   0  y²  0   ]
     [ 0  g   0  zx  0  0   0   ]
     [ 0  0   g  0   0  0   y²w ]

M̂₂ = [ 0   0   y²  0    0  ]
     [ zx  0   0   0    w² ]
     [ 0   0   0   y²w  0  ]
     [ −g  0   0   0    x  ]
     [ 0   −g  0   0    y  ]
     [ 0   0   −g  0    z  ]
     [ 0   0   0   −g   w  ]

It follows immediately that M̂₃M̂₂ is 0 in A and that g is regular. To prove the last claim, we consider the matrix condition 4.4. We will refrain from including the calculation. The salient point is that the term y²w², which is an essential relation of A, appears in the first row of [L₃ : E₃] and that −g appears in the first row of [L₄ : E₄].

Theorem 9.1. Suppose A is K2, g ∈ A_d is normal and regular with d > 1, and the image of the map γ : Ext²_A(K, K) → Ext¹_B(K, K(d)) of (6.1) is 0. Then B is K2, ĝ is a normal and regular element of E(B), and E(A) = E(B)/ĝE(B).

We use induction on n to prove the following two claims: D_n = E^n(B) and φ* : D_n → E^n(A) is surjective. We may assume the claim for n − 1 and n − 2. Our long exact sequence (6.1) then looks like:

0 → D_{n−2}(d) --d₂--> E^n(B) --φ*--> E^n(A) --γ--> D_{n−1} → ···

Since A is K2, we have by induction,

E^n(A) = E¹(A)E^{n−1}(A) + E²(A)E^{n−2}(A) = φ*(D₁)φ*(D_{n−1}) + φ*(D₂)φ*(D_{n−2}) = φ*(D_n).

This proves the second claim and also shows that γ(E^n(A)) = 0.
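The claim in Example 8.4 that M̂₃M̂₂ vanishes in A can be certified entry by entry: every entry of the product, computed over the free algebra, is an explicit member of I. The sketch below (our own encoding; the matrices are our transcription of the example, and the membership certificates are our own hand-computed rewritings, not taken from the paper) multiplies the matrices and compares each entry with its certificate.

```python
def mul(p, q):
    """Concatenation product of noncommutative polynomials (dict: word -> int coeff)."""
    out = {}
    for u, a in p.items():
        for v, b in q.items():
            out[u + v] = out.get(u + v, 0) + a * b
    return {w: c for w, c in out.items() if c}

def add(*ps):
    out = {}
    for p in ps:
        for w, c in p.items():
            out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

def neg(p):
    return {w: -c for w, c in p.items()}

def mat_mul(P, Q):
    return [[add(*(mul(P[i][k], Q[k][j]) for k in range(len(Q))))
             for j in range(len(Q[0]))] for i in range(len(P))]

def mono(s):
    return {s: 1}

Z = {}  # zero entry

# relations of Example 8.4: y^2 z, z x^2 + g w^2, y^2 w^2, and g central
r_yyz, r_zxx, r_yyww = mono('yyz'), add(mono('zxx'), mono('gww')), mono('yyww')
com = {v: add(mono(v + 'g'), neg(mono('g' + v))) for v in 'xyzw'}  # vg - gv

M3h = [[Z, mono('yy'), Z, Z, Z, Z, Z],
       [mono('g'), Z, Z, Z, Z, mono('yy'), Z],
       [Z, mono('g'), Z, mono('zx'), Z, Z, Z],
       [Z, Z, mono('g'), Z, Z, Z, mono('yyw')]]

M2h = [[Z, Z, mono('yy'), Z, Z],
       [mono('zx'), Z, Z, Z, mono('ww')],
       [Z, Z, Z, mono('yyw'), Z],
       [neg(mono('g')), Z, Z, Z, mono('x')],
       [Z, neg(mono('g')), Z, Z, mono('y')],
       [Z, Z, neg(mono('g')), Z, mono('z')],
       [Z, Z, Z, neg(mono('g')), mono('w')]]

P = mat_mul(M3h, M2h)

# each nonzero entry of the product, rewritten as an explicit member of the ideal I
cert = {(0, 0): mul(r_yyz, mono('x')),
        (0, 4): r_yyww,
        (1, 2): add(neg(mul(com['y'], mono('y'))), neg(mul(mono('y'), com['y']))),
        (1, 4): r_yyz,
        (2, 0): add(neg(mul(com['z'], mono('x'))), neg(mul(mono('z'), com['x']))),
        (2, 4): r_zxx,
        (3, 3): add(neg(mul(com['y'], mono('yw'))),
                    neg(mul(mono('y'), mul(com['y'], mono('w')))),
                    neg(mul(mono('yy'), com['w']))),
        (3, 4): r_yyww}
```

Since every entry of M̂₃M̂₂ coincides with a combination of relations (left and right multiples of generators of I), the product is 0 in A, which is exactly the regularity criterion of [6].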
Since d₂ is multiplication by ĝ ∈ E²(B) = D₂, we get E^n(B) = D_n + ĝ·D_{n−2} = D_n, completing the induction. We now see that φ* is surjective, γ = 0 and d₂ is injective. In particular, ĝ is left regular in E(B). Since φ* is surjective we also see, from (6.2), that d̄₂ is injective and γ̄ = 0. There exists ĝ′ ∈ Ext^{2,d}(B) for which d̄₂(a) = a⋆ĝ′. Then ĝ⋆E(B) = E(B)⋆ĝ′ = ker(φ*). It follows that ĝ = µĝ′ for some scalar µ ≠ 0 and hence ĝ is normal and regular in E(B), as required.

There are several ways to assure the hypothesis that the image of γ : Ext²_A(K, K) → Ext¹_B(K, K(d)) is 0. In particular, the hypothesis can only fail if there are relations of A of degree d + 1 (since Ext¹_B(K, K(d)) is supported in degree d + 1). More specifically, let σ be the automorphism of A defined by ag = gσ(a). Then σ defines an automorphism of T(V) as well. Choose g′ ∈ T(V)_d, a preimage of g ∈ A. It is not difficult to see that the map fails to be zero if and only if there exists an essential relation of the form x ⊗ g′ − g′ ⊗ σ(x) for some x ∈ T(V)₁.

Corollary 9.2. If A is a commutative or graded-commutative K2-algebra and g ∈ A_d is regular, then A/gA is K2. In particular, any graded complete intersection is K2.

The hypothesis γ(Ext²_A(K, K)) = 0 is certainly not necessary in Theorem 9.1. For example, in the K2-algebra A = K⟨x, y⟩/⟨xy² − y²x⟩, y² is normal and regular and A/y²A = K⟨x, y⟩/⟨y²⟩ is also K2, but γ(Ext²_A(K, K)) ≠ 0. However, the theorem is false without the γ hypothesis, as shown by the following important example.

Example 9.3. Let A = K⟨x, y⟩/⟨x²y − yx², xy³ − y³x⟩ and let B = A/y³A = K⟨x, y⟩/⟨x²y − yx², y³⟩. It is evident that y³ is central in A, and by a straightforward application of Bergman's diamond lemma [5] we see that y³ is regular in A. Moreover A is K2.
This is proved by applying Theorem 4.4 to the following minimal projective resolution of the trivial module K:

0 → A(−5) --N₃--> A(−3, −4) --N₂--> A(−1, −1) --N₁--> A → K → 0

with

N₃ = [ y²  −x ],    N₂ = [ yx  −x²  ],    N₁ = [ x ]
                         [ y³  −xy² ]          [ y ]

However, B is not K2. The minimal projective resolution of B is

··· → B(−5, −4) --P₃--> B(−3, −3) --P₂--> B(−1, −1) --P₁--> B → K → 0

with

P₃ = [ y²  x² ],    P₂ = [ yx  −x² ],    P₁ = [ x ]
     [ 0   y  ]          [ 0   y²  ]          [ y ]

which fails the criteria of 4.4. Alternatively, simply note that E¹(B) and E²(B) are supported in internal degrees −1 and −3 respectively, and thus they cannot possibly generate the nonzero cohomology class in E^{3,5}(B).

10. Graded Ore Extensions

Fix a graded algebra B, a graded automorphism σ of B and a degree +1 graded σ-derivation δ of B (i.e. δ(ab) = δ(a)σ(b) + aδ(b)). We let A be the associated Ore extension B[z; σ, δ]. The extending variable z is assumed to have degree 1 in A. In this section we investigate when the K2 property can pass between A and B.

Let J be the two-sided ideal J = AB₊ = B₊A ⊂ A and put C = A/J, with factor morphism φ : A → C. We note that C is just the polynomial ring C = K[z̄], z̄ = φ(z). Let α : B → A be the inclusion homomorphism. Let ζ : A → A be the left A-module homomorphism given by right multiplication by z. Let ζ also denote the induced map ζ : C → C (still given by multiplication by z). We have a short exact sequence of left A-modules:

0 → C(−1) --ζ--> C → K → 0

This induces a long exact sequence

··· → Ext_A^{q−1}(C, K)(1) → E^q(A) → Ext_A^q(C, K) --ζ*--> Ext_A^q(C, K)(1) → ···

Since A is free as a right (or left) B-module and C = A ⊗_B K, Ext_A(C, K) = Ext_B(K, K) = E(B); in particular we may consider ζ* as a map on E(B). As usual, let σ* be the automorphism of E(B) induced from the automorphism σ. The following Lemma does not seem to be widely known.

Lemma 10.1. The map ζ* : E(B) → E(B) is a σ*-derivation (with respect to the cup product ⋆) of homological degree 0 and internal degree +1. Moreover, ζ* vanishes on E¹(B) and E²(B).

Proof. Let Bar(B) = ⊕_n B ⊗ B₊^{⊗n} be the usual bar resolution of the trivial module K, with differential

∂(b₀ ⊗ ··· ⊗ b_n) = Σ_{i=1}^{n} (−1)^i b₀ ⊗ ··· ⊗ b_{i−1}b_i ⊗ ··· ⊗ b_n.
Since A_B is free and A ⊗_B K = C, we may tensor Bar(B) by A to get an A-projective resolution A ⊗_B Bar(B) = ⊕_n A ⊗ B₊^{⊗n} of C with boundary map ∂_A. We define maps δ̂_n and ζ_n as follows:

δ̂_n = Σ_{i+j=n−1} 1^{⊗i} ⊗ δ ⊗ σ^{⊗j},      ζ_n = ζ ⊗ σ^{⊗n} + 1 ⊗ δ̂_n : A ⊗ B₊^{⊗n} → A ⊗ B₊^{⊗n},      ζ₀ = ζ.

It is straightforward to check ζ_n ∘ ∂_A = ∂_A ∘ ζ_{n+1}. Therefore ζ* on Ext_A^m(C, K) is given by [f] ↦ [ζ_m*f], for f ∈ Hom_A(A ⊗ B₊^{⊗m}, K) a cocycle. However, (ζ ⊗ σ^{⊗m})*f = 0, simply because z has positive degree, and thus ζ_m*f = (1 ⊗ δ̂_m)*f. In particular, under the identification Hom_A(A ⊗ B₊^{⊗m}, K) = Hom_K(B₊^{⊗m}, K), ζ_m* becomes δ̂_m*. The definition of δ̂_{n+m} makes it clear that for f ∈ Hom(B₊^{⊗n}, K) and g ∈ Hom(B₊^{⊗m}, K),

δ̂_{n+m}*(f ⊗ g) = f ⊗ δ̂_m*(g) + δ̂_n*(f) ⊗ (σ^{⊗m})*(g).

This proves that ζ* is a σ*-derivation, as claimed.

Since ζ* is a σ*-derivation of internal degree +1, it is now clear that ζ* vanishes on E¹(B) = E^{1,1}(B) and on E^{2,2}(B) (since E^{2,1}(B) = 0). Suppose now that [f] ∈ E^{2,q}(B) for some q > 2 and consider ζ*[f] ∈ E^{2,q−1}(B). Define g ∈ Hom(B₊, K) as follows: g(B₁) = 0 and, for any b_i, c_i ∈ B₊,

g(Σ_i b_i c_i) = f(Σ_i (δ(b_i) ⊗ σ(c_i) + b_i ⊗ δ(c_i))).

From the formula ∂(Σ_i (δ(b_i) ⊗ σ(c_i) + b_i ⊗ δ(c_i))) = δ(Σ_i b_i c_i) and the fact that f is a cocycle, it follows that g is well-defined. But then ∂*g = δ̂₂*f, showing that ζ*[f] = 0, as required.

Theorem 10.2. If B is a K2 algebra, then the graded Ore extension A = B[z; σ, δ] is also K2.

Proof. By the lemma, since B is K2, the map ζ* : E(B) → E(B) is zero. The long exact sequence above becomes the short exact sequences

0 → E^{q−1}(B)(1) → E^q(A) → E^q(B) → 0

It is easy to see that the second map in this sequence is just the ring homomorphism α*. Let ž ∈ E^{1,1}(A) span the kernel of α* : E¹(A) → E¹(B). Then the fact that α* is surjective and has a kernel isomorphic to E(B)(1) implies that ž generates the kernel of α*, i.e. our short exact sequence is encoding:

0 → ž⋆E(A) → E(A) → E(B) → 0.
It follows at once that A is K 2 . Remark 10.3. If we hypothesize that the map ζ * of Lemma 10.1 is known to be zero, then α * : E(A) → E(B) is surjective. It follows then that B inherits the K 2 property from A. However, the following example shows that the map ζ * need not be zero. In this example, neither the algebra B of Koszul algebras. The definition of N -Koszul generalizes the purity definition of Koszul given above and is given as the statement:E n,m (A) = 0 unless m = δ(n), where δ(n) = N (n − 1)/2 + 1 if n is odd and δ(n) = N n/2 if n is even. In particular, an N -Koszul algebra must have all of it relations in degree N and each E n (A) is pure in the sense that Theorem 3. 2 . 2If A and B are graded algebras, and σ is an automorphism of A, then B ⊗ σ A is K 2 if and only if A and B are both K 2 . Lemma 4. 1 . 1Let α 1 , . . . , α s be homogeneous elements of T (V ) and suppose r = s α s r s . Then r is essential in I if and only if some α s is a unit. Corollary 4. 6 . 6The algebra A is N -Koszul if and only if it is N -homogeneous and K 2 . N is a 3 by 4 matrix where the entries of the first two columns are 2-tensors and those in the last two columns are 1-tensors and M is a 4 by 3 matrix with 1-tensors as entries in the first two rows and 2-tensors in the last two rows. In particular, the entries in M X and the entries in Y N must each give a set of relations for A. The K-span of the two quadratic entries in M X must be the same as the K-span of the two quadratic entries in Y N . Theorem 4. 7 . 7Let A be an AS-regular algebra of global dimension 4 with 3 linear generators. Then A is K 2 . Proof . We see from (4.1) that the K 2 matrix condition given in Theorem 4.4 can fail only at N . Let us assume that the condition fails, and find a contradiction. Let L be the linear part of the matrix N and let E be N M modulo I ′ . Failure of the matrix condition is the statement that the composite matrix [L : E] has linearly dependent rows. 
By choosing new coordinates, 0) be the first row of N , where the terms a i and b i are linear. x n . Throughout this section, A = T /I, where I is an ideal generated by a minimal finite set of monic monomials (in the variables x i ) R = {r 1 , .., r s }. For any element a in T , we denote its image in A byā. Let T mon be the set of monic monomials in T . We write |a| for the degree of any a ∈ T mon .The combinatorics of monomial algebras is straightforward and built primarily on the finite set LE(R) consisting of all a ∈ T mon for which there exists a b ∈ T mon such that |b| > 0 and ab ∈ R. This is the set of proper "left ends" of R.For any pair of monomials a and b in T mon such that ab = 0 we define the monic monomial L(a, b) to be a ′ where a = a ′′ a ′ and a ′ is minimal such that a ′ b = 0. If ab = 0 we set L(a, b) = 0. Suppose thatā = 0 andb = 0, but ab = 0. Let L(a, b) = a ′ . Then there exists a minimal b ′ so that b = b ′ b ′′ Lemma 5. 1 . 1If b is a monomial in T andb = 0 in A then the left annihilator ofb in A is generated by {ā| a ∈ L(b)}. Assume that m > 1 and thatM m−1 has been defined and has the property that each row ofM m−1 has a unique nonzero entry and that entry is in LE(R) ∪ {x 1 , . . . , x n }. LetM m−1 have t m−1 rows. For 1 ≤ i ≤ t m−1 , if b i is the non-zero entry in row i ofM m−1 and L(b i ) = {a 1 , . . . , a j }, letM m (i) be the j by t m−1 matrix with a 1 , . . . , a j arranged as column i and zeroes elsewhere. If L(b i ) is empty, this is the empty matrix. Finally, takeM m to be the matrix obtained by stacking the matricesM m (i), 1 ≤ i ≤ t m−1 . It is clear from construction thatM m has the property that each row has a unique nonzero entry from LE(R) ∪ {x 1 , . . . , x n } (and in fact, for m ≥ 2, all the nonzero entries are from LE(R).) We note that it is clear that M m M m−1 = 0 for all m > 1 and hence the matrices M m define a complex of free-graded A-modules as above. Lemma 5.1 makes the following clear.Lemma 5.2. 
The complex described above is a minimal resolution of A K. Now let S be the smallest set in T such that (1) each x i is in S, and (2) if b ∈ S and L(a, b) = a for some a ∈ LE(R), then a ∈ S. There is a simple inductive algorithm for calculating S. Put S 1 = {x 1 . . . , x n } and for i > 1 define S i = b∈S i−1 L(b)\ j<i S j . Then S is the disjoint union of the S i . It should be clear that S k is just those nonzero matrix entries of M k which have not appeared inM i for any i < k. Moreover, since LE(R) is finite, S k = ∅ for all k sufficiently large. Theorem 5.3. Let A be a monomial algebra over the field k. Then A is K 2 if and only if the following condition holds for every b ∈ S: if there exists an a ∈ LE(R) such that L(a, b) = a then ab ∈ R or deg(a) = 1. Proof . From 4.4, we need only see that the condition given in the theorem is equivalent to the condition that for all m the rows of the matrix [L m : E m ] are linearly independent. It is clear that the condition is equivalent to the statement that every row of [L m : E m ] is non-zero. It therefore suffices to prove that [L m : E m ] has linearly dependent rows if and only if one of its rows is zero. We begin with some simple observations about this matrix. a, b, c and d are monomials with ac = bd = r. In particular there is a monomial e such that c = ed or d = ec. By symmetry we may assume d = ec. But then the second row ofM m−1 is a left multiple of the first row, which contradicts the minimality of the projective resolution. This contradiction proves that any linearly dependent set of rows of E m must involve a row of zeroes. Example 5. 4 . 4Let A be the monomial algebra generated by {w, x, y, z} with relations R = {z 2 y 2 , y 3 x 2 , x 2 w, zy 3 x}. Then S 1 = {w, x, y, z}, S 2 = {z 2 y, y 3 x, x 2 , zy 3 }, S 3 = {y 3 }, S 4 = {z 2 } and S 5 = ∅. In this case K 2 does 2 = 2the subsequent sections. Let φ : A → B be a (graded) ring epimorphism with kernel J. 
For any B module M we write φ M for the corresponding A module induced via φ. For any graded A module N we write N J for the graded B-module N J = {n ∈ N | Jn = 0} = Hom A (B, N ). The following lemma about the change of rings spectral sequence of Cartan and Eilenberg is certainly well known to experts.Lemma 6.1. Let φ : A→ →B be an epimorphism of K algebras. For any left A-module N and any left B-modules M and ′ M we have first quadrant spectral sequences E r and ′ E r , with E 2 -terms E p,q 2 = Ext p B (M, Ext q A (B, N ))and ′ E p,q Ext p B ( ′ M , Ext q A (B, N )) which converge to Ext A ( φ M , N ) and Ext A ( φ′ M , N ) respectively. The differentials d r and ′ d r of these spectral sequences intertwine with the natural cup product by Ext B ( ′ M, M ) from the right, that is we have the commutative diagram: recall the construction of the spectral sequences. Let Q → M → 0 and ′ Q → ′ M → 0 be projective resolutions (over B) of M and ′ M respectively. Let 0 → N → I be an A-injective resolution of N . We write J q = Hom A (B, I q ). The spectral sequence E r is the first spectral sequence related to the double complex F p,q = Hom B (Q p , Hom A (B, I q )) = Hom B (Q p , J q ) d 2 : 2E 2 → E 2 as d 2 [(f, g)] = [(d h g, 0)]. Let [ζ] ∈ Ext k B ( ′ M, M ) be represented by ζ ∈ Hom B ( ′ Q k , M ).The following diagram then defines, via projectivity, maps ζ j , j ≥ 0, so that the top row of boxes are commutative and the box in the bottom row is anticommutative. The first claim is clear (since the action of B is by precomposition and the action of Ext A (N, ′ N ) is by post composition). Let Q → M → 0 be a projective resolution (over B) of M . Let 0 → N → I and 0 → ′ N → ′ I be A-injective resolutions of N and ′ N . Let F p,q = Hom B (Q p , Hom A (B, I q )) = From this diagram we see that [ζ]⋆[(f, g)] = [(ζ q f, ζ q−1 g)]. (We are once again abusing notation, using the fact that image of ζ q f must be in ′ I J r+q and similarly for ζ q−1 g.) 
Thus we have ′ d 2 ([ζ]⋆[(f, g)]) = [( ′ d h (ζ q−1 g), 0)] = [(ζ q−1 d h (g), 0)] = [ζ]⋆[(d h (g), 0)] = [ζ]⋆d 2 [(f, g)]. This proves the lemma for r = 2, and the general proof is essentially the same. are primarily interested in applying this long exact sequence to the case when M and N are both the trivial module, K. But we need to know that the map γ, like d 2 , intertwines with the right cup product from Ext B ( ′ M, M ). This theorem should also be well-known and we omit the proof. Theorem 6.3. Let A, g, B and φ be as above. For any left A-module N and any left B-modules M and ′ M let E p,q Let A = T (V )/I be a graded algebra and W a graded left A-module. Let D(A) be the subalgebra of E(A) generated by E 1 (A) and E 2 (A). Let Q * → W → 0 be a projective resolution of W as a left A module, where the maps Q n → Q n−1 are given by matrices M n with entries in T (V ). LetL n be the linear part of M n , i.e. M n modulo T (V ) ≥2 . For n ≥ 2, let E n = M n M n−1 modulo I ′ and let E 1 = 0.Definition 7.1. We say W is a K 2 A-module if for all 1 ≤ n ≤ pd A (W ),the rows of the matrix [L n : E n ] are linearly independent.Theorem 4.4 can be interpreted as the statement that the algebra A is K 2 if and only if the trivial module is a K 2 -module. The following lemma is proved in exactly the same way as that theorem.Lemma 7.2. The module W is K 2 if and only if Ext A (W, K) is generated as a left D(A) module by Ext 0 A (W, K). Before we can state and prove the analog of Positselskii's theorem for K 2 algebras, we need a technical lemma. Let A = T (V )/I be a graded algebra and let B be a graded factor algebra of A. Let φ : A → B be the associated epimorphism. Lemma 7.3. Suppose that B, as a left A-module, is a K 2 -module. Then the natural action of B on Ext A (B, K) is trivial. Proof . Assume that the rows of the matrix [L n : E n ] are linearly independent and fix b ∈ B k for some k > 0. 
The left action of b on Ext A (B, K) is induced from the right action of b on B. We choose matrices N i with homogeneous entries in T (V ) to produce a commutative diagram of A-modules: where the entries of N 0 i are of degree 0 and the entries of N + i are of positive degree. We must prove N 0 i = 0 for all i > 0. For i = 0, N 0 is a diagonal matrix with b on the diagonal, hence N 0 0 = 0. The commutativity of the above diagram implies that N 1 M 1 = M 1 b mod I. 2 = 2 = 22−2 mod I ′ ) = 0. Hence N 0 i annihilates the matrix [L i : E i ]. Since the rows of [L i : E i ] are linearly independent it follows that N 0 i = 0. This completes the induction and proves the lemma.Theorem 7.4. Let A be a graded algebra and B a graded factor algebra of A. Assume that B is K 2 as an A-module and also K 2 as an algebra. Then A is K 2 . Proof . Let φ : A → B be the algebra epimorphism. As usual, we have a change of rings spectral sequence E p,q r converging to E(A) with E 2 term E p,q Ext p B (K, Ext q A (B, K)). By Lemmas 6.1 and 6.2 this is a spectral sequence of right E(B) modules and left E(A) modules. By the previous lemma, the B-action on Ext A (B, K) is trivial, and thus we have E p,q Ext q A (B, k) ⊗ K E(B) as an E(A)-E(B) bimodule. Since A B is a K 2 -module, this bimodule is generated by E 0,0 2 = Ext 0 B (B, K) ⊗ E 0 (B) as a D(A)-E(B) bimodule. Since d 2 is a bimodule homomorphism and d 2 (E 0,0 bimodule, where E(B) acts on the right through φ * . The convergence of the spectral sequence is compatible with the two bimodule actions. Hence we conclude that E(A) is generated by E 0 (A) as an D(A)-E(B) bimodule. By hypothesis φ * (E(B)) = φ * (D(B)) ⊂ D(A), and thus E(A) is generated by E 0 (A) over D(A), i.e. E(A) = D(A), as required. Let A = T (V )/I be a graded K-algebra and g a homogeneous element of degree 1 that is both normal and regular in A. We set B = A/gA and let φ : A → B the natural graded algebra epimorphism. 
The purpose of this section is to investigate the extent to which A and B inherit the K 2 property from one another. We remind the reader that the definition of K 2 was specifically motivated by the desire that the class of algebras be closed when passing from B to A. Fortunately, this holds as a consequence of Theorem 7.4.

Theorem 8.1. If the algebra B is K 2 then the algebra A is K 2 .

(6.1), with d = 1. Since E 0 (B)(1) is supported in degree −1 and E 2,1 (B) = 0, the map d 2 must be zero, the ring homomorphism φ * : E(B) → E(A) is injective and γ is surjective. Choose any ĝ ∈ E 1,1 (A) for which γ(ĝ) = 1 ∈ E 0 (B). Since the image of γ is a projective (bigraded-free) right E(B)-module, we can split what is now a short exact sequence to get E(A) = E(B) ⊕ ĝ⋆E(B) (we suppress the φ * -notation). Now consider the long exact sequence (6.2). Since E 1 (A) = E 1 (B) ⊕ Kĝ and E 1 (B) is in the kernel of γ ′ , we see γ ′ (ĝ) = 0. Hence, arguing as above, we get E(A) = E(B) ⊕ E(B)⋆ĝ. Let D = ⊕ j,k D j,k be the bigraded subalgebra of E(B) generated by E 1 (B) and E 2 (B). We have E 1 (A) = D 1 + K⋆ĝ and E 2 (A) = D 2 ⊕ D 1 ⋆ĝ = D 2 ⊕ ĝ⋆D 1 . Thus D and ĝ generate E(A). Since E 3 (B) = D 3 , we have D 2 ⋆ĝ ⊂ D 3 ⊕ ĝ⋆D 2 . Therefore D⋆ĝ ⊂ D ⊕ ĝ⋆D. Since D and ĝ generate E(A), we conclude that E(A) = D ⊕ ĝ⋆D. This proves that D = E(B), as required.

Remark 8.3. The hypothesis that E 3 (B) should be generated by E 1 (B) and E 2 (B) is easily translated into the hypothesis that the matrix [L 3 : E 3 ], from Theorem 4.4, has linearly independent rows. The following example shows that the E 3 (B) hypothesis of the theorem cannot be avoided. The example also highlights an interesting connection between Theorem 8.2 and the main theorem of [6], where it is shown that the regularity of a central extending variable is controlled by information encoded in E 3 (B).

Example 8.4. Let A = K⟨x, y, z, w, g⟩/I where I is generated by {y 2 z, zx 2 + gw 2 , y 2 w 2 , xg − gx, yg − gy, wg − gw, zg − gz}.
is a normal and regular element of E(B) and E(A) = E(B)/ĝE(B).

Proof. Since d > 1, φ * : E 1 (B) → E 1 (A) is an isomorphism (i.e. A and B have the same set of generators). From (6.1) the hypothesis on γ tells us that φ * : E 2 (B) → E 2 (A) is surjective. Let D = ⊕ n D n be the graded subalgebra of E(B) generated by E 1 (B) and E 2 (B).

Lemma 10.1. The map ζ * : E(B) → E(B) is a σ * -derivation (with respect to the cup product ⋆) of homological degree 0 and internal degree +1. 1 ⊗i ⊗ δ ⊗ σ ⊗j , and ζ n : A ⊗ B ⊗n + → A ⊗ B ⊗n + , ζ n = ζ ⊗ σ n + 1 ⊗ δ n , ζ 0 = ζ.

Section 6 contains lemmas about the structure of the spectral sequences associated to a change of rings. In Section 7 we use the notion of a K 2 module and prove Theorem 7.4 about algebras with K 2 quotients, an analog to a standard theorem in Koszul theory. In particular, we see then that the class of K 2 algebras is closed under regular central extensions. In Theorem 8.2 we analyze the inheritance of the K 2 property from an algebra A to a normal regular quotient of the form A/gA where the degree of g is 1. The case where g has degree greater than 1 is analyzed in Theorem 9.1, with a remarkably different answer. In Theorem 10.2 we show that the class of K 2 algebras is closed under graded Ore extension (A = B[z, σ; δ]), and in the process we establish an interesting and apparently previously unknown fact about maps induced on the Yoneda algebra (Lemma 10.1).

Then a relation of A is essential if and only if it has degree N. An inductive argument, based on the condition of Theorem 4.4, then assures that the matrices M n must alternate between having entries of degree 1 when n is odd and degree N − 1 when n is even. This proves that A is N-Koszul.

We conclude this section by determining which 4-dimensional Artin-Schelter regular (AS-regular) algebras are K 2 .
See [10] for the definition of Artin-Schelter regular and for details on the degrees of the defining relations for these algebras. Let A be an AS-regular algebra of global dimension 4. Then A has 2, 3, or 4 linear generators. If A has 4 generators then it is Koszul [16] and hence K 2 . If the algebra has 2 generators then it has two relations, one of degree 3 and one of degree 4. The definition of Artin-Schelter regular assures us that A K has a minimal projective resolution of the form:

Lemma 6.2. Let φ : A → B be an epimorphism of K-algebras. For any left A-modules N and ′ N and any left B-module M consider the first quadrant spectral sequences E r and ′ E r , with E 2 -terms E p,q 2 = Ext p B (M, Ext q A (B, N )) and ′ E p,q 2 = Ext p B (M, Ext q A (B, ′

Remark 8.5. The example above becomes substantially more complex if we make the rings A and B commutative (that is, factor A and B by the commutativity relations). Nonetheless, using Theorem 7.4 it can be shown that the interesting features of the example remain the same, i.e. A is K 2 , g is regular, and B is not K 2 .

9. Normal, regular factor rings II, the degree d > 1 case

Throughout this section, let A = T (V )/I be a graded algebra, g a normal and regular homogeneous element of A of degree d > 1 and B = A/gA. As before, φ : A → B is the associated algebra epimorphism. We examine the conditions under which the K 2 property will descend from A to B. Since the map d 2 : E(B)(d) → E(B) associated to the long exact sequence (6.1) is a right E(B) module homomorphism, there is a distinguished element ĝ ∈ E 2,d (B) such that d 2 (a) = ĝ⋆a.

Theorem 9.1. Assume A is K 2 and the map γ : Ext 2 A

nor its Ore extension B[z; δ] are K 2 . We do not know if the converse to 10.2 is true or false.

Example 10.4. Let B be the monomial algebra K⟨x, y⟩/⟨xyx, xy 2 x, y 3 ⟩. Let σ be the identity automorphism of B and let δ be the derivation of B extended from the formula δ(x) = 0 and δ(y) = y 2 .
The algebra B fails the K 2 criterion of Section 5. Since B is a monomial algebra, each graded component B n has a canonical monomial basis, and these determine monomial bases for the graded components of B ⊗m + . We define f ∈ Hom(B ⊗3 + , K) −6 by: and by insisting that f vanishes on all other monomials of degree 6. One checks easily that ∂ * f = 0, so that [f ] is a nonzero cohomology class in We calculate: δ * 3 f (xy ⊗ xy ⊗ x) = f (xy 2 ⊗ xy ⊗ x + xy ⊗ xy 2 ⊗ x) = 1 and similarly δ * 3 f vanishes on all other monomials of degree 5. This assures that [δ * 3 f ] ≠ 0 and shows that ζ * : E 3 (B) → E 3 (B) is not zero.

References

[1] M. Artin, J. Tate, and M. Van den Bergh, Some algebras associated to automorphisms of elliptic curves, The Grothendieck Festschrift, Vol. I, Progr. Math., vol. 86, Birkhäuser Boston, Boston, MA, 1990, pp. 33-85.

[2] J. Backelin and R. Fröberg, Koszul algebras, Veronese subrings and rings with linear resolutions, Rev. Roumaine Math. Pures Appl. 30 (1985), no. 2, 85-97.

[3] R. Berger, Koszulity for nonquadratic algebras, J. Algebra 239 (2001), no. 2, 705-734.

[4] R. Berger and V. Ginzburg, Higher symplectic reflection algebras and non-homogeneous N-Koszul property, J. Algebra 304 (2006), no. 1, 577-601.

[5] G. M. Bergman, The diamond lemma for ring theory, Adv. in Math.
29 (1978), no. 2, 178-218.

[6] T. Cassidy and B. Shelton, PBW-deformation theory and regular central extensions, pre-print, 2006.

[7] G. Fløystad and J. E. Vatne, PBW-deformations of N-Koszul algebras, J. Algebra 302 (2006), no. 1, 116-155.

[8] E. L. Green, E. N. Marcos, R. Martínez-Villa, and Pu Zhang, D-Koszul algebras, J. Pure Appl. Algebra 193 (2004), no. 1-3, 141-162.

[9] P. H. Hai and M. Lorenz, Koszul algebras and the quantum MacMahon master theorem, pre-print, 2007.

[10] D. M. Lu, J. H. Palmieri, Q. S. Wu, and J. J. Zhang, Regular algebras of dimension 4 and their A∞-Ext-algebras, pre-print, 2004.

[11] J. M. Mauger, The cohomology of certain Hopf algebras associated with p-groups, Trans. Amer. Math. Soc. 356 (2004), no. 8, 3301-3323 (electronic).

[12] A. Polishchuk, Koszul configurations of points in projective spaces, J. Algebra 298 (2006), no. 1, 273-283.

[13] A. Polishchuk and L. Positselski, Quadratic algebras, University Lecture Series,
vol. 37, American Mathematical Society, Providence, RI, 2005.

[14] L. Positselskii, Koszul property and Bogomolov's conjecture, Harvard PhD thesis, 1998.

[15] S. B. Priddy, Koszul resolutions, Trans. Amer. Math. Soc. 152 (1970), 39-60.

[16] B. Shelton and C. Tingey, On Koszul algebras and a new construction of Artin-Schelter regular algebras, J. Algebra 241 (2001), no. 2, 789-798.

[17] C. A. Weibel, An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, Cambridge, 1994.

[18] J. J. Zhang, Twisted graded algebras and equivalences of graded categories, Proc. London Math. Soc. (3) 72 (1996), no. 2, 281-311.
[]
[ "Tensor Decomposition for EEG Signals Retrieval", "Tensor Decomposition for EEG Signals Retrieval" ]
[ "Member IEEEZehong Cao \nSchool of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia\n", "Member IEEEMukesh Prasad \nSchool of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia\n", "Fellow IEEEChin-Teng Lin \nSchool of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia\n" ]
[ "School of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia", "School of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia", "School of Software\nFaculty of Engineering and Information Technology\n{Zehong.Cao\nCentre for Artificial Intelligence\nUniversity of Technology Sydney\nMukesh.PrasadSydney, ChinNSWAustralia" ]
[]
Prior studies have proposed methods to recover multi-channel electroencephalography (EEG) signal ensembles from their partially sampled entries. These methods depend on spatial scenarios, yet few approaches aim at temporal reconstruction with low loss. The goal of this study is to retrieve the temporal EEG signals independently, a step that has been overlooked in data pre-processing. We model the EEG signals with a tensor-based approach, the nonlinear Canonical Polyadic Decomposition (CPD). In this study, we collected EEG signals during a resting-state task. We then defined the source signals as the original EEG signals, and the generated tensor was perturbed by Gaussian noise with a signal-to-noise ratio of 0 dB. The sources were separated using a basic nonnegative CPD, and the relative errors on the estimates of the factor matrices were computed. Comparing the similarities between the source signals and their recovered versions, the results showed a significantly high correlation of over 95%. Our findings reveal the possibility of recovering temporal signals in EEG applications.
10.1109/smc.2019.8914076
[ "https://arxiv.org/pdf/1807.01541v6.pdf" ]
126,354,836
1807.01541
dd2bb3b53b216a577ed27d1f15f0f910f4a84481
Tensor Decomposition for EEG Signals Retrieval

Zehong Cao, Member, IEEE, Mukesh Prasad, Member, IEEE, and Chin-Teng Lin, Fellow, IEEE
School of Software, Faculty of Engineering and Information Technology, and Centre for Artificial Intelligence, University of Technology Sydney, NSW, Australia

Keywords: EEG, Tensor, Nonlinear, CPD, Recovery

Prior studies have proposed methods to recover multi-channel electroencephalography (EEG) signal ensembles from their partially sampled entries. These methods depend on spatial scenarios, yet few approaches aim at temporal reconstruction with low loss. The goal of this study is to retrieve the temporal EEG signals independently, a step that has been overlooked in data pre-processing. We model the EEG signals with a tensor-based approach, the nonlinear Canonical Polyadic Decomposition (CPD). In this study, we collected EEG signals during a resting-state task. We then defined the source signals as the original EEG signals, and the generated tensor was perturbed by Gaussian noise with a signal-to-noise ratio of 0 dB. The sources were separated using a basic nonnegative CPD, and the relative errors on the estimates of the factor matrices were computed. Comparing the similarities between the source signals and their recovered versions, the results showed a significantly high correlation of over 95%. Our findings reveal the possibility of recovering temporal signals in EEG applications.
I. INTRODUCTION

Nowadays, considerable interest has been dedicated to the development of wearable electroencephalography (EEG) systems with dry sensors that collect and record vital signs over extended periods. Long-term EEG recording depends on low-power communication and transmission protocols. However, the performance of wearable EEG systems is bottlenecked mainly by the limited lifespan of batteries. Therefore, exploring data compression techniques can reduce the amount of data transmitted from the EEG systems to the cloud. Compressive sensing (CS), a novel data sampling paradigm that merges the acquisition and compression processes, provides the best trade-off between reconstruction quality and low power consumption compared to conventional compression approaches [1]. CS suggests that a signal can be reconstructed from its partial observations if it enjoys a sparse representation in some transform domain and the observation operator satisfies some incoherence conditions. Recently, recovering a spectrally sparse temporal and spatial signal has become of great interest in the signal processing community [2]-[3]. A spectrally sparse spatial signal can be represented sparsely in the discrete Fourier transform domain if its frequencies align well with the discrete frequencies. In this case, signals can be recovered from few measurements by enforcing sparsity in the discrete Fourier domain [4]. However, frequency information in practical applications generally takes fewer values than the temporal domain, which leads to a loss of sparsity and hence worsens the performance of compressed sensing. To address this problem, total variation and atomic norm [5] minimization methods were proposed to deal with the recovery of signals composed of continuous sinusoids or exponentials [6], but these methods did not address temporal EEG signals. Therefore, reconstructing a signal from its temporally sampled paradigm remains a recognized challenge in EEG signal processing.

II.
MULTI-VIEW EEG SIGNALS

Given a time series of recorded physiological data, all data samples are carried by a vector. Power spectrum analysis of such time series has often been applied to investigate physiological (e.g., EEG) oscillations with computational intelligence models [7][8][9][10][11][12][13][14] and associated healthcare applications [15][16][17][18][19][20]. Recently, multiple electrodes are often used to collect EEG data in experiments. Indeed, EEG experiments involve higher-order modes beyond the two modes of time and space. For instance, the analysis of EEG signals may compare responses recorded in different subject groups, or event-related potentials (ERPs) across trials, which indicates that brain data collected by EEG techniques fit naturally into a multi-way array with multiple modes. The multi-way array is a tensor, a new way to represent EEG signals. Tensor decomposition inherently exploits the interactions among the multiple modes of the tensor. In an EEG experiment, there could potentially be as many as seven modes: time, frequency, space, trial, condition, subject, and group. In the past ten years, there have been many reports on tensor decomposition for processing and analyzing EEG signals [21][22]. However, there is no study particularly devoted to tensor decomposition for EEG signal retrieval yet.

III. MULTIDIMENSIONAL HARMONIC RETRIEVAL

The fundamental model for tensor decomposition is the Canonical Polyadic Decomposition (CPD) [23], and we expanded this framework to the nonlinear Canonical Polyadic Decomposition (NCPD) to fit EEG signals [24].

A. Definition

Given a third-order tensor, a two-component canonical polyadic decomposition (CPD) is shown below:

X = a1 • b1 • c1 + a2 • b2 • c2 + E ≈ a1 • b1 • c1 + a2 • b2 • c2 = X1 + X2.  (1)

After the two-component CPD is applied to the tensor, two temporal, two spectral, and two spatial components are extracted.
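As a numerical illustration of Eq. (1), the following minimal sketch (in Python with numpy rather than the Tensorlab code used in this study; the component vectors are hypothetical random stand-ins for the temporal, spectral, and spatial components) builds the two rank-one terms from outer products and sums them:

```python
import numpy as np

# Hypothetical stand-ins for the temporal (a_r), spectral (b_r),
# and spatial (c_r) components of the two-component CPD in Eq. (1).
rng = np.random.default_rng(0)
a1, a2 = rng.standard_normal(6), rng.standard_normal(6)
b1, b2 = rng.standard_normal(5), rng.standard_normal(5)
c1, c2 = rng.standard_normal(3), rng.standard_normal(3)

# Rank-one terms X_r = a_r outer b_r outer c_r.
X1 = np.einsum('i,j,k->ijk', a1, b1, c1)
X2 = np.einsum('i,j,k->ijk', a2, b2, c2)

# Without the error term E, the tensor is exactly the sum of its
# two rank-one components: X = X1 + X2.
X = X1 + X2
print(X.shape)  # (6, 5, 3)
```

In the noiseless case the two-component reconstruction is exact; with the error tensor E present, the sum of rank-one terms only approximates X.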
The first temporal component a1, the first spectral component b1, and the first spatial component c1 are associated with one another, and their outer product produces the rank-one tensor X1. The second components in the time, frequency, and space modes are likewise associated with one another, and their outer product generates the rank-one tensor X2. The sum of the rank-one tensors X1 and X2 approximates the original tensor X. Therefore, the CPD is a sum of rank-one tensors plus the error tensor E. Generally, for a given Nth-order tensor X ∈ R^(I1×I2×···×IN), the CPD is defined as

X = Σ_{r=1}^{R} u_r^(1) • u_r^(2) • ··· • u_r^(N) + E = X̂ + E ≈ X̂,  (2)

where X_r = u_r^(1) • u_r^(2) • ··· • u_r^(N), r = 1, 2, ···, R; X̂ approximates the tensor X; E ∈ R^(I1×I2×···×IN); and ||u_r^(n)|| = 1 for n = 1, 2, ···, N−1. U^(n) = [u_1^(n), u_2^(n), ···, u_R^(n)] ∈ R^(In×R) denotes the component matrix for mode n, with n = 1, 2, ···, N. In tensor-matrix product form, Eq. (2) transforms into

X = I ×_1 U^(1) ×_2 U^(2) ×_3 ··· ×_N U^(N) + E = X̂ + E,  (3)

where I is an identity tensor, i.e., a diagonal tensor whose diagonal entries equal one. Here, we used Tensorlab [25] for signal processing and tensor decompositions. Its batch nonlinear least squares (NLS) algorithm, cpd_nls, computes the CPD of the tensor formed by the slices in the window.

B. Data

One 25-year-old man participated in the resting-state experiment, during which EEG signals were recorded at the O1, Oz, and O2 channels; he was asked to read and sign an informed consent form before participating in the EEG experiment. This study was approved by the Institutional Review Board of the Veterans General Hospital, Taipei, Taiwan. Three sources impinge on the EEG signals with azimuth angles of 10°, 30° and 70°, respectively, and with elevation angles of 20°, 30° and 40°, respectively. We observe 200 time samples, such that a tensor T ∈ ℂ^(10×10×15) is obtained, with t_ijk the observed signal sampled at time instance k. Each source contributes a rank-1 term to the tensor.
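The 0 dB Gaussian perturbation used throughout this study can be sketched as follows (Python/numpy, not the authors' Tensorlab code; the clean tensor T here is a hypothetical random stand-in for the measurement tensor): the noise tensor is rescaled so that its Frobenius norm realizes the prescribed signal-to-noise ratio.

```python
import numpy as np

# Hypothetical clean observation tensor (stand-in for the 10 x 10 x 15
# measurement tensor described above).
rng = np.random.default_rng(1)
T = rng.standard_normal((10, 10, 15))

# Add white Gaussian noise scaled to a target SNR (in dB). At 0 dB the
# noise carries the same Frobenius-norm energy as the signal.
snr_db = 0.0
noise = rng.standard_normal(T.shape)
noise *= np.linalg.norm(T) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
T_noisy = T + noise

# Verify the achieved SNR.
snr_est = 20 * np.log10(np.linalg.norm(T) / np.linalg.norm(noise))
assert abs(snr_est - snr_db) < 1e-6
```

The same scaling rule applies for any target SNR; only the factor 10^(snr_db/20) changes.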
The vectors in the first and second modes are Vandermonde, and the third mode contains the respective source signals multiplied by attenuation factors. Hence, the factor matrices in the first and second modes, denoted A and E, are Vandermonde matrices, and the factor matrix in the third mode, denoted S, contains the attenuated sources, i.e., the raw EEG data (source signals). Additionally, the generated tensor is perturbed by Gaussian noise with a signal-to-noise ratio of 0 dB.

C. Signal separation and direction-of-arrival estimation

The sources are separated by means of a basic CPD, without using the Vandermonde structure. The relative errors on the estimates of the factor matrices can be calculated as the errors between factor matrices in a CPD (ERRCPD), called cpderr in Tensorlab. ERRCPD computes the relative difference in Frobenius norm between the factor matrix U^(n) and the estimated factor matrix Uest^(n) as

ERRCPD_n = ||U^(n) − Uest^(n) P D^(n)|| / ||U^(n)||,  (4)

where P and D^(n) are a permutation matrix and a scaling matrix such that the estimated factor matrix Uest^(n) is optimally permuted and scaled to fit U^(n). The optimally permuted and scaled version is returned as the fourth output argument. If size(Uest^(n), 2) > size(U^(n), 2), then P selects the size(U^(n), 2) rank-one terms of Uest that best match those in U. If size(Uest^(n), 2) < size(U^(n), 2), then P pads the rank-one terms of Uest with rank-zero terms. Furthermore, it is important to note that the diagonal matrices D^(n) are not constrained to multiply to the identity matrix. In other words, ERRCPD_n returns the relative error between U^(n) and Uest^(n) independently of the relative error between U^(m) and Uest^(m), where m ≠ n.

IV. RESULTS

A. The source of EEG signals

The source EEG signals are shown in Fig. 1: the O1, Oz, and O2 channels correspond to sources 1, 2, and 3, respectively.

Figure 1. The three sources of EEG signals.

B. Visualisation of the tensor

Here, as shown in Fig. 2, we visualized the third-order tensor T by drawing its mode-1, mode-2, and mode-3 slices, using sliders to define their respective indices. The indices i, j, and k indicate the representation scales for the third-order tensor.

Figure 2. Visualization of a third-order tensor with slices.

C. Observed signals with and without noise

We generated the tensor perturbed by Gaussian noise with a signal-to-noise ratio of 0 dB. As shown in Fig. 3, we give the three observed signals with and without noise for the source EEG signals.

Figure 3. Three observed signals with and without noise.

D. Signal separation

The relative errors on the estimates of the factor matrices, calculated with ERRCPD, are 0.0504, 0.0487, and 0.1634, respectively. ERRCPD also returns estimates of the permutation matrix and scaling matrices, which can be used to fix the indeterminacies. The source signals and their recovered versions are compared in Fig. 4.

Figure 4. The original and recovered source signals.

Additionally, we computed the correlation between the original and recovered source signals, and the outcome showed over 95% correlation at the significance level (p < 0.05).

E. Direction-of-arrival estimation and missing values due to broken sensors

The direction-of-arrival angles can be determined using the shift-invariance property of the individual Vandermonde vectors. This gives relative errors for the azimuth angles of 0.0303, 0.0069, and 0.0058, and for the elevation angles of 0.0061, 0.0114, and 0.0098. Since Tensorlab is able to process full, sparse, and incomplete tensors, missing entries can be indicated by empty values. We consider the equivalent of a deactivated sensor, a sensor that breaks halfway through the experiment, and a sensor that starts to work halfway through the experiment. The incomplete tensor is visualized in Fig. 5.

V. CONCLUSION

This is the first study to retrieve the temporal EEG signals independently.
In this study, we collected EEG signals during a resting-state task and investigated them with a tensor-based approach, the nonlinear CPD. The source signals were separated using a basic CPD, and the relative errors on the estimates of the factor matrices were computed. Comparing the similarities between the source signals and their recovered versions, the results showed a significantly high correlation of over 95%. Our findings reveal the possibility of recovering temporal signals in EEG applications.

Figure 5. Visualization of the data tensor in the case of broken sensors.

ACKNOWLEDGEMENT

This work was supported in part by the Australian Research Council (ARC) under discovery grants DP180100670 and DP180100656. The research was also sponsored in part by the Army Research Laboratory and was accomplished under Cooperative Agreement Numbers W911NF-10-2-0022 and W911NF-10-D-0002/TO 0023. The views and the conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. Additionally, we express our gratitude to the subject who kindly participated in this study. We also thank all of the students and staff at the Brain Research Center at National Chiao Tung University and the Computational Intelligence and Brain-Computer Interface Center at the University of Technology Sydney for their assistance during the study.

REFERENCES

[1] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Process., vol. 56, pp. 2346, 2008.

[2] E. J. Candès and C.
Fernandez-Granda, "Towards a mathematical theory of super-resolution," Commun. Pure Appl. Math., vol. 67, pp. 906-956, 2014.

[3] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Trans. Inf. Theory, vol. 59, pp. 7465-7490, 2013.

[4] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, pp. 489-509, 2006.

[5] V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky, "The convex geometry of linear inverse problems," Found. Comput. Math., vol. 12, pp. 805-849, 2012.

[6] Y. Chi and Y. Chen, "Compressive two-dimensional harmonic retrieval via atomic norm minimization," IEEE Trans. Signal Process., vol. 63, pp. 1030-1042, 2015.

[7] Z. Cao and C. T. Lin, "Inherent Fuzzy Entropy for the Improvement of EEG Complexity Evaluation," IEEE Trans. Fuzzy Systems, vol. 26, pp. 1032-1035, 2018.

[8] W. Ding, C. T. Lin, M. Prasad, Z. Cao, and J. Wang, "A Layered-Coevolution-Based Attribute-Boosted Reduction Using Adaptive Quantum-Behavior PSO and Its Consistent Segmentation for Neonates Brain Tissue," IEEE Trans.
Fuzzy Systems, vol. 26, pp. 1177-1191, 2018.

[9] W. Ding, C. T. Lin, and Z. Cao, "Deep Neuro-Cognitive Co-Evolution for Fuzzy Attribute Reduction by Quantum Leaping PSO With Nearest-Neighbor Memeplexes," IEEE Trans. Cybern., 2018.

[10] C. T. Lin, Y. T. Liu, S. L. Wu, Z. Cao, Y. K. Wang, C. S. Huang, J. T. King, S. A. Chen, S. W. Lu, and C. H. Chuang, "EEG-Based Brain-Computer Interfaces: A Novel Neurotechnology and Computational Intelligence Method," IEEE Syst., Man, Cybern. Magazine, vol. 3, no. 4, pp. 16-26, 2017.

[11] W. Ding, C. T. Lin, and Z. Cao, "Shared Nearest Neighbor Quantum Game-based Attribute Reduction with Hierarchical Co-evolutionary Spark and Its Consistent Segmentation Application in Neonatal Cerebral Cortical Surfaces," IEEE Trans. Neural Netw. Learn. Syst., 2018.

[12] Z. Cao, W. Ding, Y. K. Wang, F. K. Hussain, A. Jumaily, and C. T. Lin, "Effects of Repetitive SSVEPs on EEG Complexity using Multiscale Inherent Fuzzy Entropy," arXiv preprint arXiv:1809.06671, 2018.

[13] Z. Cao, C. T.
Lin, K. L. Lai, L. W. Ko, J. T. King, J. L. Fuh, and S. J. Wang, "Extraction of SSVEPs-based Inherent Fuzzy Entropy Using a Wearable Headband EEG in Migraine Patients," arXiv preprint arXiv:1809.06673, 2018.

[14] C. H. Chuang, Z. Cao, Y. K. Wang, P. T. Chen, C. S. Huang, N. R. Pal, and C. T. Lin, "Dynamically Weighted Ensemble-based Prediction System for Adaptively Modeling Driver Reaction Time," arXiv preprint arXiv:1809.06675, 2018.

[15] Z. Cao, C. T. Lin, C. H. Chuang, K. L. Lai, A. C. Yang, J. L. Fuh, and S. J. Wang, "Resting-state EEG power and coherence vary between migraine phases," J. Headache Pain, vol. 17, pp. 102, 2016.

[16] Z. Cao, K. L. Lai, C. T. Lin, C. H. Chuang, C. C. Chou, and S. J. Wang, "Exploring resting-state EEG complexity before migraine attacks," Cephalalgia, 2017.

[17] C. H. Chuang, Z. Cao, J. T. King, B. S. Wu, Y. K. Wang, and C. T. Lin, "Brain electrodynamic and hemodynamic signatures against fatigue during driving," Front. Neurosci., vol. 12, pp. 181, 2018.

[18] Z. H. Cao, L. W. Ko, K. L. Lai, S. B. Huang, S. J. Wang, and C. T. Lin, "Classification of migraine stages based on resting-state EEG power," International Joint Conference on Neural Networks (IJCNN), IEEE, 2015.
Forehead EEG in support of future feasible personal healthcare solutions: Sleep management, headache prevention, and depression treatment. C T Lin, C H Chuang, Z Cao, A K Singh, C S Hung, Y H Yu, S J Wang, IEEE Access. 5C.T. Lin, C. H. Chuang, Z. Cao, A.K. Singh, C.S. Hung, Y. H. Yu … and S. J. Wang. "Forehead EEG in support of future feasible personal healthcare solutions: Sleep management, headache prevention, and depression treatment", IEEE Access, vol. 5, p.p. 10612-10621, 2017. . Z Cao, C T Lin, W Ding, M H Chen, C T Li, T , Z. Cao, C.T. Lin, W. Ding, M.H. Chen, C.T. Li, and T.P. Identifying Ketamine Responses in Treatment-Resistant Depression Using a Wearable Forehead EEG. Su, arXiv:1805.11446arXiv preprintSu, "Identifying Ketamine Responses in Treatment-Resistant Depression Using a Wearable Forehead EEG". arXiv preprint arXiv:1805.11446, 2018. . F Cong, Q Lin, H , L Kuang, X F Gong, P , F. Cong, Q. Lin, H., L. D Kuang, X. F. Gong, P. Tensor decomposition of EEG signals: a brief review. &amp; T Astikainen, Ristaniemi, J. Neurosci. Methods. 248Astikainen, & T. Ristaniemi. "Tensor decomposition of EEG signals: a brief review". J. Neurosci. Methods, vol. 248, pp. 59-69, 2015. EEG extended source localization: tensor-based vs. conventional methods. H Becker, L Albera, P Comon, M Haardt, G Birot, F Wendling, . .. &amp; I Merlet, NeuroImage. 96H. Becker, L. Albera, P. Comon, M. Haardt, G. Birot, F. Wendling, ... & I. Merlet. "EEG extended source localization: tensor-based vs. conventional methods". NeuroImage, vol. 96, pp. 143-157, 2014. Canonical polyadic decomposition based on a single mode blind source separation. G Zhou, C Andrzej, IEEE Signal Proc. Let. 19G. Zhou, , and C. Andrzej. "Canonical polyadic decomposition based on a single mode blind source separation." IEEE Signal Proc. Let. vol. 19, pp.523-526, 2012. Tensor decompositions and applications. T G Kolda, B W Bader, SIAM Rev. 
51T.G.Kolda and B.W.Bader,"Tensor decompositions and applications," SIAM Rev., vol. 51, pp. 455-500, 2009. Tensorlab 3.0 -Numerical optimization strategies for large-scale constrained and coupled matrix/tensor factorization. V Nico, O Debals, L D Lathauwer, 50thV. Nico, O. Debals, and L.D. Lathauwer. "Tensorlab 3.0 -Numerical optimization strategies for large-scale constrained and coupled matrix/tensor factorization." 50th . Asilomar Conference on Signals, Systems and Computers. IEEEAsilomar Conference on Signals, Systems and Computers, IEEE, 2016.
Statistical Properties of the Intrinsic Geometry of Heavy-particle Trajectories in Two-dimensional, Homogeneous, Isotropic Turbulence

Anupam Gupta (Department of Physics, University of Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome, Italy; Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore 560012, India), Dhrubaditya Mitra (NORDITA, Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-10691 Stockholm, Sweden), Prasad Perlekar (TIFR Centre for Interdisciplinary Sciences, 21 Brundavan Colony, Narsingi, Hyderabad 500075, India), Rahul Pandit (Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore 560012, India)

arXiv:1402.7058 · Corpus ID: 118113132 · PDF: https://arxiv.org/pdf/1402.7058v1.pdf

Abstract: We obtain, by extensive direct numerical simulations, trajectories of heavy inertial particles in two-dimensional, statistically steady, homogeneous, and isotropic turbulent flows, with friction. We show that the probability distribution function P(κ), of the trajectory curvature κ, is such that, as κ → ∞, P(κ) ∼ κ^{−h_r}, with h_r = 2.07 ± 0.09. The exponent h_r is universal, insofar as it is independent of the Stokes number St and the energy-injection wave number k_inj. We show that this exponent lies within error bars of its counterpart for trajectories of Lagrangian tracers. We demonstrate that the complexity of heavy-particle trajectories can be characterized by the number N_I(t, St) of inflection points (up until time t) in the trajectory and n_I(St) ≡ lim_{t→∞} N_I(t, St)/t ∼ St^{−∆}, where the exponent ∆ = 0.33 ± 0.02 is also universal.
The transport of particles by turbulent fluids has attracted considerable attention since the pioneering work of Taylor [1].
The study of such transport has experienced a renaissance because (a) there have been tremendous advances in measurement techniques and direct numerical simulations (DNSs) [2] and (b) it has implications not only for fundamental problems in the physics of turbulence [11] but also for a variety of geophysical, atmospheric, astrophysical, and industrial problems [3][4][5][6][7][8][9]. It is natural to use the Lagrangian frame of reference [10] here; but we must distinguish between (a) Lagrangian or tracer particles, which are neutrally buoyant and follow the flow velocity at a point, and (b) inertial particles, whose density ρ_p is different from the density ρ_f of the advecting fluid. The motion of heavy inertial particles is determined by the flow drag, which can be parameterized by a time scale τ_s, whose ratio with the Kolmogorov dissipation time T_η is the Stokes number St = τ_s/T_η; tracer and heavy inertial particles show qualitatively different behaviors in flows; e.g., the former are uniformly dispersed in a turbulent flow, whereas the latter cluster [11], most prominently when St ∼ 1. Differences between tracers and inertial particles have been investigated in several studies [2], which have concentrated on three-dimensional (3D) flows and on the clustering or dispersion of these particles. We present the first study of the statistical properties of the geometries of heavy-particle trajectories in two-dimensional (2D), homogeneous, isotropic, and statistically steady turbulence, which is qualitatively different from its 3D counterpart because, if energy is injected at wave number k_inj, two power-law regimes appear in the energy spectrum E(k) [12][13][14], for wave numbers k < k_inj and k > k_inj. One regime is associated with an inverse cascade of energy, towards large length scales, and the other with a forward cascade of enstrophy to small length scales.
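The linear Stokes-drag relaxation, parameterized above by τ_s, can be illustrated with a minimal numerical sketch (in Python; the time-stepping loop, the function name, and the stand-in velocity callable `u_at` are illustrative, not the DNS code used in this paper):

```python
import numpy as np

# Euler update of the heavy-particle equations of motion (Eq. (4) below):
# dx/dt = v, dv/dt = -(v - u(x, t))/tau_s. The callable `u_at` stands in
# for the interpolated DNS velocity field; this is a sketch, not the
# production solver.
def advance_heavy_particle(x, v, u_at, tau_s, dt, n_steps):
    t = 0.0
    for _ in range(n_steps):
        u = u_at(x, t)
        x = x + dt * v
        v = v + dt * (u - v) / tau_s
        t += dt
    return x, v

# In a uniform flow the particle velocity relaxes to the fluid velocity
# on the drag time scale tau_s, as the linear drag law dictates.
xf, vf = advance_heavy_particle(np.zeros(2), np.zeros(2),
                                lambda x, t: np.array([1.0, 0.0]),
                                tau_s=0.1, dt=1e-3, n_steps=5000)
```

With the integration time chosen to be many multiples of τ_s, the particle velocity approaches the imposed uniform flow to high accuracy.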
It is important to study both forward- and inverse-cascade regimes, so we use k_inj = 4, which gives a large forward-cascade regime in E(k), and k_inj = 50, which yields both forward- and inverse-cascade regimes. For a heavy inertial particle, we calculate the velocity v, the acceleration a = dv/dt, with magnitude a and normal and tangential components a_n and a_t, respectively. The intrinsic curvature of a particle trajectory is κ = a_n/v². We find two intriguing results that shed new light on the geometries of particle tracks in 2D turbulence: First, the probability distribution function (PDF) P(κ) is such that, as κ → ∞, P(κ) ∼ κ^{−h_r}; in contrast, as κ → 0, P(κ) has slope zero; we find that h_r = 2.07 ± 0.09 is universal, insofar as it is independent of St and k_inj. We present high-quality data, with two decades of clean scaling, to obtain the values of these exponents, for different values of St and k_inj. We obtain data of similar quality for Lagrangian-tracer trajectories and thus show that h_r lies within error bars of its tracer-particle counterpart. Second, along every heavy-particle track, we calculate the number, N_I(t, St), of inflection points (at which a × v changes sign) up until time t. We propose that

n_I(St) ≡ lim_{t→∞} N_I(t, St)/t    (1)

is a natural measure of the complexity of the trajectories of these particles; and we find that n_I ∼ St^{−∆}, where the exponent ∆ = 0.33 ± 0.02 is also universal. We obtain several other interesting results: (a) At short times the particles move ballistically but, at large times, there is a crossover to Brownian motion, at a crossover time T_cross that increases monotonically with St. (b) The PDFs P(a), P(a_n), and P(a_t) all have exponential tails. (c) By conditioning P(κ) on the sign of the Okubo-Weiss [16][17][18] parameter Λ, we show that particles in regions of elongational flow (Λ < 0) have, on average, trajectories with a lower curvature than particles in vortical regions (Λ > 0).
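As an illustration of how κ = a_n/v² can be evaluated along a sampled track, the following sketch uses centred finite differences as a stand-in for the DNS time series (the function name is illustrative):

```python
import numpy as np

# Trajectory curvature from sampled positions: in 2D,
# a_n = |v_x a_y - v_y a_x| / |v|, so kappa = |v_x a_y - v_y a_x| / |v|^3,
# which equals a_n / v^2. Centred finite differences stand in for the
# DNS time series.
def trajectory_curvature(x, y, dt):
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    speed = np.hypot(vx, vy)
    return np.abs(vx * ay - vy * ax) / speed**3

# A circle of radius R traversed at constant speed has kappa = 1/R.
t = np.linspace(0.0, 2 * np.pi, 4001)
R = 2.0
kappa = trajectory_curvature(R * np.cos(t), R * np.sin(t), t[1] - t[0])
```

The circular-track check recovers the constant curvature 1/R at interior sample points to the accuracy of the finite differences.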
We write the 2D incompressible Navier-Stokes (NS) equation in terms of the stream-function ψ and the vorticity ω = ∇ × u(x, t), where u ≡ (−∂_y ψ, ∂_x ψ) is the fluid velocity at the point x and time t, as follows:

D_t ω = ν∇²ω − µω + F,    (2)
∇²ψ = ω.    (3)

Here, D_t ≡ ∂_t + u·∇, the uniform fluid density ρ_f = 1, µ is the coefficient of friction, and ν the kinematic viscosity of the fluid. We use a Kolmogorov-type forcing F(x, y) ≡ −F_0 k_inj cos(k_inj y), with amplitude F_0 and length scale ℓ_inj ≡ 2π/k_inj. (A) For k < k_inj, the inverse cascade of energy yields E(k) ∼ k^{−5/3}; and (B) for k > k_inj, there is a forward cascade of enstrophy and E(k) ∼ k^{−δ}, where the exponent δ depends on the friction µ (for µ = 0, δ = 3). We use µ = 0.01 and obtain δ = 3.6. The equation of motion for a small, spherical, rigid particle (henceforth, a heavy particle) in an incompressible flow [19] assumes the following simple form, if ρ_p ≫ ρ_f:

dx/dt = v(t),    dv/dt = −(1/τ_s)[v(t) − u(x(t), t)],    (4)

where x, v, and τ_s = 2R_p²ρ_p/(9νρ_f) are, respectively, the position, velocity, and response time of the particle, and R_p is its radius. We assume that R_p ≪ η, the dissipation scale of the carrier fluid, and that the particle number density is so low that we can neglect interactions between particles, the particles do not affect the flow, and particle accelerations are so high that we can neglect gravity. In our DNSs we solve simultaneously for several species of particles, each with a different value of St; there are N_p particles of each species. We also obtain the trajectories for N_p Lagrangian particles, each of which obeys the equation dx/dt = u[x(t), t]. In Fig. (1) we show representative particle trajectories of a Lagrangian tracer (black line) and three different heavy particles with St = 0.1 (red asterisks), St = 0.5 (blue circles), and St = 1 (black squares) superimposed on a pseudocolor plot of ω.
We expect that inertial particles move ballistically in the range 0 < t ≤ τ_s; for t ≫ τ_s, we anticipate a crossover to Brownian behavior, which we quantify by defining the mean-square displacement r²(t) = ⟨(x(t_0 + t) − x(t_0))²⟩_{t_0, N_p}, where ⟨·⟩_{t_0, N_p} denotes an average over t_0 and over the N_p particles with a given value of St. Figure (2) contains log-log plots of r² versus t, for the representative cases with St = 0.1 (red asterisks) and St = 1 (black squares); both of these plots show clear crossovers from ballistic (r² ∼ t²) to Brownian (r² ∼ t) behaviors. We define the crossover time T_cross as the intersection of the ballistic and Brownian asymptotes (bottom inset of Fig. (2)). The top inset of Fig. (2) shows that, in the parameter range we consider, T_cross increases monotonically with St. In Fig. (3) we present semilog plots of the PDFs P(a), P(a_t), and P(a_n) for some representative values of St. Clearly, all of these PDFs have exponential tails, i.e., P(a, St) ∼ exp[−a/α(St)], P(a_t, St) ∼ exp[−a_t/α_t(St)], and P(a_n, St) ∼ exp[−a_n/α_n(St)]. As St increases, the tails of these PDFs fall more and more rapidly, because the higher the inertia the more difficult it is to accelerate a particle. Hence, α, α_t, and α_n decrease with St [see Table (II)]. Although these acceleration PDFs have exponential tails, P(κ) shows a power-law behavior as κ → ∞, as we have mentioned above. The exponent h_r for the right tail of P(κ) is especially interesting because it characterizes the parts of a trajectory that have large values of κ. If P(κ) ∼ κ^{−h_r}, then its cumulative PDF Q(κ) ∼ κ^{−h_r+1}. We obtain an accurate estimate of h_r from Q, which we obtain by a rank-order method that does not suffer from binning errors [15]. We give representative, log-log plots of Q in Fig.
(4), for St = 0.1 (blue asterisks) and St = 1 (red squares); and we determine h_r by fitting a straight line to Q over a scaling range of more than two decades; we plot, in the inset of Fig. (4), the local slope of this scaling range, whose mean value and standard deviation yield, respectively, h_r and its error bars. From such plots we find that h_r does not depend significantly on St [Table (II)]. Furthermore, we find that the Lagrangian analog of h_r, which we denote by h_lagrangian, is 2.03 ± 0.09, i.e., it lies within error bars of h_r. By analyzing the κ → 0 limit of P(κ), we find that P(κ) ∼ A_0 κ^{h_l}, where A_0 > 0 is an amplitude and h_l = 0.0 ± 0.1 (the latter is independent of St); this indicates that there is a nonzero probability that the paths of particles have zero curvature, i.e., they can move in straight lines. The κ → 0 limit of P(κ) seems, therefore, to be different from its counterpart for 3D fluid turbulence (see Ref. [26] for Lagrangian tracers and Ref. [28] for heavy particles), where P(κ) → 0 as κ → 0. Very-high-resolution DNSs for 2D turbulence must be undertaken to probe the κ → 0 limit of P(κ) by going to even smaller values of κ than we have been able to obtain reliably in our DNS. A point in a 2D flow is vortical or strain-dominated if the Okubo-Weiss parameter Λ = (1/8)(ω² − σ²) is, respectively, positive or negative [16][17][18]. We now investigate how the acceleration statistics of heavy particles depends on the sign of Λ by conditioning the PDFs of a_t and κ on this sign. In particular, we obtain the conditional PDFs P^+ and P^-, where the superscript stands for the sign of Λ. We find, on the one hand, that the tail of P^+(a_t) falls faster than that of P^-(a_t) because regions of the trajectory with high tangential accelerations are associated with strain-dominated points in the flow.
On the other hand, the right tail of P^+(κ) falls more slowly than that of P^-(κ), which implies that high-curvature parts of a particle trajectory are correlated with vortical regions of the flow. We give plots of P^+(a_t), P^+(κ), P^-(a_t), and P^-(κ) in the Appendix. We find that a × v (a pseudoscalar in 2D, like the vorticity) changes sign at several inflection points along a particle trajectory. We use the number of inflection points on a trajectory, per unit time, n_I(St) (see Eq. (1)), as a measure of its complexity. In Fig. (5) we demonstrate that the limit in Eq. (1) exists by plotting N_I(t, St)/t as a function of t for St = 0.1 and St = 2; the mean value of N_I(t, St)/t, between the two vertical dashed lines in Fig. (5), yields our estimate for n_I(St), which is given in the inset as a function of St (on a log-log scale); the standard deviation gives the error bars. From this inset of Fig. (5) we conclude that n_I(St) ∼ St^{−∆}, with ∆ = 0.33 ± 0.05. This exponent ∆ [Table (I)] is independent of the Reynolds number and µ, within the range of parameters we have explored. Furthermore, ∆ is independent of whether our 2D turbulent flow is dominated by the forward or the inverse cascades in E(k), which are controlled by k_inj. We have repeated all the above studies with a forcing term that yields an energy spectrum with a significant inverse-cascade part (k_inj = 50); the parameters for this run are given in Table (I) in the Appendix and in Ref. [20]. The dependence of all the tails of the PDFs discussed above and of the exponents h_l and h_r on St are similar to those we have found above for k_inj = 4. Earlier studies of the geometrical properties of particle tracks have been restricted to tracers; and they have inferred these properties from tracer velocities and accelerations. For example, the PDFs of different components of the acceleration of Lagrangian particles in 2D turbulent flows have been studied for both decaying [21] and forced [22] cases; they have shown exponential tails in periodic domains, but, in a confined domain, have yielded PDFs with heavier tails [23]. The PDF of the curvature of tracer trajectories has been calculated from the same simulations, which quote an exponent h_lagrangian ≈ 2.25 (but no error bars are given).
Our work goes well beyond these earlier studies (a) by investigating the statistical properties of the geometries of the trajectories of heavy particles in 2D turbulent flows for a variety of parameter ranges and Stokes numbers, (b) by introducing and evaluating, with unprecedented accuracy (and error bars), the exponent h_r, (c) by proposing n_I as a measure of the complexity of heavy-particle trajectories and obtaining the exponent ∆ accurately, (d) by examining the dependence of all these exponents on St and k_inj, and (e) by showing, thereby, that these exponents are universal (within our error bars). Our results imply that n_I(St) has a power-law divergence, so the trajectories become more and more contorted, as St → 0. This divergence is suppressed eventually, in any DNS, which can only achieve a finite value of Re_λ because it uses only a finite number of collocation points. Such a suppression is the analog of the finite-size rounding off of divergences, in, say, the susceptibility, at an equilibrium critical point [27]. Note also that the limit St → 0 is singular, and it is not clear a priori that this limit should yield the same results, for the properties we study, as the Lagrangian case St = 0. We hope that our study will lead to experimental studies and accurate measurements of the exponents h_r and ∆, and to applications of these in developing a detailed understanding of particle-laden flows in the variety of systems that we have mentioned in the introduction. For 3D turbulent flows, geometrical properties of Lagrangian-particle trajectories have been studied numerically [24,25] and experimentally [26]. However, such geometrical properties have not been studied for heavy particles. The extension of our heavy-particle study to the case of 3D fluid turbulence is nontrivial and will be given in a companion paper [28].
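The rank-order estimate of h_r described above can be sketched as follows (illustrative; the synthetic power-law samples and the function name stand in for the measured curvature series, not for the actual analysis code):

```python
import numpy as np

# Rank-order estimate of the tail exponent h_r in P(kappa) ~ kappa^{-h_r}:
# sorting the N samples in decreasing order gives the cumulative PDF
# Q(kappa_(i)) = i/N without binning; a straight-line fit of log Q
# versus log kappa over the largest samples has slope -(h_r - 1).
def tail_exponent_rank_order(samples, fit_fraction=0.01):
    s = np.sort(samples)[::-1]
    n_fit = max(int(fit_fraction * len(s)), 10)
    ranks = np.arange(1, n_fit + 1)
    slope, _ = np.polyfit(np.log(s[:n_fit]), np.log(ranks / len(s)), 1)
    return 1.0 - slope          # Q ~ kappa^{-(h_r - 1)}

rng = np.random.default_rng(0)
# Synthetic samples with PDF ~ y^{-(a+1)} for y >= 1: the true h_r is a + 1.
a = 1.1
samples = 1.0 + rng.pareto(a, 500_000)
h_r = tail_exponent_rank_order(samples)
```

For the synthetic pure power law the estimator recovers the known exponent a + 1 to within the statistical error of the fit.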
TABLE I: The parameters for our DNS runs: N² is the number of collocation points, N_p = 10⁴ the number of Lagrangian or inertial particles, δt the time step, ν = 10⁻⁵ the kinematic viscosity, µ = 0.01 the air-drag-induced friction, F_0 the forcing amplitude, k_inj the forcing wave number, ℓ_d ≡ (ν³/ε)^{1/4} the dissipation scale, λ ≡ √(νE/ε) the Taylor microscale, Re_λ = u_rms λ/ν the Taylor-microscale Reynolds number, T_eddy = (Σ_k E(k)/k / Σ_k E(k))/u_rms the eddy-turnover time, and T_η ≡ √(ν/ε) the Kolmogorov time scale. T_inj ≡ (ℓ_inj²/E_inj)^{1/3} is the energy-injection time scale, where E_inj = ⟨f_u · u⟩ is the energy-injection rate, ℓ_inj = 2π/k_inj the energy-injection length scale, and f_ω = ∇ × f_u.

In this Supplemental Material we provide numerical details of our direct numerical simulation (DNS) of Eq. (2) in the main part of this paper. We also give results of our DNS for the case of the injection wave vector k_inj = 50, which yields a significant inverse-cascade part in the energy spectrum E(k). In Fig. (6) we show the energy spectra E(k) for our runs FA (k_inj = 4) and IA (k_inj = 50). We perform a DNS of Eq. (2) by using a pseudospectral code [30] with the 2/3 rule for dealiasing; and we use a second-order, exponential time differencing Runge-Kutta method [31] for time stepping. We use periodic boundary conditions in a square simulation domain with side L = 2π, with N² collocation points. Together with Eq. (2) we solve for the trajectories of N_p heavy particles, for each of which we solve Eq. (4) with an Euler scheme. The use of an Euler scheme to evolve particles is justified because, in time δt, a particle crosses at most one-tenth of a grid spacing. We obtain the Lagrangian velocity at an off-grid particle position x from the Eulerian velocity field by using a bilinear-interpolation scheme [32]; for numerical details see Refs. [16,[33][34][35]].
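The bilinear interpolation of the Eulerian velocity at an off-grid particle position can be sketched as follows (illustrative, with a periodic grid as in our DNS; the function name is an assumption of this sketch):

```python
import numpy as np

# Bilinear interpolation of a gridded field u[i, j] (periodic, spacing
# dx) at an off-grid position (x0, x1), as used to evaluate the fluid
# velocity at a particle position.
def bilinear_velocity(u, x, dx):
    n = u.shape[0]
    s = np.asarray(x, dtype=float) / dx
    i0 = np.floor(s).astype(int)
    f = s - i0                    # fractional offsets in each direction
    i0 %= n
    i1 = (i0 + 1) % n             # periodic wrap-around
    return (u[i0[0], i0[1]] * (1 - f[0]) * (1 - f[1])
            + u[i1[0], i0[1]] * f[0] * (1 - f[1])
            + u[i0[0], i1[1]] * (1 - f[0]) * f[1]
            + u[i1[0], i1[1]] * f[0] * f[1])

# Bilinear interpolation is exact for a field linear in x and y.
X, Y = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing='ij')
val = bilinear_velocity(X + 2.0 * Y, (2.3, 3.7), 1.0)
```

The linear-field check (value x + 2y at the query point) verifies the weights, since bilinear interpolation reproduces any field that is linear in each coordinate exactly.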
We calculate the fluid energy spectrum E(k) ≡ ⟨ Σ_{k−1/2 < k′ ≤ k+1/2} k′² |ψ(k′, t)|² ⟩_t, where ⟨·⟩_t indicates a time average over the statistically steady state. The parameters in our simulations are given in Table (II) of the main part of this paper and in Table (III). These include the Taylor-microscale Reynolds number Re_λ ≡ u_rms λ/ν, where λ ≡ √(νE/ε) is the Taylor microscale, and the Stokes number St = τ_s/T_η. We use 20 different values of St to study the dependence on St of the PDFs P(a), P(a_t), and P(a_n), the cumulative PDF Q(κ), the mean-square displacement, and the number of inflection points N_I(t, St) at which a × v changes sign along a particle trajectory. A point in a 2D flow is vortical or strain-dominated if the Okubo-Weiss parameter Λ = (1/8)(ω² − σ²) is, respectively, positive or negative [16][17][18]. We investigate how the acceleration statistics of heavy particles depends on the sign of Λ by conditioning the PDFs of a_t and κ on this sign. In particular, we obtain the conditional PDFs P^+ and P^-, where the superscript stands for the sign of Λ. We find, on the one hand, that the tail of P^+(a_t) falls faster than that of P^-(a_t) because regions of the trajectory with high tangential accelerations are associated with strain-dominated points in the flow. On the other hand, the right tail of P^+(κ) falls more slowly than that of P^-(κ), which implies that high-curvature parts of a particle trajectory are correlated with vortical regions of the flow. We give plots of P^+(a_t), P^+(κ), P^-(a_t), and P^-(κ) in Fig. (7) and Fig. (8). These trends hold for all values of St and k_inj that we have studied. In Fig. (9), we plot the mean-square displacement r² versus time t for k_inj = 50; here too we see a crossover from ballistic to Brownian behaviors; however, in contrast to the case k_inj = 4, the crossover time T_cross does not depend significantly on St. In Fig. (10), we plot the PDF P(log₁₀(κη)) versus log₁₀(κη), for St = 0.1 (blue asterisks), St = 1 (red squares), and St = 2 (black circles). Such PDFs provide another convenient way of displaying the power-law behaviors, as κ → ∞ and κ → 0, which we have reported in the main part of this paper, where we have used the cumulative PDF of κ to obtain the power-law exponents. In Table (III) we report the values of α, α_n, α_t, and the exponent h_r of the right tail of the PDF of the trajectory curvature, for the case k_inj = 50 and for different values of St. In Table (IV) we report the exponent h_l, which characterizes P(κη) as κ → 0, in both the cases k_inj = 4 and k_inj = 50. In both these cases, and for all the different values of St we have studied, h_l = 0.0 ± 0.1. The details of our DNS are given in the Appendix, and the parameters in our DNSs are given in Tables (I) and (II) for 12 representative values of St (we have studied 20 different values of St).
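The Okubo-Weiss classification used above can be sketched on a gridded velocity field as follows (illustrative; finite differences stand in for the spectral derivatives of the DNS, and the function name is an assumption of this sketch):

```python
import numpy as np

# Okubo-Weiss parameter Lambda = (omega^2 - sigma^2)/8 on a grid
# (fields indexed as f[i, j] with i along x): omega is the vorticity
# and sigma^2 the squared strain (normal plus shear). Lambda > 0 marks
# vortical points, Lambda < 0 strain-dominated ones.
def okubo_weiss(u, v, dx):
    du_dx, du_dy = np.gradient(u, dx)
    dv_dx, dv_dy = np.gradient(v, dx)
    omega = dv_dx - du_dy
    sigma2 = (du_dx - dv_dy) ** 2 + (dv_dx + du_dy) ** 2
    return (omega ** 2 - sigma2) / 8.0

# Cellular flow with psi = cos x cos y: a vortex core sits at (0, 0)
# (Lambda = 1/2) and a hyperbolic point at (pi/2, pi/2) (Lambda = -1/2).
n, dx = 128, 2.0 * np.pi / 128
X, Y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing='ij')
Lam = okubo_weiss(np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y), dx)
```

The cellular-flow check recovers Λ = +1/2 at the vortex core and Λ = −1/2 at the hyperbolic stagnation point, matching the vortical/strain-dominated sign convention stated above.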
(10), we plot the PDF P(log 10 (κη)) versus log 10 (κη), for St = 0.1 (blue asterisks), St = 1 (red squares) and St = 2 (black circles). Such PDFs provide another convenient way of displaying the power-law behaviors, as κ → ∞ and κ → 0, which we have reported in the main part of this paper, where we have used the cumulative PDF of κ to obtain the power-law exponents. In Table(III) we report the values of α, α n , α t , and the exponent h r of the right tail of the PDF of the trajectory curvature, for the case k inj = 50 and for different values of St. In Table(IV) we report the exponent h l , which charcterizes P(κη), as κ → 0, in both the cases k inj = 4 and k inj = 50. In both these cases and for all the different values of St we have studied, h l = 0.0 ± 0.1. The details of our DNS are given in the Appendix and parameters in our DNSs are given in Tables(I) and (II) for 12 representative values of St (we have studied 20 different values of St). FIG. 1 : 1(Color online) Representative particle trajectories of a Lagrangian tracer (black line) and three different heavy particles with St = 0.1 (red asterisks), St = 0.5 (blue circles), and St = 1 (black squares) superimposed on a pseudocolor plot of ω. For the spatiotemporal evolution of this plot see the animation available at the location http://www.youtube. com/watch?v=lk3iSHhfTuU FIG. 2 : 2(Color online) Log-log (base 10) plots of r 2 versus t/T eddy for St = 0.1 (red triangles), and St = 1 (black squares); top inset: plot of Tcross/T eddy versus St; bottom inset: log-log (base 10) plot of r 2 /t versus t/T eddy for tracers (blue curve) and linear fits to the small-and large-t asymptotes (dashed lines) with slopes 1 and 0 in ballistic and Brownian regimes, respectively; the intersection point of these dashed lines yields Tcross. 
(1) exists by plotting N I (t, St)/t as a function of t for St = 0.1 (red asterisks) and St = 2 (black triangles); the mean value of N I (t, St)/t, between the two vertical dashed lines in Fig. (5), yields our estimate for n I (St), which is given in the inset as a function of St (on a log-log scale); the standard deviation gives the error bars. From this inset of Fig. (5) we conclude that n I (St) ∼ St −∆ , with ∆ = 0.33 ± 0.05. This exponent ∆ [ FIG. 3 : 3(Color online) Plots of PDFs of (a) the modulus of a of the particle acceleration, (b) its tangential component at, and (c) its normal component an for St = 0 (blue curve), 0.5 (red curve), 1 (green curve), and 2 (black curve).FIG. 4: (Color online) Log-log plots of the cumulative PDFs Q(κ) for St = 0.1 (blue asterisks) and St = 1 (red squares); the inset shows a plot of the local slope of the tail of this cumulative PDF and the two dashed horizontal lines indicate the maximum and minimum values of this local slope in the range we use for fitting the exponent hr. FIG. 5 : 5(Color online) Plots of NI/(t/T eddy ) versus t/T eddy for St = 0.1 (red curve) and St = 2 (black curve); the inset shows a log-log (base 10) plot of nI versus St (blue open circles); the black dotted line has a slope = −1/3.ACKNOWLEDGMENTSWe thank A. Bhatnagar, A. Brandenburg, B. Mehlig, S.S. Ray, and D. Vincenzi for discussions, and particularly A. Niemi, whose study of the intrinsic geometrical properties of polymers[29], inspired our work on particle trajectories. The work has been supported in part by the European Research Council under the AstroDyn Research Project No. 227952 (DM), Swedish Research Council under grant 2011-542 (DM), NORDITA visiting PhD students program (AG), and CSIR, UGC, and DST (India) (AG and RP). We thank SERC (IISc) for providing computational resources. AG, PP, and RP thank NORDITA for hospitality; DM thanks the Indian Institute of Science for hospitality. 
online) Log-log (base 10) plots of the energy spectra E(k) versus k for (a) runs FA (kinj = 4) and (b) runs IA (kinj = 50). FIG. 7 : 7(Color online) Semilog (base 10) plots of the PDFs of the tangential component of the acceleration for St = 0.1 in vortical regions P(a + t ) (red squares) and in strain-dominated regions P(a − t ) (blue asterisks). online) Semilog (base 10) plots of PDF of the curvature of trajectories for St = 0.1 in vortical regions P(κ + η) (red squares), in strain-dominated regions P(κ − η) (blue asterisks), and in general (i.e., without conditioning on the sign of Λ) P(κη) (black triangles). FIG. 9 : 9(Color online) Log-log (base 10) plots for kinj = 50 of r 2 versus t/T eddy for St = 0.1 (red asterisks) and St = 1 (black squares). FIG. 10 : 10(Color online) Semilog (base 10) plot of the PDF P(log 10 (κη)) versus log 10 (κη), for St = 0.1 (blue asterisks), 301 St = 1 (red squares) and St = 2 (black circles). : The exponent h l that charcterizes P(κη), as κ → 0, in both the cases kinj = 4 and kinj = 50 and for different values of St. 
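The shell-summed spectrum E(k) defined in this Supplemental Material (and plotted in Fig. 6) can be computed as follows (illustrative sketch, up to the FFT normalization convention; the function name is an assumption):

```python
import numpy as np

# Shell-summed energy spectrum from the streamfunction transform,
# following E(k) = sum over k-1/2 < |k'| <= k+1/2 of k'^2 |psi(k')|^2.
def energy_spectrum(psi_hat):
    n = psi_hat.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    kmag = np.sqrt(kx ** 2 + ky ** 2)
    weight = kmag ** 2 * np.abs(psi_hat) ** 2
    shells = np.arange(1, n // 2 + 1)
    E = np.array([weight[(kmag > s - 0.5) & (kmag <= s + 0.5)].sum()
                  for s in shells])
    return shells, E

# A single mode at (kx, ky) = (3, 4), i.e. |k| = 5, with amplitude 2
# contributes 5^2 * 2^2 = 100 to the k = 5 shell.
psi_hat = np.zeros((32, 32), dtype=complex)
psi_hat[3, 4] = 2.0
shells, E = energy_spectrum(psi_hat)
```

The single-mode check confirms that all of the contribution lands in the correct shell with the expected weight k′²|ψ|².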
TABLE I : I TABLE II : IIThe values of α, αn, and αt and the exponent hr for the case kinj = 4 and for different values of St.Statistical Properties of the Intrinsic Geometry of Heavy-particle Trajectories in Two-dimensional, Homogeneous, Isotropic Turbulence : Supplemental Material I1 0.1 0.39 ± 0.06 0.69 ± 0.02 0.40 ± 0.06 2.16 ± 0.09 I2 0.2 0.47 ± 0.05 0.81 ± 0.03 0.46 ± 0.05 2.14 ± 0.09 I3 0.3 0.55 ± 0.04 0.95 ± 0.02 0.54 ± 0.05 2.1 ± 0.1 I4 0.4 0.63 ± 0.04 1.09 ± 0.03 0.61 ± 0.04 2.10 ± 0.08 I5 0.5 0.71 ± 0.04 1.21 ± 0.02 0.68 ± 0.03 2.09 ± 0.09 I6 0.6 0.80 ± 0.03 1.34 ± 0.03 0.77 ± 0.03 2.08 ± 0.09 I7 0.7 0.88 ± 0.04 1.48 ± 0.04 0.85 ± 0.03 2.07 ± 0.09 I8 0.8 0.97 ± 0.03 1.60 ± 0.03 0.94 ± 0.04 2.07 ± 0.09 I9 0.9 1.05 ± 0.03 1.73 ± 0.03 1.01 ± 0.04 2.1 ± 0.1 I10 1.0 1.16 ± 0.03 1.87 ± 0.03 1.10 ± 0.03 2.1 ± 0.1Run St α αt αn hr TABLE III : IIIThe values of α, αn, αt, and the exponent hr, for the case kinj = 50 for different values of St. * Electronic address: [email protected] † Electronic address: [email protected] ‡ Electronic address: [email protected] § Electronic address: [email protected]; also at. Jawaharlal Nehru Centre For Advanced Scientific Research. Jakkur† Electronic address: [email protected] ‡ Electronic address: [email protected] § Electronic address: [email protected]; also at Jawaharlal Nehru Centre For Advanced Scientific Research, Jakkur, Bangalore, India. . G Taylor, Proc. London. Math. Soc. 196s2-20G. Taylor, Proc. London. Math. Soc. s2-20, 196 (1922). . F Toschi, E Bodenschatz, Ann. Rev. Fluid Mech. 41375F. Toschi and E. Bodenschatz, Ann. Rev. Fluid Mech. 41, 375 (2009). . R A Shaw, Annual Review of Fluid Mechanics. 35183R. A. Shaw, Annual Review of Fluid Mechanics 35, 183 (2003). . W W Grabowski, L.-P Wang, Ann. Rev. Fluid Mech. 45293W. W. Grabowski and L.-P. Wang, Ann. Rev. Fluid Mech. 45, 293 (2013). . G Falkovich, A Fouxon, M Stepanov, Nature, London. 419151G. Falkovich, A. Fouxon, and M. Stepanov, Nature, Lon- don 419, 151 (2002). 
[]
[ "Stick-Slip Contact Line Motion on Kelvin-Voigt Model Substrates" ]
[ "Dominic Mokbel ", "Sebastian Aland \nTU Bergakademie Freiberg\nAkademiestrasse 6, 09599 Freiberg, Germany", "Stefan Karpitschka \nMax Planck Institute for Dynamics and Self-Organization (MPIDS)\n37077 Göttingen, Germany", "\n1 HTW Dresden, Friedrich-List-Platz 1, 01069 Dresden, Germany" ]
[ "TU Bergakademie Freiberg\nAkademiestrasse 6, 09599 Freiberg, Germany", "Max Planck Institute for Dynamics and Self-Organization (MPIDS)\n37077 Göttingen, Germany", "1 HTW Dresden, Friedrich-List-Platz 1, 01069 Dresden, Germany" ]
[]
The capillary traction of a liquid contact line causes highly localized deformations in soft solids, tremendously slowing down wetting and dewetting dynamics by viscoelastic braking. Enforcing nonetheless large velocities leads to the so-called stick-slip instability, during which the contact line periodically depins from its own wetting ridge. The mechanism of this periodic motion and, especially, the role of the dynamics in the fluid have remained elusive, partly because a theoretical description of the unsteady soft wetting problem is not available so far. Here we present the first numerical simulations of the full unsteady soft wetting problem, with a full coupling between the liquid and the solid dynamics. We observe three regimes of soft wetting dynamics: steady viscoelastic braking at slow speeds, stick-slip motion at intermediate speeds, followed by a region of viscoelastic braking where stick-slip is suppressed by liquid damping, which ultimately gives way to classical wetting dynamics, dominated by liquid dissipation.
10.1209/0295-5075/ac6ca6
[ "https://export.arxiv.org/pdf/2201.04189v1.pdf" ]
245,877,451
2201.04189
6a5d344eb545c907115fb30d08bb62c2f902b55f
Stick-Slip Contact Line Motion on Kelvin-Voigt Model Substrates Dominic Mokbel Sebastian Aland TU Bergakademie Freiberg, Akademiestrasse 6, 09599 Freiberg, Germany Stefan Karpitschka Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen, Germany 1 HTW Dresden, Friedrich-List-Platz 1, 01069 Dresden, Germany

Stick-Slip Contact Line Motion on Kelvin-Voigt Model Substrates

The capillary traction of a liquid contact line causes highly localized deformations in soft solids, tremendously slowing down wetting and dewetting dynamics by viscoelastic braking. Enforcing nonetheless large velocities leads to the so-called stick-slip instability, during which the contact line periodically depins from its own wetting ridge. The mechanism of this periodic motion and, especially, the role of the dynamics in the fluid have remained elusive, partly because a theoretical description of the unsteady soft wetting problem is not available so far. Here we present the first numerical simulations of the full unsteady soft wetting problem, with a full coupling between the liquid and the solid dynamics. We observe three regimes of soft wetting dynamics: steady viscoelastic braking at slow speeds, stick-slip motion at intermediate speeds, followed by a region of viscoelastic braking where stick-slip is suppressed by liquid damping, which ultimately gives way to classical wetting dynamics, dominated by liquid dissipation.

Introduction.-The capillary interaction of liquids with soft solids is a ubiquitous situation in natural or technological settings [1-4], ranging from droplets that interact with epithelia, for instance in the human eye [5], or epithelial cells governed by capillary physics [6], to ink-jet printing on flexible materials [7]. The capillary tractions, exerted by the liquid onto their soft support, cause strong deformations if the substrate is soft or the considered length scale is sufficiently small [8,9].
The typical scale below which capillarity deforms solids is given by the elastocapillary length, ℓ = γ/G_0, the ratio of surface tension γ and static shear modulus G_0. At three-phase contact lines, the length scale of the exerted traction lies in the molecular domain, deforming the solid into a sharp-tipped wetting ridge [10]. As a liquid spreads over a soft surface, the traction moves relative to the material points of the substrate. The necessary rearrangement of the solid deformation leads to strong viscoelastic dissipation which counteracts the motion, a phenomenon called viscoelastic braking [11-19]. At small speeds, the motion remains steady [11,12], whereas at large speeds, unsteady motion, frequently termed stick-slip, has been observed [13,20-24]. In this mode, the contact line velocity and the apparent contact angle undergo strong, periodic oscillations. On paraffin gels, Kajiya et al. observed stick-slip motion only in an intermediate velocity range, returning to continuous motion if the speed was increased even further [21]. The origin of this stick-slip motion remains debated in the literature. It is clear that the pinning and depinning are not associated with permanent surface features, but rather with the dynamics of the wetting ridge itself: the solid deformation cannot follow the fast contact line motion of the depinned (slip) phase of a stick-slip cycle [23,24]. Unclear, however, are the conditions upon which a contact line may escape from its ridge, thus eliminating the viscoelastic braking force. The depinning of a contact line from a sharp-tipped feature on a surface is governed by the Gibbs inequality [23,25]. Van Gorcum et al. [23] postulated a dynamical solid surface tension, which would alter the local force balance and thus allow the contact line to slide down the slope of the ridge. Still, the physicochemical origin of such dynamic solid surface tension remains elusive. Roché et al.
[15] postulated the existence of a point force due to bulk viscoelasticity, but the shear-thinning nature of typical soft polymeric materials would prevent such a singularity at the strain rates encountered in soft wetting [24,26]. Unclear as well is the role of the fluid phase during the cyclic motion, mainly because a comprehensive multi-physics model for the unsteady soft wetting problem is not available to date. Here we present the first fully unsteady numerical simulations of dynamical soft wetting, fully accounting for liquid and solid mechanics, and for the capillarity of the interfaces, by which we reveal the life cycle of stick-slip motion. We derive phase diagrams of steady and unsteady contact line motion by tuning parameters over large ranges, recovering stick-slip behavior at intermediate speeds. At small and large speeds, we observe steady motion, in quantitative agreement with an analytical model.

Setup.-Figure 1 (a) shows the geometric setup of the numerical simulations. A hollow cylinder (undeformed inner radius R), made of a soft viscoelastic material (gray, thickness h_s ≪ R), with a fixed (rigid) outer surface, is partially filled with a liquid (blue) and an ambient fluid phase (transparent). To keep physics conceivable, we use a minimal model and apply the Stokes limit and identical viscosities for fluid and ambient, and assume constant and equal surface tensions γ = γ_s on all interfaces (liquid-ambient, solid-liquid, and solid-ambient). The inner surface of the soft viscoelastic cylinder wall is deformed into a wetting ridge due to the capillary action of the liquid meniscus (cf. panel (b)). We use an incompressible Kelvin-Voigt constitutive model, characterized by a frequency-dependent complex modulus G* = G_0 + i η_s ω, with static shear modulus G_0 and effective substrate viscosity η_s, obtaining an elastocapillary length ℓ = γ/G_0 ≪ h_s, a characteristic time scale τ = η_s/G_0, and a characteristic velocity v_* = ℓ/τ = γ/η_s.
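As a quick plausibility check of these characteristic scales (plain Python, not the paper's code; parameter values taken from Table 1), the definitions above reproduce the tabulated numbers directly:

```python
# Characteristic scales of the Kelvin-Voigt substrate, computed from the
# material parameters of Table 1 (gamma = gamma_s = 38 mN/m, G0 = 1 kPa,
# eta_s = 3 Pa s, h_s = 1 mm).
gamma = gamma_s = 38e-3   # surface tension [N/m]
G0 = 1e3                  # static shear modulus [Pa]
eta_s = 3.0               # substrate viscosity [Pa s]
h_s = 1e-3                # substrate thickness [m]

ell = gamma / G0                 # elastocapillary length [m]
tau = eta_s / G0                 # viscoelastic time scale [s]
v_star = ell / tau               # characteristic velocity, = gamma/eta_s [m/s]
alpha_s = gamma_s / (G0 * h_s)   # elastocapillary number, = ell/h_s

print(f"ell     = {ell*1e6:.0f} um")       # 38 um, as in Table 1
print(f"tau     = {tau*1e3:.0f} ms")
print(f"v_star  = {v_star*1e3:.1f} mm/s")  # ~12.7 mm/s (Table 1: 0.0126 m/s)
print(f"alpha_s = {alpha_s:.3f}")          # 0.038
```

The elastocapillary number α_s = ℓ/h_s ≈ 0.038 confirms that the ridge is small compared to the substrate thickness, consistent with the plane-strain comparison used later.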
Our numerical model is formulated with cylindrical symmetry, implementing the two-phase fluid by a phase-field approach. Thus the liquid-ambient interface has a finite thickness ε ≪ h_s, and the capillary traction of the meniscus onto the solid is distributed over this characteristic width. The solid is modelled by a finite element approach, with a sharp interface toward the fluid. The grid size is about 5% of the elastocapillary length at the liquid-ambient interface, and typically about 20% outside of the interface region. All phases are fully coupled to each other by kinematic and stress boundary conditions. The material parameters are listed in Table 1; details on the numerical model are given in [27] and the supplementary material [28]. The liquid meniscus is forced to move by imposing the fluxes Φ on either end of the cylinder, but can freely change its shape (curvature) in response to the fluid flow. Thus the instantaneous contact line velocity v is not imposed, but rather its long-term mean v̄ = Φ/(π R²). All simulations are started at t = 0 with a flat substrate, a flat meniscus, and a constant imposed flux at the boundaries, and run until a steady state or limit cycle has been reached. We compare our simulation results for the solid deformation to the analytical plane-strain model from [13], imposing a constant contact line velocity v = v̄ and replacing the δ-shaped traction by the smooth traction profile T(x) of Eq. (1), which can be derived for the phase-field model in equilibrium (see supplementary material for details [28]). Since h_s ≪ R, the substrate deformation is well approximated by plane-strain conditions. Importantly, since ε ≪ ℓ, our analytical and numerical results do not significantly depend on the actual value of ε. Figure 1 (b) shows the quasistationary substrate deformation for several imposed velocities, comparing simulation results (markers) to the analytical model (lines), in excellent agreement.
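The phase-field traction profile of Eq. (1) is normalized so that it carries the full surface tension γ. A short numerical check (a sketch in plain Python, not from the paper's code, using γ = 38 mN/m and the interface thickness 4.75 µm from Table 1, denoted ε here):

```python
import numpy as np

gamma = 38e-3   # liquid surface tension [N/m], Table 1
eps = 4.75e-6   # phase-field interface thickness [m], Table 1

# Traction distribution of Eq. (1), evaluated at t = 0 (contact line at x = 0):
# T(x) = 3*gamma/(4*sqrt(2)*eps) * (1 - tanh(x/(sqrt(2)*eps))**2)**2
def traction(x):
    u = x / (np.sqrt(2.0) * eps)
    return 3.0 * gamma / (4.0 * np.sqrt(2.0) * eps) * (1.0 - np.tanh(u)**2)**2

# Integrating across the interface recovers gamma: the integral of sech^4
# is 4/3, so the prefactor 3/4 makes the normalization exact.
x = np.linspace(-20 * eps, 20 * eps, 20001)
y = traction(x)
dx = x[1] - x[0]
total = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
print(total, gamma)   # both ~0.038 N/m
```

The profile is sharply peaked on the scale ε, which is why the results become independent of ε once ε ≪ ℓ.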
Note that this comparison is only possible because steady ridge shapes are observed for the chosen velocities. In an intermediate velocity range we find unsteady cyclic shape dynamics, as detailed below, which cannot be grasped by the analytical model.

T(x) = 3γ/(4√2 ε) · [1 − tanh²((x − v t)/(√2 ε))]² ,    (1)

Modes of contact line motion.-The dynamics of the contact line motion are characterized by the time-dependent rotation φ = θ − θ_eq of the liquid interface at the triple line (cf. Fig. 1 (a)). Figure 2 shows φ as a function of contact line position for the three characteristic regimes that we find in our simulations. At small speed (v̄ ≪ v_*, blue), after some initial transient the contact line moves steadily, with a constant dynamic contact angle. Here, the relation between v and φ is permanently dominated by viscoelastic braking [11-13]. Once the forcing velocity exceeds a critical value, the motion becomes unsteady, finding a limit cycle after an initial transient (red): the liquid interface rotation φ shows large oscillations, of peak-to-peak amplitude ∆φ on the order of the mean rotation φ̄, with a non-trivial waveform, as the contact line advances. This behavior is not captured by the simple analytical model. For larger speeds (yellow), we observe again a constant φ after an initial transient. We note here that the motion in this regime is very sensitive to discretization artifacts and requires rather fine grid resolutions to give consistent results. Movies illustrating contact line motion and substrate dynamics for the three modes in Figure 2 can be found in the supplementary data. Figure 3 shows a phase portrait of the contact line motion, i.e., in terms of the physically relevant variables φ and v: φ = θ − θ_eq is a measure of the imbalance in Young's equation, and thus a measure of the total dissipative force (liquid and solid).
Multiplied by the instantaneous velocity, one obtains the total dissipated power per unit length of contact line, since our equations of motion are overdamped. For slow forcing speeds (blue), we observe a continuous, steady contact line motion, up to the scale of grid artifacts (mind the logarithmic scales). For intermediate forcing speeds (red), we observe a limit cycle: as the liquid rotation exceeds a well-defined maximum (① in Fig. 3), the contact line accelerates. In this phase, it surfs down its own wetting ridge, releasing energy stored in the meniscus curvature, rate-limited partly by liquid dissipation (② in Fig. 3). It thus decelerates, and a new wetting ridge starts to grow, opposing the contact line motion further (③ in Fig. 3) until the next cycle starts. For larger forcing speeds, the region covered by the limit cycle decreases until it virtually vanishes (yellow), up to grid artifacts. This is caused by the growing importance of liquid dissipation, which effectively limits, and finally prevents, the large-speed excursions during the slip phases.

Regimes of contact line motion.-We characterize the contact line motion by φ_max (① in Fig. 3), φ_min (③ in Fig. 3), and φ̄, the maximum, minimum, and mean values
The maximum braking force is observed at v = v = γ/(G 0 τ ), the elastocapillary velocity: The finite width of the traction distribution regularizes the dissipation singularity at the scale h s . Thus the contact line motion excites a dominant frequency ∼ v/ in the solid, corresponding to a dynamical elastocapillary length v ∼ γ s /G * (v/ ) = γ s /(η s v). Resonance is expected at ∼ v i.e., v ∼ γ s /η s , independent of the choice of , which is confirmed in our analytical model [28]. φ max remains approximately constant upon entering the stickslip regime, indicating a well-defined upper limit of the viscoelastic braking force also in unsteady situations (cf. location 1 ○ in Fig. 3). However, this force periodically drops to much smaller values, as indicated by the much smaller values of φ min . In these surfing phases ( 2 ○ in Fig. 3), liquid dissipation and the finite capillary energy stored in the curved meniscus are the rate-limiting factors. As the imposed v is increased further, the amplitude of the oscillation ∆φ shrinks, reaching virtually zero (indicated by the fading red region). In this regime, the reduced viscoelastic braking force (φ max ) limits the build-up of capillary energy in the meniscus, while liquid dissipation prevents its fast release. Thus the oscillatory motion is effectively damped out by liquid dissipation, while the overall motion is still governed by viscoelastic braking: φ ∼ v −1 closely follows the result from the analytical model. The increased mean liquid rotation for the largest velocity ∼ 0.28v , relative to the prediction of the analytical model, is caused by liquid dissipation. This can be rationalized by a comparison with the Cox-Voinov law for moving contact lines on rigid surfaces, which, given the capillary number ∼ 10 −2 , predicts rotations on this order of magnitude [29][30][31]. In this hydrodynamic regime, one returns to the classical wetting physics on rigid surfaces. 
In Figure 5, we summarize the dynamical wetting behavior in terms of these three modes, as a function of the imposed mean speed and the solid parameters. Steady small-speed, stick-slip, and steady high-speed modes are indicated by blue, red, and yellow discs, respectively. On panel (a), we vary on the vertical axis the solid and liquid surface tensions, and thus the elastocapillary number α_s = ℓ/h_s, while keeping the Neumann angles of static wetting constant. The onset of stick-slip is located near v̄ = v_*, given by the maximum of φ vs. v. This maximum is independent of α_s, up to a small correction due to the finite thickness of the fluid interface, as can be shown by the analytical model (see Fig. 1 of the supplementary material [28]). In physical units, however, the onset of stick-slip is proportional to γ_s, since v_* ∼ γ_s. The transition to the fast continuous mode is, in scaled units, nearly independent of the surface tension. Consequently, the stick-slip mode disappears at very low γ. Similarly, the solid viscosity η_s (panel (b)) has no measurable impact on the critical v̄/v_* for the transition to stick-slip, but the physical critical velocity is inversely proportional to η_s, since v_* ∼ η_s⁻¹. Thus, for small solid viscosities, the damping effect of liquid dissipation, and thus the transition back to steady motion, becomes noticeable already at smaller v̄/v_*, such that the stick-slip region ultimately disappears at very low η_s. In any case, at very large speeds, liquid dissipation will take over, leading to wetting dynamics equivalent to those on rigid surfaces.

Discussion.-In this Letter, we provide a comprehensive numerical analysis of dynamical soft wetting, including the physics of all relevant elements: the liquid, the solid, and the interfaces.
For each element, we used the minimal required level of complexity, to keep physics intact and conceivable: the Stokes limit for the fluid, a Kelvin-Voigt constitutive relation for the soft solid, regularized at a constant scale ε, and constant and equal solid surface tensions on all three interfaces. This simple model already requires a complex, strongly coupled multi-physics modelling approach, and exhibits rich behaviors. Our numerical experiments cover a wide range of system parameters, and we reveal three regimes in which the dominant physical mechanisms differ: (i) a slow regime, in which the contact line motion is entirely dominated by the dissipation in the solid. This regime is observed as long as the viscoelastic braking force increases with speed. (ii) an intermediate regime, in which the dominant rate-limiting mechanism periodically switches from solid to liquid dissipation. This regime starts where the viscoelastic braking force exhibits a maximum with respect to the contact line velocity. This maximum is caused by a resonance effect, due to the regularization of a singular dissipation at some finite (constant) length scale. Other mechanisms, like dynamic solid surface tensions (surface constitutive relations [23, 32-36]) or a constitutive relation that exhibits resonance (e.g., standard-linear solid [13]) would lead to the same phenomenology. (iii) a large-v regime with continuous motion, yet governed by viscoelastic braking, in which liquid dissipation prevents strong oscillations of the meniscus. Since the viscoelastic braking force, in contrast to liquid dissipation, does not increase with velocity, we ultimately find again liquid dissipation dominating the contact line motion, and one recovers the wetting physics of rigid surfaces.
With this first comprehensive overview of soft wetting physics scenarios, we provide a strong basis for interpreting the different phenomenology observed in experiments, ranging from paraffins [21] over microelectronic sealants [8,23] to biology [5,6,37], and motivate experiments in the so-far little explored large-v regimes.

* * *

SK and SA acknowledge funding by the German Research Foundation (DFG project no. KA4747/2-1 to SK and AL1705/5-1 to SA). Simulations were performed at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.

Fig. 1: (a) Dynamical wetting of a cylindrical cavity (radius R) with a soft viscoelastic wall (grey, thickness h_s) by a two-phase fluid (blue & transparent). The contact line speed is controlled by the flux boundary condition on the rear end of the cavity. Inset: definition of the liquid-liquid interface rotation φ = θ − θ_eq relative to the equilibrium angle θ_eq = 90°. (b) Quasistationary wetting ridge on the cavity wall for different velocities, comparison between FEM simulations (symbols) and the analytical model for constant v (lines). The liquid interface is aligned at x = 0, the blue region indicates the advancing liquid phase.

Fig. 2: Rotation φ of the liquid-ambient interface at the contact line, as a function of the contact line position, for different imposed mean velocities. Slow and fast speeds show a continuous motion (blue and yellow, respectively). For intermediate velocities (red) we observe a strong stick-slip behavior, in which the liquid angle oscillates with an amplitude that is comparable to its mean.

Fig. 3: Phase portraits for contact line motion. Blue: stable, stationary-motion regime. After an initial transient, the contact line finds a stationary constant value in the v-φ-plane. Discretization artifacts are visible for small speeds (mind the logarithmic scales). Red: stick-slip motion, characterized by a large limit cycle in the v-φ-plane.
Yellow: at large speeds, stick-slip motion is suppressed, finding a stationary point in the v-φ-plane again.

Fig. 4: Rotation φ of the liquid-ambient interface relative to its equilibrium orientation, as a function of the imposed mean velocity v̄. In the red region, the contact line motion is unsteady (stick-slip) in the simulations. The solid black line depicts the analytical calculation of the ridge tip rotation, for an imposed constant contact line velocity. Markers depict the maximum, mean, and minimum angle observed in the simulations. The onset of unsteady motion correlates with the maximum in ridge rotation. At large speeds, the amplitude of the angle oscillations decreases and the motion becomes stationary again until, finally, liquid dissipation becomes relevant.

Fig. 5: Phase diagrams for stick-slip behavior vs. contact line speed. Blue: steady contact line motion; red: stick-slip; yellow: high-speed continuous motion. (a) Tuning the magnitude of capillary forces γ = γ_s on the vertical axis. (b) Varying substrate viscosity on the vertical axis.

arXiv:2201.04189v1 [cond-mat.soft] 11 Jan 2022

Table 1: Material parameters

symbol                 value        meaning
η                      1 mPa s      liquid viscosity
γ                      38 mN/m      liquid surface tension
γ_s                    38 mN/m      solid surface tension
G_0                    1 kPa        static shear modulus
η_s                    3 Pa s       substrate viscosity
h_s                    1 mm         substrate thickness
ε                      4.75 µm      interface thickness
ℓ                      38 µm        elastocapillary length
α_s = γ_s/(G_0 h_s)    0.038        elastocapillary number
v_* = γ_s/η_s          0.0126 m/s   characteristic velocity

References
[1] Andreotti B. and Snoeijer J. H., Annual Review of Fluid Mechanics, 52 (2020) 285.
[2] Bense H., Roman B. and Bico J., Mechanics and Physics of Solids at Micro- and Nano-Scales (Wiley) 2019, Ch. Surface Effects on Elastic Structures, pp. 185-213.
[3] Chen L., Bonaccurso E., Gambaryan-Roisman T., Starov V., Koursari N. and Zhao Y., Current Opinion in Colloid & Interface Science, 36 (2018) 46.
[4] Style R. W., Jagota A., Hui C.-Y. and Dufresne E. R., Annual Review of Condensed Matter Physics, 8 (2017) 99.
[5] Holly F. J. and Lemp M. A., Experimental Eye Research, 11 (1971) 239.
[6] Pérez-González C., Alert R., Blanch-Mercader C., Gómez-González M., Kolodziej T., Bazellieres E., Casademunt J. and Trepat X., Nature Physics, 15 (2018) 79.
[7] Wijshoff H., Current Opinion in Colloid & Interface Science, 36 (2018) 20.
[8] Style R. W., Boltyanskiy R., Che Y., Wettlaufer J. S., Wilen L. A. and Dufresne E. R., Phys. Rev. Lett., 110 (2013) 066103.
[9] Zhao B., Bonaccurso E., Auernhammer G. K. and Chen L., Nano Letters, (2021).
[10] Park S. J., Weon B. M., Lee J. S., Lee J., Kim J. and Je J. H., Nature Communications, 5 (2014).
[11] Carré A., Gastel J.-C. and Shanahan M. E. R., Nature, 379 (1996) 432.
[12] Long D., Ajdari A. and Leibler L., Langmuir, 12 (1996) 5221.
[13] Karpitschka S., Das S., van Gorcum M., Perrin H., Andreotti B. and Snoeijer J., Nat. Commun., 6 (2015) 7891.
[14] Zhao M., Dervaux J., Narita T., Lequeux F., Limat L. and Roché M., Proc. Natl. Acad. Sci. U.S.A., 115 (2018) 1748.
[15] Dervaux J., Roché M. and Limat L., Soft Matter, 16 (2020) 5157.
[16] Henkel C., Snoeijer J. H. and Thiele U., Soft Matter, 17 (2021) 10359.
[17] Coux M. and Kolinski J. M., Proceedings of the National Academy of Sciences, 117 (2020) 32285.
[18] Leong F. Y. and Le D.-V., Physics of Fluids, 32 (2020) 062102.
[19] Smith-Mannschott K., Xu Q., Heyden S., Bain N., Snoeijer J. H., Dufresne E. R. and Style R. W., Physical Review Letters, 126 (2021) 158004.
[20] Pu G. and Severtson S. J., Langmuir, 24 (2008) 4685.
[21] Kajiya T., Daerr A., Narita T., Royon L., Lequeux F. and Limat L., Soft Matter, 9 (2013) 454.
[22] Park S. J., Bostwick J. B., Andrade V. D. and Je J. H., Soft Matter, 13 (2017) 8331.
[23] van Gorcum M., Andreotti B., Snoeijer J. H. and Karpitschka S., Phys. Rev. Lett., 121 (2018) 208003.
[24] van Gorcum M., Karpitschka S., Andreotti B. and Snoeijer J. H., Soft Matter, 16 (2020) 1306.
[25] Dyson D. C., Physics of Fluids, 31 (1988) 229.
[26] Karpitschka S., Das S., van Gorcum M., Perrin H., Andreotti B. and Snoeijer J. H., Proc. Natl. Acad. Sci. U.S.A., 115 (2018) E7233.
[27] Aland S. and Mokbel D., International Journal for Numerical Methods in Engineering, 122 (2021) 903.
[28] See Supplemental Material at http://... for additional details about the simulations and the analytical model.
[29] Cox R. G., Journal of Fluid Mechanics, 168 (1986) 169.
[30] Voinov O. V., Fluid Dynamics, 11 (1977) 714.
[31] Snoeijer J. H. and Andreotti B., Annual Review of Fluid Mechanics, 45 (2013) 269.
[32] Xu Q., Style R. W. and Dufresne E. R., Soft Matter, 14 (2018) 916.
[33] Xu Q., Jensen K. E., Boltyanskiy R., Sarfati R., Style R. W. and Dufresne E. R., Nature Communications, 8 (2017).
[34] Liu Z., Jagota A. and Hui C.-Y., Soft Matter, 16 (2020) 6875.
[35] Heyden S., Bain N., Xu Q., Style R. W. and Dufresne E. R., Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 477 (2021) 20200673.
[36] Zhao W., Zhou J., Hu H., Xu C. and Xu Q., Soft Matter, (2021).
[37] Prakash M., Quéré D. and Bush J. W. M., Science, 320 (2008) 931.
[]
Title: MDP Optimal Control under Temporal Logic Constraints - Technical Report
Authors: Xu Chu Ding, Stephen L. Smith, Calin Belta, Daniela Rus
DOI: 10.1109/cdc.2011.6161122
arXiv: 1103.4342
PDF: https://arxiv.org/pdf/1103.4342v2.pdf

Abstract — In this paper, we develop a method to automatically generate a control policy for a dynamical system modeled as a Markov Decision Process (MDP). The control specification is given as a Linear Temporal Logic (LTL) formula over a set of propositions defined on the states of the MDP. We synthesize a control policy such that the MDP satisfies the given specification almost surely, if such a policy exists. In addition, we designate an "optimizing proposition" to be repeatedly satisfied, and we formulate a novel optimization criterion in terms of minimizing the expected cost in between satisfactions of this proposition. We propose a sufficient condition for a policy to be optimal, and develop a dynamic programming algorithm that synthesizes a policy that is optimal under some conditions, and sub-optimal otherwise. This problem is motivated by robotic applications requiring persistent tasks, such as environmental monitoring or data gathering, to be performed.
I. INTRODUCTION

In this paper, we consider the problem of controlling a (finite state) Markov Decision Process (MDP). Such models are widely used in various areas including engineering, biology, and economics. In particular, in recent results, they have been successfully used to model and control autonomous robots subject to uncertainty in their sensing and actuation, such as ground robots [1], unmanned aircraft [2] and surgical steering needles [3].

Several authors [4]-[7] have proposed using temporal logics, such as Linear Temporal Logic (LTL) and Computation Tree Logic (CTL) [8], as specification languages for control systems. Such logics are appealing because they have well-defined syntax and semantics, which can easily be used to specify complex behavior.
In particular, in LTL, it is possible to specify persistent tasks, e.g., "Visit regions A, then B, and then C, infinitely often. Never enter B unless coming directly from D." In addition, off-the-shelf model checking algorithms [8] and temporal logic game strategies [9] can be used to verify the correctness of system trajectories and to synthesize provably correct control strategies.

The existing works focusing on LTL assume that a finite model of the system is available and the current state can be precisely determined. If the control model is deterministic (i.e., at each state, an available control enables exactly one transition), control strategies from specifications given as LTL formulas can be found through simple adaptations of off-the-shelf model checking algorithms [10]. If the control is non-deterministic (an available control at a state enables one of several transitions, and their probabilities are not known), the control problem from an LTL specification can be mapped to the solution of a Büchi or GR(1) game if the specification is restricted to a fragment of LTL [4], [11]. If the probabilities of the enabled transitions at each state are known, the control problem reduces to finding a control policy for an MDP such that a probabilistic temporal logic formula is satisfied [12]. By adapting methods from probabilistic model checking [12]-[14], we have recently developed frameworks for deriving MDP control policies from LTL formulas [15], which are related to a number of other approaches [16], [17] that address the problem of synthesizing control policies for MDPs subject to LTL satisfaction constraints. In all of the above approaches, a control policy is designed to maximize the probability of satisfying a given LTL formula.

(This work was supported in part by ONR-MURI N00014-09-1051, ARO W911NF-09-1-0088, AFOSR YIP FA9550-09-1-020, and NSF CNS-0834260.)
However, no attempt has been made so far to optimize the long-term behavior of the system, while enforcing LTL satisfaction guarantees. Such an objective is often critical in many applications, such as surveillance, persistent monitoring, and pickup delivery tasks, where a robot must repeatedly visit some areas in an environment and the time in between revisits should be minimized. As far as we know, this paper is the first attempt to compute an optimal control policy for a dynamical system modeled as an MDP, while satisfying temporal logic constraints. This work begins to bridge the gap between our prior work on MDP control policies maximizing the probability of satisfying an LTL formula [15] and optimal path planning under LTL constraints [18]. We consider LTL formulas defined over a set of propositions assigned to the states of the MDP. We synthesize a control policy such that the MDP satisfies the specification almost surely, if such a policy exists. In addition, we minimize the expected cost between satisfying instances of an "optimizing proposition." The main contribution of this paper is two-fold. First, we formulate the above MDP optimization problem in terms of minimizing the average cost per cycle, where a cycle is defined between successive satisfaction of the optimizing proposition. We present a novel connection between this problem and the well-known average cost per stage problem. Second, we incorporate the LTL constraints and obtain a sufficient condition for a policy to be optimal. We present a dynamic programming algorithm that under some (heavy) restrictions synthesizes an optimal control policy, and a suboptimal policy otherwise. The organization of this paper is as follows. In Sec. II we provide some preliminaries. We formulate the problem in Sec. III and we formalize the connection between the average cost per cycle and the average cost per stage problem in Sec. IV. In Sec. V, we provide a method for incorporating LTL constraints. 
We present a case study illustrating our framework in Sec. VI and we conclude in Sec. VII.

II. PRELIMINARIES

A. Linear Temporal Logic

We employ Linear Temporal Logic (LTL) to describe MDP control specifications. A detailed description of the syntax and semantics of LTL is beyond the scope of this paper and can be found in [8], [13]. Roughly, an LTL formula is built up from a set of atomic propositions Π, standard Boolean operators ¬ (negation), ∨ (disjunction), ∧ (conjunction), and temporal operators (next), U (until), 3 (eventually), 2 (always). The semantics of LTL formulas are given over infinite words in 2^Π. A word satisfies an LTL formula φ if φ is true at the first position of the word; 2φ means that φ is true at all positions of the word; 3φ means that φ eventually becomes true in the word; φ1 U φ2 means that φ1 has to hold at least until φ2 is true. More expressivity can be achieved by combining the above temporal and Boolean operators (more examples are given later). An LTL formula can be represented by a deterministic Rabin automaton, which is defined as follows.

Definition II.1 (Deterministic Rabin Automaton). A deterministic Rabin automaton (DRA) is a tuple R = (Q, Σ, δ, q0, F), where (i) Q is a finite set of states; (ii) Σ is a set of inputs (alphabet); (iii) δ : Q × Σ → Q is the transition function; (iv) q0 ∈ Q is the initial state; and (v) F = {(L(1), K(1)), ..., (L(M), K(M))} is a set of pairs of sets of states such that L(i), K(i) ⊆ Q for all i = 1, ..., M.

A run of a Rabin automaton R, denoted by rR = q0 q1 ..., is an infinite sequence of states in R such that for each i ≥ 0, q_{i+1} = δ(q_i, α) for some α ∈ Σ. A run rR is accepting if there exists a pair (L, K) ∈ F such that 1) there exists n ≥ 0 such that for all m ≥ n, we have q_m ∉ L, and 2) there exist infinitely many indices k where q_k ∈ K.
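Since a DRA is deterministic, its run on an ultimately periodic (lasso-shaped) word is itself eventually periodic, so the Rabin condition above can be checked mechanically. The following Python sketch is illustrative: the transition map and the pair list in the example are hand-built (for the formula 2 3 π), not produced by a translator such as [21].

```python
def dra_accepts_lasso(delta, q0, F, prefix, cycle):
    """Check whether the DRA accepts the ultimately periodic word
    prefix . cycle^omega, using the Rabin condition of Definition II.1:
    some pair (L, K) must satisfy inf ∩ L = ∅ and inf ∩ K ≠ ∅, where
    inf is the set of automaton states visited infinitely often.

    delta : dict mapping (state, letter) -> state (deterministic)
    F     : list of (L, K) pairs, each a set of states
    """
    q = q0
    for a in prefix:
        q = delta[(q, a)]
    # Apply the cycle word repeatedly until a cycle-start state repeats;
    # from then on, the run of the automaton is periodic.
    seen = set()
    while q not in seen:
        seen.add(q)
        for a in cycle:
            q = delta[(q, a)]
    # Collect the states visited infinitely often: every state on the run
    # from the repeating cycle-start state q back to itself.
    inf_states, p = {q}, q
    while True:
        for a in cycle:
            p = delta[(p, a)]
            inf_states.add(p)
        if p == q:
            break
    return any(not (inf_states & L) and bool(inf_states & K) for L, K in F)

# Hand-built 2-state DRA accepting exactly the words where 'pi' occurs
# infinitely often (a DRA for 2 3 pi): go to state 1 on 'pi', else to 0.
delta = {(0, 'pi'): 1, (0, '-'): 0, (1, 'pi'): 1, (1, '-'): 0}
F = [(set(), {1})]
```

The word with cycle ['-', 'pi'] satisfies 2 3 π and is accepted; a word that eventually reads only '-' is rejected.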
This acceptance condition means that rR is accepting if, for a pair (L, K) ∈ F, rR intersects L finitely many times and K infinitely many times. Note that for a given pair (L, K), L can be an empty set, but K is not empty. For any LTL formula φ over Π, one can construct a DRA with input alphabet Σ = 2^Π accepting all and only words over Π that satisfy φ (see [19]). We refer readers to [20] and references therein for algorithms, and to freely available implementations, such as [21], to translate an LTL formula over Π to a corresponding DRA.

B. Markov Decision Process and probability measure

Definition II.2 (Labeled Markov Decision Process). A labeled Markov decision process (MDP) is a tuple M = (S, U, P, s0, Π, L, g), where S = {1, ..., n} is a finite set of states; U is a finite set of controls (actions) (with a slight abuse of notation, we also define the function U(i), where i ∈ S and U(i) ⊆ U, to represent the available controls at state i); P : S × U × S → [0, 1] is the transition probability function such that for all i ∈ S, Σ_{j∈S} P(i, u, j) = 1 if u ∈ U(i), and P(i, u, j) = 0 if u ∉ U(i); s0 ∈ S is the initial state; Π is a set of atomic propositions; L : S → 2^Π is a labeling function; and g : S × U → R+ is such that g(i, u) is the expected (non-negative) cost when control u ∈ U(i) is taken at state i.

We define a control function µ : S → U such that µ(i) ∈ U(i) for all i ∈ S. An infinite sequence of control functions M = {µ0, µ1, ...} is called a policy. One can use a policy to resolve all non-deterministic choices in an MDP by applying the action µk(sk) at state sk. Given an initial state s0, an infinite sequence rM_M = s0 s1 ... on M generated under a policy M is called a path on M if P(sk, µk(sk), sk+1) > 0 for all k. The index k of a path is called the stage. If µk = µ for all k, then we call it a stationary policy and we denote it simply as µ.
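A minimal container makes the conditions of Definition II.2 concrete: the enabled transition probabilities must sum to one and the costs must be positive. The class below is an illustrative sketch (names and layout are ours, not the paper's):

```python
# A minimal container for a labeled MDP as in Definition II.2.
class LabeledMDP:
    def __init__(self, S, U, P, s0, Pi, L, g):
        self.S, self.U, self.P, self.s0 = S, U, P, s0
        self.Pi, self.L, self.g = Pi, L, g

    def validate(self):
        """Check Definition II.2: for every state i and enabled control u,
        the transition probabilities sum to 1 and the cost is positive."""
        for i in self.S:
            for u in self.U[i]:
                assert abs(sum(self.P[(i, u)].values()) - 1.0) < 1e-9
                assert self.g[(i, u)] > 0
        return True

    def step_dist(self, i, u):
        """Distribution over next states when control u is applied at i."""
        return self.P[(i, u)]

# A two-state example: at 0, action 'a' moves to 1 and 'b' stays put;
# at 1, action 'a' flips a fair coin. Proposition 'pi' holds at state 0.
mdp = LabeledMDP(
    S=[0, 1],
    U={0: {'a', 'b'}, 1: {'a'}},
    P={(0, 'a'): {1: 1.0}, (0, 'b'): {0: 1.0}, (1, 'a'): {0: 0.5, 1: 0.5}},
    s0=0,
    Pi={'pi'},
    L={0: {'pi'}, 1: set()},
    g={(0, 'a'): 1.0, (0, 'b'): 2.0, (1, 'a'): 1.0},
)
```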
A stationary policy µ induces a Markov chain whose set of states is S and whose transition probability from state i to j is P(i, µ(i), j). We use Pr^M_M to denote the probability measure over the set of paths of M under a policy M, constructed in the usual way from the transition probabilities.

III. PROBLEM FORMULATION

Consider a weighted MDP M = (S, U, P, s0, Π, L, g) and an LTL formula φ over Π. As proposed in [18], we assume that formula φ is of the form:

φ = 23π ∧ ψ, (2)

where the atomic proposition π ∈ Π is called the optimizing proposition and ψ is an arbitrary LTL formula. In other words, φ requires that ψ be satisfied and π be satisfied infinitely often. We assume that there exists at least one policy M of M such that M under M satisfies φ almost surely, i.e., Pr^M_M(φ) = 1 (in this case we simply say M satisfies φ almost surely). We let M denote the set of all policies and M_φ the set of all policies satisfying φ almost surely. Note that if there exists a control policy satisfying φ almost surely, then there typically exist many (possibly infinitely many) such policies.

We would like to obtain the optimal policy such that φ is almost surely satisfied and the expected cost in between visiting a state satisfying π is minimized. To formalize this, we first denote Sπ = {i ∈ S : π ∈ L(i)} (i.e., the states where atomic proposition π is true). We say that each visit to the set Sπ completes a cycle. Thus, starting at the initial state, the finite path reaching Sπ for the first time is the first cycle; the finite path that starts after the completion of the first cycle and ends with revisiting Sπ for the second time is the second cycle, and so on. Given a path rM_M = s0 s1 ..., we use C(rM_M, N) to denote the cycle index up to stage N, which is defined as the total number of cycles completed in N stages plus 1 (i.e., the cycle index starts with 1 at the initial state).

The main problem that we consider in this paper is to find a policy that minimizes the average cost per cycle (ACPC) starting from the initial state s0. Formally, we have:

Problem III.1. Find a policy M = {µ0, µ1, .
..}, M ∈ M_φ, that minimizes

J(s0) = lim sup_{N→∞} E{ Σ_{k=0}^{N} g(sk, µk(sk)) / C(rM_M, N) }, (3)

where E{·} denotes the expected value.

Prob. III.1 is related to the standard average cost per stage (ACPS) problem, which consists of minimizing

Js(s0) = lim sup_{N→∞} E{ Σ_{k=0}^{N} g(sk, µk(sk)) / N }, (4)

over M, with the noted difference that the right-hand side (RHS) of (4) is divided by the index of stages instead of cycles. The ACPS problem has been widely studied in the dynamic programming community, without the constraint of satisfying temporal logic formulas.

The ACPC cost function we consider in this paper is relevant for probabilistic abstractions and practical applications, where the cost of controls can represent the time, energy, or fuel required to apply controls at each state. In particular, it is a suitable performance measure for persistent tasks, which can be specified by LTL formulas. For example, in a data gathering mission [18], an agent is required to repeatedly gather and upload data. We can assign π to the data upload locations, and a solution to Prob. III.1 minimizes the expected cost in between data uploads. In such cases, the ACPS cost function does not translate to a meaningful performance criterion. In fact, a policy minimizing (4) may even produce an infinite cost in (3). Nevertheless, we will make the connection between the ACPS and the ACPC problems in Sec. IV.

Remark III.2 (Optimization Criterion). The optimization criterion in Prob. III.1 is only meaningful for specifications where π is satisfied infinitely often. Otherwise, the limit in Eq. (3) is infinite (since g is a positive-valued function) and Prob. III.1 has no solution. This is the reason for choosing φ in the form 23π ∧ ψ and for only searching among policies that almost surely satisfy φ.

IV. SOLVING THE AVERAGE COST PER CYCLE PROBLEM

A.
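The difference between (3) and (4) is only the denominator: number of stages versus number of cycles. A small simulation makes this concrete; the chain below is a hypothetical deterministic 3-state loop (a stationary policy already applied) with Sπ = {0}, so the empirical ACPS tends to 2 while the ACPC tends to the per-cycle cost 6.

```python
def path_costs(next_state, cost, s0, S_pi, N):
    """Run N stages of a deterministic chain and return the empirical
    ACPS (4) and ACPC (3). A cycle completes at each entry into S_pi;
    the cycle index starts at 1, as in Sec. III."""
    s, total, cycles = s0, 0.0, 1
    for _ in range(N):
        total += cost[s]      # stage cost incurred at the current state
        s = next_state[s]
        if s in S_pi:
            cycles += 1       # completed another cycle
    return total / N, total / cycles

# Loop 0 -> 1 -> 2 -> 0 with stage costs 1, 2, 3 and S_pi = {0}.
acps, acpc = path_costs({0: 1, 1: 2, 2: 0},
                        {0: 1.0, 1: 2.0, 2: 3.0},
                        0, {0}, 300)
```

After 300 stages the path has completed 100 cycles, so `acps` is exactly 2.0 and `acpc` is 600/101 ≈ 5.94, approaching 6 as N grows.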
Optimality conditions for ACPS problems In this section, we recall some known results on the ACPS problem, namely finding a policy over M that minimizes J s in (4). The reader interested in more details is referred to [22], [23] and references therein. Recall that the set of states of M is denoted by {1, . . . , n}. For each stationary policy µ, we use P µ to denote the transition probability matrix: P µ (i, j) = P (i, µ(i), j). Define vector g µ where g µ (i) = g(i, µ(i)). For each stationary policy µ, we can obtain a so-called gain-bias pair (J s µ , h s µ ), where J s µ = P * µ g µ , h s µ = H s µ g µ (5) with P * µ = lim N →∞ 1 N N −1 k=0 P k µ , H µ = (I − P µ + P * µ ) −1 − P * µ . (6) The vector J s µ = [J s µ (1), . . . , J s µ (n)] T is such that J s µ (i) is the ACPS starting at initial state i under policy µ. Note that the limit in (6) exists for any stochastic matrix P µ , and P * µ is stochastic. Therefore, the lim sup in (4) can be replaced by the limit for a stationary policy. Moreover, (J s µ , h s µ ) satisfies J s µ = P µ J s µ , J s µ + h s µ = g µ + P µ h s µ .(7) By noting that h s µ + v s µ = P µ v s µ ,(8) for some vector v s µ , we see that (J s µ , h s µ , v s µ ) is the solution of 3n linear equations with 3n unknowns. It has been shown that there exists a stationary optimal policy µ minimizing (4) over all policies, where its gain-bias pair (J s , h s ) satisfies the Bellman's equations for average cost per stage problems: J s (i) = min u∈U (i) n j=1 P (i, u, j)J s (j)(9) and J s (i)+h s (i) = min u∈Ū (i) g(i, u)+ n j=1 P (i, u, j)h s (j) ,(10) for all i = 1, . . . , n, whereŪ i is the set of controls attaining the minimum in (9). Furthermore, if M is single-chain, the optimal average cost does not depend on the initial state, i.e., J s µ (i) = λ for all i ∈ S. In this case, (9) is trivially satisfied andŪ i in (10) can be replaced by U (i). 
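For a small ergodic chain, the gain-bias pair of (5) can be computed in closed form instead of via the Cesàro limit P* in (6): the gain is the stationary-distribution average of the costs, and the bias follows from (I − P)h = g − λ1 with one component normalized to zero. The sketch below does this for a hypothetical 2-state chain and verifies the Bellman identity (7) exactly, using rational arithmetic.

```python
from fractions import Fraction as Fr

def gain_bias_2state(a, b, g0, g1):
    """Gain-bias pair (J^s, h^s) of the 2-state ergodic chain
    P = [[1-a, a], [b, 1-b]] under a fixed stationary policy.
    Computed from the stationary distribution (pi0, pi1) = (b, a)/(a+b)
    rather than the Cesaro limit P* of (6); bias normalized so h(state 1)=0."""
    pi0 = Fr(b) / (Fr(a) + Fr(b))
    pi1 = 1 - pi0
    lam = pi0 * g0 + pi1 * g1        # the ACPS gain, equal for both states
    h0 = Fr(g0 - lam) / Fr(a)        # from (I - P) h = g - lam*1 with h1 = 0
    return lam, (h0, Fr(0))

# Symmetric coin-flip chain with costs g = (1, 3): gain 2, bias (-2, 0).
lam, (h0, h1) = gain_bias_2state(Fr(1, 2), Fr(1, 2), 1, 3)

# Bellman check of (7): J + h = g + P h, componentwise.
check0 = lam + h0 == 1 + Fr(1, 2) * h0 + Fr(1, 2) * h1
check1 = lam + h1 == 3 + Fr(1, 2) * h0 + Fr(1, 2) * h1
```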
Hence, µ with gain-bias pair (λ1, h) is optimal over all policies if for all stationary policies µ we have:

λ1 + h ≤ gµ + Pµh, (11)

where 1 ∈ R^n is a vector of all 1s and ≤ is component-wise.

B. Optimality conditions for ACPC problems

Now we derive equations similar to (9) and (10) for ACPC problems, without considering the satisfaction constraint, i.e., we do not limit the set of policies to M_φ at the moment. We consider the following problem:

Problem IV.2. Given a communicating MDP M and a set Sπ, find a policy µ ∈ M that minimizes (3).

Note that, for reasons that will become clear in Sec. V, we assume in Prob. IV.2 that the MDP is communicating. However, it is possible to generalize the results in this section to an MDP that is not communicating. We limit our attention to stationary policies. We will show that, as for the majority of problems in dynamic programming, there exist optimal stationary policies, thus it is sufficient to consider only stationary policies. For such policies, we use the following notion of proper policies, which is the same as the one used in stochastic shortest path problems (see [22]).

Definition IV.3 (Proper Policies). We say a stationary policy µ is proper if, under µ, all initial states have positive probability to reach the set Sπ in a finite number of stages.

We denote Jµ = [Jµ(1), ..., Jµ(n)]^T, where Jµ(i) is the ACPC in (3) starting from state i under policy µ. If policy µ is improper, then there exist some states i ∈ S that can never reach Sπ. In this case, since g(i, u) is positive for all i, u, we can immediately see that Jµ(i) = ∞. We will first consider only proper policies. Without loss of generality, we assume that Sπ = {1, ..., m} (i.e., states m+1, ..., n are not in Sπ). Given a proper policy µ, we obtain its transition matrix Pµ as described in Sec. IV-A. Our goal is to express Jµ in terms of Pµ, similar to (5) in the ACPS case.
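Whether a stationary policy is proper (Definition IV.3) is a purely graph-theoretic question on the induced chain: every state must have a positive-probability path into Sπ. A minimal sketch (the function name and data layout are ours):

```python
def is_proper(P_mu, S_pi):
    """Definition IV.3: a stationary policy mu is proper if every state
    reaches S_pi with positive probability. P_mu[i] = {j: prob} is the
    chain induced by mu. We do backward reachability from S_pi over the
    positive-probability edges; states already in S_pi count as reaching it."""
    reach = set(S_pi)
    changed = True
    while changed:
        changed = False
        for i, succ in P_mu.items():
            if i not in reach and any(p > 0 and j in reach
                                      for j, p in succ.items()):
                reach.add(i)
                changed = True
    return reach == set(P_mu)

# 0 -> 1 -> 0 always returns to S_pi = {0}, so the policy is proper;
# adding a reachable absorbing state 2 makes it improper.
proper = is_proper({0: {1: 1.0}, 1: {0: 1.0}}, {0})
improper = is_proper({0: {1: 1.0}, 1: {0: 0.5, 2: 0.5}, 2: {2: 1.0}}, {0})
```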
To achieve this, we first compute the probability that j ∈ Sπ is the first state visited in Sπ after leaving a state i ∈ S under policy µ. We denote this probability by P̃(i, µ, j). We can obtain this probability for all i ∈ S and j ∈ Sπ from the following proposition:

Proposition IV.4. P̃(i, µ, j) satisfies

P̃(i, µ, j) = Σ_{k=m+1}^{n} P(i, µ(i), k) P̃(k, µ(k), j) + P(i, µ(i), j). (12)

Proof. From i, the next state is either in Sπ or not. The first term on the RHS of (12) is the probability of reaching Sπ with j as the first such state, given that the next state is not in Sπ. Adding to it the probability that the next state is in Sπ and equals j gives the desired result.

We now define an n × n matrix P̃µ such that

P̃µ(i, j) = P̃(i, µ, j) if j ∈ Sπ, and 0 otherwise. (13)

We can immediately see that P̃µ is a stochastic matrix, i.e., all its rows sum up to 1, or Σ_{j=1}^{n} P̃(i, µ, j) = 1. More precisely, Σ_{j=1}^{m} P̃(i, µ, j) = 1, since P̃(i, µ, j) = 0 for all j = m+1, ..., n. Using (12), we can express P̃µ in a matrix equation in terms of Pµ. To do this, we first define two n × n matrices from Pµ as follows:

←Pµ(i, j) = Pµ(i, j) if j ∈ Sπ, and 0 otherwise; (14)
→Pµ(i, j) = Pµ(i, j) if j ∉ Sπ, and 0 otherwise. (15)

As illustrated in Fig. 1, the matrix Pµ is "split" into ←Pµ and →Pµ, i.e., Pµ = ←Pµ + →Pµ. (Fig. 1: ←Pµ retains the columns 1, ..., m of Pµ, →Pµ retains the columns m+1, ..., n, and all remaining entries are 0.)

Proposition IV.5. If a policy µ is proper, then the matrix I − →Pµ is non-singular.

Proof. Since µ is proper, for every initial state i ∈ S the set Sπ is eventually reached. Because of this, and since →Pµ(i, j) = 0 if j ∈ Sπ, the matrix →Pµ is transient, i.e., lim_{k→∞} →Pµ^k = 0. From linear algebra (see, e.g., Ch. 9.4 of [24]), since →Pµ is transient and sub-stochastic, I − →Pµ is non-singular.
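The construction in (13)-(15) and the resulting first-return matrix of (17) can be evaluated numerically. The sketch below (pure Python, list-of-lists matrices; names are ours) forms the two column-split matrices by zeroing columns and expands (I − →Pµ)^{-1} as the Neumann series Σ_k (→Pµ)^k, which converges precisely because →Pµ is transient for a proper policy (Proposition IV.5).

```python
def split(P, S_pi):
    """The split of (14)-(15): keep columns in S_pi (back) or outside (forw)."""
    n = len(P)
    back = [[P[i][j] if j in S_pi else 0.0 for j in range(n)] for i in range(n)]
    forw = [[P[i][j] if j not in S_pi else 0.0 for j in range(n)] for i in range(n)]
    return back, forw

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def first_return_matrix(P, S_pi, terms=100):
    """(17): P~ = (I - forw)^(-1) back, with the inverse expanded as the
    Neumann series sum_k forw^k (valid since forw is transient)."""
    n = len(P)
    back, forw = split(P, S_pi)
    inv = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in inv]
    for _ in range(terms):
        term = matmul(term, forw)
        inv = [[inv[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return matmul(inv, back)

# Chain 0 -> 1 -> {0 or 2} -> 0 with S_pi = {0}: from every state, the
# first S_pi-state reached is 0 with probability one, so every row of
# the first-return matrix should be (1, 0, 0).
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]
Pt = first_return_matrix(P, {0})
```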
We can then write (12) as the following matrix equation: P µ = − → P µ P µ + ← − P µ .(16) Since I − − → P µ is invertible, we have P µ = (I − − → P µ ) −1 ← − P µ .(17) Note that (16) and (17) do not depend on the ordering of the states of M, i.e., S π does not need to be equal to {1, . . . , m}. Next, we give an expression for the expected cost of reaching S π from i ∈ S under µ (if i ∈ S π , this is the expected cost of reaching S π again), and denote it asg(i, µ). Proposition IV.6.g(i, µ) satisfies g(i, µ) = n k=m+1 P (i, µ(i), k)g(k, µ) + g(i, µ(i)). (18) Proof. The first term of RHS of (18) is the expected cost from the next state if the next state is not in S π (if the next state is in S π then no extra cost is incurred), and the second term is the one-step cost, which is incurred regardless of the next state. We defineg µ such thatg µ (i) =g(i, µ), and note that (18) can be written as:g µ = − → P µgµ + g μ g µ = (I − − → P µ ) −1 g µ ,(19) where g µ is defined in Sec. IV-A. We can now express the ACPC J µ in terms of P µ and g µ . Observe that, starting from i, the expected cost of the first cycle isg µ (i); the expected cost of the second cycle is m j=1 P µ (i, µ, j)g µ (j); the expected cost of the third cycle is m j=1 m k=1 P µ (i, µ, j) P µ (j, µ, k)g µ (k); and so on. Therefore: J µ = lim sup C→∞ 1 C C−1 k=0 P k µgµ ,(20) where C represents the cycle count. Since P µ is a stochastic matrix, the lim sup in (20) can be replaced by the limit, and we have J µ = lim C→∞ 1 C C−1 k=0 P k µgµ = P * µgµ ,(21) where P * for a stochastic matrix P is defined in (6). We can now make a connection between Prob. IV.2 and the ACPS problem. Each proper policy µ of M can be mapped to a policyμ with transition matrix Pμ := P µ and vector of costs gμ :=g µ , and we have J µ = J s µ .(22) Moreover, we define h µ := h s µ . Together with J µ , pair (J µ , h µ ) can be seen as the gain-bias pair for the ACPC problem. 
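The per-cycle cost vector of (18)-(19) can also be obtained without a matrix inverse, by iterating the first-step recursion (18) to its fixed point; the ACPC then follows from (21) as J = P̃* g̃. An illustrative sketch on a hypothetical 3-state chain (names are ours):

```python
def expected_cycle_cost(P, g, S_pi, iters=100):
    """Fixed-point iteration of (18):
        gt(i) = g(i) + sum over k not in S_pi of P[i][k] * gt(k).
    For a proper policy this converges to gt = (I - forw)^(-1) g of (19)."""
    n = len(P)
    gt = [0.0] * n
    for _ in range(iters):
        gt = [g[i] + sum(P[i][k] * gt[k] for k in range(n) if k not in S_pi)
              for i in range(n)]
    return gt

# Chain 0 -> 1 -> {0 w.p. 1/2, 2 w.p. 1/2} -> 0 with per-state costs
# 1, 2, 3 and S_pi = {0}. The expected cost of a cycle started at 0 is
# 1 + 2 + 0.5 * 3 = 4.5.
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]
gt = expected_cycle_cost(P, [1.0, 2.0, 3.0], {0})
```

Since every cycle of this chain restarts at state 0, the long-run ACPC in (21) equals gt[0] = 4.5 from every initial state, matching Proposition IV.8.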
We denote the set of all policies that can be mapped to a proper policy as Mµ̃. We see that a proper policy minimizing the ACPC maps to a policy over Mµ̃ minimizing the ACPS. The by-product of the above analysis is that, if µ is proper, then Jµ(i) is finite for all i, since P̃*µ is a stochastic matrix and g̃µ(i) is finite. We now show that, among stationary policies, it is sufficient to consider only proper policies.

Proposition IV.7. Assume µ to be an improper policy. If M is communicating, then there exists a proper policy µ′ such that Jµ′(i) ≤ Jµ(i) for all i = 1, ..., n, with strict inequality for at least one i.

Proof. We partition S into two sets of states: S↛π, the set of states in S that cannot reach Sπ, and S→π, the set of states that can reach Sπ with positive probability. Since µ is improper and g(i, u) is positive-valued, S↛π is not empty and Jµ(i) = ∞ for all i ∈ S↛π. Moreover, states in S↛π cannot visit S→π, by definition. Since M is communicating, there exist actions at some states in S↛π such that, if applied, all states in S↛π can now visit Sπ with positive probability, and the resulting policy is proper (all states can now reach Sπ). We denote this new policy as µ′. Note that this does not increase Jµ(i) for i ∈ S→π, since the controls at these states are not changed. Moreover, since µ′ is proper, Jµ′(i) < ∞ for all i ∈ S↛π. Therefore Jµ′(i) < Jµ(i) for all i ∈ S↛π.

Using the connection to the ACPS problem, we have:

Proposition IV.8. The optimal ACPC policy over stationary policies is independent of the initial state.

Proof. We first consider the optimal ACPC over proper policies. As mentioned before, if all stationary policies of an MDP satisfy the WA condition (see Def. IV.1), then the ACPS is equal for all initial states. Thus, we need to show that the WA condition is satisfied for all µ̃. We will use Sπ as the set Sr.
Since M is communicating, for each pair i, j ∈ Sπ, P̃(i, µ, j) is positive for some µ; therefore, from (12), P̃µ(i, j) is positive for some µ (i.e., Pµ̃(i, j) is positive for some µ̃), and the first condition of Def. IV.1 is satisfied. Since µ is proper, the set Sπ can be reached from all i ∈ S. In addition, Pµ̃(i, j) = 0 for all j ∉ Sπ. Thus, all states i ∉ Sπ are transient under all policies µ̃ ∈ Mµ̃, and the second condition is satisfied. Therefore the WA condition is satisfied, and the optimal ACPS over Mµ̃ is equal for all initial states. Hence, the optimal ACPC is the same for all initial states over proper policies. Using Prop. IV.7, we can conclude the same statement over stationary policies.

The above result is not surprising, as it mirrors the result for a single-chain MDP in the ACPS problem. Essentially, transient behavior does not matter in the long run, so the optimal cost is the same for any initial state. Using Bellman's equations (9) and (10), and in particular the case when the optimal cost is the same for all initial states (11), the policy µ̃⋆ with ACPS gain-bias pair (λ1, h) satisfying, for all µ̃ ∈ Mµ̃,

λ1 + h ≤ gµ̃ + Pµ̃h (23)

is optimal. Equivalently, the policy µ⋆ that maps to µ̃⋆ is optimal over all proper policies. The following proposition shows that this policy is optimal over all policies in M, stationary or not.

Proposition IV.9. The proper policy µ⋆ that maps to µ̃⋆ satisfying (23) is optimal over M.

Proof. Consider a policy M = {µ1, µ2, ...} and assume it to be optimal. We first consider the case where M is stationary within each cycle, with policy µk for the k-th cycle. Among this type of policies, from Prop. IV.7, we see that if M is optimal, then µk is proper for all k. In addition, the ACPC of policy M is the ACPS of the policy {µ̃1, µ̃2, ...}. Since the optimal policy for the ACPS is µ̃⋆ (stationary), we can conclude that if M is stationary in between cycles, then the optimal policy for each cycle is µ⋆, and thus M = µ⋆.
Now we assume that M is not stationary for each cycle. Since g(i, u) > 0 for all i, u, and there exists at least one proper policy, the stochastic shortest path problem for S π admits an optimal stationary policy as a solution [22]. Hence, for each cycle k, the cycle cost can be minimized if a stationary policy is used for the cycle. Therefore, a policy which is stationary in between cycles is optimal over M, which is in turn, optimal if M = µ . The proof is complete. Unfortunately, it is not clear how to find the optimal policy from (23) except by searching through all policies in Mμ. This exhaustive search is not feasible for reasonably large problems. Instead, we would like equations in the form of (9) and (10), so that the optimizations can be carried out independently at each state. To circumvent this difficulty, we need to express the gainbias pair (J µ , h µ ), which is equal to (J s µ , h s µ ), in terms of µ. From (7), we have J µ = PμJ µ , J µ + h µ = gμ + Pμh µ . By manipulating the above equations, we can now show that J µ and h µ can be expressed in terms of µ (analogous to (7)) instead ofμ via the following proposition: Proposition IV.10. We have J µ = P µ J µ , J µ + h µ = g µ + P µ h µ + − → P µ J µ . (24) Moreover, we have (I − − → P µ )h µ + v µ = P µ v µ ,(25) for some vector v µ . Proof. We start from (7): J µ = PμJ µ , J µ + h µ = gμ + Pμh µ .(26) For the first equation in (26), using (17), we have J µ = PμJ µ J µ = (I − − → P µ ) −1 ← − P µ J µ (I − − → P µ )J µ = ← − P µ J µ J µ − − → P µ J µ = ← − P µ J µ J µ = ( − → P µ + ← − P µ )J µ J µ = P µ J µ . For the second equation in (26), using (17) and (19), we have J µ + h µ = gμ + Pμh µ J µ + h µ = (I − − → P µ ) −1 (g µ + ← − P µ h µ ) (I − − → P µ )(J µ + h µ ) = g µ + ← − P µ h µ J µ − − → P µ J µ + h µ − − → P µ h µ = g µ + ← − P µ h µ J µ + h µ − − → P µ J µ = g µ + ( − → P µ + ← − P µ )h µ J µ + h µ = g µ + P µ h µ + − → P µ J µ . 
Thus, (26) can be expressed in terms of µ as: J µ = P µ J µ , J µ + h µ = g µ + P µ h µ + − → P µ J µ . To compute J µ and h µ , we need an extra equation similar to (8). Using (8), we have h µ + v µ = Pμv µ h µ + v µ = (I − − → P µ ) −1 ← − P µ v µ (I − − → P µ )h µ + v µ = P µ v µ , which completes the proof. From Prop. IV.10, similar to the ACPS problem, (J µ , h µ , v µ ) can be solved together by a linear system of 3n equations and 3n unknowns. The insight gained when simplifying J µ and h µ in terms of µ motivate us to propose the following optimality condition for an optimal policy. Proposition IV.11. The policy µ with gain-bias pair (λ1, h) satisfying λ + h(i) = min u∈U (i) g(i, u) + n j=1 P (i, u, j)h(j) + λ n j=m+1 P (i, u, j) , (27) for all i = 1, . . . , n, is the optimal policy minimizing (3) over all policies in M. Proof. The optimality condition (27) can be written as: λ1 + h ≤ g µ + P µ h + − → P µ λ1,(28) for all stationary policies µ. Note that, given a, b ∈ R n and a ≤ b, if A is an n × n matrix with all non-negative entries, then Aa ≤ Ab. Moreover, given c ∈ R n , we have a + c ≤ b + c. From (28) we have λ1 + h ≤ g µ + P µ h + − → P µ λ1 λ1 − − → P µ λ1 + h ≤ g µ + P µ h λ1 − − → P µ λ1 + h ≤ g µ + ( ← − P µ + − → P µ )h λ1 − − → P µ λ1 + h − − → P µ h ≤ g µ + ← − P µ h (I − − → P µ )(λ1 + h) ≤ g µ + ← − P µ h (29) If µ is proper, then − → P µ is a transient matrix (see proof of Prop. IV.5), and all of its eigenvalues are strictly inside the unit circle. Therefore, we have (I − − → P µ ) −1 = I + − → P µ + − → P 2 µ + . . . . Therefore, since all entries of − → P µ are non-negative, all entries of (I − − → P µ ) −1 are non-negative. Thus, continuing from (29), we have (I − − → P µ )(λ1 + h) ≤ g µ + ← − P µ h λ1 + h ≤ (I − − → P µ ) −1 (g µ + ← − P µ h) λ1 + h ≤ gμ + Pμh for all proper policies µ and allμ ∈ Mμ. Hence,μ satisfies (23) and µ is optimal over all proper policies. Using Prop. IV.9, the proof is complete. 
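Condition (27) is straightforward to verify for a candidate pair (λ, h), since the minimization decomposes state by state. The sketch below checks it on a hypothetical 2-state example whose best cycle is 0 → 1 → 0 with cost 2 per cycle (all names and the example are ours):

```python
def satisfies_acpc_optimality(S, U, P, g, S_pi, lam, h, tol=1e-9):
    """Check condition (27) of Proposition IV.11 for a candidate gain lam
    and bias vector h: at every state i, lam + h(i) must equal the minimum
    over u of g(i,u) + sum_j P(i,u,j) h(j) + lam * (probability that the
    next state is outside S_pi). P[(i, u)] = {j: prob}."""
    for i in S:
        best = min(
            g[(i, u)]
            + sum(p * h[j] for j, p in P[(i, u)].items())
            + lam * sum(p for j, p in P[(i, u)].items() if j not in S_pi)
            for u in U[i]
        )
        if abs(lam + h[i] - best) > tol:
            return False
    return True

# Two states, S_pi = {0}. At 0: 'go' -> 1 (cost 1) or 'stay' -> 0 (cost 3).
# At 1: 'cheap' -> 0 (cost 1) or 'dear' -> 0 (cost 5). The best cycle is
# 0 -> 1 -> 0 with cost 2 per cycle, so lam = 2 with h = (0, -1) works.
S, S_pi = [0, 1], {0}
U = {0: ['go', 'stay'], 1: ['cheap', 'dear']}
P = {(0, 'go'): {1: 1.0}, (0, 'stay'): {0: 1.0},
     (1, 'cheap'): {0: 1.0}, (1, 'dear'): {0: 1.0}}
g = {(0, 'go'): 1.0, (0, 'stay'): 3.0, (1, 'cheap'): 1.0, (1, 'dear'): 5.0}
ok = satisfies_acpc_optimality(S, U, P, g, S_pi, 2.0, {0: 0.0, 1: -1.0})
bad = satisfies_acpc_optimality(S, U, P, g, S_pi, 3.0, {0: 0.0, 1: -1.0})
```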
We will present an algorithm that uses Prop. IV.11 to find the optimal policy in the next section. Note that, unlike (23), condition (27) can be checked for any policy µ by finding the minimum over controls at each state i = 1, ..., n, which is significantly easier than searching over all proper policies.

V. SYNTHESIZING THE OPTIMAL POLICY UNDER LTL CONSTRAINTS

In this section we outline an approach for Prob. III.1. We aim for a computational framework which, in combination with the results of [15], produces a policy that both maximizes the satisfaction probability and optimizes the long-term performance of the system, using the results from Sec. IV.

A. Automata-theoretical approach to LTL control synthesis

Our approach proceeds by converting the LTL formula φ to a DRA as defined in Def. II.1. We denote the resulting DRA as Rφ = (Q, 2^Π, δ, q0, F) with F = {(L(1), K(1)), ..., (L(M), K(M))}, where L(i), K(i) ⊆ Q for all i = 1, ..., M. We now obtain an MDP as the product of a labeled MDP M and a DRA Rφ, which captures all paths of M satisfying φ.

Definition V.1 (Product MDP). The product MDP M × Rφ between a labeled MDP M = (S, U, P, s0, Π, L, g) and a DRA Rφ = (Q, 2^Π, δ, q0, F) is obtained from a tuple P = (SP, U, PP, sP0, FP, SPπ, gP), where
(i) SP = S × Q is a set of states;
(ii) U is the set of controls inherited from M, and we define UP((s, q)) = U(s);
(iii) PP gives the transition probabilities: PP((s, q), u, (s′, q′)) = P(s, u, s′) if q′ = δ(q, L(s)), and 0 otherwise;
(iv) sP0 = (s0, q0) is the initial state;
(v) FP = {(LP(1), KP(1)), ..., (LP(M), KP(M))}, where LP(i) = S × L(i) and KP(i) = S × K(i), for i = 1, ..., M;
(vi) SPπ is the set of states in SP for which proposition π is satisfied, thus SPπ = Sπ × Q;
(vii) gP((s, q), u) = g(s, u) for all (s, q) ∈ SP.

Note that some states of SP may be unreachable and therefore have no control available.
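The reachable part of the product MDP of Definition V.1 can be built by a breadth-first search from (s0, q0). A sketch (the DRA here is a hand-built automaton for 23π, δ is keyed by (q, label set), and all names are ours):

```python
from collections import deque

def product_mdp(U, P, L, s0, delta, q0):
    """Reachable part of the product MDP of Definition V.1. From (s, q)
    under control u, the successor is (s', q') with q' = delta(q, L(s)),
    i.e. the DRA reads the label of the current MDP state."""
    start = (s0, q0)
    states, PP = {start}, {}
    queue = deque([start])
    while queue:
        s, q = queue.popleft()
        qn = delta[(q, frozenset(L[s]))]
        for u in U[s]:
            PP[((s, q), u)] = {(s2, qn): p for s2, p in P[(s, u)].items()}
            for sq in PP[((s, q), u)]:
                if sq not in states:
                    states.add(sq)
                    queue.append(sq)
    return states, PP

# A 2-state MDP whose single action flips the state, labeled so that 'pi'
# holds at state 0, composed with a hand-built 2-state DRA for 2 3 pi.
U = {0: ['a'], 1: ['a']}
P = {(0, 'a'): {1: 1.0}, (1, 'a'): {0: 1.0}}
L = {0: {'pi'}, 1: set()}
delta = {(q, frozenset({'pi'})): 1 for q in (0, 1)}
delta.update({(q, frozenset()): 0 for q in (0, 1)})
states, PP = product_mdp(U, P, L, 0, delta, 0)
```

Only two of the four product states are reachable here: (0, 0) and (1, 1), alternating forever.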
After removing those states (via a simple graph search), P is a valid MDP and is the desired product MDP. With a slight abuse of notation we still denote the product MDP as P and always assume that unreachable states are removed. An example of a product MDP between a labeled MDP and a DRA corresponding to the LTL formula φ = □◇π ∧ □◇a is shown in Fig. 2. There is a one-to-one correspondence between a path s_0 s_1 … on M and a path (s_0, q_0)(s_1, q_1) … on P. Moreover, from the definition of g_P, the costs along these two paths are the same. The product MDP is constructed so that, given a path (s_0, q_0)(s_1, q_1) …, the corresponding path s_0 s_1 … on M generates a word satisfying φ if and only if there exists (L_P, K_P) ∈ F_P such that the set K_P is visited infinitely often and L_P finitely often. A similar one-to-one correspondence exists for policies:

Definition V.2 (Inducing a policy from P). Given a policy M_P = {µ_{P0}, µ_{P1}, …} on P, where µ_{Pk}((s, q)) ∈ U_P((s, q)), it induces a policy M = {µ_0, µ_1, …} on M by setting µ_k(s_k) = µ_{Pk}((s_k, q_k)) for all k. We denote by M_P|_M the policy induced by M_P, and we use the same notation for a set of policies.

An induced policy can be implemented on M by simply keeping track of its current state on P. Note that a stationary policy on P induces a non-stationary policy on M. From the one-to-one correspondence between paths and the equivalence of their costs, the expected cost in (3) from initial state s_0 under M_P|_M is equal to the expected cost from initial state (s_0, q_0) under M_P. For each pair of states (L_P, K_P) ∈ F_P, we can obtain a set of accepting maximal end components (AMECs):

Definition V.3 (Accepting Maximal End Components).
Given (L_P, K_P) ∈ F_P, an end component C is a communicating MDP (S_C, U_C, P_C, K_C, S_{Cπ}, g_C) such that S_C ⊆ S_P, U_C ⊆ U_P, U_C(i) ⊆ U(i) for all i ∈ S_C, K_C = S_C ∩ K_P, S_{Cπ} = S_C ∩ S_{Pπ}, and g_C(i, u) = g_P(i, u) if i ∈ S_C and u ∈ U_C(i). If P(i, u, j) > 0 for some i ∈ S_C and u ∈ U_C(i), then j ∈ S_C, in which case P_C(i, u, j) = P(i, u, j). An accepting maximal end component (AMEC) is a largest such end component with K_C ≠ ∅ and S_C ∩ L_P = ∅.

Note that an AMEC always contains at least one state in K_P and no state in L_P. Moreover, it is "absorbing" in the sense that the state does not leave an AMEC once entered. In the example shown in Fig. 2, there exists only one AMEC corresponding to (L_P, K_P), which is the only pair of states in F_P, and the states of this AMEC are shown in Fig. 3. A procedure to obtain all AMECs of an MDP was provided in [13]. From probabilistic model checking, a policy M = M_P|_M almost surely satisfies φ (i.e., M ∈ M_φ) if and only if, under policy M_P, there exists an AMEC C such that the probability of reaching S_C from the initial state (s_0, q_0) is 1 (in this case, we call C a reachable AMEC). In [15], such an optimal policy was found by dynamic programming or by solving a linear program. For states inside C, since C itself is a communicating MDP, a policy (not unique) can easily be constructed such that a state in K_C is visited infinitely often, satisfying the LTL specification.

B. Optimizing the long-term performance of the MDP

For a control policy designed to satisfy an LTL formula, the system behavior outside an AMEC is transient, while the behavior inside an AMEC is long-term. The policies obtained in [15]-[17] essentially disregard the behavior inside an AMEC because, from the verification point of view, the behavior inside an AMEC is for the most part irrelevant, as long as a state in K_P is visited infinitely often.
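Since AMECs are central to everything that follows, it may help to see the standard pruning scheme behind end-component computation (cf. [13]): repeatedly discard controls that can leave the candidate set, then states with no controls left. The sketch below is our own simplified version; a full maximal-end-component algorithm additionally decomposes the remaining graph into strongly connected components, which we skip here.

```python
def end_component(P, states):
    """Largest sub-MDP of `states` that is closed under all kept controls.

    P[s][u] is a dict {successor: probability}; every surviving state
    keeps only the controls whose successors stay inside the set.
    """
    S = set(states)
    while True:
        U = {s: {u for u, succ in P[s].items()
                 if set(succ) <= S}          # controls staying inside S
             for s in S}
        dead = {s for s in S if not U[s]}    # states with no control left
        if not dead:
            return S, U
        S -= dead

# Toy product MDP fragment: control 'b' at state 0 escapes to state 2,
# so it must be pruned for {0, 1} to be an end component.
P = {0: {'a': {1: 1.0}, 'b': {2: 1.0}},
     1: {'a': {0: 0.5, 1: 0.5}},
     2: {'a': {2: 1.0}}}
S, U = end_component(P, {0, 1})
print(sorted(S), sorted(U[0]))  # [0, 1] ['a']
```

On this toy input the pair ({0, 1}, {'a'}) is "absorbing" exactly in the sense described above: once the escaping control is removed, no trajectory can leave the component.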
We now aim to optimize the long-term behavior of the MDP with respect to the ACPC cost function, while enforcing the satisfaction constraint. Since each AMEC is a communicating MDP, we can use the results in Sec. IV-B to help obtain a solution. Our approach consists of the following steps:
(i) Convert formula φ to a DRA R_φ and obtain the product MDP P between M and R_φ;
(ii) Obtain the set of reachable AMECs, denoted as A;
(iii) For each C ∈ A: find a stationary policy µ_{→C}(i), defined for i ∈ S \ S_C, that reaches S_C with probability 1 (µ_{→C} is guaranteed to exist and is obtained as in [15]); find a stationary policy µ_C(i), defined for i ∈ S_C, minimizing (3) for the MDP C and set S_{Cπ} while satisfying the LTL constraint; define µ⋆_C to be

µ⋆_C(i) = µ_{→C}(i) if i ∉ S_C, and µ⋆_C(i) = µ_C(i) if i ∈ S_C,    (30)

and denote the ACPC of µ⋆_C as λ_C;
(iv) We find the solution to Prob. III.1 by:

J⋆(s_0) = min_{C∈A} λ_C,    (31)

and the optimal policy is µ⋆_C|_M, where C is the AMEC attaining the minimum in (31).

We now provide sufficient conditions for a policy µ⋆_C to be optimal. Moreover, if an optimal policy µ⋆_C can be obtained for each C, we show that the above procedure indeed gives the optimal solution to Prob. III.1.

Proposition V.4. For each C ∈ A, let µ⋆_C be constructed as in (30), where µ_C is a stationary policy satisfying two optimality conditions: (i) its ACPC gain-bias pair is (λ_C 1, h), where

λ_C + h(i) = min_{u∈U_C(i)} [ g_C(i,u) + Σ_{j∈S_C} P(i,u,j) h(j) + λ_C Σ_{j∉S_{Cπ}} P(i,u,j) ],    (32)

for all i ∈ S_C, and (ii) there exists a state of K_C in each recurrent class of µ_C. Then the optimal cost for Prob. III.1 is J⋆(s_0) = min_{C∈A} λ_C, and the optimal policy is µ⋆_C|_M, where C is the AMEC attaining this minimum.

Proof. Given C ∈ A, define a set of policies M_C such that for each policy in M_C, from the initial state (s_0, q_0): (i) S_C is reached with probability 1, (ii) S \ S_C is not visited thereafter, and (iii) K_C is visited infinitely often.
We see that, by the definition of AMECs, a policy almost surely satisfying φ belongs to M_C|_M for some C ∈ A. Thus, M_φ = ∪_{C∈A} M_C|_M. Since µ⋆_C(i) = µ_{→C}(i) if i ∉ S_C, the state reaches S_C with probability 1 and in a finite number of stages. We denote by P_C(j, µ_{→C}, s_{P0}) the probability that j ∈ S_C is the first state visited in S_C when C is reached from the initial state s_{P0}. Since the ACPC contribution of the finite path from the initial state to a state j ∈ S_C vanishes as the cycle index goes to ∞, the ACPC from initial state s_{P0} under policy µ⋆_C is

J(s_0) = Σ_{j∈S_C} P_C(j, µ_{→C}, s_{P0}) J_{µ_C}(j).    (33)

Since C is communicating, the optimal cost is the same for all states of S_C (and thus it does not matter which state in S_C is visited first when S_C is reached). We have

J(s_0) = Σ_{j∈S_C} P_C(j, µ_{→C}, (s_0, q_0)) λ_C = λ_C.    (34)

Applying Prop. IV.11, we see that µ_C satisfies the optimality condition for the MDP C with respect to the set S_{Cπ}. Since there exists a state of K_C in each recurrent class of µ_C, a state in K_C is visited infinitely often and the LTL constraint is satisfied. Therefore, µ⋆_C as constructed in (30) is optimal over M_C, and µ⋆_C|_M is optimal over M_C|_M (due to the equivalence of expected costs between M_P and M_P|_M). Since M_φ = ∪_{C∈A} M_C|_M, we have that J⋆(s_0) = min_{C∈A} λ_C, and the policy corresponding to the C attaining this minimum is the optimal policy.

We can relax the optimality conditions for µ_C in Prop. V.4 and require only that there exist a state i ∈ K_C in one recurrent class of µ_C. For such a policy, we can construct a policy with a single recurrent class containing state i and the same ACPC cost at each state. This construction is identical to a similar procedure for ACPS problems when the MDP is communicating (see [22, p. 203]). We can then use (30) to obtain the optimal policy µ⋆_C for C. We now present an algorithm (see Alg.
1) that iteratively updates the policy in an attempt to find one that satisfies the optimality conditions given in Prop. V.4 for a given C ∈ A. Note that Alg. 1 is similar in nature to policy iteration algorithms for ACPS problems.

Algorithm 1: Policy iteration algorithm for ACPC
Input: C = (S_C, U_C, P_C, K_C, S_{Cπ}, g_C)
Output: Policy µ_C
1: Initialize µ_0 to a proper policy containing K_C in its recurrent classes (such a policy can always be constructed since C is communicating)
2: repeat
3: Given µ_k, compute J_{µ_k} and h_{µ_k} with (24) and (25)
4: Compute, for all i ∈ S_C:

Ū(i) = argmin_{u∈U_C(i)} Σ_{j∈S_C} P(i,u,j) J_{µ_k}(j)    (35)

5: if µ_k(i) ∈ Ū(i) for all i ∈ S_C then
6: Compute, for all i ∈ S_C:

M̄(i) = argmin_{u∈Ū(i)} [ g_C(i,u) + Σ_{j∈S_C} P(i,u,j) h_{µ_k}(j) + Σ_{j∉S_{Cπ}} P(i,u,j) J_{µ_k}(j) ]    (36)

7: Find µ_{k+1} such that µ_{k+1}(i) ∈ M̄(i) for all i ∈ S_C, and such that it contains a state of K_C in its recurrent classes. If one does not exist, Return: µ_k with "not optimal"
8: else
9: Find µ_{k+1} such that µ_{k+1}(i) ∈ Ū(i) for all i ∈ S_C, and such that it contains a state of K_C in its recurrent classes. If one does not exist, Return: µ_k with "not optimal"
10: end if
11: Set k ← k + 1
12: until µ_k has a gain-bias pair satisfying (32), then Return: µ_k with "optimal"

Proposition V.5. Given C, Alg. 1 terminates in a finite number of iterations. If it returns policy µ_C with "optimal", then µ_C satisfies the optimality conditions in Prop. V.4. If C is unichain (i.e., each stationary policy of C contains one recurrent class), then Alg. 1 is guaranteed to return the optimal policy µ_C.

Proof. If C is unichain, then since it is also communicating, each stationary policy contains a single recurrent class (and no transient states). In this case, since K_C is not empty, states in K_C are recurrent, and the LTL constraint is always satisfied at steps 7 and 9 of Alg. 1. The rest of the proof (for the general case, not assuming C to be unichain) is similar to the proof of convergence of the policy iteration algorithm for the ACPS problem (see [22, pp. 237-239]). Note that the proof is the same except that, when the algorithm terminates at step 12 of Alg. 1, µ_k satisfies (32) instead of the optimality conditions for the ACPS problem ((9) and (10)).

If we obtain the optimal policy for each C ∈ A, then we use (31) to obtain the optimal solution for Prob. III.1. If for some C, Alg. 1 returns "not optimal", then the policy returned by Alg. 1 is only sub-optimal. We can then apply this algorithm to each AMEC in A and use (31) to obtain a sub-optimal solution for Prob. III.1. Note that, similar to policy iteration algorithms for ACPS problems, either the gain or the bias strictly decreases each time µ is updated, so the policy µ is improved in each iteration. In both cases, the satisfaction constraint is always enforced.

Remark V.6 (Complexity). The complexity of our proposed algorithm is dictated by the size of the generated MDPs. We use |·| to denote the cardinality of a set. The size of the DRA (|Q|) is, in the worst case, doubly exponential with respect to |Σ|. However, empirical studies such as [20] have shown that, in practice, the sizes of the DRAs for many LTL formulas are generally much lower and manageable. The size of the product MDP P is at most |S| × |Q|. The complexity of the algorithm generating AMECs is at most quadratic in the size of P [13]. The complexity of Alg. 1 depends on the size of C. The policy evaluation (step 3) requires solving a system of 3 × |S_C| linear equations with 3 × |S_C| unknowns. The optimization steps (steps 4 and 6) each require at most |U_C| × |S_C| evaluations. Checking the recurrent classes of µ is linear in |S_C|.
Therefore, assuming that |U_C| is dominated by |S_C|² (which is usually true), and that the number of policies satisfying (35) and (36) for all i is also dominated by |S_C|², the computational complexity of each iteration is O(|S_C|³).

VI. CASE STUDY

The algorithmic framework developed in this paper is implemented in MATLAB, and here we provide an example as a case study. Consider the MDP M shown in Fig. 4, which can be viewed as the dynamics of a robot navigating in an environment with the set of atomic propositions {pickup, dropoff}. In practice, this MDP can be obtained via an abstraction process (see [1]) from the environment, where the probabilities of transitions can be obtained from experimental data or accurate simulations.

The goal of the robot is to continuously perform a pickup-delivery task. The robot is required to pick up items at the state marked by pickup (see Fig. 4) and drop them off at the state marked by dropoff. It is then required to go back to pickup, and this process is repeated. This task can be written as the following LTL formula:

φ = □◇pickup ∧ □(pickup ⇒ (¬pickup U dropoff)).

The first part of φ, □◇pickup, enforces that the robot repeatedly pick up items. The remaining part of φ ensures that new items cannot be picked up until the current items are dropped off. We denote pickup as the optimizing proposition, and the goal is to find a policy that satisfies φ with probability 1 and minimizes the expected cost between visits to the pickup state (i.e., we aim to minimize the expected cost between picking up items).

We generated the DRA R_φ using the ltl2dstar tool [21]; it has 13 states and one pair (L, K) ∈ F. The product MDP P, after removing unreachable states, contains 31 states (note that P has 130 states without removing unreachable states). There is one AMEC C corresponding to the only pair in F_P, and it contains 20 states. We tested Alg. 1 with a number of different initial policies, and Alg.
1 produced the optimal policy within 2 or 3 policy updates in each case (note that C is not unichain). For one initial policy, the ACPC was initially 330 at each state of C, and it was reduced to 62.4 at each state when the optimal policy was found. The optimal policy is as follows:

State           0  1  2  3  4  5  6  7  8  9
After pickup    α  β  α  α  α  γ  γ  α  β  α
After dropoff   α  α  α  α  α  α  γ  α  α  α

The first row of the table shows the policy after pick-up but before drop-off, and the second row shows the policy after drop-off and before another pick-up.

VII. CONCLUSIONS

We have developed a method to automatically generate a control policy for a dynamical system modelled as a Markov Decision Process (MDP), in order to satisfy specifications given as Linear Temporal Logic formulas. The control policy satisfies the given specification almost surely, if such a policy exists. In addition, the policy optimizes the average cost between satisfying instances of an "optimizing proposition", under some conditions. The problem is motivated by robotic applications requiring persistent tasks to be performed, such as environmental monitoring or data gathering. We are currently pursuing several future directions. First, we aim to solve the problem completely and find an algorithm that is guaranteed to always return the optimal policy. Second, we are interested in applying the optimization criterion of average cost per cycle to more complex models such as Partially Observable MDPs (POMDPs) and semi-MDPs.

APPENDIX

We define Paths^M_M and FPaths^M_M as the sets of all infinite and finite paths of M under a policy M, respectively. We can then define a probability measure over the set Paths^M_M. For a path r^M_M = s_0 s_1 … s_m s_{m+1} … ∈ Paths^M_M, the prefix of length m of r^M_M is the finite subsequence s_0 s_1 … s_m. Let Paths^M_M(s_0 s_1 … s_m) denote the set of all paths in Paths^M_M with the prefix s_0 s_1 … s_m. (Note that s_0 s_1 … s_m is a finite path in FPaths^M_M.)
Then, the probability measure Pr^M_M on the smallest σ-algebra over Paths^M_M containing Paths^M_M(s_0 s_1 … s_m) for all s_0 s_1 … s_m ∈ FPaths^M_M is the unique measure satisfying:

Pr^M_M{Paths^M_M(s_0 s_1 … s_m)} = Π_{0≤k<m} P(s_k, µ_k(s_k), s_{k+1}).

Finally, we can define the probability that an MDP M under a policy M satisfies an LTL formula φ. A path r^M_M = s_0 s_1 … deterministically generates a word o = o_0 o_1 …, where o_i = L(s_i) for all i. With a slight abuse of notation, we denote by L(r^M_M) the word generated by r^M_M. Given an LTL formula φ, one can show that the set {r^M_M ∈ Paths^M_M : L(r^M_M) ⊨ φ} is measurable. We define:

Pr^M_M(φ) := Pr^M_M{r^M_M ∈ Paths^M_M : L(r^M_M) ⊨ φ}    (1)

as the probability of satisfying φ for M under M. For more details about probability measures on MDPs under a policy and the measurability of LTL formulas, we refer readers to a text on probabilistic model checking, such as [13].

Definition IV.1 (Weak Accessibility Condition). An MDP M is said to satisfy the Weak Accessibility (WA) condition if there exists S_r ⊆ S such that (i) there exists a stationary policy where j is reachable from i for any i, j ∈ S_r, and (ii) states in S \ S_r are transient under all stationary policies.

An MDP M is called single-chain (or weakly-communicating) if it satisfies the WA condition. If M satisfies the WA condition with S_r = S, then M is called communicating. A stationary policy induces a Markov chain with a set of recurrent classes. A state that does not belong to any recurrent class is called transient. A stationary policy µ is called unichain if the Markov chain induced by µ contains one recurrent class (and a possible set of transient states). If every stationary policy is unichain, M is called unichain.

Fig. 2. The construction of the product MDP between a labeled MDP and a DRA. In this example, the set of atomic propositions is {a, π}. (a): A labeled MDP, where the label on top of a state denotes the atomic propositions assigned to the state. The number on top of an arrow pointing from a state s to s′ is the probability P(s, u, s′) associated with a control u ∈ U(s). The set of states marked by ovals is S_π. (b): The DRA R_φ corresponding to the LTL formula φ = □◇π ∧ □◇a. In this example, there is one set of accepting states F = {(L, K)}, where L = ∅ and K = {q_2, q_3} (marked by double-strokes). Thus, accepting runs of this DRA must visit q_2 or q_3 (or both) infinitely often. (c): The product MDP P = M × R_φ, where states of K_P are marked by double-strokes and states of S_{Pπ} are marked by ovals. The states with dashed borders are unreachable, and they are removed from S_P.

Fig. 3. The states of the only AMEC corresponding to the product MDP in Fig. 2.

Fig. 4. MDP capturing a robot navigating in an environment. {α, β, γ} is the set of controls at states. The cost of applying α, β, γ at a state where the control is available is 5, 10, 1, respectively (e.g., g(i, α) = 5 if α ∈ U(i)).

X. C. Ding and C. Belta are with the Department of Mechanical Engineering, Boston University, Boston, MA 02215, USA (email: {xcding; cbelta}@bu.edu). S. L. Smith is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo ON, N2L 3G1, Canada (email: [email protected]). D. Rus is with the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (email: [email protected]).

REFERENCES

[1] M. Lahijanian, J. Wasniewski, S. B. Andersson, and C. Belta, "Motion planning and control from temporal logic specifications with probabilistic satisfaction guarantees," in Proc. ICRA, Anchorage, AK, 2010, pp. 3227-3232.
[2] S. Temizer, M. J. Kochenderfer, L. P. Kaelbling, T. Lozano-Pérez, and J. K. Kuchar, "Collision avoidance for unmanned aircraft using Markov decision processes," in Proc. AIAA GN&C, Toronto, Canada, Aug. 2010.
[3] R. Alterovitz, T. Siméon, and K. Goldberg, "The stochastic motion roadmap: A sampling framework for planning with Markov motion uncertainty," in RSS, Atlanta, GA, Jun. 2007.
[4] H. Kress-Gazit, G. Fainekos, and G. J. Pappas, "Where's Waldo? Sensor-based temporal logic motion planning," in Proc. ICRA, Rome, Italy, 2007, pp. 3116-3121.
[5] S. Karaman and E. Frazzoli, "Sampling-based motion planning with deterministic µ-calculus specifications," in Proc. CDC, Shanghai, China, 2009, pp. 2222-2229.
[6] S. G. Loizou and K. J. Kyriakopoulos, "Automatic synthesis of multiagent motion tasks based on LTL specifications," in Proc. CDC, Paradise Island, Bahamas, 2004, pp. 153-158.
[7] T. Wongpiromsarn, U. Topcu, and R. M. Murray, "Receding horizon temporal logic planning for dynamical systems," in Proc. CDC, Shanghai, China, 2009, pp. 5997-6004.
[8] E. M. Clarke, D. Peled, and O. Grumberg, Model Checking. MIT Press, 1999.
[9] N. Piterman, A. Pnueli, and Y. Saar, "Synthesis of reactive(1) designs," in International Conference on Verification, Model Checking, and Abstract Interpretation, Charleston, SC, 2006, pp. 364-380.
[10] M. Kloetzer and C. Belta, "A fully automated framework for control of linear systems from temporal logic specifications," IEEE Trans. Automatic Ctrl., vol. 53, no. 1, pp. 287-297, 2008.
[11] M. Kloetzer and C. Belta, "Dealing with non-determinism in symbolic control," in Hybrid Systems: Computation and Control, ser. Lect. Notes Comp. Science, M. Egerstedt and B. Mishra, Eds. Springer Verlag, 2008, pp. 287-300.
[12] L. De Alfaro, "Formal verification of probabilistic systems," Ph.D. dissertation, Stanford University, 1997.
[13] C. Baier, J.-P. Katoen, and K. G. Larsen, Principles of Model Checking. MIT Press, 2008.
[14] M. Vardi, "Probabilistic linear-time model checking: An overview of the automata-theoretic approach," Formal Methods for Real-Time and Probabilistic Systems, pp. 265-276, 1999.
[15] X. C. Ding, S. L. Smith, C. Belta, and D. Rus, "LTL control in uncertain environments with probabilistic satisfaction guarantees," in Proc. IFAC World Congress, Milan, Italy, Aug. 2011, to appear.
[16] C. Courcoubetis and M. Yannakakis, "Markov decision processes and regular events," IEEE Trans. Automatic Ctrl., vol. 43, no. 10, pp. 1399-1418, 1998.
[17] C. Baier, M. Größer, M. Leucker, B. Bollig, and F. Ciesinski, "Controller synthesis for probabilistic systems," in Proceedings of IFIP TCS'2004. Kluwer, 2004.
[18] S. L. Smith, J. Tůmová, C. Belta, and D. Rus, "Optimal path planning under temporal constraints," in Proc. IROS, Taipei, Taiwan, Oct. 2010, pp. 3288-3293.
[19] E. Gradel, W. Thomas, and T. Wilke, Automata, Logics, and Infinite Games: A Guide to Current Research, ser. Lect. Notes Comp. Science, vol. 2500. Springer Verlag, 2002.
[20] J. Klein and C. Baier, "Experiments with deterministic ω-automata for formulas of linear temporal logic," Theoretical Computer Science, vol. 363, no. 2, pp. 182-195, 2006.
[21] J. Klein, "ltl2dstar - LTL to deterministic Streett and Rabin automata," http://www.ltl2dstar.de/, 2007, viewed September 2010.
[22] D. Bertsekas, Dynamic Programming and Optimal Control, vol. II. Athena Scientific, 2007.
[23] M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, 1994.
[24] L. Hogben, Handbook of Linear Algebra. CRC Press, 2007.
[]
[ "Statistical significance in high-dimensional linear models", "Statistical significance in high-dimensional linear models" ]
[ "Peter Bühlmann [email protected] \nSeminar für Statistik\nETH Zürich\nHG G17, CH-8092ZürichSwitzerland\n" ]
[ "Seminar für Statistik\nETH Zürich\nHG G17, CH-8092ZürichSwitzerland" ]
[ "Bernoulli" ]
We propose a method for constructing p-values for general hypotheses in a high-dimensional linear model. The hypotheses can be local for testing a single regression parameter or they may be more global involving several up to all parameters. Furthermore, when considering many hypotheses, we show how to adjust for multiple testing taking dependence among the p-values into account. Our technique is based on Ridge estimation with an additional correction term due to a substantial projection bias in high dimensions. We prove strong error control for our p-values and provide sufficient conditions for detection: for the former, we do not make any assumption on the size of the true underlying regression coefficients while regarding the latter, our procedure might not be optimal in terms of power. We demonstrate the method in simulated examples and a real data application.
10.3150/12-bejsp11
[ "https://arxiv.org/pdf/1202.1377v3.pdf" ]
88,503,296
1202.1377
b8db0ca28c530dd0d0671af5d1a1c238a5a03fee
Statistical significance in high-dimensional linear models

Peter Bühlmann ([email protected])
Seminar für Statistik, ETH Zürich, HG G17, CH-8092 Zürich, Switzerland
Bernoulli 19(4), 2013. doi:10.3150/12-BEJSP11. arXiv:1202.1377v3 [stat.ME]

Keywords: global testing; lasso; multiple testing; ridge regression; variable selection; Westfall-Young permutation procedure

We propose a method for constructing p-values for general hypotheses in a high-dimensional linear model. The hypotheses can be local for testing a single regression parameter or they may be more global involving several up to all parameters. Furthermore, when considering many hypotheses, we show how to adjust for multiple testing taking dependence among the p-values into account. Our technique is based on Ridge estimation with an additional correction term due to a substantial projection bias in high dimensions. We prove strong error control for our p-values and provide sufficient conditions for detection: for the former, we do not make any assumption on the size of the true underlying regression coefficients while regarding the latter, our procedure might not be optimal in terms of power. We demonstrate the method in simulated examples and a real data application.

Introduction

Many data problems nowadays carry the structure that the number p of covariables may greatly exceed the sample size n, i.e., p ≫ n. In such a setting, a huge amount of work has been pursued addressing prediction of a new response variable, estimation of an underlying parameter vector, and variable selection; see for example the books by Hastie, Tibshirani and Friedman (2009) and Bühlmann and van de Geer (2011), or the more specific review article by Fan and Lv (2010).
With a few exceptions (see Section 1.3.1), the proposed methods and presented mathematical theory do not address the problem of assigning uncertainties, statistical significance or confidence: thus, the area of statistical hypothesis testing and construction of confidence intervals is largely unexplored and underdeveloped. Yet, such significance or confidence measures are crucial in applications where the interpretation of parameters and variables is very important. The focus of this paper is the construction of p-values and corresponding multiple testing adjustment for a high-dimensional linear model, which is often very useful in p ≫ n settings:

Y = Xβ^0 + ε,    (1.1)

where Y = (Y_1, …, Y_n)^T, X is a fixed n × p design matrix, β^0 is the true underlying p × 1 parameter vector and ε is the n × 1 stochastic error vector with ε_1, …, ε_n i.i.d. having E[ε_i] = 0 and Var(ε_i) = σ² < ∞; throughout the paper, p may be much larger than n. We are interested in testing one or many null-hypotheses of the form:

H_{0,G}: β^0_j = 0 for all j ∈ G,    (1.2)

where G ⊆ {1, …, p} is a subset of all the indices of the covariables. Of substantial interest is the case where G = {j}, corresponding to a hypothesis for the individual jth regression parameter (j = 1, …, p). At the other end of the spectrum is the global null-hypothesis where G = {1, …, p}, and we allow for any G between an individual and the global hypothesis.

(This is an electronic reprint of the original article published by the ISI/BS in Bernoulli, 2013, Vol. 19, No. 4, 1212-1242. The reprint differs from the original in pagination and typographic detail.)

Past work about high-dimensional linear models

We review in this section an important stream of research for high-dimensional linear models. The more familiar reader may skip Section 1.1.
The Lasso

The Lasso (Tibshirani, 1996),

β̂_Lasso = β̂_Lasso(λ) = argmin_β ( ‖Y − Xβ‖²₂/n + λ‖β‖₁ ),

has become tremendously popular for estimation in high-dimensional linear models. The three main themes which have been considered in the past are prediction of the regression surface (and for a new response variable) with corresponding measure of accuracy

‖X(β̂_Lasso − β^0)‖²₂/n,    (1.3)

estimation of the parameter vector whose quality is assessed by

‖β̂_Lasso − β^0‖_q  (q ∈ {1, 2}),    (1.4)

and variable selection or estimating the support of β^0, denoted by the active set S_0 = {j; β^0_j ≠ 0, j = 1, …, p}, such that

P[Ŝ = S_0]    (1.5)

is large for a selection (estimation) procedure Ŝ.

Greenshtein and Ritov (2004) proved the first result closely related to prediction as measured in (1.3). Without any conditions on the deterministic design matrix X, except that the columns are normalized such that (n⁻¹X^TX)_jj ≡ 1, one has with probability at least 1 − 2 exp(−t²/2):

‖X(β̂_Lasso(λ) − β^0)‖²₂/n ≤ (3/2) λ ‖β^0‖₁,    λ = 4σ √((t² + 2 log(p))/n),    (1.6)

see Bühlmann and van de Geer (2011, Cor. 6.1). Thereby, we assume Gaussian errors, but such an assumption can be relaxed (Bühlmann and van de Geer, 2011, formula (6.5)). From an asymptotic point of view (where p and n diverge to ∞), the regularization parameter λ ≍ √(log(p)/n) leads to consistency for prediction if the truth is sparse with respect to the ℓ₁-norm such that ‖β^0‖₁ = o(λ⁻¹) = o(√(n/log(p))). The convergence rate is then at best O_P(λ) = O_P(√(log(p)/n)), assuming ‖β^0‖₁ ≍ 1. Such a slow rate of convergence can be improved under additional assumptions on the design matrix X.

The ill-posedness of the design matrix can be quantified using the concept of "modified" eigenvalues. Consider the matrix Σ̂ = n⁻¹X^TX. The smallest eigenvalue of Σ̂ is λ_min(Σ̂) = min_{‖β‖₂=1} β^TΣ̂β. Of course, λ_min(Σ̂) equals zero if p > n.
Instead of taking the minimum on the right-hand side over all $p \times 1$ vectors $\beta$, we replace it by a constrained minimum, typically over a cone. This leads to the concept of restricted eigenvalues (Bickel, Ritov and Tsybakov 2009; Koltchinskii 2009a, 2009b; Raskutti, Wainwright and Yu 2010), or weaker forms such as the compatibility constant (van de Geer, 2007), or a further slight weakening of the latter (Sun and Zhang, 2012). Relations among the different conditions and "modified" eigenvalues are discussed in van de Geer and Bühlmann (2009) and Bühlmann and van de Geer (2011, Ch. 6.13). Assuming that the smallest "modified" eigenvalue is larger than zero, one can derive an oracle inequality of the following prototype: with probability at least $1 - 2\exp(-t^2/2)$, and using $\lambda$ as in (1.6),
$$\|X(\hat\beta_{\mathrm{Lasso}}(\lambda) - \beta^0)\|_2^2/n + \lambda\|\hat\beta_{\mathrm{Lasso}} - \beta^0\|_1 \le 4\lambda^2 s_0/\phi_0^2, \qquad (1.7)$$
where $s_0 = |S_0|$ and $\phi_0$ is the compatibility constant (smallest "modified" eigenvalue) of the fixed design matrix $X$ (Bühlmann and van de Geer, 2011, Cor. 6.2). Again, this holds assuming Gaussian errors, but the result can be extended to non-Gaussian distributions. From (1.7), we have two immediate implications: from an asymptotic point of view, using $\lambda \asymp \sqrt{\log(p)/n}$ and assuming that $\phi_0$ is bounded away from 0,
$$\|X(\hat\beta_{\mathrm{Lasso}}(\lambda) - \beta^0)\|_2^2/n = O_P(s_0\log(p)/n), \qquad (1.8)$$
$$\|\hat\beta_{\mathrm{Lasso}}(\lambda) - \beta^0\|_1 = O_P(s_0\sqrt{\log(p)/n}), \qquad (1.9)$$
i.e., a fast convergence rate for prediction as in (1.8) and an $\ell_1$-norm bound for the estimation error. We note that the oracle convergence rate, where an oracle would know the active set $S_0$, is $O_P(s_0/n)$: the $\log(p)$-factor is the price to pay for not knowing the active set $S_0$. An $\ell_2$-norm bound can be derived as well: $\|\hat\beta_{\mathrm{Lasso}}(\lambda) - \beta^0\|_2 = O_P(\sqrt{s_0\log(p)/n})$, assuming a slightly stronger restricted eigenvalue condition.
Results along these lines have been established by Bunea, Tsybakov and Wegkamp (2007), van de Geer (2008) (who covers generalized linear models as well), Zhang and Huang (2008), Meinshausen and Yu (2009), and Bickel, Ritov and Tsybakov (2009), among others.

The Lasso does variable selection: a simple estimator of the active set $S_0$ is $\hat S_{\mathrm{Lasso}}(\lambda) = \{j;\ \hat\beta_{\mathrm{Lasso};j}(\lambda) \neq 0\}$. In order that $\hat S_{\mathrm{Lasso}}(\lambda)$ has good accuracy for $S_0$, we have to require that the non-zero regression coefficients are sufficiently large (since otherwise, we cannot detect the variables in $S_0$ with high probability). We make a "beta-min" assumption whose asymptotic form reads as
$$\min_{j \in S_0}|\beta^0_j| \gg \sqrt{s_0\log(p)/n}. \qquad (1.10)$$
Furthermore, when making a restrictive assumption on the design, called neighborhood stability, or assuming the equivalent irrepresentable condition, and choosing a suitable $\lambda \gg \sqrt{\log(p)/n}$:
$$P[\hat S_{\mathrm{Lasso}}(\lambda) = S_0] \to 1,$$
see Meinshausen and Bühlmann (2006) and Zhao and Yu (2006); Wainwright (2009) establishes exact scaling results. The "beta-min" assumption in (1.10) as well as the irrepresentable condition on the design are restrictive and non-checkable. Furthermore, these conditions are essentially necessary (Meinshausen and Bühlmann 2006; Zhao and Yu 2006). Thus, under weaker assumptions, we can only derive a weaker yet useful result about variable screening. Assuming a restricted eigenvalue condition on the fixed design $X$ and the "beta-min" condition in (1.10), we still have asymptotically, for $\lambda \asymp \sqrt{\log(p)/n}$:
$$P[\hat S(\lambda) \supseteq S_0] \to 1 \quad (n \to \infty). \qquad (1.11)$$
The cardinality of the estimated active set (typically) satisfies $|\hat S(\lambda)| \le \min(n,p)$: thus, if $p \gg n$, we achieve a massive and often useful dimensionality reduction in the original covariates. We summarize that a slow convergence rate for prediction "always" holds.
Assuming some "constrained minimal eigenvalue" condition on the fixed design $X$, we obtain the fast convergence rate in (1.8) and an estimation error bound as in (1.9); with the additional "beta-min" assumption, we obtain the practically useful variable screening property in (1.11). For consistent variable selection, we necessarily need a (much) stronger condition on the fixed design, and it is questionable whether such a strong condition holds in a practical problem. Hence variable selection might be too ambitious a goal for the Lasso. That is why the original reading of Lasso (Least Absolute Shrinkage and Selection Operator) may be better re-read as Least Absolute Shrinkage and Screening Operator. We refer to Bühlmann and van de Geer (2011) for an extensive treatment of the properties of the Lasso.

Other methods

Of course, the three main inference tasks in a high-dimensional linear model, as described by (1.3), (1.4) and (1.5), can be pursued with other methods than the Lasso. An interesting line of proposals uses concave penalty functions instead of the $\ell_1$-norm in the Lasso; see for example Fan and Li (2001) or Zhang (2010). The adaptive Lasso (Zou, 2006), analyzed in the high-dimensional setting by Huang, Ma and Zhang (2008) and van de Geer, Bühlmann and Zhou (2011), can be interpreted as an approximation of some concave penalization approach (Zou and Li, 2008). A procedure related to the adaptive Lasso is the relaxed Lasso (Meinshausen, 2007). Another method is the Dantzig selector (Candès and Tao, 2007), which has similar statistical properties as the Lasso (Bickel, Ritov and Tsybakov, 2009). Other algorithms include orthogonal matching pursuit (which is essentially forward variable selection) and $L_2$ Boosting (matching pursuit), which have desirable properties (Tropp 2004; Bühlmann 2006). Quite different from estimation of the high-dimensional parameter vector are variable screening procedures, which aim for a property analogous to (1.11).
Prominent examples include the "Sure Independence Screening" (SIS) method (Fan and Lv, 2008); high-dimensional variable screening or selection properties have also been established for forward variable selection (Wang, 2009) and for the PC-algorithm (Bühlmann, Kalisch and Maathuis, 2010) ("PC" stands for the first names of its inventors, Peter Spirtes and Clark Glymour).

Assigning uncertainties and p-values for high-dimensional regression

At the core of statistical inference is the specification of statistical uncertainties, significance and confidence. For example, instead of a variable selection result where the probability in (1.5) is large, we would like to have measures controlling a type I error (false positive selections), including p-values adjusted for large-scale multiple testing, or the construction of confidence intervals or regions. In the high-dimensional setting, answers to these core goals are challenging. Meinshausen and Bühlmann (2010) propose Stability Selection, a very generic method which is able to control the expected number of false positive selections: denoting by $V = |\hat S \cap S_0^c|$ the number of false positives, Stability Selection yields a finite-sample upper bound on $E[V]$ (not only for linear models but also for many other inference problems). To achieve this, a very restrictive (but presumably non-necessary) exchangeability condition is made which, in a linear model, is implied by a restrictive assumption on the design matrix. On the positive side, there is no requirement of a "beta-min" condition as in (1.10), and the method seems to provide reliable control of $E[V]$. Wasserman and Roeder (2009) propose a procedure for variable selection based on sample splitting. Using their idea and extending it to multiple sample splitting, Meinshausen, Meier and Bühlmann (2009) develop a much more stable method for constructing p-values for the hypotheses $H_{0,j}:\ \beta^0_j = 0$ ($j = 1,\ldots,p$) and for adjusting them, in a non-naive way, for multiple testing over the $p$ (dependent) tests. The main drawback of this procedure is its required "beta-min" assumption (1.10). And this is very undesirable since, for statistical hypothesis testing, the test should control the type I error regardless of the size of the coefficients, while the power of the test should be large if the absolute value of the coefficient is large: thus, we should avoid assuming (1.10). Up to now, for the high-dimensional linear model case with $p \gg n$, it seems that only Zhang and Zhang (2011) have managed to construct a procedure which leads to statistical tests for $H_{0,j}$ without assuming a "beta-min" condition.

A loose description of our new results

Our starting point is Ridge regression for estimating the high-dimensional regression parameter. We then develop a bias correction, addressing the issue that Ridge regression estimates the regression coefficient vector projected onto the row space of the design matrix; the corrected estimator is denoted by $\hat\beta_{\mathrm{corr}}$. Theorem 1 describes that under the null-hypothesis, the distribution of the suitably normalized $a_{n,p}|\hat\beta_{\mathrm{corr}}|$ can be asymptotically and stochastically (componentwise) upper-bounded:
$$a_{n,p}|\hat\beta_{\mathrm{corr}}| \preceq_{\mathrm{as.}} \big(|Z_j| + \Delta_j\big)_{j=1}^{p}, \qquad (Z_1,\ldots,Z_p) \sim \mathcal N_p(0, \sigma^2 n^{-1}\Omega), \qquad (1.12)$$
for some known positive definite matrix $\Omega$ and some known constants $\Delta_j$. This is the key to deriving p-values based on this stochastic upper bound. It can be used for the construction of p-values for individual hypotheses $H_{0,j}$ as well as for more global hypotheses $H_{0,G}$ for any subset $G \subseteq \{1,\ldots,p\}$, including cases where $G$ is (very) large. Furthermore, Theorem 2 justifies a simple approach for controlling the familywise error rate when considering multiple testing of regression hypotheses.
Our multiple testing adjustment method itself is closely related to the Westfall-Young permutation procedure (Westfall and Young, 1993) and hence offers high power, especially in the presence of dependence among the many test-statistics (Meinshausen, Maathuis and Bühlmann, 2011).

Relation to other work

Our new method, as well as the approach in Zhang and Zhang (2011), provides p-values (and the latter also confidence intervals) without assuming a "beta-min" condition. Both build on using linear estimators and a correction based on a non-linear initial estimator such as the Lasso. Using, e.g., the Lasso directly leads to the problem of characterizing the distribution of the estimator (in a tractable form): this seems very difficult in high-dimensional settings, while it has been worked out for low-dimensional problems (Knight and Fu, 2000). The work by Zhang and Zhang (2011) is the only one which studies (sufficiently closely) related questions and goals as in this paper. The approach by Zhang and Zhang (2011) is based on the idea of projecting the high-dimensional parameter vector to low-dimensional components, as occurring naturally in the hypotheses $H_{0,j}$ about single components, and then proceeding with a linear estimator. This idea is pursued with the "efficient score function" approach from semiparametric statistics (Bickel et al., 1998). The difficulty in the high-dimensional setting is the construction of the score vector $z_j$ from which one can derive a confidence interval for $\beta^0_j$: Zhang and Zhang (2011) propose it as the residual vector from the Lasso when regressing $X^{(j)}$ against all other variables $X^{(\setminus j)}$ (where $X^{(J)}$ denotes the design sub-matrix whose columns correspond to the index set $J \subseteq \{1,\ldots,p\}$). They then prove the asymptotic validity of confidence intervals for finite, sparse linear combinations of $\beta^0$.
The difference to our work is primarily a rather different construction of the projection, where we make use of Ridge estimation with a very simple choice of regularization. A drawback of our method is that, typically, it is not theoretically rate-optimal in terms of power.

Model, estimation and p-values

Consider one or many null-hypotheses as in (1.2). We are interested in constructing p-values for hypotheses $H_{0,G}$ without imposing a "beta-min" condition as in (1.10): the statistical test itself will distinguish whether a regression coefficient is small or not.

Identifiability

We consider model (1.1) with fixed design. Without making additional assumptions on the design matrix $X$, there is a problem of identifiability. Clearly, if $p > n$ and hence $\mathrm{rank}(X) \le n < p$, there are different parameter vectors $\theta$ such that $X\beta^0 = X\theta$. Thus, we cannot identify $\beta^0$ from the distribution of $Y_1,\ldots,Y_n$ (and the fixed design $X$). Shao and Deng (2012) give a characterization of identifiability in a high-dimensional linear model (1.1) with fixed design. Following their approach, it is useful to consider the singular value decomposition
$$X = RSV^T,$$
where $R$ is an $n \times n$ matrix with $R^T R = I_n$, $S$ is an $n \times n$ diagonal matrix with singular values $s_1,\ldots,s_n$, and $V$ is a $p \times n$ matrix with $V^T V = I_n$. Denote by $\mathcal R(X) \subset \mathbb R^p$ the linear space generated by the $n$ rows of $X$. The projection of $\mathbb R^p$ onto $\mathcal R(X)$ is then
$$P_X = X^T(XX^T)^- X = VV^T,$$
where $A^-$ denotes the pseudo-inverse of a square matrix $A$. A natural choice of a parameter $\theta^0$ such that $X\beta^0 = X\theta^0$ is the projection of $\beta^0$ onto $\mathcal R(X)$:
$$\theta^0 = P_X\beta^0 = VV^T\beta^0. \qquad (2.1)$$
Then, of course, $\beta^0 \in \mathcal R(X)$ if and only if $\beta^0 = \theta^0$.

Ridge regression

Consider Ridge regression
$$\hat\beta = \operatorname*{argmin}_{\beta}\big(\|Y - X\beta\|_2^2/n + \lambda\|\beta\|_2^2\big) = (n^{-1}X^T X + \lambda I_p)^{-1} n^{-1}X^T Y, \qquad (2.2)$$
where $\lambda = \lambda_n$ is a regularization parameter. By construction of the estimator, $\hat\beta \in \mathcal R(X)$; and indeed, as discussed below, $\hat\beta$ is a reasonable estimator for $\theta^0 = P_X\beta^0$.
We denote by $\hat\Sigma = n^{-1}X^T X$. The covariance matrix of the Ridge estimator, multiplied by $n$, is then
$$\Omega = \Omega(\lambda) = (\hat\Sigma + \lambda I)^{-1}\hat\Sigma(\hat\Sigma + \lambda I)^{-1} = V\,\mathrm{diag}\!\Big(\frac{s_1^2}{(s_1^2+\lambda)^2},\ldots,\frac{s_n^2}{(s_n^2+\lambda)^2}\Big)V^T, \qquad (2.3)$$
a quantity which will appear at many places again. We assume that
$$\Omega_{\min}(\lambda) := \min_{j \in \{1,\ldots,p\}} \Omega_{jj}(\lambda) > 0. \qquad (2.4)$$
We do not require that $\Omega_{\min}(\lambda)$ is bounded away from zero as a function of $n$ and $p$. Thus, the assumption in (2.4) is very mild: a rather peculiar design would be needed to violate the condition; see also the equivalent formulation in formula (2.5) below. Furthermore, (2.4) is easily checkable. We denote by $\lambda_{\min\neq 0}(A)$ the smallest non-zero eigenvalue of a symmetric matrix $A$. We then have the following result.

Proposition 1. Consider the Ridge regression estimator $\hat\beta$ in (2.2) with regularization parameter $\lambda > 0$. Assume condition (2.4), see also (2.5). Then,
$$\max_{j \in \{1,\ldots,p\}} |E[\hat\beta_j] - \theta^0_j| \le \lambda\,\|\theta^0\|_2\,\lambda_{\min\neq 0}(\hat\Sigma)^{-1}, \qquad \min_{j \in \{1,\ldots,p\}} \mathrm{Var}(\hat\beta_j) \ge n^{-1}\sigma^2\,\Omega_{\min}(\lambda).$$

A proof is given in Section A.1, relying in large parts on Shao and Deng (2012). We now discuss under which circumstances the estimation bias is smaller than the standard error. Qualitatively, this happens if $\lambda > 0$ is chosen sufficiently small. For a more quantitative discussion, we study the behavior of $\Omega_{\min}(\lambda)$ as a function of $\lambda$, and we obtain an equivalent formulation of (2.4).

Lemma 1. We have the following:
1. $\Omega_{\min}(\lambda) = \min_j \sum_{r=1}^{n} \frac{s_r^2}{(s_r^2+\lambda)^2}\,V_{jr}^2$. From this we get: (2.4) holds if and only if
$$\min_{1 \le j \le p}\ \max_{1 \le r \le n,\ s_r \neq 0} V_{jr}^2 > 0. \qquad (2.5)$$
2. Assuming (2.4), $\Omega_{\min}(0^+) := \lim_{\lambda \searrow 0^+} \Omega_{\min}(\lambda) = \min_j \sum_{r=1;\,s_r \neq 0}^{n} \frac{1}{s_r^2}\,V_{jr}^2 > 0$.
3. If (2.4) holds:
$$0 < L_C \le \liminf_{\lambda \in (0,C]} \Omega_{\min}(\lambda) \le M_C < \infty \qquad (2.6)$$
for any $0 < C < \infty$, where $0 < L_C < M_C < \infty$ are constants which depend on $C$ and on the design matrix $X$ (and hence on $n$ and $p$).

The proof is straightforward using the expression (2.3). The statement 3.
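The equality of the sandwich form and the SVD form in (2.3) can be verified numerically. This is an illustrative check, not part of the paper; in the code, `s` are the singular values of $X/\sqrt{n}$, so that $\hat\Sigma = V\,\mathrm{diag}(s^2)V^T$ and the eigenvalue expression matches (2.3) term by term.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 20, 50, 0.1
X = rng.standard_normal((n, p))
Sig = X.T @ X / n                                    # Sigma_hat

# sandwich form of (2.3)
A = np.linalg.inv(Sig + lam * np.eye(p))
Omega_sandwich = A @ Sig @ A

# SVD form of (2.3), with s the singular values of X / sqrt(n)
_, s, Vt = np.linalg.svd(X / np.sqrt(n), full_matrices=False)
Omega_svd = (Vt.T * (s**2 / (s**2 + lam) ** 2)) @ Vt

Omega_min = Omega_sandwich.diagonal().min()          # quantity in (2.4)
```

For a design drawn this way, `Omega_min` is strictly positive, illustrating how mild assumption (2.4) is.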
says that for a given data-set, the variances of the $\hat\beta_j$'s remain in a reasonable range even if we choose $\lambda > 0$ arbitrarily small; the statement does not imply anything about the behavior as $n$ and $p$ get large (as the data and design matrix change). From Proposition 1, we immediately obtain the following result.

Corollary 1. Consider the Ridge regression estimator $\hat\beta$ in (2.2) with regularization parameter $\lambda > 0$ satisfying
$$\lambda\,\Omega_{\min}(\lambda)^{-1/2} \le n^{-1/2}\,\sigma\,\|\theta^0\|_2^{-1}\,\lambda_{\min\neq 0}(\hat\Sigma). \qquad (2.7)$$
In addition, assume condition (2.4), see also (2.5). Then
$$\max_{j \in \{1,\ldots,p\}} (E[\hat\beta_j] - \theta^0_j)^2 \le \min_{j \in \{1,\ldots,p\}} \mathrm{Var}(\hat\beta_j).$$

Due to the third statement in Lemma 1 regarding the behavior of $\Omega_{\min}(\lambda)$, (2.7) can be fulfilled for a sufficiently small value of $\lambda$ (a more precise characterization of the maximal $\lambda$ which fulfills (2.7) would require knowledge of $\|\theta^0\|_2$).

The projection bias and corrected Ridge regression

As discussed in Section 2.1, Ridge regression estimates the parameter $\theta^0 = P_X\beta^0$ given in (2.1). Thus, in general, besides the estimation bias governed by the choice of $\lambda$, there is an additional projection bias $B_j = \theta^0_j - \beta^0_j$ ($j = 1,\ldots,p$). Clearly,
$$B_j = (P_X\beta^0)_j - \beta^0_j = (P_X)_{jj}\beta^0_j - \beta^0_j + \sum_{k \neq j}(P_X)_{jk}\beta^0_k.$$
In terms of constructing p-values controlling the type I error for testing $H_{0,j}$, or $H_{0,G}$ with $j \in G$, the projection bias has a disturbing effect only if $\beta^0_j = 0$ and $\theta^0_j \neq 0$, and we only have to consider the bias under the null-hypothesis:
$$B_{H_0;j} = \sum_{k \neq j}(P_X)_{jk}\beta^0_k. \qquad (2.8)$$
The bias $B_{H_0;j}$ is also the relevant quantity for the case under the non null-hypothesis; see the brief comment after Proposition 2. We can estimate $B_{H_0;j}$ by
$$\hat B_{H_0;j} = \sum_{k \neq j}(P_X)_{jk}\hat\beta_{\mathrm{init};k},$$
where $\hat\beta_{\mathrm{init}}$ is an initial estimator, such as the Lasso, which guarantees a certain estimation accuracy; see assumption (A) below. This motivates the following bias-corrected Ridge estimator for testing $H_{0,j}$, or $H_{0,G}$ with $j \in G$:
$$\hat\beta_{\mathrm{corr};j} = \hat\beta_j - \hat B_{H_0;j} = \hat\beta_j - \sum_{k \neq j}(P_X)_{jk}\hat\beta_{\mathrm{init};k}. \qquad (2.9)$$
We then have the following representation.

Proposition 2. Assume model (1.1) with Gaussian errors. Consider the corrected Ridge regression estimator $\hat\beta_{\mathrm{corr}}$ in (2.9) with regularization parameter $\lambda > 0$, and assume (2.4). Then,
$$\hat\beta_{\mathrm{corr};j} = Z_j + \gamma_j \quad (j = 1,\ldots,p), \qquad (Z_1,\ldots,Z_p) \sim \mathcal N_p(0, n^{-1}\sigma^2\Omega),\ \ \Omega = \Omega(\lambda),$$
$$\gamma_j = (P_X)_{jj}\beta^0_j - \sum_{k \neq j}(P_X)_{jk}(\hat\beta_{\mathrm{init};k} - \beta^0_k) + b_j(\lambda), \qquad b_j(\lambda) = E[\hat\beta_j(\lambda)] - \theta^0_j.$$

A proof is given in Section A.1. We infer from Proposition 2 a representation which could be used not only for testing but also for constructing confidence intervals:
$$\frac{\hat\beta_{\mathrm{corr};j}}{(P_X)_{jj}} - \beta^0_j = \frac{Z_j}{(P_X)_{jj}} - \sum_{k \neq j}\frac{(P_X)_{jk}}{(P_X)_{jj}}(\hat\beta_{\mathrm{init};k} - \beta^0_k) + \frac{b_j(\lambda)}{(P_X)_{jj}}.$$
The normalizing factors for the variables $Z_j$, bringing them to the $\mathcal N(0,1)$-scale, are
$$a_{n,p;j}(\sigma) = n^{1/2}\sigma^{-1}\Omega_{jj}^{-1/2} \quad (j = 1,\ldots,p),$$
which also depend on $\lambda$ through $\Omega = \Omega(\lambda)$. We refer to Section 4.1, where the unusually fast divergence of $a_{n,p;j}(\sigma)$ is discussed. The test-statistics we consider are simple functions of $a_{n,p;j}(\sigma)\hat\beta_{\mathrm{corr};j}$.

Stochastic bound for the distribution of the corrected Ridge estimator: Asymptotics

We provide here an asymptotic stochastic bound for the distribution of $a_{n,p;j}(\sigma)\hat\beta_{\mathrm{corr};j}$ under the null-hypothesis. The asymptotic formulation is compact and is the basis for the construction of p-values in Section 2.5; we give more detailed finite-sample results in Section 6. We consider a triangular array of observations from a linear model as in (1.1):
$$Y_n = X_n\beta^0_n + \varepsilon_n, \quad n = 1,2,\ldots, \qquad (2.10)$$
where all the quantities, and also the dimension $p = p_n$, are allowed to change with $n$. We make the following assumption.

(A) There are constants $\Delta_j = \Delta_{j,n} > 0$ such that
$$P\bigg[\bigcap_{j=1}^{p_n}\Big\{\Big|a_{n,p;j}(\sigma)\sum_{k \neq j}(P_X)_{jk}(\hat\beta_{\mathrm{init};k} - \beta^0_k)\Big| \le \Delta_{j,n}\Big\}\bigg] \to 1 \quad (n \to \infty).$$
We will discuss in Section 2.4.1 constructions for such bounds $\Delta_j$ (which are typically not negligible).
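The pipeline up to (2.9) can be sketched compactly: Ridge fit, projection $P_X = VV^T$, and bias correction with a Lasso initial estimator. The design, the Lasso penalty and the seed below are illustrative and not the paper's settings (except the Ridge penalty $\lambda = 1/n$, which Section 5 uses).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 100, 300
X = rng.standard_normal((n, p))
X /= np.sqrt((X**2).mean(axis=0))
beta0 = np.zeros(p)
beta0[0] = 2.0
y = X @ beta0 + rng.standard_normal(n)

lam = 1.0 / n                                  # Ridge penalty as used in Section 5
Sig = X.T @ X / n
beta_ridge = np.linalg.solve(Sig + lam * np.eye(p), X.T @ y / n)   # (2.2)

P_X = np.linalg.pinv(X) @ X                    # projection onto R(X), equals V V^T
beta_init = Lasso(alpha=0.05, max_iter=100_000).fit(X, y).coef_    # initial estimator

# (2.9): subtract the estimated projection bias  sum_{k != j} (P_X)_{jk} beta_init_k
beta_corr = beta_ridge - (P_X @ beta_init - np.diag(P_X) * beta_init)
```

The sanity checks here are that $P_X$ is a symmetric idempotent matrix, exactly as required of a projection onto the row space.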
Our next result is the key to obtaining a p-value for testing the null-hypothesis $H_{0,j}$ or $H_{0,G}$: it says that, asymptotically,
$$a_{n,p;j}(\sigma)|\hat\beta_{\mathrm{corr};j}| \preceq_{\mathrm{as.}} |W| + \Delta_j, \qquad W \sim \mathcal N(0,1),$$
and similarly for the multi-dimensional version with $\hat\beta_{\mathrm{corr};G}$ (where $\preceq$ denotes "stochastically smaller or equal to").

Theorem 1. Assume model (2.10) with fixed design and Gaussian errors. Consider the corrected Ridge regression estimator $\hat\beta_{\mathrm{corr}}$ in (2.9) with regularization parameter $\lambda_n > 0$ such that
$$\lambda_n\,\Omega_{\min}(\lambda_n)^{-1/2} = o\big(n^{-1/2}\,\|\theta^0\|_2^{-1}\,\lambda_{\min\neq 0}(\hat\Sigma)\big) \quad (n \to \infty),$$
and assume condition (A) and (2.4) (while for the latter, the quantity does not need to be bounded away from zero). Then, for $j \in \{1,\ldots,p_n\}$ and if $H_{0,j}$ holds: for all $u \in \mathbb R^+$,
$$\limsup_{n\to\infty}\big(P[a_{n,p;j}(\sigma)|\hat\beta_{\mathrm{corr};j}| > u] - P[|W| + \Delta_j > u]\big) \le 0,$$
where $W \sim \mathcal N(0,1)$. Similarly, for any sequence of subsets $\{G_n\}_n$, $G_n \subseteq \{1,\ldots,p_n\}$, and if $H_{0,G_n}$ holds: for all $u \in \mathbb R^+$,
$$\limsup_{n\to\infty}\Big(P\Big[\max_{j\in G_n} a_{n,p;j}(\sigma)|\hat\beta_{\mathrm{corr};j}| > u\Big] - P\Big[\max_{j\in G_n}\big(a_{n,p;j}(\sigma)|Z_j| + \Delta_j\big) > u\Big]\Big) \le 0,$$
where $Z_1,\ldots,Z_p$ are as in Proposition 2.

A proof is given in Section A.1. As written above already, due to the third statement in Lemma 1, the condition on $\lambda_n$ is reasonable. We note that the distribution of $\max_{j\in G_n}(a_{n,p;j}(\sigma)|Z_j| + \Delta_j)$ does not depend on $\sigma$ and can be easily computed via simulation.

Bounds $\Delta_j$ in assumption (A)

We discuss an approach for constructing the bounds $\Delta_j$. As mentioned above, they should not involve any unknown quantities, so that we can use them for constructing p-values from the distribution of $|W| + \Delta_j$ or $\max_{j\in G_n}(a_{n,p;j}(\sigma)|Z_j| + \Delta_j)$, respectively. We rely on the (crude) bound
$$\Big|a_{n,p;j}(\sigma)\sum_{k\neq j}(P_X)_{jk}(\hat\beta_{\mathrm{init};k} - \beta^0_k)\Big| \le a_{n,p;j}(\sigma)\,\max_{k\neq j}|(P_X)_{jk}|\,\|\hat\beta_{\mathrm{init}} - \beta^0\|_1. \qquad (2.11)$$
To proceed further, we consider the Lasso as initial estimator.
Due to (1.7), we obtain
$$\Big|a_{n,p;j}(\sigma)\sum_{k\neq j}(P_X)_{jk}(\hat\beta_{\mathrm{init};k} - \beta^0_k)\Big| \le \max_{k\neq j}|a_{n,p;j}(\sigma)(P_X)_{jk}|\;4\lambda_{\mathrm{Lasso}}\,s_0\,\phi_0^{-2}, \qquad (2.12)$$
where the last inequality holds on a set with probability at least $1 - 2\exp(-t^2/2)$ when choosing $\lambda_{\mathrm{Lasso}}$ as in (1.6). The assumptions we require are summarized next.

Lemma 2. Consider the linear model (2.10) with fixed design having normalized columns $\hat\Sigma_{jj} \equiv 1$, which satisfies the compatibility condition with compatibility constant $\phi_0^2 = \phi_{0,n}^2$. Consider the Lasso as initial estimator $\hat\beta_{\mathrm{init}}$ with regularization parameter $\lambda_{\mathrm{Lasso}} = 4\sigma\sqrt{C\log(p_n)/n}$ for some $2 < C < \infty$. Assume that the sparsity satisfies $s_0 = s_{0,n} = o((n/\log(p_n))^{\xi})$ $(n \to \infty)$ for some $0 < \xi < 1/2$, and that $\liminf_{n\to\infty}\phi_{0,n}^2 > 0$. Then
$$\Delta_j :\equiv \max_{k\neq j}|a_{n,p;j}(\sigma)(P_X)_{jk}|\,(\log(p)/n)^{1/2-\xi} \qquad (2.13)$$
satisfies assumption (A).

A proof follows from (2.12). We summarize the results as follows.

Corollary 2. Assume the conditions of Theorem 1 without condition (A), and the conditions of Lemma 2. Then, when using the Lasso as initial estimator, the statements in Theorem 1 hold.

The construction of the bound in (2.13) requires the compatibility condition on the design and an upper bound for the sparsity $s_0$. While the former is an identifiability condition, and some form of identifiability assumption is certainly necessary, the latter condition about knowing the magnitude of the sparsity is not very elegant. When assuming bounded sparsity $s_{0,n} \le M < \infty$ for all $n$, we can choose $\xi = 0$ with an additional constant $M$ on the right-hand side of (2.13). In our practical examples in Section 5, we use $\xi = 0.05$.

P-values

Our construction of p-values is based on the asymptotic distributions in Theorem 1. For an individual hypothesis $H_{0,j}$, we define the p-value for the two-sided alternative as
$$P_j = 2\big(1 - \Phi\big((a_{n,p;j}(\sigma)|\hat\beta_{\mathrm{corr};j}| - \Delta_j)_+\big)\big). \qquad (2.14)$$
Of course, we could also consider one-sided alternatives, with the obvious modification of $P_j$.
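For concreteness, (2.14) can be coded directly. The numbers in the call below are purely illustrative placeholders, chosen so that $a_{n,p;j}(\sigma)|Z_j|$ is on the $\mathcal N(0,1)$ scale; they are not from the paper.

```python
import numpy as np
from scipy.stats import norm

def pvalue_individual(beta_corr, omega_diag, delta, sigma, n):
    """Two-sided p-values (2.14):
    P_j = 2 (1 - Phi((a_{n,p;j}(sigma) |beta_corr_j| - Delta_j)_+)),
    with a_{n,p;j}(sigma) = sqrt(n) / (sigma sqrt(Omega_jj))."""
    a = np.sqrt(n) / (sigma * np.sqrt(omega_diag))
    z = np.maximum(a * np.abs(beta_corr) - delta, 0.0)
    return 2.0 * (1.0 - norm.cdf(z))

# illustrative call: one clearly significant and one null-like coordinate
P = pvalue_individual(beta_corr=np.array([0.5, 0.01]),
                      omega_diag=np.array([0.02, 0.02]),
                      delta=np.array([1.0, 1.0]),
                      sigma=1.0, n=100)
```

Note the conservative effect of $\Delta_j$: any coordinate whose normalized statistic falls below $\Delta_j$ receives the p-value 1.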
For a more general hypothesis $H_{0,G}$ with $|G| > 1$, we use the maximum as test-statistic (but other statistics, such as weighted sums, could be chosen as well) and denote by
$$\hat\gamma_G = \max_{j\in G} a_{n,p;j}(\sigma)|\hat\beta_{\mathrm{corr};j}|, \qquad J_G(c) = P\Big[\max_{j\in G}\big(a_{n,p;j}(\sigma)|Z_j| + \Delta_j\big) \le c\Big],$$
where the latter is independent of $\sigma$ and can be easily computed via simulation ($Z_1,\ldots,Z_p$ are as in Proposition 2). Then, the p-value for $H_{0,G}$, against the alternative given by the complement $H_{0,G}^c$, is defined as
$$P_G = 1 - J_G(\hat\gamma_G). \qquad (2.15)$$
We note that when $\Delta_j \equiv \Delta$ is the same for all $j$, we can rewrite $P_G = 1 - P[\max_{j\in G} a_{n,p;j}(\sigma)|Z_j| \le (\hat\gamma_G - \Delta)_+]$, which is a direct analogue of (2.14). Error control follows immediately from the construction of the p-values.

Corollary 3. Assume the conditions in Theorem 1. Then, for any $0 < \alpha < 1$,
$$\limsup_{n\to\infty} P[P_j \le \alpha] - \alpha \le 0 \quad \text{if } H_{0,j} \text{ holds},$$
$$\limsup_{n\to\infty} P[P_G \le \alpha] - \alpha \le 0 \quad \text{if } H_{0,G} \text{ holds}.$$
Furthermore, for any sequence $\alpha_n \to 0$ $(n \to \infty)$ which converges sufficiently slowly, the statements also hold when replacing $\alpha$ by $\alpha_n$.

A discussion of the detection power of the method is given in Section 4. Further remarks about these p-values are given in Section A.4.

2.5.1. Estimation of $\sigma$

In practice, for the p-values in (2.14) and (2.15), we use the normalizing factor $a_{n,p;j}(\hat\sigma)$ with an estimate $\hat\sigma$. These p-values asymptotically control the type I error if $P[\hat\sigma \ge \sigma] \to 1$ $(n \to \infty)$. This follows immediately from the construction. We propose to use the estimator $\hat\sigma$ from the Scaled Lasso method (Sun and Zhang, 2012). Assuming $s_0\log(p)/n = o(1)$ $(n \to \infty)$ and the compatibility condition for the design, Sun and Zhang (2012) prove that $|\hat\sigma/\sigma - 1| = o_P(1)$ $(n \to \infty)$.

Multiple testing

We aim to strongly control the familywise error rate $P[V > 0]$, where $V$ is the number of false positive selections. For simplicity, we first consider individual hypotheses $H_{0,j}$ ($j \in \{1,\ldots,p\}$).
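The text suggests computing $J_G$ by simulation; the sketch below does exactly that by Monte Carlo. The toy matrix `omega_G` (a diagonal $\Omega$ restricted to $G$) and all numbers are illustrative, not from the paper.

```python
import numpy as np

def pvalue_group(gamma_hat, omega_G, delta_G, sigma, n, B=100_000, seed=0):
    """Monte-Carlo version of (2.15): simulate Z ~ N(0, sigma^2 n^{-1} Omega_G),
    form max_j (a_j |Z_j| + Delta_j) and return P_G = 1 - J_G(gamma_hat).
    After scaling by a_j, the result does not depend on sigma."""
    rng = np.random.default_rng(seed)
    a = np.sqrt(n) / (sigma * np.sqrt(np.diag(omega_G)))
    Z = rng.multivariate_normal(np.zeros(len(a)), sigma**2 / n * omega_G, size=B)
    stat = (a * np.abs(Z) + delta_G).max(axis=1)
    return float(np.mean(stat > gamma_hat))          # 1 - J_G(gamma_hat)

omega_G = 0.02 * np.eye(3)                           # toy Omega restricted to G, |G| = 3
p_small = pvalue_group(8.0, omega_G, np.zeros(3), sigma=1.0, n=100)   # large observed max
p_large = pvalue_group(0.0, omega_G, np.zeros(3), sigma=1.0, n=100)   # trivial threshold
```

A very large observed maximum yields a tiny p-value, while a threshold of zero yields a p-value of (essentially) one.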
The generalization to multiple testing of general hypotheses $H_{0,G}$ with $|G| > 1$ is discussed in Section 3.2. Based on the individual p-values $P_j$, we want to construct corrected p-values $P_{\mathrm{corr};j}$ corresponding to the following decision rule: reject $H_{0,j}$ if $P_{\mathrm{corr};j} \le \alpha$ ($0 < \alpha < 1$). We denote the associated estimated set of rejected hypotheses (the set of significant variables) by $\hat S_\alpha = \{j;\ P_{\mathrm{corr};j} \le \alpha\}$. Furthermore, recall that $S_0 = \{j;\ \beta^0_j \neq 0\}$ is the set of true active variables. The number of false positives at nominal significance level $\alpha$ is then denoted by $V_\alpha = |\hat S_\alpha \cap S_0^c|$. The goal is to construct $P_{\mathrm{corr};j}$ such that $P[V_\alpha > 0] \le \alpha$, or such that the latter holds at least in an asymptotic sense. The method we describe here is closely related to the Westfall-Young procedure (Westfall and Young, 1993). Consider the variables $Z_1,\ldots,Z_p \sim \mathcal N_p(0, \sigma^2 n^{-1}\Omega)$ appearing in Proposition 2 or Theorem 1, and consider the following distribution function:
$$F_Z(c) = P\Big[\min_{1\le j\le p} 2\big(1 - \Phi(a_{n,p;j}(\sigma)|Z_j|)\big) \le c\Big].$$
Define
$$P_{\mathrm{corr};j} = F_Z(P_j + \zeta), \qquad (3.1)$$
where $\zeta > 0$ is an arbitrarily small number, e.g. $\zeta = 0.01$ for using the method in practice. Regarding the choice $\zeta = 0$ (which we use in all empirical examples in Section 5), see the Remark after Theorem 2 below. The distribution function $F_Z(\cdot)$ is independent of $\sigma$ and can be easily computed via simulation of the dependent, mean-zero, jointly Gaussian variables $Z_1,\ldots,Z_p$. This is computationally (much) faster than simulation of the so-called minP-statistics (Westfall and Young, 1993), which would require fitting $\hat\beta_{\mathrm{corr}}$ many times.

Asymptotic justification of the multiple testing procedure

We first derive familywise error control in an asymptotic sense. For a finite-sample result, see Section 6. We consider the framework as in (2.10).

Theorem 2. Assume the conditions in Theorem 1.
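The correction (3.1) only needs draws of $Z \sim \mathcal N_p(0, \sigma^2 n^{-1}\Omega)$. A Monte-Carlo sketch with an illustrative diagonal $\Omega$ (so that the scaled statistics $a_j|Z_j|$ are i.i.d. $|\mathcal N(0,1)|$); the inputs are toy values, not the paper's data:

```python
import numpy as np
from scipy.stats import norm

def corrected_pvalues(pvals, omega, sigma, n, zeta=0.01, B=50_000, seed=0):
    """(3.1): P_corr;j = F_Z(P_j + zeta), where
    F_Z(c) = P[min_j 2(1 - Phi(a_j |Z_j|)) <= c] with Z ~ N_p(0, sigma^2 n^{-1} Omega),
    estimated by simulating B draws of the min-p statistic."""
    rng = np.random.default_rng(seed)
    a = np.sqrt(n) / (sigma * np.sqrt(np.diag(omega)))
    Z = rng.multivariate_normal(np.zeros(len(a)), sigma**2 / n * omega, size=B)
    min_p = (2.0 * (1.0 - norm.cdf(a * np.abs(Z)))).min(axis=1)
    return np.array([np.mean(min_p <= pj + zeta) for pj in pvals])

pvals = np.array([0.001, 0.2, 0.9])
P_corr = corrected_pvalues(pvals, omega=0.02 * np.eye(3), sigma=1.0, n=100)
```

Because all corrected values are computed from the same simulated sample of min-p statistics, the correction is exactly monotone in the raw p-values.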
For the p-value in (2.14) and using the correction in (3.1) with $\zeta > 0$, we have: for $0 < \alpha < 1$,
$$\limsup_{n\to\infty} P[V_\alpha > 0] \le \alpha.$$
Furthermore, for any sequence $\alpha_n \to 0$ $(n \to \infty)$ which converges sufficiently slowly, it holds that
$$\limsup_{n\to\infty} P[V_{\alpha_n} > 0] - \alpha_n \le 0.$$

A proof is given in Section A.1.

Remark (Multiple testing correction in (3.1) with $\zeta = 0$). We could modify the correction in (3.1) using $\zeta = 0$: the statement in Theorem 2 can then be derived when making the additional assumption that
$$\sup_{n\in\mathbb N}\ \sup_u\ |F'_{n,Z}(u)| < \infty, \qquad (3.2)$$
where $F_{n,Z}(\cdot) = F_Z(\cdot)$ is the distribution function appearing in (3.1), which in the asymptotic framework depends on $n$ and (mainly on) $p = p_n$. Verifying (3.2) may not be easy for general matrices $\Omega = \Omega_{n,p_n}$. However, for the special case where $Z_1,\ldots,Z_p$ are independent, $F'_Z(u) = p\,\varphi(u)(1 - \Phi(u))^{p-1}$, which is nicely bounded as a function of $u$, over all values of $p$.

Multiple testing of general hypotheses

The methodology for testing many general hypotheses $H_{0,G_j}$ with $|G_j| \ge 1$, $j = 1,\ldots,m$, is the same as before. Denote by $S_{0,G} = \{j;\ H_{0,G_j}\ \text{does not hold}\}$ and by $S_{0,G}^c = \{j;\ H_{0,G_j}\ \text{holds}\}$; note that these sets are determined by the true parameter vector $\beta^0$. Since the p-value in (2.15) is of the form $P_{G_j} = 1 - J_{G_j}(\hat\gamma_{G_j})$, we consider
$$F_{G,Z}(c) = P\Big[\min_{1\le j\le m}\big(1 - J_{G_j}(\gamma_{G_j,Z})\big) \le c\Big], \qquad \gamma_{G,Z} = \max_{j\in G}\big(a_{n,p;j}(\sigma)|Z_j|\big),$$
which can be easily computed via simulation (and is independent of $\sigma$). We then define the corrected p-value as $P_{\mathrm{corr};G_j} = F_{G,Z}(P_{G_j} + \zeta)$, where $\zeta > 0$ is a small value such as $\zeta = 0.01$; see also the definition in (3.1) and the corresponding discussion for the case $\zeta = 0$ (which now applies to the distribution function $F_{G,Z}$ instead of $F_Z$). We denote by $\hat S_{G,\alpha} = \{j;\ P_{\mathrm{corr};G_j} \le \alpha\}$ and $V_{G,\alpha} = |\hat S_{G,\alpha} \cap S_{0,G}^c|$. If $J_{G_j}(\cdot)$ has a bounded first derivative for all $j$, we can obtain the same result, under the same conditions, as in Theorem 2 (and without making a condition on the cardinalities of $G_j$).
If $J_{G_j}(\cdot)$ does not have a bounded first derivative, we can get around this problem by modifying the p-value $P_{G_j}$ in (2.15) to $\tilde P_{G_j} = 1 - J_{G_j}(\hat\gamma_{G_j} - \nu)$ for any (small) $\nu > 0$, and proceeding with $\tilde P_{G_j}$.

Sufficient conditions for detection

We consider detection of the alternatives $H_{0,j}^c$ or $H_{0,G}^c$ with $|G| > 1$. We use again the notation $S_0$ as in Section 3, and we write $a_n \gg b_n$ if $a_n/b_n \to \infty$ $(n \to \infty)$.

Theorem 3. Consider the setting and assumptions as in Theorem 1.
1. When considering individual hypotheses $H_{0,j}$: if $j \in S_0$ with
$$|\beta^0_j| \gg a_{n,p;j}(\sigma)^{-1}\,|(P_X)_{jj}|^{-1}\max(\Delta_j, 1),$$
there exists an $\alpha_n \to 0$ $(n \to \infty)$ such that $P[P_j \le \alpha_n] \to 1$ $(n \to \infty)$, while we still have for $j \in S_0^c$: $\limsup_{n\to\infty} P[P_j \le \alpha_n] - \alpha_n \le 0$ (see Corollary 3).
2. When considering individual hypotheses $H_{0,G}$ with $G = G_n$ and $|G_n| > 1$: if $H_{0,G}^c$ holds with
$$\max_{j\in G_n}\big|a_{n,p;j}(\sigma)(P_X)_{jj}\beta^0_j\big| \gg \max\Big(\max_{j\in G_n}|\Delta_j|,\ \sqrt{\log(|G_n|)}\Big),$$
there exists an $\alpha_n \to 0$ $(n \to \infty)$ such that $P[P_{G_n} \le \alpha_n] \to 1$ $(n \to \infty)$, while if $H_{0,G}$ holds, $\limsup_{n\to\infty} P[P_{G_n} \le \alpha_n] - \alpha_n \le 0$ (see Corollary 3).
3. When considering multiple hypotheses $H_{0,j}$: if for all $j \in S_0$,
$$|\beta^0_j| \gg a_{n,p;j}(\sigma)^{-1}\,|(P_X)_{jj}|^{-1}\max\big(\Delta_j, \sqrt{\log(p_n)}\big),$$
there exists an $\alpha_n \to 0$ $(n \to \infty)$ such that $P[P_{\mathrm{corr};j} \le \alpha_n] \to 1$ $(n \to \infty)$ for $j \in S_0$, while we still have that $\limsup_{n\to\infty} P[V_{\alpha_n} > 0] - \alpha_n \le 0$ (see Theorem 2).
4. If, in addition, $a_{n,p;j}(\sigma) \to \infty$ for all $j$ appearing in the conditions on $\beta^0_j$, we can replace in all the statements 1-3 the relation "$\gg$" by "$\ge C$", where $0 < C < \infty$ is a sufficiently large constant.

A proof is given in Section A.1. Under the additional assumptions of Lemma 2, where the Lasso is used as initial estimator and the bounds in (2.13) are used, we obtain for statement 1 in Theorem 3 the bound
$$|\beta^0_j| \ge C\max\bigg(\frac{\max_{k\neq j}|(P_X)_{jk}|}{|(P_X)_{jj}|}\Big(\frac{\log(p_n)}{n}\Big)^{1/2-\xi},\ \frac{1}{|(P_X)_{jj}|\,a_{n,p;j}(\sigma)}\bigg), \qquad (4.1)$$
where $0 < \xi < 1/2$.
This can be sharpened using the oracle bound, assuming known order of sparsity:
$$\Delta_{\mathrm{orac};j} = D\,s_{0,n}\max_{k\neq j} a_{n,p;j}(\sigma)|(P_X)_{jk}|\sqrt{\log(p_n)/n}$$
for some $D > 0$ sufficiently large (for example, assuming $s_{0,n}$ is bounded, replacing $s_{0,n}$ by 1 and choosing $D > 0$ sufficiently large). It then suffices to require
$$|\beta^0_j| \ge C\max\bigg(\frac{\max_{k\neq j}|(P_X)_{jk}|}{|(P_X)_{jj}|}\,s_{0,n}\Big(\frac{\log(p_n)}{n}\Big)^{1/2},\ \frac{1}{|(P_X)_{jj}|\,a_{n,p;j}(\sigma)}\bigg) \quad \text{for statement 1 in Theorem 3}, \qquad (4.2)$$
$$|\beta^0_j| \ge C\max\bigg(\frac{\max_{k\neq j}|(P_X)_{jk}|}{|(P_X)_{jj}|}\,s_{0,n}\Big(\frac{\log(p_n)}{n}\Big)^{1/2},\ \frac{\sqrt{\log(p_n)}}{|(P_X)_{jj}|\,a_{n,p;j}(\sigma)}\bigg) \quad \text{for statement 3 in Theorem 3},$$
and analogously for the second statement in Theorem 3.

Order of magnitude of normalizing factors

The order of $a_{n,p;j}(\sigma)$ is typically much larger than $\sqrt n$ since, in high dimensions, $\Omega_{jj}$ is very small. This means that the Ridge estimator $\hat\beta_j$ has a much faster convergence rate than $1/\sqrt n$ for estimating the projected parameter $\theta^0_j$. This looks counter-intuitive at first sight: the reason for the phenomenon is that $\|\theta^0\|_2$ can be much smaller than $\|\beta^0\|_2$, and hence Ridge regression (which estimates the parameter $\theta^0$) operates on a much smaller scale. This fact is essentially an implication of the first statement in Lemma 1 (without the "$\min_j$" part). We can write
$$\Omega_{jj} = \sum_{r=1}^{n}\frac{s_r^2}{(s_r^2+\lambda)^2}\,V_{jr}^2 = \sum_{r=p-n+1}^{p}\frac{s_{r-p+n}^2}{(s_{r-p+n}^2+\lambda)^2}\,U_{jr}^2,$$
where the columns of $U = [U_{jr}]_{j,r=1,\ldots,p}$ contain the $p$ eigenvectors of $X^T X$, satisfying $\sum_{j=1}^{p} U_{jr}^2 = 1$. For $n \ll p$, only very few, namely $n$, terms are left in the summation, while the normalization of $U_{jr}^2$ is over all $p$ terms. For further discussion of the fast convergence rate $a_{n,p;j}(\sigma)^{-1}$, see Section A.4. While $a_{n,p;j}(\sigma)^{-1}$ is usually small, there is compensation by $(P_X)_{jj}^{-1}$, which can be rather large. In the detection bound in, e.g., the first part of (4.2), both terms appearing in the maximum are often of the same order of magnitude; see also Figure 3 in Section A.4.
Assuming such a balance of terms, we obtain in e.g., the first part of (4.2): |β 0 j | ≥ C max k =j |(P X ) jk | |(P X ) jj | s 0,n log(p n )/n. The value of κ j = max k =j |(P X ) jk |/|(P X ) jj | is often a rather small number between 0.05 and 4, see Table 1 in Section 5. For comparison, Zhang and Zhang (2011) establish under some conditions detection for single hypotheses H 0,j with β 0 j in the 1/ √ n range. For the extreme case with G n = {1, . . . , p n }, we are in the setting of detection of the global hypotheses, see for example Ingster, Tsybakov and Verzelen (2010) for characterizing the detection boundary in case of independent covariables. Here, our analysis of detection is only providing sufficient conditions, for rather general (fixed) design matrices. Numerical results As initial estimator forβ corr in (2.9), we use the Scaled Lasso with scale independent regularization parameter λ Scaled-Lasso = 2 log(p)/n: it provides an initial estimateβ init as well as an estimateσ for the standard deviation σ. The parameter λ for Ridge regression in (2.2) is always chosen as λ = 1/n, reflecting the assumption in Theorem 1 that it should be small. For single testing, we construct p-values as in (2.14) or (2.15) with ∆ j from (2.13) with ξ = 0.05. For multiple testing with familywise error control, we consider p-values as in (3.1) with ζ = 0 (and ∆ j as above). Simulations We simulate from the linear model as in (1.1) with ε ∼ N n (0, I), n = 100 and the following configurations: (M1) For both p ∈ {500, 2500}, the fixed design matrix is generated from a realization of n i.i.d. rows from N p (0, I). Regarding the regression coefficients, we consider active sets S 0 = {1, 2, . . ., s 0 } with s 0 ∈ {3, 15} and three different strengths of regression coefficients where β 0 j ≡ b (j ∈ S 0 ) with b ∈ {0.25, 0.5, 1}. (M2) The same as in (M1) but for both p ∈ {500, 2500}, the fixed design matrix is generated from a realization of n i.i.d. 
rows from N p (0, Σ) with Σ jk ≡ 0.8 (j = k) and Σ jj = 1. The resulting signal to noise ratios SNR = Xβ 0 2 /σ are rather small: p ∈ {500, 2500} (3, 0.25) (3, 0.5) (3, 1) Here, a pair such as (3, 0.25) denotes the values of s 0 = 3, b = 0.25 (where b is the value of the active regression coefficients). We consider the decision-rule at significance level α = 0.05 reject H 0,j if P j ≤ 0.05, (5.1) for testing single hypotheses where P j is as in (2.14) with plugged-in estimateσ. The considered type I error is the average over non-active variables: (p − s 0 ) −1 j∈S c 0 P[P j ≤ 0.05] (5.2) and the average power is s −1 0 j∈S0 P[P j ≤ 0.05]. (5.3) For multiple testing, we consider the adjusted p-value P corr;j from (3.1): the decision is as in (5.1) but replacing P j by P corr;j . We report the familywise error rate (FWER) P[V 0.05 > 0] and the average power as in (5.3) but the latter with using P corr;j . The results are displayed in Figure 1, based on 500 simulation runs per setting (with the same fixed design per setting). The subfigure (d) shows that the proposed method exhibits essentially four times a too large familywise error rate in multiple testing: it happens for scenarios with strongly correlated variables (model (M2)) and where the sparsity s 0 = 15 is large with moderate or large size of the coefficients (scenario (M2) with s 0 = 15 and coefficient size b = 0.25 is unproblematic). The corresponding number of false positives are reported in Table 3 in Section A.3. Values of P X The detection results in (4.1) and (4.2) depend on the ratio κ j = max k =j |(P X ) jk |/|(P X ) jj |. We report in Table 1 summary statistics of {κ j } j for various datasets. We clearly see that the values of κ j are typically rather small which implies good detection properties as discussed in Section 4. Furthermore, the values max k =j |(P X ) jk | occurring in the construction of ∆ j in Section 2.4.1 are typically very small (not shown here). 
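Given any design matrix, the entries (P_X)_{jk} and the ratios κ_j = max_{k≠j}|(P_X)_{jk}|/|(P_X)_{jj}| summarized in Table 1 can be computed from the SVD of X; a sketch with a random toy design (not one of the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 200
X = rng.standard_normal((n, p))          # toy design

# P_X projects onto the row space of X: P_X = V V^T from the SVD of X
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt.T @ Vt                            # p x p projection of rank n

# kappa_j = max_{k != j} |(P_X)_{jk}| / |(P_X)_{jj}|
off = np.abs(P - np.diag(np.diag(P)))    # off-diagonal magnitudes
kappa = off.max(axis=1) / np.abs(np.diag(P))
```

The computation is cheap (one SVD of X), so the κ_j diagnostics can be inspected for any given dataset before testing.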
Real data application

We consider a problem about motif regression for finding the binding sites in DNA sequences of the HIF1α transcription factor. The binding sites are also called motifs, and they are typically 6-15 base pairs (with categorical values ∈ {A, C, G, T}) long. The data consists of a univariate response variable Y from CHIP-chip experiments, measuring the logarithm of the binding intensity of the HIF1α transcription factor on coarse DNA segments. Furthermore, for each DNA segment, we have abundance scores for p = 195 candidate motifs, based on DNA sequence data. Thus, for each DNA segment i we have Y_i ∈ R and X_i ∈ R^p, where i = 1, ..., n_tot = 287 and p = 195. We consider a linear model as in (1.1) and hypotheses H_{0,j} for j = 1, ..., p = 195: rejection of H_{0,j} then corresponds to a significant motif. This dataset has been analyzed in Meinshausen, Meier and Bühlmann (2009), who found one significant motif using their p-value method for a linear model based on multiple sample splitting (which assumes the unpleasant "beta-min" condition in (1.10)). Since the dataset has n_tot > p observations, we take one random subsample of size n = 143 < p = 195. Figure 2 reports the single-testing as well as the adjusted p-values for controlling the FWER. There is one significant motif with corresponding FWER-adjusted p-value equal to 0.007, and the method in Meinshausen, Meier and Bühlmann (2009) based on the total sample with n_tot found the same significant variable with FWER-adjusted p-value equal to 0.006.

Table 1. Minimum, maximum and three quartiles of {κ_j}_{j=1}^p for various designs X from different datasets; columns: dataset, (n, p), min_j κ_j, 0.25-quantile, median, 0.75-quantile, max_j κ_j. The first four are from the simulation models in Section 5.1. Although not relevant for the table, "Motif" (see Section 5.3) and "Riboflavin" have a continuous response while the last six have a class label (Dettling, 2004).
Interestingly, the weakly significant motif with p-value 0.080 is known to be a true binding site for HIF1α, thanks to biological validation experiments. When compared to the Bonferroni-Holm procedure for controlling FWER based on the raw p-values as shown in Figure 2 Thus, for this example, the multiple testing correction as in Section 3 does not provide large improvements in power over the Bonferroni-Holm procedure; but our method is closely related to the Westfall-Young procedure which has been shown to be asymptotically optimal for a broad class of high-dimensional problems (Meinshausen, Maathuis, and Bühlmann, 2011). Finite sample results We present here finite sample analogues of Theorem 1 and 2. Instead of assumption (A), we assume the following: (A ′ ) There are constants ∆ j > 0 such that P j=1 a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k )| ≤ ∆ j ≥ 1 − κ for some (small) 0 < κ < 1. We then have the following result. Proposition 3. Assume model (1.1) with Gaussian errors. Consider the corrected Ridge regression estimatorβ corr in (2.9) with regularization parameter λ > 0, and assume (2.4) and condition (A ′ ). Then, with probability at least 1 − κ, for j ∈ {1, . . . , p} and if H 0,j holds: a n,p;j (σ)|β corr;j | ≤ a n,p;j (σ)|Z j | + ∆ j + a n,p b(λ) ∞ , a n,p b(λ) ∞ = max j=1,...,p a n,p;j (σ)|b j (λ)| ≤ λ Ω min (λ) 1/2 n 1/2 σ −1 θ 0 2 λ min =0 (Σ) −1 . Similarly, with probability at least 1 − κ, for any subset G ⊆ {1, . . . , p} and if H 0,G holds: max j∈G a n,p;j (σ)|β corr;j | ≤ max j∈G (a n,p;j (σ)|Z j | + ∆ j ) + a n,p b(λ) ∞ . Significance in high-dimensional models 23 A proof is given in Section A.1. Due to the third statement in Lemma 1, Ω min (λ) −1/2 is bounded for a bounded range of λ ∈ (0, C]. Therefore, the bound for a n,p b(λ) ∞ can be made arbitrarily small by choosing λ > 0 sufficiently small. Theorem 2 is a consequence of the following finite sample result. Proposition 4. 
Consider the event E with probability P[E] ≥ 1 − κ where condition (A ′ ) holds. Then, when using the corrected p-values from (3.1), with ζ ≥ 0 (allowing also ζ = 0), we obtain approximate strong control of the familywise error rate: P[V α > 0] ≤ F Z (F −1 Z (α) − ζ + 2(2π) −1/2 a n,p b(λ) ∞ ) + (1 − P[E]). A proof is given in Section A.1. We immediately get the following bound for ζ ≥ 0: P[V α > 0] ≤ α + sup u |F ′ Z (u)|2(2π) −1/2 a n,p b(λ) ∞ + (1 − P[E]). Conclusions We have proposed a novel construction of p-values for individual and more general hypotheses in a high-dimensional linear model with fixed design and Gaussian errors. We have restricted ourselves to max-type statistics for general hypotheses but modifications to e.g., weighted sums are straightforward using the representation in Proposition 2. A key idea is to use a linear, namely the Ridge estimator, combined with a correction for the potentially substantial bias due to the fact that the Ridge estimator is estimating the projected regression parameter vector onto the row-space of the design matrix. The finding that we can "succeed" with a corrected Ridge estimator in a high-dimensional context may come as a surprise, as it is well known that Ridge estimation can be very bad for say prediction. Nevertheless, our bias corrected Ridge procedure might not be optimal in terms of power, as indicated in Section 4.1. The main assumptions we make are the compatibility condition for the design, i.e., an identifiability condition, and knowledge of an upper bound of the sparsity (see Lemma 2). A related idea of using a linear estimator coupled with a bias correction for deriving confidence intervals has been earlier proposed by Zhang and Zhang (2011). No tuning parameter. 
Our approach does not require the specification of a tuning parameter, except for the issue that we crudely bound the true sparsity as in (2.13); we always used ξ = 0.05, and the Scaled Lasso initial estimator does not require the specification of a regularization parameter. All our numerical examples were run without tuning the method to a specific setting, and error control with our p-value approach is often conservative while the power seems reasonable. Furthermore, our method is generic which allows to test for any H 0,G regardless whether the size of G is small or large: we present in the Section A.2 an additional simulation where |G| is large. For multiple testing correction or for general hypotheses with sets G where |G| > 1, we rely on the power of simulation since analytical formulae for max-type statistics under dependence seem in-existing: yet, our simulation is extremely simple as we only need to generate dependent multivariate Gaussian random variables. 24 P. Bühlmann Small variance of Ridge estimator. As indicated before, it is surprising that corrected Ridge estimation performs rather well for statistical testing. Although the bias due to the projection P X can be substantial, it is compensated by small variances σ 2 n −1 Ω jj of the Ridge estimator. It is not true that Ω jj 's become large as p increases: that is, the Ridge estimator has small variance for an individual component when p is very large, see Section 4.1. Therefore, the detection power of the method remains reasonably good as discussed in Section 4. Viewed from a different perspective, even though |(P X ) jj β 0 j | may be very small, the normalized version a n,p;j (σ)|(P X ) jj β 0 j | can be sufficiently large for detection since a n,p;j (σ) may be very large (as the inverse of the square root of the variance). 
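The simulation alluded to above, generating dependent multivariate Gaussians to approximate F_Z, the null distribution of min_j 2(1 − Φ(a_{n,p;j}(σ)|Z_j|)), is indeed simple: the only modeling input is that (Z_1, ..., Z_p) has covariance proportional to Ω. A hedged sketch with a toy design and our own variable names:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, p, lam, B = 20, 100, 1.0 / 20, 2000
X = rng.standard_normal((n, p))          # toy design

# Ridge covariance (up to sigma^2): Omega = A^{-1} X^T X A^{-1}
A = X.T @ X + lam * np.eye(p)
C = np.linalg.solve(A, X.T)              # p x n, so that C C^T = Omega
Omega = C @ C.T

# draws Z ~ N(0, Omega) via Z = C eps, eps ~ N(0, I_n); works despite rank n < p
eps = rng.standard_normal((B, n))
Zs = (eps @ C.T) / np.sqrt(np.diag(Omega))   # scaled to unit variances

# empirical distribution of min_j 2(1 - Phi(a_j |Z_j|)) and its alpha-quantile
pmin = (2.0 * (1.0 - norm.cdf(np.abs(Zs)))).min(axis=1)
crit = np.quantile(pmin, 0.05)           # Monte Carlo estimate of F_Z^{-1}(0.05)
```

Generating Z through C eps avoids a Cholesky factorization of the rank-deficient Ω, which would fail for n < p without regularization.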
The values of P X can be easily computed for a given problem: our analysis about sufficient conditions for detection in Section 4 could be made more complete by invoking random matrix theory for the projection P X (assuming that X is a realization of i.i.d. row-vectors whose entries are potentially dependent). However, currently, most of the results on singular values and similar quantities of X are for the regime p ≤ n (Vershynin, 2012), which leads in our context to the trivial projection P X = I, or for the regime p/n → C with 0 ≤ C < ∞ (El Karoui, 2008). Extensions. Obvious but partially non-trivial model extensions include random design, non-Gaussian errors or generalized linear models. From a practical point of view, the second and third issue would be most valuable. Relaxing the fixed design assumption makes part of the mathematical arguments more complicated, yet a random design is better posed in terms of identifiability. ≤ λ θ 0 2 λ min =0 (Σ) −1 σn −1/2 Ω 1/2 jj ≤ λ θ 0 2 λ min =0 (Σ) −1 σ −1 n 1/2 Ω min (λ) −1/2 . By using the representation from Proposition 2, invoking assumption (A ′ ) and assuming that the null-hypothesis H 0,j or H 0,G holds, respectively, the proof is completed. Proof of Theorem 1. Due to the choice of λ = λ n we have that a n,p b(λ n ) ∞ = o(1) (n → ∞). The proof then follows from Proposition 3 and invoking assumption (A) saying that the probabilities for the statements in Proposition 3 converge to 1 as n → ∞. Proof of Proposition 4 (basis for proving Theorem 2). Consider the set E where assumption (A ′ ) holds (whose probability is at least P[E] ≥ 1 − κ). Without loss of generality, we consider P j = 2(1 − Φ(a n,p;j (σ)|β corr;j | − ∆ j )) without the truncation at value 1 (implied by the positive part (a n,p;j (σ)|β corr;j | − ∆ j ) + ); in terms of decisions (rejection or non-rejection of a hypothesis), both versions for the p-value are equivalent. 
Then, on E and for j ∈ S c 0 : P j = 2(1 − Φ(a n,p;j (σ)|β corr;j | − ∆ j )) ≥ 2 1 − Φ a n,p;j (σ) β corr;j − k =j (P X ) jk (β init;k − β 0 k ) ≥ 2(1 − Φ(a n,p;j (σ)|Z j |)) − 2(2π) −1/2 a n,p b(λ) ∞ , where in the last inequality we used Proposition 2 and Taylor's expansion. Thus, on E: min j∈S c 0 P j ≥ min j∈S c 0 2 (1 − Φ(a n,p;j (σ)|Z j |)) − 2(2π) −1/2 a n,p b(λ) ∞ ≥ min j=1,...,p 2(1 − Φ(a n,p;j (σ)|Z j |)) − 2(2π) −1/2 a n,p b(λ) ∞ . Therefore, P min j∈S c 0 P j ≤ c ≤ P E ∩ min j∈S c 0 P j ≤ c + P[E c ] ≤ P min j=1,...,p 2(1 − Φ(a n,p;j (σ)|Z j |)) ≤ c + 2(2π) −1/2 a n,p b(λ) ∞ + P[E c ] = F Z (c + 2(2π) −1/2 a n,p b(λ) ∞ ) + P[E c ]. Using this we obtain: P[V α > 0] = P min j∈S c 0 P corr;j ≤ α = P min j∈S c 0 P j ≤ F −1 Z (α) − ζ ≤ F Z (F −1 Z (α) − ζ + 2(2π) −1/2 a n,p b(λ) ∞ ) + P[E c ]. This completes the proof. Proof of Theorem 2. Due to the choice of λ = λ n we have that a n,p b(λ n ) ∞ = o(1) (n → ∞). Furthermore, using the formulation in Proposition 4, assumption (A) translates to a sequence of sets E n with P[E n ] → 1 (n → ∞). We then use Proposition 4 and observe that for sufficiently large n: F Z (F −1 Z (α) − ζ + 2(2π) −1/2 a n,p b(λ n ) ∞ ) ≤ F Z (F −1 Z (α)) ≤ α. The modification for the case with α n → 0 sufficiently slowly follows analogously: note that the second last inequality in the proof above follows by monotonicity of F Z (·) and ζ > 2(2π) −1/2 a n,p b(λ n ) ∞ for n sufficiently large. This completes the proof. Proof of Theorem 3. Throughout the proof, α n → 0 is converging sufficiently slowly, possibly depending on the context of the different statements we prove. Regarding statement 1: it is sufficient that for j ∈ S 0 , a n,p;j (σ)|β corr;j | ≫ max(∆ j , 1). From Proposition 2, we see that this can be enforced by requiring a n,p;j (σ) |(P X ) jj β 0 j | − k =j (P X ) jk (β init;k − β 0 k ) − |Z j | − |b j (λ)| ≫ max(∆ j , 1). 
Since |a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k )| ≤ ∆ j , this holds if |β 0 j | ≫ 1 |(P X ) jj |a n,p;j (σ) max(∆ j , a n,p;j (σ)Z j , a n,p;j (σ)b j (λ), 1). (A.1) Due to the choice of λ = λ n (as in Theorem 1), we have a n,p;j (σ)b j (λ) ≤ a n,p (σ)b(λ) ∞ = o(1). Hence, (A.1) holds with probability converging to one if |β 0 j | ≫ 1 |(P X ) jj |a n,p;j (σ) max(∆ j , 1), completing the proof for statement 1. For proving the second statement, we recall that 1 − J G (c) = P max j∈G (a n,p;j (σ)|Z j | + ∆ j ) > c . Denote by W = max j∈G (a n,p;j (σ)|Z j | + ∆ j ) ≤W = max j∈G a n,p;j (σ)|Z j | + max j∈G ∆ j . Thus, P[W > c] ≤ P[W > c]. Therefore, the statement for the p-value P[P G ≤ α n ] is implied by PW [W >γ G ] ≤ α n . (A.2) Using the union bound and the fact that a n,p;j (σ)|Z j | ∼ N (0, 1) (but dependent over different values of j), we have that max j∈G a n,p;j (σ)|Z j | = O P ( log(|G|)). Therefore, (A.2) holds if γ G = max j∈G a n,p;j (σ)|β corr;j | ≫ max max j∈G ∆ j , log(|G|) . The argument is now analogous to the proof of the first statement above, using the representation from Proposition 2. Regarding the third statement, we invoke the rough bound P corr;j ≤ pP j , with the non-truncated Bonferroni corrected p-value at the right-hand side. Hence, max j∈S0 P corr;j ≤ α n is implied by max j∈S0 pP j = max j∈S0 2p(1 − Φ((a n,p;j (σ)|β corr;j | − ∆ j ) + )) ≤ α n . Since this involves a standard Gaussian two-sided tail probability, the inequality can be enforced (for certain slowly converging α n ) by max j∈S0 2 exp(log(p) − (a n,p;j (σ)|β corr;j | − ∆ j ) 2 + /2) = o P (1). The argument is now analogous to the proof of the first statement above, using the representation from Proposition 2. The fourth statement involves slight obvious modifications of the arguments above. A.2. P -values for H 0,G with |G| large We report here on a small simulation study for testing H 0,G with G = {1, 2, . . ., 100}. 
We consider model (M2) from Section 5.1 with 4 different configurations and we use the p-value from (2.15) with corresponding decision rule for rejection of H 0,G if the p-value is smaller or equal to the nominal level 0.05. Table 2 describes the result based on 500 independent simulations (where the fixed design remains the same). The method works well with much better power than multiple testing of individual hypotheses but worse Table 2. Testing of general hypothesis H0,G with |G| = 100 using the p-value in (2.15) with significance level 0.05. Second column: type I error; Third column: power; Fourth column: comparison with power using multiple individual testing and average power using individual testing without multiplicity adjustment (both for all p hypotheses H0,j (j = 1, . . . , p)) than average power for testing individual hypotheses without multiplicity adjustment (which is not a proper approach). This is largely in agreement with the theoretical results in Theorem 3. Furthermore, the type I error control is good. Model A.3. Number of false positives in simulated examples We show in Table 3 the number of false positives V = V 0.05 in the simulated scenarios where the FWER (among individual hypotheses) was found too large. Although the FWER is larger than 0.05, the number of false positives is relatively small, except for the extreme model (M2), p = 2500, s = 15, b = 1 which has a too large sparsity and a too strong signal strength. For the latter model, we would need to increase ξ in (2.13) to achieve better error control. A.4. Further discussion about p-values and bounds ∆ j in assumption (A) The p-values in (2.14) and (2.15) are crucially based on the idea of correction with the bounds ∆ j in Section 2.4.1. The essential idea is contained in Proposition 2: a n,p;j (σ)β corr;j = a n,p;j (σ)(P X ) jj − a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k ) + a n,p;j (σ)Z j + negligible term. There are three cases. 
If a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k ) = o P (1), (A.3) a correction with the bound ∆ j would not be necessary, but of course, it does not hurt in terms of type I error control. If a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k ) ≍ V, (A.4) for some non-degenerate random variable V , the correction with the bound ∆ j is necessary and assuming that ∆ j is of the same order of magnitude as V , we have a balance between ∆ j and the stochastic term a n,p;j (σ)Z j . In the last case where a n,p;j (σ) k =j (P X ) jk (β init;k − β 0 k ) → ∞, (A.5) the bound ∆ j would be the dominating element in the p-value construction. We show in Figure 3 that there is empirical evidence that (A.4) applies most often. Case (A.5) is comparable to a crude procedure which makes a hard decision about relevance of the underlying coefficients: if a n,p;j (σ)|β corr;j | > ∆ j holds, then H 0,j is rejected, and the rejection would be "certain" corresponding to a p-value with value equal to 0; and in case of a "≤" relation, the corresponding p-value would be set to one. This is an analogue to the thresholding rule: if |β init;j | > ∆ init holds, then H 0,j is rejected, (A.6) where ∆ init ≥ β init − β 0 ∞ , e.g. using a bound where ∆ init ≥ β init − β 0 1 . For example, (A.6) could be the variable selection estimator with the thresholded Lasso procedure (van de Geer, Bühlmann and Zhou, 2011). An accurate construction of ∆ init for practical use is almost impossible: it depends on σ and in a complicated way on the nature of the design through e.g. the compatibility constant, see (1.7). Our proposed bound ∆ j in (2.13) is very simple. In principle, its justification also depends on a bound for β init − β 0 1 , but with the advantage of "robustness". First, the bound a n,p;j (σ) max k =j |(P X ) jk | β init − β 0 1 appearing in (2.11) is not depending on σ anymore (since β init − β 0 1 scales linearly with σ). 
Secondly, the inequality in (2.11) is crude, implying that ∆_j in (2.13) may still satisfy assumption (A) even if the bound on ‖β_init − β⁰‖₁ is misspecified and too small. The construction of p-values as in (2.14) and (2.15) is much better for practical purposes (and for simulated examples) than using a rule as in (A.6).

Figure 1. Simulated data as described in Section 5.1. (a) and (b): Single testing with average type I error (5.2) on x-axis (log-scale) and average power (5.3) on y-axis. (c) and (d): Multiple testing with familywise error rate on x-axis (log-scale) and average power (5.3), but using P_corr;j, on y-axis. Vertical dotted line is at abscissa 0.05. Each point corresponds to a model configuration. (a) and (c): 12 model configurations generated from independent covariates (M1); (b) and (d): 12 model configurations generated from equi-dependent covariates (M2). When an error is zero, we plot it on the log-scale at abscissa 10⁻⁸.

Figure 2. Motif regression with n = 143 and p = 195. (a) Single-testing p-values as in (2.14); (b) Adjusted p-values as in (3.1) for FWER control. The p-values are plotted on the log-scale. Horizontal line is at y = 0.05.

Table 3. Probabilities for false positives for simulation models from Section 5.1 in scenarios where the FWER is clearly overshooting the nominal level 0.05; columns: Model, P[V = 0], P[V = 1], P[V = 2], P[V = 3], P[V = 4], P[V ≥ 5].

Figure 3. Histogram of projection bias a_{n,p;j}(σ) Σ_{k≠j} (P_X)_{jk}(β_{init;k} − β⁰_k) over all values j = 1, ..., p and over 100 independent simulation runs. Left: model (M2), p = 2500, s = 3, b = 1; Right: model (M2), p = 2500, s = 15, b = 1.
Acknowledgements. I would like to thank Cun-Hui Zhang for fruitful discussions and Stephanie Zhang for providing an R-program for the Scaled Lasso.

Appendix

A.1. Proofs

Proof of Proposition 1. The statement about the bias is given in Shao and Deng (2012) (proof of their Theorem 1). The covariance matrix of β̂ is n^{-1} σ² Ω. Then, for the variance we obtain Var(β̂_j) = n^{-1} σ² Ω_jj ≥ n^{-1} σ² Ω_min(λ).

Proof of Proposition 2. We write β̂_j = E[β̂_j] + (β̂_j − E[β̂_j]). The result then follows by defining Z_j = β̂_j − E[β̂_j] and using that θ⁰_j = (P_X β⁰)_j = (P_X)_{jj} β⁰_j + Σ_{k≠j} (P_X)_{jk} β⁰_k.

Proof of Proposition 3 (basis for proving Theorem 1). The bound from Proposition 1 for the estimation bias of the Ridge estimator leads to: ‖a_{n,p} b(λ)‖_∞ = max_{j=1,...,p} a_{n,p;j}(σ) |E[β̂_j] − θ⁰_j|

References

Bickel, P.J., Ritov, Y. and Tsybakov, A.B. (2009). Simultaneous analysis of lasso and Dantzig selector. Ann. Statist. 37 1705-1732. MR2533469
Bickel, P.J., Klaassen, C.A.J., Ritov, Y. and Wellner, J.A. (1998). Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer. MR1623559
Bühlmann, P. (2006). Boosting for high-dimensional linear models. Ann. Statist. 34 559-583. MR2281878
Bühlmann, P., Kalisch, M. and Maathuis, M.H. (2010). Variable selection in high-dimensional linear models: Partially faithful distributions and the PC-simple algorithm. Biometrika 97 261-278. MR2650737
Bühlmann, P.
and van de Geer, S. (2011). Statistics for High-dimensional Data: Methods, Theory and Applications. Springer Series in Statistics. Heidelberg: Springer. MR2807761
Bunea, F., Tsybakov, A. and Wegkamp, M. (2007). Sparsity oracle inequalities for the Lasso. Electron. J. Stat. 1 169-194. MR2312149
Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Statist. 35 2313-2351. MR2382644
Dettling, M. (2004). BagBoosting for tumor classification with gene expression data. Bioinformatics 20 3583-3593.
El Karoui, N. (2008). Spectrum estimation for large dimensional covariance matrices using random matrix theory. Ann. Statist. 36 2757-2790. MR2485012
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348-1360. MR1946581
Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B Stat. Methodol. 70 849-911. MR2530322
Fan, J. and Lv, J. (2010). A selective overview of variable selection in high dimensional feature space. Statist. Sinica 20 101-148. MR2640659
Greenshtein, E. and Ritov, Y. (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli 10 971-988. MR2108039
Hastie, T., Tibshirani, R. and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. Springer Series in Statistics. New York: Springer. MR2722294
Huang, J., Ma, S. and Zhang, C.H. (2008). Adaptive Lasso for sparse high-dimensional regression models. Statist. Sinica 18 1603-1618. MR2469326
Ingster, Y.I., Tsybakov, A.B. and Verzelen, N. (2010). Detection boundary in sparse regression. Electron. J. Stat. 4 1476-1526. MR2747131
Knight, K. and Fu, W. (2000). Asymptotics for lasso-type estimators. Ann. Statist. 28 1356-1378. MR1805787
Koltchinskii, V. (2009a). The Dantzig selector and sparsity oracle inequalities. Bernoulli 15 799-828. MR2555200
Koltchinskii, V. (2009b). Sparsity in penalized empirical risk minimization. Ann. Inst. Henri Poincaré Probab. Stat. 45 7-57. MR2500227
Meinshausen, N. (2007). Relaxed Lasso. Comput. Statist. Data Anal. 52 374-393. MR2409990
Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436-1462. MR2278363
Meinshausen, N. and Bühlmann, P. (2010). Stability selection. J. R. Stat. Soc. Ser. B Stat. Methodol. 72 417-473. MR2758523
Meinshausen, N., Maathuis, M. and Bühlmann, P. (2011). Asymptotic optimality of the Westfall-Young permutation procedure for multiple testing under dependence. Ann. Statist. 39 3369-3391. MR3012412
Meinshausen, N., Meier, L. and Bühlmann, P. (2009). p-values for high-dimensional regression. J. Amer. Statist. Assoc. 104 1671-1681. MR2750584
Meinshausen, N. and Yu, B. (2009). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. 37 246-270. MR2488351
Raskutti, G., Wainwright, M.J. and Yu, B. (2010). Restricted eigenvalue properties for correlated Gaussian designs. J. Mach. Learn. Res. 11 2241-2259. MR2719855
Shao, J. and Deng, X. (2012). Estimation in high-dimensional linear models with deterministic design matrices. Ann. Statist. 40 812-831.
Sun, T. and Zhang, C.H. (2012). Scaled sparse linear regression. Biometrika 99 879-898. MR2999166
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267-288. MR1379242
Tropp, J.A. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50 2231-2242. MR2097044
van de Geer, S. (2007). The deterministic Lasso. In JSM Proceedings, 2007 140. American Statistical Association.
van de Geer, S.A. (2008). High-dimensional generalized linear models and the lasso. Ann. Statist. 36 614-645. MR2396809
van de Geer, S.A. and Bühlmann, P. (2009). On the conditions used to prove oracle results for the Lasso. Electron. J. Stat. 3 1360-1392. MR2576316
van de Geer, S., Bühlmann, P. and Zhou, S. (2011). The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso). Electron. J. Stat. 5 688-749. MR2820636
Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing (Y. Eldar and G. Kutyniok, eds.) 210-268. Cambridge: Cambridge Univ. Press. MR2963170
Wainwright, M.J. (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Inform. Theory 55 2183-2202. MR2729873
Wang, H. (2009). Forward regression for ultra-high dimensional variable screening. J. Amer. Statist. Assoc. 104 1512-1524. MR2750576
Wasserman, L. and Roeder, K. (2009). High-dimensional variable selection. Ann. Statist. 37 2178-2201. MR2543689
Westfall, P. and Young, S. (1993). Resampling-based Multiple Testing: Examples and Methods for P-value Adjustment. New York: John Wiley & Sons.
Zhang, C.H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 38 894-942. MR2604701
Zhang, C.H. and Huang, J. (2008). The sparsity and bias of the LASSO selection in high-dimensional linear regression. Ann. Statist. 36 1567-1594. MR2435448
Zhang, C.H. and Zhang, S. (2011). Confidence intervals for low-dimensional parameters with high-dimensional data. Available at arXiv:1110.2563v1.
Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res. 7 2541-2563. MR2274449
Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418-1429. MR2279469
Zou, H. and Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Ann. Statist. 36 1509-1533. MR2435443
[]
A Case Study on Formal Verification of Self-Adaptive Behaviors in a Decentralized System

M. Usman Iftikhar and Danny Weyns
School of Computer Science, Physics and Mathematics, Linnaeus University, Växjö, Sweden

11th International Workshop on Foundations of Coordination Languages and Self Adaptation (FOCLASA'12), EPTCS 91, 2012. DOI: 10.4204/EPTCS.91.4, arXiv: 1208.4635

Abstract. Self-adaptation is a promising approach to manage the complexity of modern software systems. A self-adaptive system is able to adapt autonomously to internal dynamics and changing conditions in the environment to achieve particular quality goals. Our particular interest is in decentralized self-adaptive systems, in which central control of adaptation is not an option. One important challenge in self-adaptive systems, in particular those with decentralized control of adaptation, is to provide guarantees about the intended runtime qualities. In this paper, we present a case study in which we use model checking to verify behavioral properties of a decentralized self-adaptive system. Concretely, we contribute with a formalized architecture model of a decentralized traffic monitoring system and prove a number of self-adaptation properties for flexibility and robustness. To model the main processes in the system we use timed automata, and for the specification of the required properties we use timed computation tree logic. We use the Uppaal tool to specify the system and verify the flexibility and robustness properties.

Introduction

Our society extensively relies on the qualities of software systems, e.g., the reliability of software for media, the performance of software for manufacturing, and the openness of software for enterprise collaborations.
However, ensuring the required qualities of software that has to operate in dynamic environments poses severe engineering challenges. Self-adaptation is generally considered as a promising approach to manage the complexity of modern software systems [12,14,5,15]. Self-adaptation enables a system to adapt itself autonomously to internal dynamics and changing conditions in the environment to achieve particular quality goals.

It is widely recognized that software architecture provides the right level of abstraction and generality to deal with the challenges of self-adaptation [17,8,14]. In particular, the use of an architecture-based approach can provide an appropriate level of abstraction to describe dynamic change in a system, such as the use of components, bindings and composition, rather than at the algorithmic level.

Our particular interest is in decentralized self-adaptive systems, in which central control of adaptation is not an option. Examples are large-scale traffic systems, integrated supply chains, and federated cloud infrastructures. One important challenge in self-adaptive systems, in particular those with decentralized control of adaptation, is to provide guarantees about the required runtime quality properties.

In previous research, we have defined formally founded design models for decentralized self-adaptive systems that cover structural aspects of self-adaptation [24]. These models support engineers with reasoning about structural properties, such as types and interface relations of different parts of the decentralized system. However, in order to provide guarantees about qualities, we need to complement this work with an approach to validate behavioral properties of decentralized self-adaptive systems. The need for research on formal verification of behavioral properties of self-adaptive systems is broadly recognized by the community [16,25,21,15].
This paper reports a first step of our research goal to develop an integrated approach to validate behavioral properties of decentralized self-adaptive systems to guarantee the required qualities [22]. This approach integrates three activities: (1) model checking of the behavior of a self-adaptive system during design, (2) model-based testing of the concrete implementation during development, and (3) runtime diagnosis after system deployment. The key underlying idea of the approach is to enhance validation of qualities by transferring formalization results over different phases of the software life cycle; e.g., model-based testing starts with a verified model and a set of required properties and then intends to show that the implementation of the system behaves compliant with this model.

The focus of this paper is on the first activity. Concretely, we present a case study of a decentralized traffic monitoring system and use model checking to guarantee a number of self-adaptation properties for flexibility and robustness. With flexibility we refer to the ability of the system to adapt dynamically with changing conditions in the environment, and robustness is the ability of the system to cope autonomously with errors during execution. We model the main system processes with timed automata and specify the required properties using timed computation tree logic (TCTL). We use the Uppaal tool that offers an integrated environment for modeling, simulation and verification, based on automata and a subset of TCTL.

The remainder of this paper is structured as follows. In Section 2, we introduce the traffic monitoring system and explain a number of adaptation scenarios. In Section 3, we give a brief background on formal modeling with Uppaal.
Section 4 presents the design model of the traffic monitoring system, and Section 5 explains how we verified key properties and discusses potential uses of the study results, both as input for model-based testing and as a starting point for the definition of a reusable behavior model for self-adaptive systems. We discuss related work in Section 6, and conclude with a summary and challenges ahead in Section 7.

Traffic Monitoring System

Intelligent transportation systems (ITS) is a worldwide initiative to exploit information and communication technology to improve traffic. One of the challenges in this area is effective monitoring of traffic. In [23], we have introduced a monitoring system that provides information about traffic jams. This information can be used to reduce traffic congestion by different types of clients, such as traffic light controllers, driver assistance systems, etc. The main challenges of the system are: (1) inform clients of dynamically changing traffic jams, (2) realize this functionality in a decentralized way, avoiding the bottleneck of a centralized control center, and (3) make the system robust to camera failures. Whereas the focus in [23] was on the structural aspects, here we focus on the behavioral aspects of the system's architecture.

The system consists of a set of intelligent cameras, which are distributed along the road. An example of a highway is shown in Fig. 1. Each camera has a limited viewing range, and cameras are placed to get an optimal coverage of the highway with a minimum overlap. To realize a decentralized solution, cameras collaborate in organizations: if a traffic jam spans the viewing range of multiple cameras, they form an organization that provides information to clients that have an interest in traffic jams. Fig. 1 shows two scenarios that require adaptation.
The first scenario concerns the dynamic adaptation of an organization from T0 to T1, where camera 2 joins the organization of cameras 3 and 4 after it monitors a traffic jam. The second scenario concerns robustness to a silent node failure, i.e., a failure in which a failing camera becomes unresponsive without sending any incorrect data. This scenario is shown from T2 to T3, where camera 2 fails. Since there are dependencies between the software running on different cameras (details below), such failures may bring the system in an inconsistent state and disrupt its services. Therefore, the system should be able to restore its services after a failure, although in degraded mode, since the traffic state is no longer monitored in the viewing range of the failed camera.

Figure 2a shows the primary components of the software deployed on each camera, i.e., the local camera system. The local traffic monitoring system provides the functionality to detect traffic jams and inform clients. It is conceived as an agent-based system consisting of two components. The agent is responsible for monitoring the traffic and collaborating with other agents to report a possible traffic jam to clients. The organization middleware offers life cycle management services to set up and maintain organizations. We employ dynamic organizations of agents to support flexibility in the system; that is, agent organizations dynamically adapt themselves with changing traffic conditions. To access the hardware and communication facilities on the camera, the local traffic monitoring system can rely on the services provided by the distributed communication and host infrastructure.

Dynamic Agent Organizations for Flexibility

In normal traffic conditions, each agent belongs to a single member organization. However, when a traffic jam is detected that spans the viewing range of multiple neighboring cameras, the organizations on these cameras will merge into one organization. To simplify the management of organizations and interactions with clients, the organizations have a master/slave structure. The master is responsible for managing the dynamics of that organization (merging and splitting) by synchronizing with its slaves and neighboring organizations and reporting traffic jams to clients. To this end, the master uses the context information provided by its slaves about their locally monitored traffic conditions. At T0, the example in Fig. 1 shows four single member organizations: org1 with agent1, org2 with agent2, and similar for org5 and org6. Furthermore, there is one merged organization, org34, with agent3 as master and agent4 as slave.
To simplify the management of organizations and interactions with clients, the organizations have a master/slave structure. The master is responsible for managing the dynamics of that organization (merging and splitting) by synchronizing with its slaves and neighboring organizations and reporting traffic jams to clients. Therefore, the master uses the context information provided by its slaves about their local monitored traffic conditions. At T0, the example in Fig. 1 shows four single member organizations, org1 with agent1, org2 with agent2, and similar for org5, and org6. Furthermore, there is one merged organization, org34 with agent3 as master and agent4 Self-Healing Subsystem for Robustness To recover from camera failures, a self-healing subsystem is added to the local traffic monitoring system, as shown in Fig. 2a. The self-healing subsystem maintains a model of the current dependencies of the components of the local traffic monitoring system with other active cameras. Each working camera is in one of three distinct roles: master of a single member organization, master of an organization with slaves, or slave in an organization. As these roles come with certain responsibilities, each camera is dependent on a particular set of remote cameras in order to function properly: (1) a master of a single agent organization is dependent on its neighboring nodes; (2) a master with slaves is dependent on its slaves and its neighboring nodes; (3) a slave of an organization is dependent on its master and its neighboring nodes. To recover from camera failures, the subsystem contains repair actions for failure scenarios in different roles. Examples of actions are: halt the communication with the failed neighboring camera, elect a new master, and exchange the current monitored traffic state with another camera. To detect failures, the self-healing subsystem coordinates with self-healing subsystems on other cameras in the dependency model using a ping-echo mechanism. 
Cameras periodically send ping messages to dependent cameras, and a failure is detected when a camera does not respond with an echo within a certain time. [23] provides a detailed description of the structural architecture of the traffic monitoring system.

Uppaal

Model checking means verifying a given model with respect to a formally expressed requirement specification. Uppaal is a model-checking tool for the verification of behavioral properties [4]. In Uppaal, a system is modeled as a network of timed automata, called processes. A timed automaton is a finite-state machine extended with clock variables. A clock variable evaluates to a real number, and all clocks progress synchronously. It is important to note that fulfilled constraints on the clock values only enable state transitions but do not force them to be taken. A process is an instance of a parameterized template. A template can have locally declared variables, functions, and labeled locations. The syntax to declare functions is similar to that of the C language. The state of the system is defined by the locations of the automata, the clocks, and the variable values.

Uppaal uses a subset of TCTL for defining requirements, called the query language. The query language consists of state formulae and path formulae. State formulae describe individual states with regular expressions such as x >= 0. State formulae can also be used to test whether a process is in a given location, e.g., Proc.loc, where Proc is a process and loc is a location. Path formulae quantify over paths of the model and can be classified into reachability, safety, and liveness properties:

• Reachability properties are used to check whether a given state formula φ can be satisfied by some reachable state. The syntax for writing this property is E<> φ.

• Safety properties are used to verify that "something bad will never happen." There are two path formulae for checking safety properties.
A[] φ expresses that a given state formula φ should be true in all reachable states, and E[] φ means that there should exist a path that is either infinite or ends in a state with no outgoing transitions (a maximal path) such that φ is always true.

• Liveness properties are used to verify that something will eventually hold, which is expressed as A<> φ. The property "whenever φ holds, eventually ψ will happen" is stronger and is expressed as φ → ψ.

Processes communicate with each other through channels. Binary channels are declared as chan x. The sender x! can synchronize with the receiver x? through an edge. If there are multiple receivers x?, then a single receiver is chosen non-deterministically. The sender x! is blocked if there is no receiver. Broadcast channels are declared as broadcast chan x. The syntax for the sender x! and the receiver x? is the same as for binary channels. However, a broadcast channel sends a signal to all receivers, and if there is no receiver, the sender is not blocked. Uppaal also supports arrays of channels. The syntax to declare them is chan x[N] or broadcast chan x[N], and sending and receiving signals are specified as x[id]! and x[id]?. Note that processes cannot pass data through signals. If a process wants to send data to another process, the sender has to put the data in a shared variable before sending the signal, and the receiver gets the data from the shared variable after receiving the signal.

Uppaal offers a graphical user interface (GUI) and a model-checking engine. The GUI consists of three parts: the editor, the simulator, and the verifier. The editor is used to create the templates. The simulator is similar to a debugger; it can be used to manually run the system, showing the running processes, their current locations, and the values of the variables and clocks. The verifier is used to check properties of the model as described above.
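To make the meaning of the path quantifiers concrete, the following small Python sketch (our illustration, not part of the paper and not how Uppaal's verifier works internally) enumerates the reachable states of a toy transition system and evaluates an E<> query (some reachable state satisfies φ) and an A[] query (all reachable states satisfy φ) over it:

```python
# Toy illustration of E<> (reachability) and A[] (safety) evaluated over
# the reachable state space of a small hand-written transition system.
# This mimics the *meaning* of the Uppaal queries, not Uppaal's algorithm.

def reachable(initial, transitions):
    """Collect all states reachable from `initial` via `transitions`."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def E_eventually(initial, transitions, phi):
    """E<> phi: some reachable state satisfies phi."""
    return any(phi(s) for s in reachable(initial, transitions))

def A_always(initial, transitions, phi):
    """A[] phi: every reachable state satisfies phi."""
    return all(phi(s) for s in reachable(initial, transitions))

# A camera-like life cycle (invented for illustration): a Master may become
# MasterWithSlaves or Slave; "Failed" is not reachable in this toy system.
transitions = {
    "Master": ["MasterWithSlaves", "Slave"],
    "MasterWithSlaves": ["Master"],
    "Slave": ["Master"],
}

print(E_eventually("Master", transitions, lambda s: s == "Slave"))    # True
print(A_always("Master", transitions, lambda s: s != "Failed"))       # True
print(E_eventually("Master", transitions, lambda s: s == "Failed"))   # False
```

On this finite, untimed system the two quantifiers reduce to a plain graph search; Uppaal additionally handles clock constraints symbolically.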
Model Design in Uppaal

We now discuss the formal design of the traffic monitoring system in Uppaal. As explained in Section 2, each camera consists of three primary components: agent, organization middleware, and self-healing subsystem. We have designed each of these components as a set of timed automata (templates) that represent abstract processes (the environment processes are shown in Figure 3). Fig. 2b (Section 2.1) shows how the processes map to the components of the system. To instantiate a particular system model, each template is instantiated to one or several concrete processes. We use channels to enable processes to communicate within a camera and between cameras. To that end, the id of the receiver camera is used. We start by defining the different templates of the system. Then we explain how the templates are instantiated into a concrete system model.

Environment Processes

The environment is modeled as two simple timed automata: Release Traffic and Car.

• Release Traffic is an abstract model of the traffic environment. Fig. 3a shows the template. The purpose of the traffic release process is to feed the system with cars after some non-deterministic time CAR GAP. The variable x is a local clock, and whenever its value is greater than CAR GAP, the startCar signal is emitted. Only one instance of the release traffic process is running at any time.

• Car is the abstract model of a car in the environment. Fig. 3b shows the template. A car waits for the startCar signal from the release traffic process. Once started, the car moves along the subsequent viewing ranges of the cameras. Whenever a car enters or leaves the viewing range of a particular camera, it emits a signal. This allows the camera agents to monitor traffic congestion. As in real traffic, the car template has many running instances, each representing a car in the system.

Agent Processes

An agent is modeled as two timed automata: Camera and Traffic Monitor.

• Each Camera has four basic states.
In normal operation, the camera can be master with no slaves, master of an organization with slaves, or slave. Additionally, the camera can be in the failed state, representing the status of the camera after a silent node failure. Fig. 4a shows the template. There is an instance with a unique id for each camera. Cameras in the master status are responsible for communicating the traffic conditions to clients, but this functionality is not modeled here.

• Traffic Monitor keeps track of the actual traffic conditions based on the signals it receives from the cars and determines traffic congestion. It interacts with the local organization controller to handle organization management. Fig. 4b shows the template.

Organization Middleware

The organization middleware is modeled as one timed automaton: Organization Controller.

• Organization Controller is responsible for managing organizations, based on the information it gets from the traffic monitor process of the agent. Fig. 5 shows the template. An organization middleware process runs on each camera. A camera starts as master of a single member organization. When a congestion is detected, the organization controller sends a request org signal to the neighboring camera in the direction of the traffic flow. Depending on the traffic conditions of the neighboring camera and the current role of that camera, the organizations may be restructured as follows. If traffic is not jammed in the viewing range of the neighbor, the organizations are not changed. If traffic is jammed and the neighbor is Slave or MasterWithSlaves, the camera joins the organization as a slave. If both are masters of single member organizations (Master), the camera with the highest id becomes master with slaves of the joined organization (transition OrgOfferMaster to TurningMasterWithSlaves to MasterWithSlaves), while the other becomes slave (transition OrgOfferMaster to TurningSlave to Slave). A master with slaves can add and remove slaves dynamically.
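The restructuring rules above can be condensed into a small Python function (our own abstraction of the Organization Controller's decision logic; the function name and return convention are invented for illustration and do not appear in the Uppaal model):

```python
# Illustrative sketch of the organization restructuring rules described
# above. Role names follow the paper; the function itself is our invention.

MASTER = "Master"                        # master of a single member organization
MASTER_WITH_SLAVES = "MasterWithSlaves"  # master of an organization with slaves
SLAVE = "Slave"

def handle_merge_request(requester_id, neighbor_id,
                         neighbor_role, neighbor_jammed):
    """Outcome of a congested Master contacting its neighbor in the
    direction of the traffic flow. Returns (requester_role, neighbor_role)."""
    if not neighbor_jammed:
        # Neighbor's viewing range is not jammed: organizations unchanged.
        return MASTER, neighbor_role
    if neighbor_role in (SLAVE, MASTER_WITH_SLAVES):
        # Requester joins the neighboring organization as a slave.
        return SLAVE, neighbor_role
    # Both are masters of single member organizations: the camera with the
    # highest id becomes master with slaves, the other becomes its slave.
    if requester_id > neighbor_id:
        return MASTER_WITH_SLAVES, SLAVE
    return SLAVE, MASTER_WITH_SLAVES

# Example: cameras 3 and 4 both detect a jam; camera 4 has the highest id.
print(handle_merge_request(3, 4, MASTER, True))   # ('Slave', 'MasterWithSlaves')
```

In the actual model these decisions are taken via channel synchronizations between the two organization controllers rather than by a single function call.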
When no slave remains, the master with slaves becomes master again. Whenever the role of a camera changes, the organization controller informs the camera process to update its status via the signals slave[id], master[id], or masterWithSlaves[id], respectively. If the organization controller receives the stopCam signal, it goes to the Failure state, which represents a silent node failure. The controller does not respond until it is recovered via the startCam signal.

Self-Healing Subsystem

The self-healing subsystem is modeled as two automata: Self-Healing Controller and Pulse Generator.

• Self-Healing Controller is used to detect failures of other cameras based on a ping-echo mechanism. Fig. 6a shows the template. A self-healing controller process runs on each camera. The self-healing controller periodically sends isAlive[ping] signals (based on WAIT TIME) to the self-healing controllers of the dependent cameras. If a camera does not respond within a certain time (ALIVE TIME), it adapts the organization controller, either by removing a dependency in case a slave failed, or by restructuring the organization in case the master of the organization failed.

• Pulse Generator is responsible for responding to the ping signals sent by other cameras to check whether a particular camera is alive or not. Fig. 6b shows the template.

Model Instance

A concrete system model is defined in Uppaal's system declarations section by listing the processes that have to be composed into a system. The declaration defines a setup with 6 camera systems. As each local camera system has 5 processes (see Fig. 2), we have 30 processes running, plus the environment processes. Unique identifiers are used to identify related processes that work together. We have used an array of channels to enable processes to communicate within each camera and between cameras. If a process has to communicate with another process of the same camera, it uses the id of the camera.
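The ping-echo failure detection performed by the Self-Healing Controller can be sketched in Python as follows (a simplified, discrete-time illustration; the class and method names are ours, and the Uppaal model realizes the same behavior with clocks and the isAlive channel array instead):

```python
# Simplified, discrete-time sketch of the ping-echo failure detection
# described above. All names here are our own illustration; the Uppaal
# model uses clock constraints (WAIT TIME, ALIVE TIME) and channels.

WAIT_TIME = 2    # interval between ping rounds (illustrative value)
ALIVE_TIME = 5   # time after a ping without an echo before failure is assumed

class PingEchoMonitor:
    def __init__(self, dependencies):
        self.dependencies = set(dependencies)  # cameras this camera depends on
        self.last_echo = {}                    # camera id -> time of last echo
        self.failed = set()

    def send_pings(self, now):
        """Ping every dependent camera; start the echo timer if needed."""
        for cam in self.dependencies - self.failed:
            self.last_echo.setdefault(cam, now)

    def receive_echo(self, cam, now):
        """A Pulse Generator on camera `cam` answered our ping."""
        self.last_echo[cam] = now

    def check(self, now):
        """Mark cameras as failed if no echo arrived within ALIVE_TIME."""
        for cam, t in self.last_echo.items():
            if now - t > ALIVE_TIME and cam not in self.failed:
                self.failed.add(cam)
        return self.failed

# Camera 3 depends on master 2 and neighbor 4; only camera 4 answers.
monitor = PingEchoMonitor(dependencies=[2, 4])
monitor.send_pings(now=0)
monitor.receive_echo(4, now=1)
print(monitor.check(now=6))   # {2}: camera 2 never echoed within ALIVE_TIME
```

Once a failure is detected, the real controller additionally triggers the repair actions described in Section 2, which this sketch omits.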
If a process wants to communicate with a process of another camera, it uses the id of the other camera.

Model Checking

Based on the formal design of the traffic monitoring system, we can now check properties of the traffic monitoring system with Uppaal's verifier. We have divided the properties into three groups: system invariants, correctness of dynamic organization adaptations, and correctness of robustness to silent node failures. We start by defining the properties. Then we discuss the design and verification process.

System Invariants

The following properties should hold:

• All cameras cannot be slave at the same time (otherwise the system would not provide its function; that is, there would be no masters that inform clients about traffic congestion):

I1: A[] not forall(i: cam_id) Camera(i).Slave

• The system should not be in deadlock at any time. A deadlock may for example occur when the self-healing controllers are waiting on each other for responses to ping messages and none of them is able to send a response. Such situations should obviously be avoided.

I2: A[] not deadlock

Checking for deadlock is directly supported by Uppaal.

Flexibility Properties

To guarantee that the system adapts itself dynamically to the changing traffic conditions, we define properties that allow verification of correct merges of organizations, such as the first scenario described in Fig. 1. In this scenario, camera 1 merges with the existing organization of cameras 2 and 3. In the resulting joined organization at T1, camera 1 is master and the other cameras are slaves. The properties for verifying correct merging are formulated as follows:

• When the organization controller of a camera detects congestion (thus being master of a single member organization), then the camera merges with the neighboring organization in the direction of the traffic flow if this organization is detecting jammed traffic.
Formally, we distinguish between three properties according to the current role of the neighboring camera, i.e., master with slaves, slave, and master of a single member organization. These properties are defined as follows. In case the neighboring camera is master with slaves (property F1), the camera joins the organization and becomes master. In case the neighboring camera is slave (property F2), the camera joins the neighboring organization as slave. In case both are masters of single member organizations (property F3), they merge and one of them becomes master.

Robustness Properties

To guarantee that the system recovers from a silent node failure, we define properties that allow verification of correct adaptations of organizations, such as the second scenario described in Fig. 1. In this scenario camera 2, which is the master of an organization with two slaves, fails at T3. Subsequently, the slaves elect a new master and the system recovers from the failure.

To introduce failing cameras in the system, we modeled a virtual environment template that allows us to create sequences of traffic conditions, such as those described in Fig. 1. Fig. 7 shows this template. We focus here on properties to verify robustness against a failure of a camera in the role of master with slaves, as in the scenario of Fig. 1. The properties to verify robustness for failing cameras in other roles are similar.

Verifying robustness of a failing master consists of three parts:

1. When the master camera fails, eventually the self-healing controllers of the slaves detect the failure,
2. When the slaves have detected the failure, the organization controllers of the slaves form a new organization,
3. Finally, the cameras continue their function and monitor traffic jams.

To verify the correct recovering of the organization we defined the following properties:

• When camera 2 fails, then eventually the self-healing controllers of the slaves detect the failure.
R1: A<> SelfHealingController(2).Failed imply
    SelfHealingController(3).FailureDetected &&
    SelfHealingController(4).FailureDetected

• If organization controller 2 fails and organization controller 3 or 4 has detected this (by switching to master), then eventually the organization controller of either camera 3 or 4 switches to master with slaves, and the other camera becomes slave.

R2: OrganizationController(2).Failed &&
    ((OrganizationController(3).Master && OrganizationController(4).Slave) ||
     (OrganizationController(4).Master && OrganizationController(3).Slave))
    -->
    OrganizationController(2).Failed &&
    ((OrganizationController(3).MasterWithSlaves && camera[3].slaves[4]) ||
     (OrganizationController(4).MasterWithSlaves && camera[4].slaves[3]))

• When camera 2 fails, then eventually cameras 3 and 4 will continue monitoring a traffic jam as a correct organization.

R3: A<> Camera(2).Failed imply
    ((Camera(3).MasterWithSlaves && camera[3].slaves[4]) ||
     (Camera(4).MasterWithSlaves && camera[4].slaves[3]))

Finally, we verify whether neighbor relations are correctly restored after a failure:

R4: A<> forall(n: cam_id) forall(x: cam_id)
    (Camera(n).Failed && Camera(x).getLeftNeighbour() == n imply
     SelfHealingController(x).FailureDetected &&
     Camera(x).getLeftNeighbour() == n - 1)

We defined a similar rule for neighbors on the right hand side.

Design and Verification Process

For the design of the models of the managed system (Camera, Traffic Monitor, and Organization Controller) and the managing system (Self-Healing Controller and Pulse Generator), we used Uppaal's simulator to check and correct the design. In this stage, we used the environment models (Release Traffic and Car) shown in Fig. 3 to test the different models. Then we defined the system invariants and the flexibility and robustness properties. To verify these properties, we designed the virtual environment template shown in Fig. 7.
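The recovery behavior that properties R1-R3 capture can be sketched as a small Python function (our own illustration with invented names; note that the model leaves the choice of the new master non-deterministic, whereas this sketch resolves it deterministically by highest id, mirroring the merge rule from Section 4):

```python
# Illustrative sketch of the master-failure recovery captured by R1-R3:
# the surviving slaves detect the failed master and reorganize, with one
# of them becoming the new master. The election rule "highest id wins" is
# our deterministic stand-in for the model's non-deterministic choice.

def recover_from_master_failure(org, failed_master):
    """org maps camera id -> role; returns the reorganized mapping
    after `failed_master` has silently failed."""
    survivors = [cam for cam in org if cam != failed_master]
    assert survivors, "an organization needs at least one surviving camera"
    new_master = max(survivors)          # stand-in election rule
    new_org = {}
    for cam in survivors:
        if cam == new_master:
            # A lone survivor is plain Master; otherwise MasterWithSlaves.
            new_org[cam] = "MasterWithSlaves" if len(survivors) > 1 else "Master"
        else:
            new_org[cam] = "Slave"
    return new_org

# Scenario of Fig. 1: master 2 with slaves 3 and 4 fails.
org = {2: "MasterWithSlaves", 3: "Slave", 4: "Slave"}
print(recover_from_master_failure(org, failed_master=2))
# {3: 'Slave', 4: 'MasterWithSlaves'}
```

Either outcome allowed by R2 (camera 3 or camera 4 as new master) satisfies the property; the sketch simply fixes one of them.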
This restricted environment definition was needed, as verification of an arbitrary environment with randomly injected camera failures suffers from the state space explosion problem. On the other hand, the virtual environment has the advantage that we could define the scenarios we wanted to verify with different numbers of cameras and increasing complexity.

To give an idea of the time required for verification, we verified the different properties for an increasing number of cameras and measured the verification time. Verification for checking that the system works properly (invariant I1) increases from 1.2 sec for 6 cameras to 197.3 sec for 60 cameras, and from 7.2 to 1330.6 sec for checking deadlock (I2). We also measured the verification time for the flexibility and robustness properties. Fig. 8 shows the results for properties F2 and R3. The figure shows that the verification time for these properties grows quasi-linearly with the number of cameras.

By activating the "diagnostic trace" option, Uppaal can show counterexamples in the simulator environment when a property is violated. This option allows us to analyze the system's behavior with respect to the property and identify possible design errors. At the end of the design of the traffic monitoring system, all properties were satisfied, which was a requirement as they are defined as system requirements. The Uppaal models of the traffic monitoring system together with a prototype Java implementation of the system are available for download via: http://homepage.lnu.se/staff/daweaa/TrafficCaseUppaal.html.

Discussion

In this section, we discuss two topics. We start with explaining how the work presented in this paper fits in the integrated approach for validating quality properties of self-adaptive systems. Next, we discuss a reusable behavior model for self-adaptive systems that we derived from our study.
As explained in the introduction, the work presented in this paper fits in an integrated approach that aims to exploit formalization results of model checking to support model-based testing of concrete implementations and runtime diagnosis after system deployment. To that end, it is our goal to employ the verified models to test the prototype implementation using model-based testing. The goal of model-based testing is to show that the implementation of the system behaves in compliance with this model. Model-based testing uses a concise behavioral model of the system under test, and automatically generates test cases from the model. As the focus of model-based testing so far has mainly been on functional correctness of software systems [20], and self-adaptation is primarily concerned with quality properties, we face several challenges here. A key challenge is to identify the required models to support model-based testing of quality properties. We believe that environment models are a sine qua non for model-based testing of runtime qualities, which is central to self-adaptation. In our case study, it is the environment model that specifies the failure events that have to be tested. An explicit model of the environment allows an engineer to precisely specify the failure scenarios of interest and the conditions under which the failures happen. For example, in the scenario shown in Fig. 7, a camera failure is generated after traffic is congested, which allows testing the correctness of the system when one of the cameras of an organization that monitors a traffic jam fails. Another challenge is to define proper test selection. As exhaustive testing of realistic systems is typically not feasible, the tester needs to steer test selection. As an example, to test the self-healing scenario described in Fig. 1, we could mark the RecoverFailure state in the automaton shown in Fig. 7 as a success state.
We can then formulate a reachability property to check whether the system will always reach the RecoverFailure state after the camera has failed. This property can be issued to the model checker to test whether the implementation conforms to the model with respect to this property. Finally, from our experiences with the case study, we derived an interesting model that maps the different types of behaviors of self-adaptive systems to zones of the state space. Fig. 9 shows an overview of the model.

Figure 9: Zones in the state space that represent different behaviors of a self-adaptive system

In the zone normal behavior, the system is performing its domain functionality. In our case study, this corresponds to monitoring traffic jams. In the zone undesired behavior, the system is in a state where adaptation is required. In the case study, this corresponds either to a state where a reorganization is required or to a state where a camera has failed. In the zone adaptive behavior, the system is adapting itself to deal with the undesired behavior. In the case study, this means either organizations are merging or splitting, or the system heals itself from a failure. Finally, the zone invalid behavior corresponds to states where the system should never be, e.g., deadlock in the case study. Properties of interest with respect to self-adaptation typically map to transitions between different zones. For example, property p1 in Fig. 9 refers to a transition from normal behavior to undesired behavior (e.g., property R1). Property p2 refers to the required adaptation of the system to deal with the undesired behavior, that is, the system leaves the undesired state, adapts itself and eventually returns to normal behavior (e.g., R2 to R4). Properties p3 and p4 are examples of transitions to invalid behavior that should never occur (e.g., I1 and I2).
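The zone model also lends itself to simple trace-based checks. As a hypothetical illustration (the state names and zone mapping below are invented for this sketch, not taken from the Uppaal model), the following Python function checks a p2-style property on a finite trace: every undesired state must eventually be followed by adaptive behavior and then a return to normal behavior, and invalid states must never occur:

```python
# Hypothetical mapping of concrete states to the zones of Fig. 9.
ZONES = {"monitoring": "normal", "camera_failed": "undesired",
         "healing": "adaptive", "deadlock": "invalid"}

def satisfies_p2(trace):
    """Check a p2-style property on a finite trace: after every
    undesired state, the system adapts and eventually returns to
    normal behavior; invalid states never occur."""
    zones = [ZONES[s] for s in trace]
    for i, z in enumerate(zones):
        if z == "undesired":
            rest = zones[i + 1:]
            if "adaptive" not in rest:
                return False
            if "normal" not in rest[rest.index("adaptive"):]:
                return False
    return "invalid" not in zones

print(satisfies_p2(["monitoring", "camera_failed", "healing", "monitoring"]))  # True
print(satisfies_p2(["monitoring", "camera_failed", "deadlock"]))  # False
```

A model checker verifies such properties over all reachable behaviors rather than over single traces, but the zone-based formulation is the same.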
Related Work

We discuss related work on formal modeling of self-adaptation in three parts: fault-tolerance and self-repair, verification of various properties, and integrated approaches. We conclude with a brief discussion of the position of the work presented in this paper.

Fault-tolerance and self-repair. [25] introduces an approach to create formal models for the behavior of adaptive programs. The authors combine Petri net modeling with LTL for property checking, including correctness of adaptations and robustness properties. [10] presents a case study in formal modeling and verification of a robotic system with self-x properties that deal with failures and changing goals. The system is modeled as transition automata and correctness is checked using LTL and CTL (computational tree logic). [16] outlines an approach for modeling and analyzing fault tolerance and self-adaptive mechanisms in distributed systems. The authors use a modal action logic formalism, augmented with deontic operators, to describe normal and abnormal behavior. [7] models a program as a transition system, and presents an approach that ensures that, once faults occur, the fault-intolerant program is upgraded to its fault-tolerant version at run-time.

Various properties. [3] presents a verification technique for multi-agent systems from the mechatronic domain that exploits locality. The approach is based on graphs and graph transformations, and safety properties of the system are encoded as inductive invariants. [13] presents PobSAM, a flexible actor-based model that uses policies to control and adapt the system behavior. The authors use the actor-based language Rebeca to check correctness and stability of adaptations. [11] presents a formal verification procedure to check for correct component refinements, which preserves properties verified for the abstract protocol definition. A reachability analysis is performed using timed story charts.
[2] considers self-adaptive systems as a subclass of reactive systems. CSP (Communicating Sequential Processes) is used for the specification, verification and implementation of self-adaptive systems.

Integrated approaches. [9] uses architectural constraints specified in Alloy as the basis for the specification, design and implementation of self-adaptive architectures for distributed systems. [18] proposes a model-based framework for developing robotic systems, with a focus on performance and failure handling. The system's behaviour is modelled as hybrid automata, and a dedicated language is proposed to specify reconfiguration requirements. The K-Components Framework [6] offers an integrated design approach for decentralized self-adaptation in which the system's architecture is represented as a graph. A configuration manager monitors events, plans the adaptations, validates them, rewrites the graph and adapts the underlying system. [1] presents the PLASTIC approach that supports context-aware adaptive services. PLASTIC uses Chameleon, a formal framework for adaptive Java applications.

Position of our work. In this paper, we focus on modeling and verifying combined properties of flexibility and robustness of a real-world system. To that end, we use a well-established formal method, i.e., model checking via the Uppaal tool. Most existing formal approaches for self-adaptive systems assume a central point of control to realize adaptations. We target systems in which control of adaptations is decentralized, that is, managing systems detect the need for adaptations and coordinate to realize the required adaptations locally. Most researchers employ formal methods in one stage of the software life cycle; notable exceptions are [9, 25, 18].
The formal approach used in this paper supports architectural design, but fits in an integrated, formally founded approach to validate the qualities of self-adaptive systems that aims to exploit formal work products during subsequent stages of the software life cycle.

Conclusions and Challenges Ahead

In this paper, we presented a case study on formal modeling and verification of a decentralized self-adaptive system. The Uppaal tool allowed us to model the system and verify the required flexibility and robustness properties. We defined a dedicated environment model both to verify specific adaptation scenarios and to manage the state space problem. This work fits in our long-term research objective to develop an integrated approach for formal analysis of decentralized self-adaptive systems that combines verification of architectural models with model-based testing of applications to guarantee the required runtime qualities. As the next step in our research, we plan to build upon the work presented in this paper in two ways. First, we plan to study how we can apply verified architecture models to test concrete implementations using model-based testing [19]. As model-based testing has mainly focused on functional testing so far, a key challenge here is to extend the approach to test quality attributes. Second, we plan to elaborate on the initial zone-based model of self-adaptive systems that defines the different types of behavior of this class of systems. To that end, we are currently performing a systematic literature review on model checking of self-adaptive systems to map existing work on the model.

Figure 1: Self-healing scenario
Figure 2: Local Camera System
[Caption fragment] ... as slave. At T1, the traffic jam spans the viewing range of cameras 2, 3 and 4. As a result, organizations org2 and org34 have merged to form org24 with agent2 as master. When the traffic jam resolves, the organization is split dynamically.
Figure 4: Agent processes
Figure 5: Organization
Figure 6: Processes self-healing subsystem
Figure 8: Verification time for increasing number of cameras

... shows the template. For each camera, one traffic monitoring process instance is running all the time. Whenever a car enters the viewing range of a camera, the traffic monitor detects the car via the camEnter channel. Similarly, when a car goes out of the range of a camera, the traffic monitor detects this through the camLeave channel. The traffic monitor determines a traffic jam by comparing the total number of cars in its viewing range with the CAPACITY. Based on this, the monitor may interact with the organization controller to adapt the organizations.

(a) Camera template (locations: Master, MasterWithSlaves, Slave, Failed; transitions on the startCam, stopCam, master, slave, and masterWithSlaves channels)
(b) Traffic monitor template (locations: Congestion, NoCongestion; transitions on camEnter/camLeave updating the cars counter, with cong and no_cong signals guarded by CAPACITY)

To define a concrete model, each template has to be instantiated to concrete processes. The process instances are defined in Uppaal's project declarations section:

system Camera, TrafficMonitor, OrganizationController, SelfHealingController, PulseGenerator, ReleaseTraffic, Car;

const int N = 6; // # Camera
typedef int[0, N-1] cam_id;
...
// Global Constants
const int CAM_TIME = 10;
const int RECOVER_TIME = 500;
...
// Channels
chan startCar;
chan camEnter[N], camLeave[N], no_cong[N];
...
chan reqTrafficJam, repTrafficJam[N];
broadcast chan request_org[N], change_master[N], congestion[N];
...
is defined in the template's declarations section as follows:

clock x;
cam_id id = 0;

bool isTrafficJam(cam_id id){
    if (id == 0) return false;
    if (id == 1) return true;
    if (id == 2) return true;
    if (id == 3) return true;
    if (id == 4) return false;
    if (id == 5) return false;
    return false;
}

bool isStopped(cam_id id){
    if (id == 0) return false;
    if (id == 1) return true;
    if (id == 2) return false;
    if (id == 3) return false;
    if (id == 4) return false;
    if (id == 5) return false;
    return false;
}

bool isStarted(cam_id id){ ... }

void setMasterIDs(){
    for (i : cam_id)
        camera[i].m_cam = i;
    id = N-1;
}

Figure 7: Environment template to inject camera failures

Footnotes:
- http://ec.europa.eu/transport/its/, http://www.its.dot.gov/
- startCar is the initial location marked by two circles. Committed states are marked with C. These states cannot delay, and the next transition must involve an outgoing edge of at least one of the committed locations.
- For the scenario with 6 cameras, we assigned the first camera as the neighbor of the last and vice versa.

F1: A<> forall(n : cam_id) OrganizationController(n).CongestionDetected && Camera(n+1).MasterWithSlaves imply Camera(n).MasterWithSlaves && camera[n].slaves[n+1]
F2: A<> forall(n : cam_id) forall(x : cam_id) OrganizationController(n).CongestionDetected && Camera(n+1).Slave && camera[x].slaves[n+1] imply Camera(x).MasterWithSlaves && camera[x].slaves[n]

F3: A<> forall(n : cam_id) OrganizationController(n).CongestionDetected && Camera(n+1).Master && OrganizationController(n+1).CongestionDetected imply Camera(n).MasterWithSlaves && camera[n].slaves[n+1] || Camera(n+1).MasterWithSlaves && camera[n+1].slaves[n]

References

[1] M. Autili, P. Di Benedetto & P. Inverardi (2009): Context-Aware Adaptive Services: The PLASTIC Approach. In: Fundamental Approaches to Software Engineering, Lecture Notes in Computer Science 5503, Springer, doi:10.1007/978-3-642-00593-0_9.
[2] B. Bartels & M. Kleine (2011): A CSP-based framework for the specification, verification, and implementation of adaptive systems. In: Software Engineering for Adaptive and Self-Managing Systems, ACM, doi:10.1145/1988008.1988030.
[3] B. Becker, D. Beyer, H. Giese, F. Klein & D. Schilling (2006): Symbolic invariant verification for systems with dynamic structural adaptation. In: 28th International Conference on Software Engineering, doi:10.1145/1134285.1134297.
[4] G. Behrmann et al. (2006): UPPAAL 4.0. In: International Conference on Quantitative Evaluation of Systems.
[5] B. Cheng et al. (2009): Software Engineering for Self-Adaptive Systems: A Research Roadmap. In: Software Engineering for Self-Adaptive Systems, LNCS 5525, Springer, doi:10.1007/978-3-642-02161-9_1.
[6] J. Dowling (2004): Decentralised Coordination of Self-Adaptive Components for Autonomic Distributed Systems. Ph.D. thesis, University of Dublin, Trinity College.
[7] A. Ebnenasir (2007): Designing Run-Time Fault-Tolerance Using Dynamic Updates. In: Software Engineering for Adaptive and Self-Managing Systems, IEEE, doi:10.1109/SEAMS.2007.5.
[8] D. Garlan, S. Cheng, A. Huang, B. Schmerl & P. Steenkiste (2004): Rainbow: Architecture-Based Self-Adaptation with Reusable Infrastructure. IEEE Computer 37(10), doi:10.1109/MC.2004.175.
[9] I. Georgiadis, J. Magee & J. Kramer (2002): Self-organising software architectures for distributed systems. In: 1st Workshop on Self-healing Systems, ACM, doi:10.1145/582129.582135.
[10] M. Gudemann, F. Ortmeier & W. Reif (2006): Formal Modeling and Verification of Systems with Self-x Properties. Lecture Notes in Computer Science, Springer, doi:10.1007/11839569_4.
[11] C. Heinzemann & S. Henkler (2011): Reusing dynamic communication protocols in self-adaptive embedded component architectures. In: 14th Symposium on Component-based Software Engineering, ACM, doi:10.1145/2000229.2000246.
[12] J. Kephart & D. Chess (2003): The Vision of Autonomic Computing. IEEE Computer Magazine 36(1), doi:10.1109/MC.2003.1160055.
[13] N. Khakpour, R. Khosravi, M. Sirjani & S. Jalili (2010): Formal analysis of policy-based self-adaptive systems. In: Symposium on Applied Computing, ACM, doi:10.1145/1774088.1774613.
[14] J. Kramer & J. Magee (2007): Self-Managed Systems: An Architectural Challenge. In: Future of Software Engineering, doi:10.1109/FOSE.2007.19.
[15] R. de Lemos et al. (2012): Software Engineering for Self-Adaptive Systems: A Second Research Roadmap. In: Software Engineering for Self-Adaptive Systems, LNCS 7475, Springer.
[16] J. Magee & T. Maibaum (2006): Towards specification, modelling and analysis of fault tolerance in self managed systems. In: Self-adaptation and Self-managing Systems, ACM, doi:10.1145/1137677.1137684.
[17] P. Oreizy, N. Medvidovic & R. Taylor (1998): Architecture-based runtime software evolution. In: International Conference on Software Engineering, doi:10.1109/ICSE.1998.671114.
[18] L. Tan (2006): Model-Based Self-Adaptive Embedded Programs with Temporal Logic Specifications. In: 6th International Conference on Quality Software, doi:10.1109/QSIC.2006.41.
[19] M. Utting & B. Legeard (2007): Practical Model-Based Testing: A Tools Approach. Morgan-Kaufmann.
[20] M. Utting et al. (2011): A taxonomy of model-based testing approaches. Software Testing, Verification and Reliability, doi:10.1002/stvr.456.
[21] E. Vassev & M. Hinchey (2009): ASSL: A Software Engineering Approach to Autonomic Computing. Computer 42(6), pp. 90-93, doi:10.1109/MC.2009.174.
[22] D. Weyns (2012): Towards an Integrated Approach for Validating Qualities of Self-Adaptive Systems. In: ISSTA Workshop on Dynamic Analysis (WODA), doi:10.1145/2338966.2336803.
[23] D. Weyns, R. Haesevoets, A. Helleboogh, T. Holvoet & W. Joosen (2010): The MACODO Middleware for Context-Driven Dynamic Agent Organizations. ACM Transactions on Autonomous and Adaptive Systems 5(1), doi:10.1145/1671948.1671951.
[24] D. Weyns, S. Malek & J. Andersson (2012): FORMS: Unifying Reference Model for Formal Specification of Distributed Self-Adaptive Systems. ACM Transactions on Autonomous and Adaptive Systems 7(1), doi:10.1145/2168260.2168268.
[25] J. Zhang & B. Cheng (2006): Model-based development of dynamically adaptive software. In: 28th International Conference on Software Engineering, ACM, doi:10.1145/1134285.1134337.
Throughput-Fairness Tradeoffs in Mobility Platforms

Arjun Balasingam, Karthik Gopalakrishnan, Radhika Mittal, Venkat Arun, Ahmed Saeed, Mohammad Alizadeh, Hamsa Balakrishnan, Hari Balakrishnan
Massachusetts Institute of Technology; University of Illinois at Urbana-Champaign
doi:10.1145/3458864.3467881, arXiv:2105.11999
CCS CONCEPTS
• Computer systems organization → Robotics; Sensor networks; • Computing methodologies → Planning and scheduling; • Applied computing → Transportation; • Networks → Network resources allocation

KEYWORDS
mobility platforms, vehicle routing, aerial sensing, ridesharing, resource allocation, optimization

ABSTRACT
This paper studies the problem of allocating tasks from different customers to vehicles in mobility platforms, which are used for applications like food and package delivery, ridesharing, and mobile sensing. A mobility platform should allocate tasks to vehicles and schedule them in order to optimize both throughput and fairness across customers. However, existing approaches to scheduling tasks in mobility platforms ignore fairness. We introduce Mobius, a system that uses guided optimization to achieve both high throughput and fairness across customers. Mobius supports spatiotemporally diverse and dynamic customer demands.
It provides a principled method to navigate inherent tradeoffs between fairness and throughput caused by shared mobility. Our evaluation demonstrates these properties, along with the versatility and scalability of Mobius, using traces gathered from ridesharing and aerial sensing applications. Our ridesharing case study shows that Mobius can schedule more than 16,000 tasks across 40 customers and 200 vehicles in an online manner.

INTRODUCTION

The past decade has seen the rapid proliferation of mobility platforms that use a fleet of mobile vehicles to provide different services. Popular examples include package delivery (UPS, DHL, FedEx, Amazon), food delivery (DoorDash, Grubhub, Uber Eats), and rideshare services (Uber, Lyft). In addition, new types of mobility platforms are emerging, such as drones-as-a-service platforms [21, 27, 32, 48] for deploying different sensing applications on a fleet of drones. In these mobility platforms, the vehicle fleet of cars, vans, bikes, or drones is a shared infrastructure. The platform serves multiple customers, with each customer requiring a set of tasks to be completed. For instance, each restaurant subscribing to DoorDash is a customer, with several food delivery orders (or tasks) in a city. Similarly, an atmospheric chemist and a traffic analyst might subscribe to a drones-as-a-service platform, each with their own sensing applications to collect air quality measurements and traffic videos, respectively, at several locations in the same urban area. Multiplexing tasks from different customers on the same vehicles can increase the efficiency of mobility platforms because vehicles can amortize their travel time by completing co-located tasks (belonging to either the same or different customers) in the same trip. We study the problem of scheduling spatially distributed tasks from multiple customers on a shared fleet of vehicles.
This problem involves (i) assigning tasks to vehicles and (ii) determining the order in which each vehicle must complete its assigned tasks. The constraints are that each vehicle has bounded resources (fuel or battery). While several variants of this scheduling problem have been studied, the objective has typically been to complete as many tasks as possible in bounded time, or to maximize aggregate throughput (task completion rate) [23, 44]. We identify a second, equally important, scheduling requirement, which has emerged in today's customer-centric mobility platforms: fairness of customer throughput, to ensure that tasks from different customers are fulfilled at similar rates. For example, in food delivery, the platform should serve restaurants equitably, even if it means spending time or resources on restaurants with patrons far from the current location of the vehicles. A ridesharing platform should ensure that riders from different neighborhoods are served equitably, which ridesharing platforms today do not handle well, a phenomenon known as "destination discrimination" [35, 45, 49]. We seek an online scheduler for mobility platforms that achieves both high throughput and fairness. A standard approach to achieving these goals is to track the resource usage and work done on behalf of different users in a fine-grained way and equalize resource consumption across users. Such fine-grained accounting and attribution is difficult with shared mobility: the resource used is a moving vehicle traveling toward its next task, but making that trip has a knock-on benefit, not only for the next task served, but for subsequent ones as well. However, the benefit of a specific trip is not equal across the subsequent tasks. Although it may be possible to develop a fair scheduler that achieves high throughput using fine-grained accounting and attribution, it is likely to be complex.
We turn, instead, to an approach that has been used in both societal and computing systems: optimization, which may be viewed as a search through a set of feasible schedules to maximize a utility function. In our case, we can establish such a function, optimize it using both the task assignment and path selection, and then route vehicles accordingly. In a typical mobility problem, the planning time frame for optimization could be between 30 minutes and several hours, involving hundreds of vehicles, dozens of customers, and tens of thousands of tasks. The scale of this problem pushes the limits of state-of-the-art vehicle routing solvers [7]. Moreover, fairness objectives lead to nonlinear utility functions, which make the optimization much more challenging. As a benchmark, optimizing the routes for 3 vehicles and 17 tasks over 1 hour, using the CPLEX solver [28] with a nonlinear objective function, takes over 10 hours [36]. To address these problems, a natural approach is to divide the desired time duration into shorter rounds, and then run the utility optimization. When we do this, something interesting emerges in mobility settings: the space of feasible solutions (each solution being an achievable set of rates for the customers) often collapses into a rather small and disturbingly suboptimal set! These feasible solutions are either fair but with dismal throughput, or with excellent throughput but starving several customers. A simple example helps see why this happens. Consider a map with three areas, A1, A2, A3, each distant from the others. There are several tasks in each area: in A1, all the tasks are for customer c1; in A2, all the tasks are for customer c2; and in A3, all the tasks are for two other customers, c3 and c4. Suppose that there are two vehicles.
Over a duration of a few minutes, we could either have the two vehicles focus on only two areas, achieving high throughput but ignoring the third area and reducing fairness, or we could have them move between areas after each task to ensure fairness, but waste a lot of time traveling, degrading throughput. It is not possible here to achieve both throughput and fairness over a short timescale. Yet, over a long time duration, we can swap vehicles between regions to amortize the movement costs. This shows that planning over a longer timescale permits feasible schedules that are better than what a shorter timescale would permit.

Our contribution, Mobius, divides the desired time duration into rounds, and produces the feasible set of allocations for each round using a standard optimizer. Mobius guides the optimizer toward a solution that is not in the feasible set for one round but can be achieved over multiple rounds. This guiding is done by aiming for an objective that maximizes a weighted linear sum of customer rates in each round. The weights are adjusted dynamically based on the long-term rates achieved for each customer thus far. The result is a practical system that achieves high throughput and fairness over multiple rounds.

This approach of achieving long-term fairness by setting appropriate weights across rounds allows us to use off-the-shelf solvers for the weighted Vehicle Routing Problem (VRP) for path planning in each round. Importantly, this design allows Mobius to optimize for fairness in the context of any VRP formulation, making this work complementary to the vast body of prior work on vehicle routing algorithms [3,5,8,23]. Scheduling over multiple rounds also allows Mobius to handle tasks that arrive dynamically or expire before being done. Moreover, Mobius supports a tunable level of fairness modeled by α-fair utility functions [31], which generalize the familiar notions of max-min and proportional fairness.
We have implemented Mobius and evaluated it via extensive trace-driven emulation experiments in two real-world settings: (i) a ridesharing service, based on real Lyft ride request data gathered over a day, ensuring fair quality-of-service to different neighborhoods in Manhattan; and (ii) urban sensing using drones for measuring traffic congestion, parking lot occupancy, cellular throughput, and air quality. We find that:
1. Relative to a scheduler that maximizes only throughput, Mobius compromises only 10% of platform throughput in order to enforce max-min fairness.
2. Compared to dedicating vehicles to customers, Mobius improves vehicle utilization by 30-50% by intelligently sharing vehicles amongst customers.
3. Mobius can compute fair online schedules at a city scale, involving 40 customers, 200 vehicles, and over 16,000 tasks.

PROBLEM SETUP
Every customer subscribing to a mobility platform submits several requests over time. Each request specifies a task (e.g., gather sensor data or deliver a package) and a corresponding location. The platform schedules trips for each vehicle over multiple rounds. It takes into account any changes in a customer's requirements (in the form of new task requests or expiration of older unfulfilled tasks) at the beginning of each round. We say that a customer has a backlog if they have more tasks than can be completed by all available resources within the allocated time. For simplicity of exposition, we assume each customer is backlogged (our evaluation in §7 relaxes this assumption).

Let C be the set of customers, and T_c(r) be the set of tasks requested by customer c during a scheduling round r. We denote by x_c(r) the throughput achieved for customer c in scheduling round r, i.e., the total number of tasks in T_c(r) that are fulfilled divided by the round duration. We denote by x̄_c(r) the long-term throughput for each customer c after r scheduling rounds, i.e., x̄_c(r) = (1/r) Σ_{i=1}^{r} x_c(i) if rounds are of equal duration.
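The long-term throughput above is a plain running average, which is simple enough to sketch directly. The following snippet is our own illustration (the customers and rates are made up), not part of Mobius:

```python
# Long-term throughput as defined above: x̄_c(r) = (1/r) * sum_i x_c(i),
# assuming rounds of equal duration.

def long_term_throughput(per_round):
    """per_round: list of dicts mapping customer -> throughput x_c(r)."""
    r = len(per_round)
    customers = per_round[0].keys()
    return {c: sum(rnd[c] for rnd in per_round) / r for c in customers}

rounds = [{"c1": 6.0, "c2": 2.0}, {"c1": 2.0, "c2": 6.0}]
avg = long_term_throughput(rounds)
# Each round alone is unfair, but the long-term throughputs are equal:
# avg == {"c1": 4.0, "c2": 4.0}
```

This is exactly the sense in which a scheduler can trade short-term unfairness for long-term fairness: individual rounds may favor one customer as long as the running averages equalize.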
A good scheduling algorithm should achieve the following objectives:
• Platform Throughput. Maximize the total long-term throughput after round r, i.e., Σ_{c∈C} x̄_c(r).
• Customer Fairness. For any two customers c1, c2 ∈ C with backlogged tasks, ensure x̄_{c1}(r) = x̄_{c2}(r).

Equalizing long-term per-customer throughputs x̄_c(r) provides a desirable measure of fairness for many mobility platforms: higher per-customer throughputs correlate with other performance metrics, such as lower task latency and higher revenue. Our evaluation (§7) quantifies the impact of optimizing for a fair allocation of throughputs on other platform-specific quality-of-service metrics.

Prior algorithms for scheduling tasks on a shared fleet of vehicles have focused on the VRP, i.e., they only considered maximizing platform throughput [23,44]. Achieving per-customer fairness introduces three new challenges:

Challenge #1: Attributing vehicle time to customers. Vehicle time and capacity are scarce. Consider the example in Fig. 1, with two customers and two vehicles; customer 1 has two densely packed clusters of tasks, while customer 2 has two dispersed clusters of tasks. We show schedules and tasks fulfilled by Mobius and three other policies: (i) maximizing throughput, (ii) dedicating a vehicle per customer, and (iii) alternating round-robin between customer tasks. Notice that, to the left of the depot (center of the map), customer 2's tasks can be picked up on the way to customer 1's tasks. Thus, multiplexing both customers' tasks on the same vehicle is more desirable than dedicating a vehicle per customer, because it amortizes resources to serve both customers. However, sharing vehicles amongst customers complicates our ability to reason about fairness, because the travel time between the tasks of different customers cannot be attributed easily to each one.

Figure 1: An example with two customers, two vehicles, and a 6-minute planning horizon.
Mobius computes a schedule that (i) achieves a total throughput similar to that of the max throughput schedule, and (ii) preserves the customer-level fairness achieved by the round-robin and dedicated schedules.

Figure 2: Imposing fairness at short timescales (e.g., one round trip) degrades throughput. Executing Options 1 and 2 provides fairness at longer timescales and leads to greater total throughput.

Challenge #2: Timescale of fairness. Fig. 2 shows two customers and one vehicle that must return home to refuel. A high-throughput schedule would dedicate the vehicle to one of the customers. By contrast, a fair schedule would require the vehicle to round-robin customer tasks, achieving low throughput due to travel. Over a longer time duration, however, we can execute two max-throughput schedules (Options 1 and 2) to achieve both fairness and high throughput.

Challenge #3: Spatiotemporal diversity of tasks. In Fig. 1, the two customers' tasks have different spatial densities. The high-throughput schedule favors customer 1. A max-min fair schedule should, by contrast, ensure that customer 2 gets its fair share of the throughput, even if it comes at the cost of higher travel time and lower platform throughput. Striking the right balance between fairly serving a customer with more dispersed tasks and reducing extra travel time is a non-trivial problem. Customer tasks may also vary with time. For example, a food delivery service might receive new requests from restaurants, or an atmospheric scientist may want to update sensing locations that they submitted to a drone service provider based on prior observations. The mobility platform must handle the dynamic arrival and expiration of tasks.

OVERVIEW
Any resource-constrained system exhibits an inherent tradeoff between throughput and fairness.
In the best case, the most fair schedule would also have the highest throughput; however, due to the challenges described in §2, it is impossible to realize this goal in many mobility settings. Mobius instead strives for customer fairness with the best possible platform throughput; its approach is to trade some short-term fairness for a boost in throughput, while improving fairness over a longer timescale. In each round r, Mobius uses a VRP solver to maximize a weighted sum of customer throughputs x_c(r). Mobius sets the weights in each round to find a high-throughput schedule that is approximately fair in that round. By accounting for the long-term throughputs x̄_c(r) delivered to each customer in prior rounds, it is able to equalize x̄_c(r) over multiple rounds. We formalize this notion of balancing high throughput with fairness in §4. Mobius uses an iterative search algorithm requiring multiple invocations of a VRP solver to find a schedule that strikes the appropriate balance.

Our approach of trading off short-term fairness for throughput and longer-term fairness raises a natural question: why not directly schedule over a longer time horizon, rather than dividing the scheduling problem into rounds? Scheduling in rounds is desirable for several reasons: (i) the round duration can correlate with the fuel or battery constraints of the vehicles, (ii) it provides a target timescale at which Mobius strives to provide fairness, (iii) shorter timescales make the NP-hard VRP more tractable to solve, and (iv) it enables Mobius to adapt to temporal variations in customer demand that are captured at the beginning of each round.

Fig. 3 shows the architecture of Mobius. In each round, customers update their task requests. Mobius then computes the best weights, generates a schedule, and dispatches the vehicles. At the end of the round, Mobius updates each customer's throughput, x_c(r), and uses this information to select weights in the next round.
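The per-round control loop just described can be sketched as follows. This is a schematic of the round structure only, not Mobius's actual API: `solve_weighted_vrp` and `pick_weights` are hypothetical placeholders for the VRP backend and the weight-selection step, and the numbers in the demo are made up.

```python
# One scheduling round: pick weights from history, solve the weighted VRP,
# then fold the achieved rates into the long-term running averages.

def run_round(r, tasks, avg_tp, solve_weighted_vrp, pick_weights):
    weights = pick_weights(avg_tp)            # bias toward under-served customers
    schedule, round_tp = solve_weighted_vrp(tasks, weights)
    beta = 1.0 / (r + 1)                      # running-average step size
    new_avg = {c: beta * round_tp[c] + (1 - beta) * avg_tp.get(c, 0.0)
               for c in round_tp}
    return schedule, new_avg

# Toy stand-ins for the solver and the weight picker:
solver = lambda tasks, w: ("schedule", {"c1": 4.0, "c2": 2.0})
picker = lambda avg: {"c1": 1.0, "c2": 1.0}
schedule, avg = run_round(0, [], {}, solver, picker)
# After round 0 (beta = 1), avg == {"c1": 4.0, "c2": 2.0}
```

The running-average update is the same bookkeeping Mobius uses to carry information between rounds: the weights in the next round can then be chosen to favor whichever customer's average lags behind.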
BALANCING THROUGHPUT & FAIRNESS
We now provide the intuition behind our approach for balancing throughput and fairness using the example shown in Fig. 4.

Figure 4: Visualizing feasible allocations of throughput for a small problem with two customers and two vehicles. Allocations on the convex boundary trade short-term fairness for throughput. The convex boundary becomes denser over time, making the target allocation achievable.

There are two customers, each requesting tasks from distributions shown on the map in Fig. 4a. We have two vehicles, each starting at ⊕. For simplicity, in §4.1, we consider planning schedules in 10-minute rounds, where the vehicles return to their start locations after 10 minutes. We renew all tasks at the beginning of each round trip. Then, in §4.2, we explain how Mobius generalizes to dynamic settings where customer tasks change with time, and vehicles do not need to return to their starting locations.

Scheduling on the Convex Boundary
Feasible allocations. We first consider the set of schedules that are feasible within the time constraint. Fig. 4b shows the tradeoff between throughput and fairness amongst these feasible schedules. Each dot represents an allocation produced by a feasible schedule; the coordinates of the dot indicate the throughputs of the respective customers. We generate the schedules by solving the VRP for each possible subset of customer tasks. We also indicate the x1 = x2 line (dotted gray), which corresponds to fair allocations that give equal throughput to each customer. Note that in this example both vehicles can more easily service Customer 1. Hence, an allocation that maximizes total throughput without regard to fairness favors Customer 1.

Pareto frontier of feasible allocations. The Pareto frontier over all feasible allocations is denoted by the dashed orange line through the labeled corner points in Fig. 4b.
If an allocation on the Pareto frontier achieves throughputs x1 and x2 for Customers 1 and 2 respectively, there exists no feasible allocation (x̂1, x̂2) such that x̂1 > x1 and x̂2 > x2. The allocation that maximizes total throughput always lies on the Pareto frontier. An allocation on the Pareto frontier is strictly superior to, and therefore more desirable than, the feasible allocations it dominates. So which allocation on the Pareto frontier do we pick?

A strictly fair allocation lies at the point where the Pareto frontier intersects the x1 = x2 line (Fig. 4b). However, this allocation has low total throughput, because the vehicles spend a significant part of the 10 minutes traveling between task clusters.

Convex boundary of the Pareto frontier. To capture the subset of allocations that do not significantly compromise throughput, we use the convex boundary of all feasible allocations, denoted by the turquoise line in Fig. 4b. The convex boundary is the smallest polygon around the feasible set such that no vertex bends inward [9], and the corner points are the vertices determining this polygon. The target allocation is the point where the x1 = x2 line intersects the convex boundary (shown in red). It has high throughput and is fair, but it may not be feasible (as in this example). Is it still possible to achieve the target throughput in such cases?

Scheduling over multiple rounds. Our key insight is that it is possible to achieve the target allocation over multiple rounds of scheduling by selecting different feasible allocations on the convex boundary in each round. In a given round, Mobius chooses the feasible allocation on the convex boundary that best achieves our fairness criteria. In our example, it chooses a corner-point allocation near the target in its first round. By choosing this allocation over the strictly fair allocation (which achieves equal throughput), Mobius compromises on short-term fairness for a boost in throughput. However, as we discuss next, it compensates for this choice in subsequent rounds.
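For two customers, the convex boundary and the target allocation can be computed with a few lines of geometry. The sketch below is our own illustration (the candidate allocations are made up): it builds the upper convex boundary of a set of feasible allocations with a monotone-chain scan, then intersects it with the x1 = x2 line.

```python
def cross(o, a, b):
    # Cross product of vectors o->a and o->b; >= 0 means a non-right turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_boundary(points):
    """Upper convex boundary (left to right) of 2-D throughput allocations."""
    hull = []
    for p in sorted(set(points)):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

def target_allocation(hull):
    """Intersection of the boundary with the x1 = x2 line (the target)."""
    for (a1, a2), (b1, b2) in zip(hull, hull[1:]):
        fa, fb = a2 - a1, b2 - b1        # signed offset from the x1 = x2 line
        if fa == 0:
            return (a1, a2)
        if fa * fb < 0:                  # this face crosses the line
            t = fa / (fa - fb)
            return (a1 + t * (b1 - a1), a2 + t * (b2 - a2))
    # No crossing: fall back to the corner point nearest the line.
    return min(hull, key=lambda p: abs(p[1] - p[0]))

feasible = [(8, 1), (6, 5), (2, 7), (0, 8), (4, 3), (1, 1)]
hull = convex_boundary(feasible)          # [(0, 8), (6, 5), (8, 1)]
target = target_allocation(hull)          # (16/3, 16/3): fair but infeasible
```

Note that on this toy set no feasible allocation attains the target; the feasible points nearest the x1 = x2 line, such as (1, 1) and (4, 3), have far lower total throughput, which is the gap that mixing corner points across rounds closes.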
Notice that if Mobius instead chooses the strictly fair allocation, it would not be able to recover from the resulting loss in throughput. As we compute a 10-minute schedule for each round, the set of feasible allocations expands; this allows Mobius to compensate for any prior deviation in fairness. Fig. 4c illustrates how the feasible set evolves over several 10-minute rounds of planning. The feasible allocations (denoted by gray dots) possible after round r are derived from the cumulative set of tasks completed in r rounds. Notice that over the four rounds, the set of feasible allocations becomes denser, and the Pareto frontier approaches the convex boundary. Thus, the target allocation (i.e., the allocation on the convex boundary that coincides with the x1 = x2 line) becomes feasible.

In summary, the key insights driving the design of Mobius are: (i) the convex boundary describes a set of allocations that trade off short-term fairness for a boost in throughput, and (ii) the Pareto frontier approaches the convex boundary over multiple rounds of planning, making it possible to correct for unfairness over a slightly longer timescale.

Scheduling in Dynamic Environments
In practice, environments are more dynamic: customer tasks may not recur at the same locations, and vehicles need not return to their start locations regularly. Thus, the convex boundary may not be identical in each round. However, in practice, because (i) vehicles move continuously over space and (ii) customer tasks tend to observe spatial locality, the convex boundary does not change drastically over time. To illustrate this, we extend the example in Fig. 4 by creating a map with the same densities as in Fig. 4a, but with 50 tasks per customer. To simulate dynamics, we create a new task for each customer every 3 minutes at a location chosen uniformly at random within a bounding box. We still consider two vehicles starting at the same location (i.e., in the middle of customer 1's cluster) and plan in 10-minute rounds.
We eliminate the return-to-home constraint. In order to adapt to the customers' changing tasks, we compute new 10-minute schedules every 1 minute (i.e., 10-minute rounds slide in time by 1 minute). We run this simulation for 60 minutes. In order to understand how these dynamics impact the convex boundary as we plan iteratively, we show in Fig. 4d the convex boundary of 10-minute schedules at each 1-minute replanning interval. Notice that the convex boundaries hover around a narrow band, indicating that we can still track the target throughput reliably. The red square marks the value of the average target throughput across all 50 convex boundaries; we also mark the throughput achieved by Mobius's scheduling algorithm (§5).

In addition to the convex boundary remaining relatively stable from one timestep to the next, this method of replanning at much quicker intervals (e.g., 1 minute) than the round duration (e.g., 10 minutes) makes Mobius resilient to uncertainty in the environment. For instance, Mobius can react to streaming requests in a punctual manner, and can also incorporate requests that are unfulfilled due to unexpected delays (e.g., road traffic or wind). Moreover, since Mobius uses a VRP solver as a building block to compute its schedule (§3), it can also leverage algorithms that solve the stochastic VRP [8], where requests arrive and disappear probabilistically.

Visualizing Routes Scheduled by Mobius
To illustrate how Mobius converges to fair per-customer allocations without significantly degrading platform throughput, in Fig. 5 we show 3 consecutive 10-minute round schedules computed by Mobius, for the dynamic example in Fig. 4d. In Rounds 1 and 3, we observe that Mobius decides to dedicate one vehicle to each customer in order to give them both sufficiently high throughput; here, customer 2 receives lower throughput because its tasks are more dispersed.
However, in Round 2, Mobius compensates for this short-term unfairness by scheduling an additional vehicle for customer 2, while also collecting a few tasks for customer 1 on the outbound trip.

MOBIUS SCHEDULING ALGORITHM
Based on the insights in §4, we design Mobius to compute a schedule on the convex boundary in each round, such that the long-term throughputs x̄_c(r) approach the target allocation. Mobius works in two steps: (1) In each round, Mobius finds the support allocations, which we define as the corner points on the convex boundary of the current round near the target allocation (§5.1). For example, in Fig. 4b, Mobius would find the two support allocations adjacent to the target. (2) Amongst the support allocations found in step (1), Mobius selects the one that steers the long-term throughputs x̄_c(r) toward the target allocation (§5.2). In this section, we present Mobius in the context of strict fairness (i.e., x̄(r) must lie along the x1 = x2 line). §5.3 provides a theoretical analysis of Mobius's optimality under simplifying assumptions, and §5.4 describes our implementation. In §6, we extend Mobius's formulation to work with a broader class of fairness objectives.

Finding Support Allocations
Since the convex boundary of the Pareto frontier is equivalent to the convex boundary of the feasible set of schedules, a naive way to find the support allocations is to compute the Pareto frontier, take its convex boundary, and then identify the support allocations near the target allocation. However, computing the Pareto frontier is computationally expensive because it requires invoking an NP-hard solver a number of times exponential in the number of tasks. Mobius instead uses a VRP solver as a building block to find a subset of the corner points of the convex boundary around the target allocation. The VRP involves computing a path P_k for each vehicle k, defined as an ordered list of tasks from the set of all tasks {T_c(r) | c ∈ C}, such that the time to complete P_k does not exceed the total time budget B for a round.
VRP solvers maximize the platform throughput without regard to fairness. We capture different priorities amongst customer tasks by assigning a weight w_c to each customer c's tasks. Let x ∈ R^|C| represent a throughput vector, where x_c is the throughput for customer c, and let w ∈ R^|C| represent a weight vector, with a weight w_c for each customer c. The weighted VRP seeks to maximize the total weighted throughput of the system, where each task is assigned a weight. We can describe this as a mixed-integer linear program:

argmax_{P_k, ∀k}  Σ_{c∈C} w_c x_c  =  argmax_{P_k, ∀k}  w⊺x    (1)
s.t.  t(P_k) ≤ B    ∀k    (2)
      P_k is a valid path    ∀k,    (3)

where t(·) specifies the time to complete a path. Equation (2) enforces that, for each vehicle, the time to execute the selected path does not exceed the budget. Equation (3) captures constraints that are specific to the vehicles (e.g., if vehicles must return home at the end of each round) and customers (e.g., if tasks are only valid during specific windows in the scheduling horizon). The weighted VRP (also called the prize-collecting VRP) is NP-hard, but there are several known algorithms with optimality bounds [5,44].

Using weights to find the corner points. We can adjust the weight vector w in order to capture a bias toward a particular customer; w describes a direction in the customer throughput space, reflecting that bias. For example, with two customers, the weight vector w1 = (1, 0) finds a schedule that prioritizes customer 1 (i.e., along the x1-axis), and w2 = (0, 1) finds a schedule that prioritizes customer 2 (i.e., along the x2-axis).

Searching on the convex boundary. Recall that, for strict fairness, the target allocation is the point where the x1 = x2 line intersects the convex boundary for the current round (§4). At the start of the search, Mobius does not yet know the convex boundary, so it cannot know the target allocation.
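At the level of allocations, Equation (1) says: among candidate schedules, pick the one whose throughput vector maximizes w⊺x. A toy illustration of how the weight vector steers this choice (the candidate allocations are made up, standing in for schedules a real VRP solver would return):

```python
# Different weight vectors select different corner points of the feasible set.

def best_allocation(candidates, w):
    """Return the throughput vector maximizing the weighted sum w . x."""
    return max(candidates, key=lambda x: sum(wi * xi for wi, xi in zip(w, x)))

candidates = [(8.0, 1.0), (6.0, 5.0), (0.0, 8.0), (4.0, 4.0)]
corner_1 = best_allocation(candidates, (1.0, 0.0))   # prioritize customer 1
corner_2 = best_allocation(candidates, (0.0, 1.0))   # prioritize customer 2
balanced = best_allocation(candidates, (1.0, 1.0))   # pure platform throughput
# corner_1 == (8.0, 1.0); corner_2 == (0.0, 8.0); balanced == (6.0, 5.0)
```

Because a linear objective is always maximized at a vertex of the feasible region, every weight vector lands on some corner point of the convex boundary; this is what makes the weighted VRP a usable primitive for tracing the boundary.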
To find allocations on the convex boundary, Mobius employs an iterative search algorithm, analogous to binary search; in each stage, it tries to find a new allocation on the convex boundary in the direction of the x1 = x2 line. Mobius begins the search with allocations along the customer axes. For two customers, it begins with the weights w1 and w2 above, which give two allocations on the convex boundary. In each stage of the search, Mobius computes a new weight vector, using allocations found on the convex boundary in the previous stage, in order to find a new allocation on the convex boundary. It terminates when no new allocation can be found. By searching in the right direction, Mobius only needs to compute a subset of the corner points on the convex boundary.

To better illustrate the algorithm, consider the example in Fig. 6b, with 2 customers. Mobius starts the search by looking at the two extreme points on the customer 1 (x1) and customer 2 (x2) axes, which correspond to prioritizing all vehicles for one customer or the other. So in stage 1, Mobius computes these schedules, using the weight vectors w1 = (1, 0) and w2 = (0, 1), which give two allocations in Fig. 6b. After stage 1, these two allocations are the current set of corner points determining the convex boundary. In the next stage, Mobius computes a new weight w to continue the search in the direction normal to the face between them (Fig. 6a). Let the equation for this face be k1 x1 + k2 x2 = c, where k1, k2, and c can be derived using the two known allocations on the line. So, by invoking the VRP solver (Equation (1)) with w = (k1, k2), we try to find a schedule on the convex boundary with the highest throughput in the direction normal to the face. Let x̂1 and x̂2 be the throughputs for the schedule computed with weight w. If (x̂1, x̂2) lies above this line, i.e., k1 x̂1 + k2 x̂2 > c, then the point (x̂1, x̂2) is a valid extension to the convex boundary. In this example, Mobius finds a new allocation, growing the set of corner points to three.
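The face-normal step of this search is easy to make concrete. The helper below is our own sketch for the two-customer case (it assumes allocations are nonnegative, so the outward normal can be sign-normalized): it derives the weight w = (k1, k2) from two known boundary allocations and tests whether a newly found allocation extends the boundary.

```python
def face_weight(a, b):
    """Coefficients (k1, k2, c) of the face k1*x1 + k2*x2 = c through
    allocations a and b, with the normal (k1, k2) oriented outward."""
    k1, k2 = a[1] - b[1], b[0] - a[0]
    c = k1 * a[0] + k2 * a[1]
    if k1 < 0 or (k1 == 0 and k2 < 0):   # flip to the outward orientation
        k1, k2, c = -k1, -k2, -c
    return k1, k2, c

def extends_boundary(p, k1, k2, c):
    """A valid extension lies strictly outside the face, i.e., the solver
    found a higher weighted throughput than c under w = (k1, k2)."""
    return k1 * p[0] + k2 * p[1] > c

a, b = (8.0, 1.0), (0.0, 8.0)            # stage-1 allocations on the axes
k1, k2, c = face_weight(a, b)            # face: 7*x1 + 8*x2 = 64
above = extends_boundary((6.0, 5.0), k1, k2, c)   # True: extend the boundary
below = extends_boundary((4.0, 3.0), k1, k2, c)   # False: no extension
```

The termination condition of the search falls out directly: when the solver, queried with the face's own weights, returns no allocation strictly above the face, that face is final and its endpoints are the support allocations.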
Notice that this extension in stage 2 creates two new faces on the convex boundary, but the x1 = x2 line passes through only one of them. So, in stage 3, Mobius continues the search by extending that face, computing the weights as described above (normal to the face), and discovers a new allocation. Finally, Mobius tries to extend the remaining face that intersects the x1 = x2 line. It finds no valid extension, and so it terminates its search on that face and returns its two endpoints as the support allocations.

Generalizing to more customers. Mobius computes a weight w_c for each customer c ∈ C, i.e., w ∈ R^|C|. Faces on the convex boundary become hyperplanes in the |C|-dimensional throughput space, described by the equation Σ_{c∈C} k_c x_c = b. Mobius solves for w using the |C| allocations that define each face, and finds |C| support allocations at the end of the search. Recall from the example in §5.1 that each stage produced 2 new faces and that Mobius only continued the search by extending 1 face. With |C| customers, even with |C| new faces after each stage, Mobius only invokes the VRP solver once to continue the search. A naive algorithm, by contrast, would require |C| calls to the VRP solver in each stage. Thus Mobius scales easily with more customers by pruning the search space efficiently.

Scheduling Over Rounds
In each round, Mobius finds |C| support allocations, which determine the face of the convex boundary that contains the target allocation. It then selects one support allocation among these |C| such that the per-customer long-term throughputs x̄_c(r) approach the target throughput. By tracking x̄_c(r) over many rounds, Mobius can select allocations that compensate for any unwanted bias introduced toward some customer in a prior round. Mathematically, to choose a schedule in round r, Mobius considers the effect of each support allocation x(r) on the average throughput x̄(r+1). The average throughput is defined for each customer c as x̄_c(r+1) = β x_c(r) + (1−β) x̄_c(r), where β = 1/(r+1).
Mobius chooses x(r) such that x̄(r+1) is closest to the x1 = x2 line (in Euclidean distance).

Optimality of Mobius
Mobius is optimal in a round. We can prove that Mobius finds the support allocations nearest the target throughput (in Euclidean distance). We illustrate this through the example in Fig. 7a, which labels the corner points of the convex boundary, one of which is closest to the target allocation. In the previous stage, Mobius discovered a new corner point, and it needs to pick one face to continue the search. The shaded yellow regions indicate the extensible regions of the two candidate faces. The extensible region of a face describes the space of allocations that can be obtained by searching with the weight vector that defines that face, while maintaining a convex boundary (§5.1). Since Mobius finds a new allocation on the convex boundary in every stage of the search, no allocation can exist outside these regions; otherwise, the resulting set of discovered allocations would no longer be convex. Thus, the best face for Mobius to continue the search is the one whose extensible region may contain a better allocation closer to the x1 = x2 line. App. B.1 includes a formal proof that the optimal support allocation (i.e., the allocation closest to the x1 = x2 line) is unique and that Mobius finds it.

Optimality over multiple rounds. Under a static task arrival model, we can show that the schedules computed by Mobius achieve throughputs that are optimal at the end of every round, i.e., the achieved throughput has the minimum possible distance to the target allocation after each round. This model assumes the convex boundary remains the same across rounds. One way to realize this is to require that (i) the vehicles return to their starting locations at the end of each round, and (ii) all tasks are renewed at the beginning of each round. We make these simplifying assumptions only for ease of analysis; our evaluation in §7 does not use them.
We describe an intuition for this result below. Per the static task arrival model, the convex boundary is the same in each subsequent round; therefore, Mobius finds the same support allocations in every round. By taking into account the long-term per-customer rates x̄_c(r), Mobius oscillates between these support allocations across rounds at the right frequency, such that x̄_c(r) for all c ∈ C converges to the target allocation over multiple rounds. We illustrate this in Fig. 7b, which shows two support allocations whose face contains the target allocation, denoted by the star. Because Mobius oscillates between the two support allocations, the allocation (x̄1(r), x̄2(r)) must lie along their face. In the first round, Mobius chooses the support allocation whose throughput is closer to the target allocation. In the second round, it chooses the other support allocation, moving the average throughput along the face toward the target. In the third round, it switches back; had it instead repeated its second choice, the average throughput would move further away from the target. Thus, this myopic choice between the two support allocations results in the closest solution to the target allocation after any number of rounds. Additionally, notice that the length of each jump of the average throughput shrinks with the round number (by the factor β = 1/(r+1)); therefore, Mobius converges to the target throughput.

Implementation
We implement the core Mobius scheduling system in 2,300 lines of Go. It plugs directly into external VRP solvers implemented in Python or C++ [25,39]. Mobius exposes a simple, versatile interface to customers, which we call an interest map. An interest map consists of a list of desired tasks, where each task includes a geographical location, the time to complete the task once the vehicle has reached the location, and a task deadline (if applicable). In each round, Mobius gathers and merges interest maps from all customers before computing a schedule.
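This oscillation argument is easy to check numerically. The simulation below is our own sketch of the static model (the two support allocations are made up): in each round it applies the §5.2 rule, picking whichever support allocation brings the running average closest to the x1 = x2 line.

```python
# Two support allocations whose face (2*x1 + 3*x2 = 22) crosses x1 = x2
# at the target allocation (4.4, 4.4).
supports = [(8.0, 2.0), (2.0, 6.0)]

def update(avg, x, r):
    # Running-average update: x̄(r+1) = beta*x(r) + (1-beta)*x̄(r).
    beta = 1.0 / (r + 1)
    return tuple(beta * xi + (1 - beta) * ai for xi, ai in zip(x, avg))

avg = (0.0, 0.0)
for r in range(200):
    # Greedy rule: minimize the resulting distance to the x1 = x2 line.
    choice = min(supports,
                 key=lambda x: abs(update(avg, x, r)[0] - update(avg, x, r)[1]))
    avg = update(avg, choice, r)
# avg converges toward the target allocation (4.4, 4.4).
```

Because every chosen allocation lies on the face, the running average stays on the face as well; the greedy sign-alternating choice then pins it ever closer to the crossing point, with the residual unfairness shrinking roughly like 1/r.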
At the end of each round, it informs the customers of the tasks that have been completed, and customers can submit updated interest maps. Interest maps serve as an abstraction for Mobius to ingest and aggregate customer requests; the merged interest map is directly compatible with standard weighted VRP formulations [5,19] without modification. Thus, Mobius acts as an interface between customers and vehicles, using a VRP solver as a primitive in its scheduling framework (Fig. 3).

Bootstrapping VRP solvers. Since the VRP is NP-hard [44], solvers resort to heuristics to optimize Equation (1). In practice, we find that state-of-the-art solvers do not compute optimal solutions; however, we can aid these solvers with initial schedules that the heuristics can improve upon. We warm-start the VRP solvers with initial schedules generated by the following policies: (i) maximizing throughput, (ii) dedicating vehicles (assuming a sufficient number of vehicles), and (iii) a greedy heuristic that maximizes our utility function (§6; described in detail in App. C). At the beginning of each round, Mobius builds a suite of warm-start solutions. Then, prior to invoking the VRP solver with some weight vector w, Mobius chooses the initial schedule from its warm-start suite with the highest weighted throughput (i.e., the objective of Equation (1)). Mobius also caches the schedules found from all invocations of the VRP solver (§5.1), to use for warm starts throughout the round. Mobius parallelizes all independent calls to the VRP solver (e.g., when computing warm-start schedules and when generating |C| schedules to initialize the search along the convex boundary).

GENERALIZING TO α-FAIRNESS
The fairness objective we have considered so far aims to provide all customers with the same long-term throughput (maximizing the minimum throughput). However, an operator of a mobility platform may be willing to slightly relax their preference for fairness for a boost in throughput.
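The α-fair utility family from [31] that this section builds on can be previewed in a few lines (the rate vectors below are made up for illustration):

```python
import math

def alpha_fair_utility(rates, alpha):
    """U_alpha(y) = sum_c y_c^(1-alpha) / (1-alpha); alpha = 1 is taken
    as the log-utility limit (proportional fairness)."""
    if alpha == 1.0:
        return sum(math.log(y) for y in rates)
    return sum(y ** (1.0 - alpha) / (1.0 - alpha) for y in rates)

balanced, skewed = [5.0, 5.0], [9.0, 1.0]
# alpha = 0 reduces to total throughput: both allocations score 10.
assert alpha_fair_utility(balanced, 0.0) == alpha_fair_utility(skewed, 0.0)
# As alpha grows, the balanced allocation is increasingly preferred.
assert alpha_fair_utility(balanced, 2.0) > alpha_fair_utility(skewed, 2.0)
```

Larger α penalizes starving any one customer more heavily, approaching max-min fairness in the limit; this is the knob an operator can turn to trade throughput for fairness.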
To navigate throughput-fairness tradeoffs, we can generalize Mobius's algorithm (§5) to optimize for a general class of fairness objectives. We use the α-parametrized family of utility functions U_α, developed originally to characterize fairness in computer networks [31]:

U_α(y) = Σ_{c∈C} y_c^(1−α) / (1−α),    (4)

where y ∈ R^|C| and y_c is the throughput of customer c (either short-term x_c or long-term x̄_c). U_α captures a general class of concave utility functions, where α ∈ R≥0 controls the degree of fairness. For instance, when α = 0, the utility simplifies to the throughput-maximizing objective defined in Equation (1) (assuming all customers have the same weight). By contrast, when α → ∞, the objective becomes maximizing the minimum customer's throughput (i.e., max-min fairness). α = 1 corresponds to proportional fairness (U_α is not defined at α = 1, so we take the limit as α → 1), where the sum of log-throughputs of all customers is maximized; this ensures that no individual customer's throughput is completely starved.

Generalizing Mobius's search algorithm. When Mobius generalizes to α-fairness, the target allocation is no longer simply the point on the convex boundary that intersects the x1 = x2 line. The target allocation is instead the allocation on the convex boundary with the greatest utility U_α. When searching the convex boundary in each round, Mobius determines which candidate face contains the target throughput by using Lagrange multipliers to find the point along the face with the greatest utility. Once it finds each support allocation x, Mobius incorporates the historical throughput x̄ to select the schedule with the greatest cumulative utility U_α(β x(r+1) + (1−β) x̄(r)), where β is as defined in §5.2.

An example. Fig. 8 shows a time-series chart of long-term customer and platform throughputs for the example described in §4.2. By adapting to different schedules on the convex boundary, Mobius converges to a fair allocation of rates without degrading total
α allows Mobius to compute expressive schedules; for instance, α = 1 strives to maximize total throughput without starving either customer. Additionally, Mobius (max-min)^11 converges to a fair allocation of long-term throughputs within 20 minutes.

REAL-WORLD EVALUATION

Online Trace-Driven Emulation

We implement a trace-driven emulation framework to compare Mobius against other scheduling schemes under the same real-world environment. This framework replays timestamped traces of requests submitted by each customer, streaming tasks to the scheduler as they arrive and sending task results back to the customer.

Capturing environment dynamics and uncertainty. To emulate dynamic customer demand, our emulation framework streams tasks according to the timestamps in the trace, so Mobius has no visibility into future tasks. To emulate uncertainty in customer demand, we cancel tasks that are not scheduled within 10 minutes. Additionally, the case studies in §7.2 and §7.3 consider scenarios where at least one customer is backlogged (defined in §2). If no customers are backlogged, then the platform can fulfill all tasks within the planning horizon, and the resulting schedule would have maximal throughput and fairness. Thus, the problem is only interesting when at least one customer is backlogged; Mobius is effective and required only in such situations.

Backend VRP solver. We use the Google OR-Tools package [39] as our backend weighted VRP solver (Equation (1)). OR-Tools is a popular package for solving combinatorial optimization problems, and supports a variety of VRP constraints, including budget, capacity, pickup/dropoff, and time windows. Our case studies involve VRPs with different sets of constraints. We run our experiments on an Amazon EC2 c5.9xlarge instance with 36 CPUs.

^11 Mobius approximates max-min fairness (α → ∞) with α = 100.

Baselines.
In our experiments, we evaluate Mobius's throughput and fairness against two baseline routing algorithms: (i) a max throughput scheduler, and (ii) dedicated vehicles. The max throughput scheduler simply runs the backend VRP solver on the same input of customer tasks fed into Mobius for a round. This solution provides a benchmark on the platform capacity, and quantifies the maximum achievable total throughput. We compute the "dedicated vehicles" schedule by first distributing the vehicles evenly among all customers,^12 and then invoking the max throughput scheduler once for each customer. This solution provides a benchmark schedule that divides vehicle time equally among all customers. As shown in §2, round-robin scheduling achieves very low throughput; hence we omit it from the results in this section.

To the best of our knowledge, Mobius is the first algorithm that explicitly optimizes for customer fairness in mobility platforms. We considered evaluating Mobius against a scheduler that optimizes throughput and fairness over a longer timescale using a mixed-integer linear program solver (e.g., Gurobi [25] or CPLEX [28]); however, this is not feasible in practice, because (i) customer demands arrive in a streaming fashion, and (ii) these solvers do not scale beyond tens of tasks [36]. Thus, we believe the baselines described above offer reasonable comparisons for Mobius.

Microbenchmarks. In addition to the real-world case studies (§7.2-§7.3), we also evaluate Mobius on microbenchmarks created from synthetic customer demand, including scenarios where Mobius is optimal (under the static task arrival model, §5.3).
We compare Mobius against max throughput, dedicating vehicles, and round robin, and show, through controlled experiments, that (i) it provides provably good throughput and fairness for a variety of spatial demand patterns, (ii) it scales for different numbers of vehicles, (iii) it controls its timescale of fairness, and (iv) it can tune its fairness parameter α. We also report the runtime of Mobius in various environments. We include these results in App. D and App. E.

Case Study: Lyft Ridesharing in Manhattan

Setting. Motivated by the issue of "destination discrimination" [35, 45, 49] discussed in §1, we consider a ridesharing service that receives requests from different neighborhoods (customers) in a large metro area. Some neighborhoods are easier to travel to than others, and rider demand out of a neighborhood can vary with the time of day. We show that Mobius can guarantee a fair quality-of-service (in terms of max-min fair task fulfillment) to all neighborhoods throughout the course of a day, without significantly compromising throughput. We also show that, although it optimizes for an equal allocation of throughputs, Mobius does not degrade other quality-of-experience metrics, such as rider wait times. We further demonstrate that Mobius is a scalable online platform that generates schedules for a large city-scale problem.

Trace. We use a trace of ride requests from the New York City Taxi and Limousine Commission, involving 40 neighborhoods (zones) in Manhattan over the course of a day [14]. Each request consists of a pickup and a dropoff zone, and we seek to provide pickups from all zones equitably. The map in Fig. 9 (left) demarcates the customer zones. Fig. 9 (right) illustrates the scale of this scheduling problem. It visualizes traffic on the top 1,000 (out of 3,300) pickup-dropoff pairs; the color of each arrow indicates the volume of ride requests for that pickup-dropoff location. Notice that both the distance of rides and the volume of requests originating from zones vary vastly throughout the island.
A significant fraction of requests arrive into and depart from Lower Manhattan. Some zones in Upper Manhattan have as few as 15 unique outbound trajectories, while other zones have hundreds. Moreover, ridesharing demand varies significantly with the time of day. For instance, a busy zone near Midtown Manhattan sees the load vary from around 200 to 600 requests/hour, and a quiet zone near Central Park experiences a minimum load of 3 requests/hour and a peak load of 24 requests/hour. Notice that the dynamic range of demand throughout the 13 hours also varies across zones.

Experiment setup. This ridesharing problem maps to the capacitated pickup/delivery VRP formulation [19]. It computes schedules that maximize the total number of completed rides, such that (i) a ride's pickup and dropoff are completed on the same vehicle, and (ii) each vehicle is completing at most one ride request at any point in time. We configure the solver to retrieve real-time traffic-aware travel time estimates from the Google Maps API [24], and we constrain OR-Tools to report a solution within 3 minutes. We use the trace described above in our emulation framework (§7.1). We compute schedules for a fleet of 200 vehicles.^13 In order to ensure that the schedules are not myopic, we plan our routes with 45-minute horizons; however, to reduce rider wait times, we recompute the schedule every 10 minutes, while ensuring that we honor any requests that we have already committed to in the schedule. We assume that riders cancel requests that are not incorporated into a schedule within 10 minutes of the request time.

Fairness with high vehicle utilization. Since Mobius plans continuously, having several allocations on the convex boundary at its disposal, we expect it to converge to a fair allocation of rates, despite the skew in demand. Fig. 10 shows the long-term throughputs achieved for each zone by different scheduling algorithms, after 13 hours.
The color of each zone in the map indicates that zone's throughput. Bright colors correspond to high throughput, and a homogeneous mix of colors indicates a fair allocation. Beneath the maps, we also stack the zone throughputs to indicate how each scheduler divides up the total platform throughput across the zones; ideally we would like large, evenly-sized blocks.

The max throughput scheduler divides the platform throughput most unevenly across zones. In particular, we see that while it serves nearly 200 rides/hour out of the Financial District (Lower Manhattan), it virtually starves zones near Central Park. From the demand map (Fig. 9), notice that (i) a majority of rides originate from Lower Manhattan, and (ii) most of these trips are destined for neighboring zones. Thus, the best policy to maximize the total number of trips completed is to stay in Lower Manhattan, which is what the max throughput scheduler does.

The bar chart indicates that dedicating 5 vehicles to each zone results in 40% lower platform throughput than the max throughput scheduler. This is because a heterogeneous demand across zones cannot be effectively satisfied by an equal division of resources (vehicles). Nevertheless, Fig. 10 shows that this scheduler shares the platform throughput most evenly across zones. The division of per-zone throughputs is not perfectly even, in spite of dedicating an equal number of vehicles, because (i) ride requests from different zones can have different trip lengths, and (ii) some zones have inherently low demand and do not backlog the system, leaving some vehicles idle.

By contrast, Mobius strikes the best balance between throughput and fairness. It achieves roughly equal zone throughputs, while compromising only 10% of the maximum platform throughput.

^13 The number of vehicles does not matter, since we compare Mobius to the platform capacity (from the max throughput scheduler).
Compared to dedicating vehicles, we see, from the map, that Mobius achieves higher throughput for most zones by identifying an incentive to chain requests from different zones. For example, Mobius combines two requests from different zones into the same trip, when the dropoff of the first request is close to the pickup of the second request. While this helps improve efficiency, Mobius also prioritizes pickups from zones with a historically low throughput to ensure fairness across zones. This ridesharing simulation reveals that it is possible to achieve a fair allocation of rates in a practical setting without significantly degrading platform throughput.

Controlling the timescale of fairness. Mobius's replanning interval controls the timescale over which it is fair. The more often that Mobius replans, the more up-to-date its record of long-term customer throughputs; Mobius can then adapt to short-term unfairness quickly by finding a more suitable schedule on the convex boundary. Recall that, when replanning frequently, the convex boundary does not change drastically between scheduling intervals (§4.2), provided the spatial distribution of tasks does not change rapidly with time. So, in practice, we do not expect to deviate far from the ideal target throughput. Fig. 11 shows the long-term throughputs achieved for two zones, for replanning timescales of 10 minutes and 15 minutes. Mobius equalizes throughputs better when it replans more frequently.

Rider wait times. Platform operators prefer high throughput schedules because they translate directly to high revenue; low throughput would lead to more cancelled rides. While Fig. 10 demonstrates that Mobius is fair without degrading throughput, we would like to know if optimizing for fairness impacts rider wait time (i.e., the time between requesting a ride and being picked up). Fig.
12 compares the distributions of rider wait times for rides originating from Bloomingdale District (a quiet neighborhood west of Central Park) and from Midtown Center (a busy district near Times Square). Wait times are computed only for fulfilled tasks. Notice that in both zones, with two very different demand patterns, the distribution of wait times for Mobius is comparable to that of the max throughput scheduler. We observe that the wait times in the quiet zone are slightly higher for Mobius (an average of 17 minutes, compared with 15 minutes for max throughput). This is because the wait times for Mobius are computed over significantly more tasks (Mobius fulfills 66.7% more ride requests than max throughput does). The schedule that dedicates vehicles sees higher wait times than Mobius, especially when rides originate from a busy zone (e.g., Midtown Center), since vehicles would be idle until they return to their assigned zone to pick up a new rider.

Scalability. This case study demonstrates that Mobius is practical at an urban scale. In fact, when scheduling its fleet of taxis, New York City's Yellow Cab restricts its scheduling region to Manhattan and organizes its requests according to approximately 40 taxi zones [7, 13]. In our experiments, the backend VRP solver (i.e., the max throughput scheduler) computes each 45-minute schedule in 3 minutes (capped by the timeout). We observe that Mobius takes 5-6 minutes; Mobius sees a speedup by (i) parallelizing calls to the VRP solver and (ii) warm-starting the VRP solver with initial schedules (§5.4). These optimizations help Mobius easily scale to tens of thousands of tasks. We believe we can further improve the speed by leveraging parallelism in the backend VRP solver [43] (OR-Tools does not expose a multi-threaded solver).

Case Study: Shared Aerial Sensing Platform

Setting.
The recent proliferation of commodity drones has generated an increased interest in the development of aerial sensing and data collection applications [2, 4, 16, 20, 33, 34], as well as general-purpose drone orchestration platforms [26, 37, 40]. An emerging mobility platform is a drones-as-a-service system [21, 27, 32, 46, 48], where developers submit apps to a platform that deploys these app tasks on a shared fleet of drones. App (customer) semantics in a drone sensing platform can show significant heterogeneity in both space and time. To ensure a satisfactory quality-of-service for all applications, a scheduler must not only efficiently multiplex tasks from different applications in each flight (typically constrained to 20 minutes due to battery life [17]), but also share task completion throughput equitably across apps. Since apps can be reactive (i.e., sensing preferences change as apps receive measured data), Mobius must additionally provide a sustained rate-of-progress to each app, as opposed to "bursty" throughput.

Sensing apps. We implement 5 popular urban sensing apps to evaluate Mobius in this drones-as-a-service context, summarized in Fig. 13. Fig. 14 depicts the locations of the sensing tasks submitted by each app. We describe each app below:

• The Traffic app continuously monitors road traffic congestion over 11 contiguous segments of road in an urban area. To measure average vehicle speed, it collects 10-second video clips at each road segment, detects all cars using YOLOv3 [42], and tracks the trajectory [11] of each vehicle. After gathering multiple initial samples at all 11 locations, the app prioritizes the locations with the highest variance in speed, in order to collapse uncertainty in its overall estimates of road congestion.

• The Parking app counts parked cars at 3 sites, by monitoring each lot for 1 minute; to maintain fresh estimates of counts, this app renews these 3 tasks after 10 minutes.
• The Air Quality app measures PM2.5 concentration around a plume [1], submitting a candidate list of 100 one-time sampling locations. This app is also reactive; on receiving a measurement, it updates a Gaussian Process model [41] and cancels any unfulfilled tasks with high predicted accuracy.

• The iPerf app builds a map of cellular coverage in the air, by profiling throughput at 100 spatially-dispersed locations. It renews all tasks after each cycle of 100 measurements is complete.

• The Roof app submits 60 one-time tasks to image roofs over a residential area.

Notice that these apps collectively have a variety of spatiotemporal characteristics. For instance, the Traffic app changes its requests with time, based on the uncertainty in speed estimates and the freshness of measurements. By contrast, the Air Quality app changes its requests with space, using a statistical model to collapse uncertainty in a task based on nearby measurements. The iPerf app has no temporal preferences, and instead functions as a "free-riding" app that gathers quick measurements over a large area.

Figure 13: Summary of aerial sensing applications, which span a variety of spatial demand and reactive/continuous sensing preferences. We collected ground truth data for each of these applications using real drones, and created traces to evaluate Mobius.

Ground-truth data collection. To run our drones-as-a-service platform on real-world sensor data, which is critical to the performance of the reactive and continuous monitoring apps, we separately gather 90 minutes of ground-truth data for each app, using real drones. This gives us a trace of timestamped measurement values for each app. We then use our trace-driven emulation framework (§7.1) to evaluate different scheduling algorithms. Fig. 13 shows highlights from our data collection.
For example, to collect ground-truth for the Traffic app, we instrument 6 DJI Mavic Pros [17] to continuously gather video and track cars over the 11 measurement locations (Fig. 14) for 90 minutes. Similarly, for the iPerf and Air Quality apps, we program a DJI F450 drone [18] equipped with an LTE dongle and a PM2.5 sensor [1] to gather measurements at their respective measurement locations. We instrument our drone to communicate its location, battery status, and measurement data to a dashboard hosted on an EC2 instance, from which we observe the drone's progress on our laptop. Experiment setup. We configure our backend solver to estimate travel time as the Euclidean distance between the sensing tasks plus the sensing time for the destination task. In order to be sufficiently reactive to the Traffic and Air Quality apps, we schedule in 5-minute rounds, and require that the drones return to recharge their batteries every 15 minutes. We run our trace-driven emulation framework with 5 drones. Additionally, we configure the Roof app to join the system after 30 minutes. High throughput, high fairness. To understand how Mobius divides the platform throughput, we show the long-term throughput for each app over 90 minutes in Fig. 15. Mobius (max-min) achieves 55% more throughput than dedicating drones and only 15% less throughput than maximizing throughput. Mobius with a proportional fairness objective similarly outperforms max throughput and dedicated vehicles in navigating the throughput-fairness tradeoff. Note that the throughputs of the Air Quality and Roof apps decay with time, after their one-time tasks are fulfilled. Because these apps have variable demand (e.g., 100 tasks for iPerf and 3 tasks for Parking), studying throughput is not sufficient. Hence, we plot the tasks completed as a fraction of demand for each app in Fig. 16. 
Notice that, under Mobius, even the most starved app (iPerf) completes nearly 34% of its tasks; by contrast, max throughput and dedicated drones deliver worst-case task completions of 30% and 13%, respectively. Even though dedicating drones guarantees equal drone time for each app, it is extremely unfair toward apps with higher demand or more spatially-distributed tasks. Impacts of sensing and travel times. Fig. 14 would suggest that the Air Quality and Roof tasks are easier to service, since their tasks are more spatially concentrated; however, their tasks take 20 seconds each (Fig. 13). The max throughput scheduler understands this tradeoff in terms of maximizing throughput, and thus prioritizes the iPerf app, since its 10-second tasks (Fig. 13) are cheap to complete. By contrast, Mobius additionally understands how to navigate this tradeoff in terms of fairness; for instance, it forgoes some iPerf tasks to complete more 20-second AQI measurements. Reliable rate-of-progress. In enforcing either proportional or max-min fairness, Mobius does not starve any app, at any instant of time. Indeed, Fig. 15 indicates that Mobius delivers a reliable rate-of-progress to the Air Quality app, gradually giving it roughly 3 tasks/min over the first 20 minutes. By contrast, the max throughput scheduler is more "bursty", and only services this app after 20 minutes. As a result, we find that, with Mobius, the root-mean-square error (RMSE) of the Gaussian Process model for the air quality drops more rapidly. Catering to transient apps. Recall that the Roof app joins the platform after 30 minutes. Fig. 15 indicates that Mobius rapidly adapts to this change in demand with a spike in throughput for the Roof app at the cost of lower throughput for the iPerf and Air Quality apps. Notice that this spike in Mobius's schedule is larger in magnitude than the one in the max throughput schedule. 
This is because Mobius realizes that, when the Roof app joins, it has no accumulated throughput, while other apps have amassed higher throughput from living in the system for longer. Fig. 17 (right) plots the routes for all 5 drones during minutes 30-35; all drones immediately flock to the Roof app. With Mobius, an operator can choose to respond to the arrival of new apps by discounting throughput accumulated in prior rounds. Fig. 17 (left) shows how Mobius can control the Roof app's rate of task fulfillment, with a discount factor of 0.1.

RELATED WORK

Shared mobility and sensing platforms. Ridesharing platforms rely on different flavors of the VRP; these systems have typically been interested in maximizing profit (i.e., throughput) [3, 12], minimizing the size of the fleet [47], and planning in an online fashion [7]. Similarly, there has been a large amount of recent work on drones-as-a-service platforms, which have primarily addressed challenges surrounding data acquisition [46], multi-tenancy and security [27], and programming interfaces [26, 37]. All of these systems use a throughput-maximizing algorithm under the hood. Mobius is motivated by the advent of customer-centric mobility platforms in a variety of domains, where guarantees on quality-of-service to customers are paramount to the viability [45] of these services [35].

Vehicle routing problem. The VRP has been extensively studied by the Operations Research community [44]. Many variants of the problem have been considered, ranging from the budget-constrained VRP [5], the capacitated VRP [23], and the VRP with time windows [19], to predictive routing under stochastic demands [8, 26]. Prior work has extended the VRP to consider multiple objectives, such as minimizing the variance in vehicle travel time or in tasks completed by each vehicle [29]. These load balancing objectives, however, do not consider customer-level fairness, which is the focus of Mobius.
Moreover, Mobius abstracts out fairness from the underlying vehicle scheduling problem, making its techniques complementary to the large body of work on the VRP and its variants.

Fair resource allocation in computer systems. Our approach to formalizing throughput and fairness in mobile task fulfillment is inspired by α-fair bandwidth allocation in computer networks [31, 38]. However, as noted in §1, mobility platforms introduce new challenges around attributing the cost of serving customers, which do not arise when addressing fairness in switch scheduling [15], congestion control [30], and multi-resource compute environments [22]. Mobius develops a novel set of techniques to address these challenges.

CONCLUSION

We developed Mobius, a scheduling system that can deliver both high throughput and fairness in shared mobility platforms. Mobius uses the insight that, when operating over rounds, scheduling on the convex boundary of feasible allocations, as opposed to the Pareto frontier, provably improves fairness over time. We showed that Mobius can handle a variety of spatial and temporal demand distributions, and that it consistently outperforms baselines that aim to maximize throughput or achieve fairness at smaller timescales. Additionally, through real-world ridesharing and aerial sensing case studies, we demonstrated that Mobius is versatile and scalable. There are several opportunities for extending the capabilities of Mobius. First, Mobius assumes that customers are not adversarial. Developing strategyproof mechanisms that incentivize truthful reporting of tasks by customers is an open problem. Second, we design Mobius to only balance customer throughputs. We believe the optimization techniques we developed (§5) can be extended to support other platform objectives, such as task latency, vehicle revenue, and driver fairness.
Finally, incorporating predictive scheduling, where the platform can strategically position vehicles in anticipation of future tasks, is an interesting direction for future work, as it can further increase platform throughput.

ACKNOWLEDGMENTS

We thank Songtao He, Favyen Bastani, Sam Madden, our shepherd, and the anonymous MobiSys reviewers for their helpful discussions and thoughtful feedback. This research was supported in part by the NSF under Graduate Research Fellowship grant #2389237. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The NASA University Leadership Initiative (grant #80NSSC20M0163) provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not any NASA entity.

A MOBIUS SCHEDULER DESIGN

A.1 Searching for the α-Fair Allocation

§6 explains how Mobius generalizes its formulation to support a class of α-fair objectives. Recall that, in each round, the target allocation is the allocation on the convex boundary with the greatest utility U_α; Mobius finds the support allocations that are closest in utility to the target allocation. In each stage of the search, Mobius uses Lagrange multipliers to identify the face containing the allocation that maximizes U_α. Specifically, for a face described by the equation Σ_{c∈C} w_c x_c = b, Mobius computes the allocation x*_1, ..., x*_{|C|} with the greatest utility, subject to the constraint that it lies on the face. The Lagrangian can be written as:

L(x, λ) = U_α(x) − λ (Σ_{c∈C} w_c x_c − b),

where x ∈ R^{|C|}. To find the utility-maximizing allocation x*, we set ∇L = 0 and solve for x*:

∇L = [ ∂U_α/∂x*_1 − λw_1, ..., ∂U_α/∂x*_c − λw_c, ..., ∂U_α/∂x*_{|C|} − λw_{|C|}, b − Σ_{c∈C} w_c x*_c ]ᵀ = 0

Substituting ∂U_α/∂x*_c = (1/x*_c)^α, we get:

x*_c = (λ w_c)^{−1/α}    (5)

To solve for λ, we substitute Equation (5) into the last element of ∇L:

λ = ( Σ_{c∈C} w_c^{(1−1/α)} )^α b^{−α}    (6)

Mobius computes x* for the |C| faces that arise from the most recent extension, and identifies the one face that contains x* within the boundaries of its vertices. Mobius then extends this face in the next stage. For example, in stage 2 in Fig. 6b, we compute x* for both candidate faces, and continue the search in stage 3 with the face that contains x* (i.e., the one that intersects the equal-throughput line).

A.2 Algorithm

Algorithm 1 provides pseudocode for Mobius's scheduling algorithm. When a mobility platform is initialized, Mobius starts executing the RunMobius() function, first initializing the long-term average throughput x̄. In each round, it computes an initial face using the |C| basis weights e_k ∀k ∈ C, where e_k is a vector of zeros with the k-th element set to 1. This gives an initial allocation on every axis of the |C|-dimensional customer throughput space. Then Mobius runs the SearchBoundary() function to compute the face containing the |C| support allocations around the target throughput. It chooses the allocation x* that maximizes the total average throughput.

...the local maximum is the global maximum, and the optimal point is on the boundary of the feasible set [6, Theorem 8.3]. □

Corollary 1. There exists exactly one candidate face at any stage of the convex boundary (among all possible faces in Alg. 1, line 14) for which OptInFace() is true.

Proof. It follows from Lemma 1 that the maximizer of U_α over the current convex boundary is unique. This means that exactly one face must contain the optimum within its face. □

Definition 1. A point p is said to be above (or below) a face described as Σ_{c∈C} w_c x_c = b if Σ_{c∈C} w_c p_c > b (or Σ_{c∈C} w_c p_c < b).

Lemma 2. Let f be a face among all candidate faces F during a given stage. Any allocation resulting from an initial call to SearchBoundary(f) will never lie above any face in F \ {f}.

Proof.
We prove this by contradiction. Suppose an allocation p were above two faces f and f̃. Then there would exist at least one support allocation x of face f̃ that could not have been found from a previous call to ExtendFace(), since p would have been found instead of x. For example, in Fig. 6a, any point above the extensible regions of two adjacent faces would contradict the existence of their shared support allocation. Thus, p cannot lie above more than one face. This argument holds for any subsequent calls to SearchBoundary(), so any subsequent allocation obtained from extending a face has to be below all other faces. □

Lemma 3. Let p, contained in face f, be the maximizer of U_α. The utility of any allocation in the extensible region of a face g ≠ f will be lower than the utility at allocation p.

Proof. Since U_α is concave and is maximized at point p on f, the value of U_α will only increase if evaluated at a point above f. From Lemma 2, we know that every extension of a face other than f will be below f, and thus have a lower utility than the current value evaluated at p. □

Theorem 1. In each round, SearchBoundary() returns the face on the convex boundary that contains the allocation that maximizes U_α.

Proof. In every round, SearchBoundary() iteratively extends the face that contains the maximizer of U_α over all faces. Lemma 3 guarantees that any subsequent exploration of a face required by SearchBoundary() will only increase U_α, while pruning out the search spaces which cannot improve the solution. When maximizing a concave objective function over a convex set, the local optimum is the global optimum. Therefore, the solution returned when SearchBoundary() terminates is the maximum of U_α over the convex boundary. □

B.2 Mobius Converges to the Target Throughput

In our problem setting, customer tasks are only presented to Mobius for the current round, and no knowledge of future task locations is assumed.^14

^14 A greedy approach (discussed in App. C) would be a regret-free online algorithm for this planning problem.

In this section, we show an interesting result: under the static task arrival model (§5.3), Mobius, although myopic, results in allocations that are globally optimal. In other words, asymptotically, Mobius achieves the same average throughput allocation that a
In other words, asymptotically, Mobius achieves the same average throughput allocation that a 14 A greedy approach (discussed in App. C) would be a regret-free online algorithm for this planning problem. utility-maximizing oracle, which jointly planned over multiple rounds, would achieve. Definition 2. A task distribution is said to be static if the set of throughput allocations (denoted as ) is the same for all rounds. Definition 3. The maximum of over the convex boundary of is defined as the optimal long term throughput, and is denoted as x * . The set of all feasible average throughput allocations after rounds is denoted as . We prove the following: (i) in every round, Mobius chooses the solution which maximizes (x( )) on the convex boundary of , and (ii) x( ) converges to x * with an error that decreases as 1/ . For brevity, we consider the case with two customers (| | = 2). However, the intuition and results generalize for any number of customers. The proof outline is as follows 1. Lemma 4 and 5 characterize the evolution of the convex boundary 2. Lemma 6 proves that x * lies on this boundary. 3. Lemma 7 shows that Mobius is optimal at the end of any finite round . 4. Finally, we prove asymptotic optimality and describe the rate of convergence in Theorem 2. Lemma 4. For any round , the convex boundary of remains constant. Proof. The allocations in are obtained by averaging the throughput obtained over rounds of (see Fig. 4c). Since is convex, any average of allocations in will remain the same boundary. Thus, the convex boundary of and are the same. □ Proof. Consider one face in the convex boundary of . From Lemma 4, we know that the convex boundary of the average throughput at any subsequent round will remain the same (see Fig. 4c). However, with subsequent rounds, linear combinations of two corner points that constitute a face will create new allocations that lie along the same face. For example, in Fig. 
7b, the face gets "denser" with more allocations as the number of rounds increases. Remember that in every round, Mobius can only choose between the face's corner allocations, a or b, to modify the average. In particular, by round n, i allocations of a and j allocations of b result in an average allocation x_{i,j} = (i·a + j·b)/n, where i and j are non-negative integers with i + j = n. Since there are n − 1 combinations of i and j that sum to n (excluding the corners), we get n − 1 equidistant allocations on the face. □

Lemma 6. x* lies on a face in the convex boundary of A_n.

Proof. The long-term maximizer of U searches over the convex boundary of A. Since U is a concave function and the search space is a convex set, x* must lie on a face of the convex boundary (Lemma 1). □

Lemma 7. For any round n, Mobius finds the utility-maximal allocation on the convex boundary of A_n.

Proof. We prove this by induction on the number of rounds. Theorem 1 proves the base case, n = 1. Suppose Mobius finds the highest-utility solution on the convex boundary of A_n. We want to show that the schedule that Mobius computes (Algorithm 1) has the highest utility on the convex boundary of A_{n+1}. We use the example in Fig. 7b to build this argument. Suppose x_{i,j} has the highest utility after n = i + j rounds. Without loss of generality, we assume that the optimal allocation x* lies between x_{i,j} and x_{i+1,j−1} (rather than between x_{i,j} and x_{i−1,j+1}). We now argue that Mobius chooses a or b appropriately at round n + 1 to ensure that the average throughput is optimal at the end of round n + 1. Figure 18 illustrates the setup for this proof. Recall that U is a concave function; thus, the utility evaluated along the face is also concave. This implies that (i) the utility at x_{i,j} is higher than at any allocation beyond x_{i,j} on the side away from x*, and (ii) the utility at x_{i+1,j−1} is higher than at any allocation beyond x_{i+1,j−1} on its side away from x*. Also, since x_{i,j} was the optimal solution at round n (by the inductive hypothesis), the utility at x_{i,j} is higher than the utility at x_{i+1,j−1}. Thus, the utility of any allocation in [x_{i+1,j−1}, x_{i,j}] is lower-bounded by the utility at x_{i+1,j−1}.
The above relations prove that, in round n + 1, x_{i+2,j−1} can never have the highest utility. Thus, the optimal throughput on the convex boundary of A_{n+1} is either x_{i+1,j} or x_{i,j+1}. This results in two cases:

• x_{i+1,j} is optimal, in which case Mobius chooses allocation a for round n + 1 to reach the optimum.
• x_{i,j+1} is optimal, in which case Mobius chooses allocation b for round n + 1 to reach the optimum.

Note that by eliminating x_{i+2,j−1} as an optimal allocation, each of the two remaining candidates can be reached by appropriately choosing a or b in round n + 1. This concludes the induction argument and proves the lemma. □

Note that Lemma 7 proves that Mobius not only reaches the best allocation at the end of round n, but also achieves the best allocation at every round preceding n.

Theorem 2. Mobius (i.e., Alg. 1) converges to x* such that the distance between x and x* decreases as O(1/n), where n is the number of rounds.

Proof. From Lemma 5, we know that at any round n there are n − 1 feasible cumulative allocations (excluding the extreme points) on a face, and that these allocations split the face into equally spaced segments of length proportional to 1/n. Thus, the best allocation in A_n converges to the optimal x* with an error that is bounded by O(1/n). Since Lemma 7 establishes that Mobius chooses the optimal allocation in A_n, the result follows. □

C GREEDY HEURISTIC TO MAXIMIZE U

§5.4 describes how Mobius builds a suite of warm-start schedules to assist the VRP solver in maximizing a weighted sum of customer throughputs. Since Mobius is guided by a utility function (§6), we implement a heuristic that computes an α-fair schedule by performing a greedy maximization of U. Note that this algorithm is not guaranteed to produce a schedule on the convex boundary; we instead use it as an initial schedule to warm-start the VRP solver (§5.4). The greedy heuristic uses the same formulation as the VRP (§2). It computes routes for each vehicle v ∈ V subject to the budget constraints.
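For concreteness, the utility U that the heuristic maximizes can be taken to be the standard α-fair utility family. This is a minimal sketch under that assumption (the paper's exact U is defined in §6, and the function name here is chosen for illustration):

```python
import math

def alpha_fair_utility(x, alpha):
    """Standard alpha-fair utility over per-customer throughputs x.

    alpha = 0 reduces to total throughput, alpha = 1 to proportional
    fairness (log utility), and alpha -> infinity approaches max-min."""
    if alpha == 1:
        return sum(math.log(v) for v in x)
    return sum(v ** (1 - alpha) / (1 - alpha) for v in x)
```

With a large α, the utility strongly prefers balanced allocations: for example, alpha_fair_utility([2, 2], 10) exceeds alpha_fair_utility([3, 1], 10), even though both allocations have the same total throughput.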
Mobius constructs a schedule iteratively; in each iteration, it executes two steps: (1) it constructs an α-fair path for each vehicle that meets the budget constraints, greedily maximizing U; (2) it then invokes a VRP solver with the tasks selected by the paths in (1) to build a high-throughput schedule that completes the fair allocation of tasks. At the end of each iteration, it takes the VRP schedule generated by (2) and tries to squeeze more tasks into the path while preserving fairness. It terminates when no new task can be added according to the greedy optimization in step (1). It then runs the VRP one final time, with a very high weight on the final set of α-fair tasks and a lower weight on all other customer tasks, so that it can pack the schedule with more tasks and achieve high total throughput. Intuitively, the iterations over steps (1) and (2) create the α-fair schedule with the highest possible throughput, according to the greedy approximation. Then, with the final packing step, we try to boost the throughput of the schedule by fulfilling any additional tasks, without compromising the α-fair allocation we have already committed to.

Before constructing a path iteratively, we internally maintain, for each customer c, the total number of tasks h_c currently fulfilled by the path. To greedily maximize U in each iteration of constructing the path, we sort all customer tasks according to the return-on-investment of completing each task. Recall that A_c is the set of tasks requested by customer c and T is the time budget for a round (§2). The new throughput for customer c after fulfilling a task τ ∈ A_c is x_c = (h_c + 1)/T. We compute the return-on-investment of a task τ ∈ A_c as:

    ROI(τ) = (U(x) − U(h)) / c(i, τ)        (7)

where i is the last task in the path and c(·, ·) is the cost to travel from i to τ.
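The return-on-investment rule in Equation (7) can be sketched as follows; the function names, the candidate-list representation, and the plugged-in utility are hypothetical stand-ins for the quantities defined above:

```python
def roi(utility, h, cust, travel_cost, T):
    """Utility gain per unit travel cost (in the spirit of Eq. 7) of
    fulfilling one more task for customer cust, given the current
    per-customer task counts h and the round time budget T."""
    before = utility([count / T for count in h])
    h_new = list(h)
    h_new[cust] += 1
    after = utility([count / T for count in h_new])
    return (after - before) / travel_cost

def pick_next(utility, h, candidates, T):
    """Greedy step: among candidate tasks, each a (customer index,
    travel cost from the path's last task) pair, pick the highest ROI."""
    return max(candidates, key=lambda c: roi(utility, h, c[0], c[1], T))
```

With a concave utility and equal travel costs, the customer with fewer fulfilled tasks wins the comparison, which is what steers the heuristic toward α-fair paths.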
Then, to select the next task to add to the path, we simply pick the task with the greatest return-on-investment:

    argmax_{τ ∈ A_c, ∀c ∈ C} ROI(τ)        (8)

D RUNTIME OF MOBIUS

In each round, Mobius uses an efficient algorithm to find support allocations near the target allocation, without having to compute all corner points of the convex hull in each round. This allows Mobius to invoke the VRP solver sparingly in order to find an allocation of rates that steers the long-term rates toward the target. We report some highlights from profiling Mobius in Table 1. These results suggest that the computational overhead of deploying Mobius would be negligible for many mobility applications.

    # Cust.   # of Tasks   # of Vehicles   Round Duration (min)   Runtime (s)
    2         100          2               10                     15
    3         150          3               15                     20
    4         200          4               15                     35
    6         567          6               90                     51
    6         999          6               90                     59
    6         567          24              90                     88
    6         999          24              90                     105

E MICROBENCHMARKS

We evaluate Mobius on several microbenchmarks involving synthetic customer traces. We vary (1) the spatial distribution of customer tasks, (2) the timescale at which tasks are requested, and (3) the degree to which a schedule is fair. Customers submit at most 40 tasks, each taking 10 seconds to fulfill, in any round. Between rounds, they renew any fulfilled tasks at the same location. The travel time between any two nodes is based on their Euclidean distance, assuming a constant travel speed of 10 m/s.

E.1 Robustness to Spatial Demand

We evaluate Mobius's ability to deliver a fair allocation of customer throughputs in the presence of highly diverse spatial demand. We construct 4 very different maps (Fig. 19a), with 2 vehicles starting at ⊕. For this experiment, we assume that customer tasks arrive according to the static task arrival model (§5.3): the vehicles make a round trip in each round, and customers renew any fulfilled tasks at the start of every round. Fig. 19b shows the average per-customer and total throughputs achieved by different schemes after 50 rounds.
For all maps, Mobius (with max-min fairness) indeed provides a fair allocation of average customer throughputs. The other baseline schedules exhibit variable performance depending on the task distribution. For example, in Map A, both max throughput and dedicating vehicles achieve dismal fairness. The max throughput schedule only serves customer 1's cluster, and dedicating a vehicle to customer 2 cannot deliver a fair share of throughput, given the round-trip budget constraints. When customers' tasks overlap and have similar spatial density (Map D), the max throughput schedule provides a roughly fair allocation of rates, and the round-robin schedule achieves reasonably high throughput. Dedicating vehicles suffers from poor throughput when there is an incentive to pool tasks from multiple customers into a single vehicle (Maps B and C).

Focusing on Map A. To illustrate how Mobius converges to fair per-customer allocations without significantly degrading platform throughput, we show in Fig. 20 the schedules computed for Map A (Fig. 19a). The top row shows the max throughput, round-robin, and dedicated schedules; the bottom row shows the schedules computed by Mobius over 3 consecutive rounds. In round 1, Mobius exploits a sharing incentive to pool some of customer 1's tasks into the journey to customer 2's tasks, achieving a throughput similar to the max throughput schedule. By contrast, the max throughput schedule starves customer 2, and the round-robin schedule wastes time moving between clusters. Mobius is able to compensate for short-term unfairness across multiple rounds of scheduling. Fig. 20 shows the first 3 schedules that Mobius computed for max-min fairness, assuming fulfilled tasks reappear, as before. Although Mobius does not starve customer 2 in round 1, it still delivers 4× higher throughput to customer 1.
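Mobius compensates for such short-term unfairness by alternating between support allocations across rounds (App. B.2). The following simulation is an illustrative sketch, not Mobius's implementation: each round greedily picks whichever of two support allocations, a or b, moves the running average closest to a target on the face between them, and the error shrinks like 1/n:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def steer(a, b, target, rounds):
    """Alternate between support allocations a and b so that the
    running average approaches target; return the per-round errors."""
    total = (0.0, 0.0)
    errors = []
    for n in range(1, rounds + 1):
        options = []
        for p in (a, b):
            new_total = (total[0] + p[0], total[1] + p[1])
            avg = (new_total[0] / n, new_total[1] / n)
            options.append((dist(avg, target), new_total))
        # Greedily keep whichever choice leaves the average closest.
        err, total = min(options)
        errors.append(err)
    return errors
```

When the target lies on the segment between a and b, the error after n rounds is bounded by |a − b|/n, matching the O(1/n) rate of Theorem 2.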
However, as we see in round 2, Mobius compensates for this unfairness by prioritizing customer 2, while still exploiting the sharing incentive and collecting a few tasks for customer 1 during the round trip. The schedule in round 3 is identical to that in round 1; since tasks arrive according to a static model in this example, the convex hull remains the same across rounds, and so Mobius oscillates between the same two support allocations (schedules). The max throughput and dedicated schedules suffer from a persistent bias in throughput (Fig. 19b) due to the skew in spatial demand.

E.2 Expressive Schedules with α

Mobius's α parameter allows the platform operator to control fairness; with higher α, the platform trades off some total throughput for a fairer allocation of per-customer rates. Fig. 19c shows, for three different values of α, the long-term throughput for each customer and the platform throughput over time. For all maps (Fig. 19a), as we increase the degree of fairness α, Mobius sacrifices some platform throughput. Mobius's scheduler is expressive; for instance, if an operator would like high throughput with the only constraint that no customer is starved, she can run Mobius with α = 1, which ensures a proportionally fair allocation of throughputs. Map A shows an example of this. Furthermore, Mobius indeed converges to the target throughput (§5.3); this is best illustrated by the max-min schedules, where the customer throughputs converge to the same rate. Mobius can also converge to any allocation of rates in the spectrum between maximum throughput and max-min fairness (e.g., α = 10).

E.3 Timescale of Fairness

The duration of a round in Mobius is a parameter; for instance, an operator could set the round length to the vehicles' fuel time, or to the desired timescale of fairness. We expect that, with shorter durations, Mobius converges faster to a fair allocation. To study this behavior, we consider, in Fig.
21, the max-min fair schedules generated by Mobius on Map A. We consider three round durations (5 min, 7.5 min, and 15 min), requiring the vehicles to return home every 15 minutes, as before. There are three interesting takeaways. First, for all round durations, Mobius provides an equal allocation of rates to both customers. Second, Mobius achieves lower platform throughput for shorter round durations; this is because schedules computed at shorter timescales are more myopic. Third, shorter round durations allow Mobius to converge faster to an equal allocation of rates. In particular, Mobius at a 5-min timescale achieves the fair allocation within 150 minutes, but at a 15-min timescale, it takes nearly 400 minutes to converge. Thus, the timescale of fairness allows an operator to trade some total platform throughput for faster convergence. Additionally, the schedules generated with 5-min and 7.5-min timescales do not observe the static task arrival assumption (§5.3): since the vehicles begin some rounds away from their start locations, the convex hull changes across rounds. Still, Mobius is robust in this setting and provides very similar rates to both customers.

E.4 Geometry of the Convex Boundary

Mobius finds an approximately fair schedule in each round by constructing corner points of the convex boundary. The convex boundary of achievable throughputs succinctly captures the tradeoffs between servicing different customers, based on their spatial demand. Fig. 22 shows the convex boundaries for the four maps of tasks shown in Fig. 19a. We construct each convex boundary using an extended version of Mobius's boundary search. Specifically, instead of searching only the face containing the utility optimum on the current convex boundary, we search all faces, i.e., we extend the convex boundary in all directions. The terminating conditions remain the same: we know we have reached a face on the convex boundary when we cannot extend it further.
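For intuition, the convex boundary of a finite set of candidate throughput allocations can also be computed directly. This monotone-chain sketch is an illustrative assumption, not the VRP-driven face extension that Mobius actually uses:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_boundary(points):
    """Upper (Pareto-facing) convex boundary of 2-D throughput
    allocations, via Andrew's monotone chain scan."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # Pop allocations that fall below the new candidate face.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull
```

For example, convex_boundary([(0, 3), (2, 1), (3, 1), (4, 2), (6, 0)]) keeps only the support allocations (0, 3), (4, 2), and (6, 0); the interior allocations lie below the faces and are pruned.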
The geometry of the convex boundary indicates which customers the platform can service more easily. For instance, notice that the boundaries for Map A and Map C are both skewed toward customer 1, since the platform incurs less overhead to service customer 1. In contrast, the convex boundary for Map D is symmetric, since each customer is equally easy to service.

E.5 Varying Number of Vehicles

The results in §7 show that dedicating vehicles can (i) miss out on the sharing incentive, leading to lower platform throughput, and (ii) lead to unfairness in situations where it is inherently harder to service some customers. Moreover, dedicating vehicles is only a viable policy when the number of vehicles is a multiple of the number of customers. Mobius makes no assumptions about the number of vehicles in the mobility platform; in this section, we study the performance of Mobius with different numbers of vehicles. Fig. 23 shows the per-customer average throughputs achieved by Mobius (for max-min fairness) on Map B (Fig. 19a) for different numbers of vehicles. In all cases, Mobius converges to a max-min fair allocation of rates. As expected, the throughput of the platform increases with more vehicles, since the platform can complete more tasks in parallel.

E.6 A Case with Three Customers

§E showed a controlled study of the properties of Mobius in environments with two customers. Fig. 24 shows an example with three customers and three vehicles, all starting at ⊕. We let customers renew fulfilled tasks after every round. We consider a fairness timescale of 5 minutes, and require that the vehicles return home every 15 minutes (i.e., every 3 rounds); so we relax the assumption of static task arrival, i.e., the convex boundary is identical only every 3 rounds. Fig. 24 shows a time series of the per-customer long-term throughputs achieved by Mobius (for α = 10 and α = 100) and by dedicating vehicles.
The schedule that dedicates vehicles to customers misses the opportunity to fulfill tasks for customer 1 on the way to customer 3's cluster. Additionally, notice that Mobius provides a fair allocation of rates for all 3 customers, and that α allows Mobius to control the degree to which the rates converge to the same value.

Fig. 6a visualizes w in a 2-D customer throughput space. A solver optimizing for Equation (1) searches for the schedule with the highest throughput in the direction of w [10], thus requiring the schedule to lie on the convex boundary. For example, w_1 = (1, 0) finds the schedule on the convex boundary that maximizes customer 1's throughput.

Figure 6: Using a blackbox VRP solver as a building block, Mobius runs an iterative search algorithm to find the support allocations.

Figure 7: Mobius (a) finds the support allocations nearest the target allocation in each round, and (b) converges to the target allocation.

Figure 8: Mobius can tune its allocation to deliver proportional fairness (α = 1) and max-min fairness (approximated with α = 100).

Footnote 10: App. A.1 shows how to find the face containing the target throughput.

Figure 9: Ridesharing demand. We use a 13-hour trace of 16,817 timestamped Lyft ride requests, published by the New York City Taxi and Limousine Commission. Maps of zones (customers) and demand in Manhattan indicate skews in both spatial coverage and volume of ride requests.

Figure 10: Long-term throughputs for zones in Manhattan after 13 hours. A good scheduler should have a stacked plot with large evenly-sized blocks, and a map with bright (high throughput) and homogeneous (fair) colors across zones.

Figure 11: Time series of long-term throughputs for two zones for different replanning horizons. Frequent replanning ensures fairness (equal throughputs) at shorter timescales.

Figure 12: Distributions of rider wait times for two zones.
Even though Mobius compromises some throughput for fairness, it delivers similar wait times as the max throughput scheduler.

Figure 14: Map of tasks for 5 aerial sensing apps, spanning a 1 square mile area in Cambridge, MA. Mobius replans every 5 minutes in order to incorporate new requests. Each drone returns to recharge every 15 minutes.

Figure 15: Long-term throughputs achieved over 90 minutes. Mobius achieves high throughput and best shares it amongst the apps.

Figure 16: Percentage of tasks completed per app. Mobius fulfills nearly all requests for the Traffic and Parking apps, before allocating "excess" vehicle time to the more backlogged apps.

Figure 17: Discounting long-term throughput allows Mobius to gradually respond to the sudden presence of the transient Roof app, instead of dedicating all drones to it.

Lemma 5. At round n, each face of the convex boundary contains n − 1 equidistant allocations.

Figure 18: Proof setup for Lemma 7. The face is the same as in Fig. 7.

Figure 19: Comparing customer throughputs and platform throughput achieved by Mobius and other schemes. Customer tasks stream in according to a static task arrival model. Mobius consistently outperforms other schemes by striking a balance between throughput and fairness.
The case studies ( §7.2- §7.3) provide more realistic examples where the static task arrival assumption is relaxed. Convex boundaries computed by Mobius for the different maps shown in Figure 23 :Figure 24 : 2324Long-term per-customer rates computed by Mobius on Map D inFig. 19a, for different provisioning of vehicles. Mobius vs. dedicating vehicles for example with 3 customers. Mobius converges to a fair allocation of rates for customers, when the assumption on static task arrival is relaxed. 2 Cust. 1 28 tasks 4 tasks 15 tasks 21 tasks Cust. 2 13 tasks 4 tasks 13 tasks 19 tasks Total 41 tasks 8 tasks 28 tasks 40 tasks Mobius Max Throughput Cust. 1 28 tasks 4 tasks 15 tasks 21 tasks Cust. 2 13 tasks 4 tasks 13 tasks 19 tasks Total 41 tasks 8 tasks 28 tasks 40 tasks Figure 3: In each round, Mobius uses a VRP solver to compute a schedule that maximizes a weighted sum of throughputs, and automatically adjusts the weights across rounds to improve fairness.Vehicles Mobius + Historical Throughput Weight Search & Update submit tasks & constraints receive feedback & update execute schedule completed tasks candidate schedules vehicle health modular input Cust. 1 Cust. 2 Cust. 3 VRP Solver . ThereStart Cust. 1 Cust. 2 (a) Map. Two vehicles start at ⊕. A C B Target Allocation D E (b) Feasible throughputs in 1 round. Round 3 Round 4 Round 1 Round 2 0 2 4 6 8 10 0 2 4 6 8 10 0 2 4 6 0 2 4 6 Cust. 1 Tput (tasks/round) Cust. 2 Tput (tasks/round) (c) Feasible throughputs over 4 rounds. 0 10 20 30 40 0 10 20 30 40 50 Cust 1 Tput (tasks/round) Cust 2 Tput (tasks/round) Avg Target Tput Mobius (d) Convex boundary dynamics. Manhattan and demonstrate that it scales to large online problems. In §7.3, we deploy Mobius on a shared aerial sensing system, involving multiple apps with diverse spatiotemporal preferences. Our evaluation focuses on answering the following questions:• How does Mobius compare to traditional approaches in online scheduling for large-scale mobility problems? 
• How robust is Mobius in the presence of dynamic spatiotemporal demand from customers? • How can we tune Mobius's timescale of fairness?• What other benefits does Mobius provide to customers, beyond optimizing per-customer throughputs?We evaluate Mobius using trace-driven emulation ( §7.1) in two real-world mobility platforms. In §7.2, we apply Mobius to Lyft ridesharing in Table 1 : 1Performance of Mobius on different input sizes. The method we develop also applies to weighted fairness. arXiv:2105.11999v1 [cs.CY] 25 May 2021 Throughput-Fairness Tradeoffs in Mobility Platforms Balasingam, Gopalakrishnan, Mittal, et al. We formally define the VRP in §5. The VRP is NP-hard ( §5), but because the input size is small for this example, we use Gurobi[25] to compute optimal schedules. §7 further evaluates the effectiveness of Mobius's algorithm for dynamic, real-world customer demand. x and w vary with each round . We drop the round index whenever there is no ambiguity about the current round. See App. B.2 for a formal proof. 7 github.com/mobius-scheduler/mobius Dedicating vehicles is most suitable when the number of vehicles is a multiple of the number of customers. Algorithm 1 Mobius Scheduler1:procedure RunMobius( )2:Initialize history x ∈ R | | , with x = 03:for each round do4:i ← InitFace() ⊲ Use basis weights e k .5:face ← SearchBoundary( , i)6:x * ← argmax x∈face (x+x)7:Execute schedule with throughput x * .8:x ← x+x * 9: function SearchBoundary( , face)10:p ext ← ExtendFace(face) ⊲ Compute w for face; call VRP.11:if p ext exists then12:for p face ∈ face do 13:if OptInFace( , candidate) then15:return SearchBoundary( , candidate)16:return face 18: function OptInFace( , face)19:opt ← ComputeOpt( , face)⊲ Equation(5)20:if opt lies within face thenB.1 Mobius is Optimal in a RoundWe denote as the set of all possible allocations in a round. We show that for every round, SearchBoundary() (Alg. 
1, line 5) returns a face on the convex boundary of A that maximizes the utility function U. The structure of the proof is as follows:

1. Lemma 1 and Corollary 1 establish that the maximizer of U on A is unique.
2. Lemma 2 shows that any point in the extensible region of a face cannot lie above other faces.
3. Lemma 3 notes that extending faces that do not contain the utility-maximizing allocation results in lower utility.
4. Finally, we piece together these ideas in Theorem 1 in order to prove the optimality of SearchBoundary() over one round.

Lemma 1. For any convex polyhedron P ⊆ R^+, there is a unique point that maximizes U for α ∈ (0, ∞], and the maximum lies on the boundary of P.

Proof. U is strictly concave for any finite α. When maximizing a concave function over a convex set, the optimum point is unique, and since U is increasing in every coordinate, the maximum lies on the boundary of P. □

REFERENCES

[1] Adafruit. PM2.5 air quality sensor. https://learn.adafruit.com/pm25-air-quality-sensor.
[2] R. S. Allison, J. M. Johnston, G. Craig, and S. Jennings. Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors, 16(8):1310, 2016.
[3] J. Alonso-Mora, S. Samaranayake, A. Wallar, E. Frazzoli, and D. Rus. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment. Proc. Natl. Acad. Sci. USA, 114(3):462-467, 2017.
[4] A. Amarasinghe, C. Suduwella, C. Elvitigala, L. Niroshan, R. J. Amaraweera, K. Gunawardana, P. Kumarasinghe, K. D. Zoysa, and C. Keppetiyagama. A machine learning approach for identifying mosquito breeding sites via drone images. In M. R. Eskicioglu, editor, Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems (SenSys 2017), Delft, Netherlands, pages 68:1-68:2. ACM, 2017.
[5] E. Balas. The prize collecting traveling salesman problem. Networks, 19(6):621-636, 1989.
[6] A. Beck. Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB, chapter 8: Convex Optimization, pages 147-168. SIAM, 2014.
[7] D. Bertsimas, P. Jaillet, and S. Martin. Online vehicle routing: The edge of optimization in large-scale applications. Oper. Res., 67(1):143-162, 2019.
[8] D. J. Bertsimas. A vehicle routing problem with stochastic demand. Oper. Res., 40(3):574-585, 1992.
[9] S. Boyd and L. Vandenberghe. Convex Optimization, chapter Convex Sets, pages 21-66. Cambridge University Press, 2004.
[10] S. Boyd and L. Vandenberghe. Convex Optimization, chapter Convex Optimization Problems, pages 146-148. Cambridge University Press, 2004.
[11] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.
[12] A. Braverman, J. G. Dai, X. Liu, and L. Ying. Empty-car routing in ridesharing systems. Oper. Res., 67(5):1437-1452, 2019.
[13] NYC Taxi & Limousine Commission. Taxi & Limousine Commission homepage.
[14] NYC Taxi & Limousine Commission. TLC trip record data. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page.
[15] A. J. Demers, S. Keshav, and S. Shenker. Analysis and simulation of a fair queueing algorithm. In L. H. Landweber, editor, SIGCOMM '89, Austin, TX, USA, pages 1-12. ACM, 1989.
[16] A. Dhekne, A. Chakraborty, K. Sundaresan, and S. Rangarajan. TrackIO: Tracking first responders inside-out. In J. R. Lorch and M. Yu, editors, NSDI 2019, Boston, MA, pages 751-764. USENIX Association, 2019.
[17] DJI. DJI official website. https://www.dji.com/.
[18] DJI. Flame Wheel ARF kit: Multirotor flying platform for entertaining and amateur AP. https://www.dji.com/flame-wheel-arf.
[19] Y. Dumas, J. Desrosiers, and F. Soumis. The pickup and delivery problem with time windows. European Journal of Operational Research, 54(1):7-22, 1991.
[20] J. C. L. Fargeas, P. T. Kabamba, and A. R. Girard. Cooperative surveillance and pursuit using unmanned aerial vehicles and unattended ground sensors. Sensors, 15(1):1365-1388, 2015.
[21] FlytBase. FlytOS: Operating system for drones. https://flytbase.com/flytos/.
[22] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, and I. Stoica. Dominant resource fairness: Fair allocation of multiple resource types. In D. G. Andersen and S. Ratnasamy, editors, NSDI 2011, Boston, MA, USA. USENIX Association, 2011.
[23] B. Golden, S. Raghavan, and E. Wasil. The Vehicle Routing Problem: Latest Advances and New Challenges. Operations Research/Computer Science Interfaces Series. Springer US, 2008.
[24] Google, Inc. Google Maps Platform: Distance Matrix API.
[25] Gurobi Optimization, LLC. Gurobi optimizer reference manual. http://www.gurobi.com, 2020.
[26] S. He, F. Bastani, A. Balasingam, K. Gopalakrishnan, Z. Jiang, M. Alizadeh, H. Balakrishnan, M. J. Cafarella, T. Kraska, and S. Madden. BeeCluster: Drone orchestration via predictive optimization. In MobiSys '20, Toronto, Ontario, Canada, pages 299-311. ACM, 2020.
[27] A. V. Hof and J. Nieh. AnDrone: Virtual drone computing in the cloud. In Proceedings of the Fourteenth EuroSys Conference, Dresden, Germany, pages 6:1-6:16. ACM, 2019.
[28] IBM. IBM CPLEX Optimizer. https://www.ibm.com/analytics/cplex-optimizer, 2021.
[29] N. Jozefowiez, F. Semet, and E. Talbi. Multi-objective vehicle routing problems. Eur. J. Oper. Res., 189(2):293-309, 2008.
[30] F. Kelly. Fairness and stability of end-to-end congestion control. Eur. J. Control, 9(2-3):159-176, 2003.
[31] F. P. Kelly, A. K. Maulloo, and D. K. H. Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. J. Oper. Res. Soc., 49(3):237-252, 1998.
[32] S. Mahmoud, N. Mohamed, and J. Al-Jaroodi. Integrating UAVs into the cloud using the concept of the web of things. J. Robotics, 2015:631420:1-631420:10, 2015.
[33] W. Mao, Z. Zhang, L. Qiu, J. He, Y. Cui, and S. Yun. Indoor follow me drone. In MobiSys '17, Niagara Falls, NY, USA, pages 345-358. ACM, 2017.
[34] V. Mersheeva and G. Friedrich. Multi-UAV monitoring with priorities and limited energy resources. In ICAPS 2015, Jerusalem, Israel, pages 347-356. AAAI Press, 2015.
[35] S. Middleton. Discrimination, Regulation, and Design in Ridehailing. Master's thesis, Massachusetts Institute of Technology, May 2018.
[36] J. C. Molina, I. Eguia, J. Racero, and F. Guerrero. Multi-objective vehicle routing problem with cost and emission functions. Procedia - Social and Behavioral Sciences, 160:254-263, 2014. XI Congreso de Ingenieria del Transporte (CIT 2014).
[37] L. Mottola, M. Moretta, K. Whitehouse, and C. Ghezzi. Team-level programming of drone sensor networks. In SenSys '14, Memphis, Tennessee, USA, pages 177-190. ACM, 2014.
[38] K. Nagaraj, D. Bharadia, H. Mao, S. Chinchali, M. Alizadeh, and S. Katti. NUMFabric: Fast and flexible bandwidth allocation in datacenters. In Proceedings of ACM SIGCOMM 2016, Florianopolis, Brazil, pages 188-201. ACM, 2016.
[39] L. Perron and V. Furnon. OR-Tools. https://developers.google.com/optimization/routing/vrp.
[40] ASTRO: Autonomous, sensing, and tetherless networked drones.
R Petrolo, Y Lin, E W Knightly, Proceedings of the 4th ACM Workshop on Micro Aerial Vehicle Networks, Systems, and Applications. the 4th ACM Workshop on Micro Aerial Vehicle Networks, Systems, and ApplicationsDroNet@MobiSys; Munich, GermanyACMR. Petrolo, Y. Lin, and E. W. Knightly. ASTRO: autonomous, sensing, and tetherless networked drones. In Proceedings of the 4th ACM Workshop on Micro Aerial Vehicle Networks, Systems, and Applications, DroNet@MobiSys 2018, Munich, Germany, June 10-15, 2018, pages 1-6. ACM, 2018. Gaussian processes in machine learning. C E Rasmussen, Advanced Lectures on Machine Learning, ML Summer Schools. O. Bousquet, U. von Luxburg, and G. RätschCanberra, Australia; Tübingen, GermanySpringer3176Revised LecturesC. E. Rasmussen. Gaussian processes in machine learning. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures on Machine Learning, ML Summer Schools 2003, Canberra, Australia, February 2-14, 2003, Tübingen, Germany, August 4-16, 2003, Revised Lectures, volume 3176 of Lecture Notes in Computer Science, pages 63-71. Springer, 2003. Darknet: Open source neural networks in c. J Redmon, J. Redmon. Darknet: Open source neural networks in c. http : //pjreddie.com/darknet/, 2013-2016. Parallel iterative search methods for vehicle routing problems. É D Taillard, Networks. 238É. D. Taillard. Parallel iterative search methods for vehicle routing problems. Networks, 23(8):661-673, 1993. P Toth, D Vigo, The Vehicle Routing Problem. 9P. Toth and D. Vigo, editors. The Vehicle Routing Problem, volume 9 of SIAM monographs on discrete mathematics and applications. SIAM, 2002. What is destination discrimination? https. Uber TechnologiesUber Technologies. What is destination discrimination? https : Farmbeats: An iot platform for data-driven agriculture. D Vasisht, Z Kapetanovic, J Won, X Jin, R Chandra, S N Sinha, A Kapoor, M Sudarshan, S Stratman, 14th USENIX Symposium on Networked Systems Design and Implementation. A. Akella and J. 
HowellBoston, MA, USAUSENIX AssociationD. Vasisht, Z. Kapetanovic, J. Won, X. Jin, R. Chandra, S. N. Sinha, A. Kapoor, M. Sudarshan, and S. Stratman. Farmbeats: An iot platform for data-driven agriculture. In A. Akella and J. Howell, editors, 14th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2017, Boston, MA, USA, March 27-29, 2017, pages 515-529. USENIX Association, 2017. Addressing the minimum fleet problem in on-demand urban mobility. M M Vazifeh, P Santi, G Resta, S H Strogatz, C Ratti, Nat. 5577706M. M. Vazifeh, P. Santi, G. Resta, S. H. Strogatz, and C. Ratti. Addressing the min- imum fleet problem in on-demand urban mobility. Nat., 557(7706):534-538, 2018. UAV as a service: A network simulation environment to identify performance and security issues for commercial uavs in a coordinated, cooperative environment. J Yapp, R Seker, R F Babiceanu, Modelling and Simulation for Autonomous Systems -Third International Workshop. J. HodickýRome, Italy9991MESASJ. Yapp, R. Seker, and R. F. Babiceanu. UAV as a service: A network simulation environment to identify performance and security issues for commercial uavs in a coordinated, cooperative environment. In J. Hodický, editor, Modelling and Simulation for Autonomous Systems -Third International Workshop, MESAS 2016, Rome, Italy, June 15-16, 2016, Revised Selected Papers, volume 9991 of Lecture Notes in Computer Science, pages 347-355, 2016. Did uber just enable discrimination by destination?. D Zipper, D. Zipper. Did uber just enable discrimination by destination? https://www.bloomberg.com/news/articles/2019-12-11/the-discrimination- risk-in-uber-s-new-driver-rule, 2019.
Design of a Planar Eleven Antenna for Optimal MIMO Performance as a Wideband Micro Base-station Antenna

Aidin Razavi, Wenjie Yu, Jian Yang (Senior Member, IEEE), and Andrés Alayón Glazunov (Senior Member, IEEE)

arXiv:1804.05971

Abstract—A new low-profile planar Eleven antenna is designed for optimal MIMO performance as a wideband MIMO antenna for micro base-stations in future wireless communication systems. The design objective has been to optimize both the reflection coefficient at the input port of the antenna and the 1-bitstream and 2-bitstream MIMO efficiency of the antenna at the same time, in both the Rich Isotropic MultiPath (RIMP) and Random Line-of-Sight (Random-LOS) environments. The planar Eleven antenna can be operated in 2-, 4-, and 8-port modes with slight modifications. The optimization is performed using genetic algorithms. The effects of polarization deficiencies and antenna total embedded efficiency on the MIMO performance of the antenna are further studied. A prototype of the antenna has been fabricated and the design has been verified by measurements against the simulations.
Index Terms—Eleven antenna, MIMO efficiency, RIMP, Random-LOS, Genetic algorithm optimization

throughput and Probability of Detection (PoD), instead of the static antenna characteristics such as radiation pattern, directivity and gain. In these cases, channels are emulated and links are established in an Over-The-Air (OTA) setup, so the statistical system-level performance of the wireless system can be evaluated. The systematic characterization approach is proposed in [8] for OTA measurement evaluation of wireless devices. In this approach two extreme reference environments (namely edge environments) are studied.
The first edge environment is the Rich Isotropic MultiPath (RIMP) environment, where multiple propagation paths are present between the two ends of the wireless link and the channel undergoes Rayleigh fading. At the receiving side, the multipath environment can be emulated by several incoming waves with uncorrelated amplitudes, phases, polarizations and angles of arrival (AoA) [9]. The rich isotropic multipath environment is the hypothetical extreme multipath environment defined as a reference, for the convenience of measurement. Isotropic refers to a uniform distribution of the AoA of the incoming waves within the 4π solid angle, while the term Rich means the number of incoming waves is large, typically more than 100 [8]. When the intended coverage of the antenna is limited (such as wall- or ceiling-mounted antennas with half-sphere coverage), Rich MultiPath (RMP) is a more accurate term to use. We herein use RIMP as a general term covering both the isotropic and coverage-limited cases. The RIMP environment is usually emulated in a reverberation chamber (RC), which is fitted with reflectors and mode stirrers to generate the rich environment.

The second edge environment is the Line-Of-Sight (LOS) environment, where reflections and diffraction in the environment are small; the direct path between the transmitter and the receiver is unobstructed, and there is one dominant path between the two ends of the link. Anechoic chambers (AC), with absorbers fitted on the walls, are traditionally used for emulation of the LOS environment. Anechoic chambers are mainly used for antennas with directive beams, which are intended for fixed installations. However, in mobile communications the situation is not static, due to the randomness in the orientation in which the wireless terminal is held by the users. To distinguish this situation from the traditional LOS, where the antennas on the two sides are fixed, we call this environment Random-LOS.
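The RIMP emulation described above can be sketched numerically: summing on the order of a hundred incoming waves with independent random amplitudes and phases produces the Rayleigh-fading behavior referred to in the text. This is a minimal illustration, not the chamber emulation itself; the wave count and sample sizes below are assumptions chosen for the sketch.

```python
import cmath
import math
import random

def rimp_sample(n_waves, rng):
    # One fading realization: coherent sum of n_waves incoming waves with
    # independent Gaussian amplitudes and uniform random phases, normalized
    # so that the average received power equals 1.
    s = 0j
    for _ in range(n_waves):
        s += rng.gauss(0.0, 1.0) * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
    return s / math.sqrt(n_waves)

rng = random.Random(0)
powers = [abs(rimp_sample(120, rng)) ** 2 for _ in range(5000)]

mean_power = sum(powers) / len(powers)  # ~1 by construction
# For Rayleigh fading the power is exponentially distributed, so the
# fraction of states more than 10 dB below the mean should be close to
# 1 - exp(-0.1), i.e. about 9.5%.
frac_below_m10db = sum(p < 0.1 for p in powers) / len(powers)
```

The empirical deep-fade fraction (about 9-10% of the states more than 10 dB below the average power) reproduces the CDF reading quoted later for the i.i.d. case.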
When the distance between the base station and the user is short (the case for a micro base-station), or when the operating frequency is high, at millimeter waves, the Random-LOS scenario will be more relevant than the RIMP and fixed-LOS scenarios. The real-life propagation environment is a combination of both RIMP and Random-LOS. The edge environments and the real-life scenario are related through a hypothesis stating: if a wireless device works well in both RIMP and Random-LOS, it will also work well in a real-life environment [8].

In the current paper, we propose a new low-profile planar MIMO Eleven antenna as shown in Fig. 1, including two branches located in two separate planes corresponding to two different polarizations orthogonal to each other, for applications in future wireless communication systems. We characterize the performance of the antenna in both RIMP and Random-LOS environments. However, when it comes to small cell sizes and low powers, multipath fading decreases and Random-LOS will play a larger role. Hence, our focus is on the Random-LOS scenario. The planar Eleven MIMO antenna has a simple geometry and therefore a low manufacturing cost. The design criterion is to optimize both the reflection coefficient and the 1-bitstream and 2-bitstream MIMO efficiency. In order to analyze the system performance in the edge environments, we use ViRM-lab, a computer program for investigating the performance of wireless terminals in multipath and LOS with arbitrary incident waves [10]. A prototype of the antenna has been fabricated and the design has been verified by measurements against the simulations. Simulated and measured results are presented in the paper.

II. THEORY AND FIGURE OF MERIT

A. Digital Threshold Receiver Model

The Probability of Detection (PoD) is the probability that a bitstream is received at the receiver with no errors. It can be described as the normalized throughput of the system.
In this paper we employ the Ideal Digital Threshold Receiver (IDTR) model [11] in order to obtain the PoD from the probability distribution of the received power. This model was originally introduced to model the throughput of digital communication systems in the RIMP environment. However, it can easily be extended to the Random-LOS environment as well. The IDTR model, which relates the probability distribution of the received signal power to the PoD, is based on the simple fact that in modern digital communication systems the bit error rate in a stationary additive white Gaussian noise (AWGN) channel changes abruptly from 100% to 0% at a certain threshold signal-to-noise ratio (SNR), due to the use of advanced error correction schemes. The threshold level is determined by the receiver and the performance of the wireless system. According to the IDTR model, the PoD is determined by [11]:

PoD(P/P_th) = TPUT(P/P_th) / TPUT_max = 1 − CDF(P_th/P),   (1)

where PoD is the probability of detection function, TPUT is the average throughput, TPUT_max is the maximum possible throughput (depending on the system specifications), CDF is the Cumulative Distribution Function of the received fading power (P_rec), P_th is the receiver's threshold level, and P is a reference value which is proportional to the transmitted power and defined according to the environment.

To illustrate the IDTR model with an example, consider an isotropic antenna in a rich multipath environment (i.i.d. case). The CDF of the received power under Rayleigh fading is plotted in Fig. 2(a), where the reference power P is chosen as the average received power (P = P_av). This CDF plot shows that, e.g., in 9% of the states the received power is at least 10 dB below the reference level P_av. This means that for the remaining 91% of the states, the received power is not more than 10 dB below the reference level.
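For the i.i.d. case just described, (1) has a closed form, since the normalized received power is exponentially distributed with CDF(u) = 1 − exp(−u). The sketch below (not from the paper; a numerical check of the worked example) reproduces the 9%/91% reading of Fig. 2 and the threshold margin needed for 95% PoD.

```python
import math

def cdf_iid(u):
    # CDF of the normalized received power for Rayleigh fading
    # (exponential distribution with unit mean).
    return 1.0 - math.exp(-u)

def pod_iid(p_over_pth):
    # Eq. (1): PoD(P/P_th) = 1 - CDF(P_th/P), with P = P_av here.
    return 1.0 - cdf_iid(1.0 / p_over_pth)

# Fraction of states at least 10 dB below the average power:
deep_fade = cdf_iid(10 ** (-10 / 10))      # ~0.095, i.e. ~9% of the states

# PoD when the average power is 10 dB above the threshold:
pod_10db = pod_iid(10 ** (10 / 10))        # ~0.905, i.e. ~91%

# Average power relative to the threshold needed for 95% PoD:
# 1 - CDF(P_th/P) = 0.95  =>  P/P_th = -1/ln(0.95)
margin_95 = -1.0 / math.log(0.95)
margin_95_db = 10 * math.log10(margin_95)  # ~12.9 dBt
```

The last quantity is the P/P_th value at 95% PoD for the i.i.d. reference, the same figure of merit that enters the MIMO-efficiency definition below.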
This means that if the threshold level P_th is 10 dB below the reference level, there is a 91% probability that the received power is above the threshold level, which means no bit error, so the PoD is 91% in this case. The PoD of the i.i.d. case with the above-mentioned assumptions is plotted in Fig. 2(b), which shows that in order to maintain an error-free link for 91% of the time, the average received power needs to be at least 10 dB above the threshold level, i.e., 10 dBt.

B. MIMO Efficiency

The power required to maintain a 95% PoD level is often used as a metric for the performance of digital communication systems [12], [13]. This metric represents the power that is required at the transmitter side so that the receiver can detect 95% of the data packets, for a fixed coding and modulation scheme. MIMO efficiency is defined by the degradation of P/P_th at 95% PoD compared to an ideal receiver antenna. The reference antenna is chosen depending on the intended coverage of the antenna and the considered environment, i.e., RIMP or Random-LOS. The MIMO efficiency can be expressed as:

η_MIMO = PoD_0^(-1)(0.95) / PoD^(-1)(0.95),   (2)
However at the base station side, where the planar Eleven antenna is located, 2, 4, and 8 antenna ports can be available depending on the configuration. Maximal-Ratio Combining (MRC) and Zero-Forcing (ZF) algorithms are used to combine the multiple antenna ports for 1-bitstream and 2-bitstream cases, respectively. For a 1-bitstream system, the efficiency corresponds in reality to a SIMO efficiency. However, in the current paper we use MIMO efficiency as a general term covering both SIMO and MIMO scenarios. C. Reference Antenna As mentioned earlier, the choice of the reference antenna depends on the environment. For Random-LOS, the reference is chosen as an isotropic lossless antenna, which is polarizationmatched to any random polarization of the incoming wave. This hypothetical reference is an antenna which provides constant output power, regardless of the AoA and polarization of the incoming wave. Similarly in the RIMP case, the reference is chosen as an isotropic lossless antenna in a rich multipath environment. Therefore, the output power at the port of this reference antenna, follows a Rayleigh distribution. If the intended coverage of the antenna is not the whole sphere, instead of an isotropic antenna, the reference antenna is chosen such that its radiation pattern is uniform in the whole intended coverage and is zero out of the intended coverage. In the case of limited coverage, the amplitude of the reference antenna is adjusted proportional to the solid angle of the intended coverage, so that the total radiated power is the same as the isotropic antenna. The intended coverage area of the current planar Eleven antenna is 120 • in both azimuth an elevation planes. This coverage area is the same for both RIMP and Random-LOS cases. The coverage area can be described in spherical coordinate system as: π/6 ≤ θ ≤ 5π/6, −π/3 ≤ φ ≤ π/3.(3) III. MODELING AND OPTIMIZATION A. 
Layout In order to lower the manufacture cost, we use the flat configuration of the Eleven antenna as shown in Fig. 1. In this work, we design the prototype for a two-port dualpolarized MIMO antenna with a simple feeding structure at the center. Therefore, the antenna is composed of two orthogonal radiation panels, one at the upper layer and the other at lower layer with a separation of 3 mm. In fact, if four-port or eight-port antennas should be used, the two-layer structure can become one layer with even much lower manufacture cost and simpler feeding structure at the center. The foldeddipole pairs are in geometric progression in dimensions with a scaling factor k from the most inner pairs to the most outer ones and cascaded one after another. The radiation arms at two sides of a panel are connected at the center with an edge-coupled microstrip line (twin lines above ground plane) which is excited differentially through two coaxial cables. The antenna is defined by 9 geometric parameters for each panel, with the definition shown in Fig. 4 and listed in Table I along with their corresponding optimal values. The antenna is designed via optimization to produce maximum MIMO efficiency according to (2) for both Random-LOS and RIMP and low reflection coefficient (S 11 ) over the frequency band from 1.6 to 2.8 GHz. In order to have wideband performance, the number of the cascaded foldeddipole pairs should be large, which consequently increases the size of the antenna. It is observed that 8 folded-dipole pairs are enough to achieve the bandwidth requirement. Then, all geometric parameters, according to Table I, have gone through optimization of PoD and S 11 to be determined. To find the initial values of the geometric parameters for the optimization, we tune all parameters one by one while keeping others fixed. The function of every parameter i.e. how it will affect the S 11 and PoD, can be observed through this process. 
Then, the initial values and the parameter scanning ranges have been determined.

B. Optimization

A Genetic Algorithm [14] is used for the optimization process. The geometric parameters influence the performance of both S11 and PoD, and thus are treated as genes. Each generation consists of 400 samples. The initial population is randomly generated with a uniform distribution within the ranges previously obtained through the parameter sweep process. The population is ranked according to the value of the maximum S11 over the frequency band. The PoD of all the samples is also evaluated via ViRM-Lab through the simulated far-field functions. The fifteen samples out of the four hundred with the lowest maximum S11 and acceptable PoD are selected as elites and given the chance to mate with each other pairwise. Each pair generates 2 children. Then, the roulette-wheel selection rule is used to pick fifteen more samples from the remaining 385 to generate offspring in the same way as the elites. The optimization converges after only 3 generations. The total size of the antenna is 337 × 337 × 37 mm³ (see Fig. 1 and details in Fig. 3). Fig. 5 shows the fabricated prototype of the optimized planar Eleven antenna. The prototype is made for 4-port dual polarization. By using a wideband hybrid junction as a balun, the prototype is operated in the 2-port mode, and all measurements were done for this 2-port antenna. The configuration of this antenna makes it very flexible to also have 8 ports, with minor modification of the feeding structure.

IV. SIMULATION AND MEASUREMENT RESULTS

We also present the simulation results of this Eleven MIMO antenna with 4 ports and 8 ports. Fig. 6 shows the simulated and measured reflection coefficient of the 2-port dual-polarized antenna. Both the simulation model and the prototype of the 2-port antenna require a balun for feeding the antenna. The 2-port antenna mode is simulated by using ideal differential feeding in CST.
For the prototype, a wideband hybrid junction is used as the balun. For this reason, the simulated and measured reflection coefficients of the 2-port mode differ somewhat. The reflection coefficient is not included in the MIMO efficiency calculations. However, we need to keep in mind that, in general, poor matching will degrade the MIMO performance.

A. Dual-Polarized 2-port MIMO Antenna

Assuming the high branch along the x-axis and the low branch along the y-axis, the simulated and measured radiation patterns in the φ = 0 and φ = 90° planes are plotted in Fig. 7, for the beginning, center and end of the frequency band. We can observe that there is good agreement between simulations and measurements. The far-field functions (both amplitude and phase) of the antenna for the two orthogonal polarizations have been measured with angle steps of Δθ = 5° and Δφ = 15°. Then, interpolation was used to obtain the measured far-field function with angle steps of Δθ = 1° and Δφ = 1°. The simulated far-field function was obtained from CST with the same resolution.

The MIMO efficiency of the 2-port antenna for both the 1-bitstream 2 × 1 and 2-bitstream 2 × 2 systems, based on the simulated and the measured far-field functions, is plotted in Fig. 8 and Fig. 9 for Random-LOS and RIMP, respectively. These figures show good agreement between the simulation and measurement results. Furthermore, we can observe that the MIMO efficiency has relatively little variation over the frequency bandwidth.

The Random-LOS MIMO efficiency is low at the lower frequencies, as observed in Fig. 8. In order to gain better insight into the reason for this low efficiency, we should look at the spatial distribution of the MIMO efficiency in MIMO coverage plots. The MIMO efficiency defined by (2) can be calculated for individual AoAs, where only the polarization is random. This is especially useful when dealing with 2-bitstream systems in Random-LOS.
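The per-AoA efficiency computation can be sketched as follows for the 1-bitstream (MRC) case: at a fixed direction, random polarization states are drawn, the MRC-combined power of the two ports is collected, and its 5th percentile (the P/P_th degradation at 95% PoD) is compared with the polarization-matched reference. This is a simplified illustration of the procedure, not the ViRM-lab implementation; the far-field vectors below are toy values.

```python
import math
import random

def random_polarization(rng):
    # Random polarization state: a uniformly random complex unit 2-vector.
    a = complex(rng.gauss(0, 1), rng.gauss(0, 1))
    b = complex(rng.gauss(0, 1), rng.gauss(0, 1))
    n = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / n, b / n)

def mrc_power(g1, g2, p):
    # Received power after maximal-ratio combining of the two ports for a
    # unit-power incident wave with polarization p; g1, g2 are the
    # 2-component complex far-field vectors of the ports at this AoA.
    h1 = g1[0] * p[0] + g1[1] * p[1]
    h2 = g2[0] * p[0] + g2[1] * p[1]
    return abs(h1) ** 2 + abs(h2) ** 2

def aoa_efficiency(g1, g2, n=10000, seed=1):
    # 5th percentile of the MRC power over random polarizations, relative
    # to the ideal polarization-matched reference (unit received power).
    rng = random.Random(seed)
    powers = sorted(mrc_power(g1, g2, random_polarization(rng)) for _ in range(n))
    return powers[int(0.05 * n)]

# Orthogonal, amplitude-balanced ports recover every polarization: 0 dB loss.
eff_ideal = aoa_efficiency((1, 0), (0, 1))
# A direction where one port has collapsed: deep degradation.
eff_single = aoa_efficiency((1, 0), (0, 0))
```

With balanced orthogonal ports the combined power is polarization-independent, which is why the coverage degrades exactly where the two far-field patterns lose orthogonality or amplitude balance.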
The reference antenna's coverage area is still the same as before. Therefore, at some AoAs the ratio in (2) can be larger than 1. The coverage plots of the planar Eleven antenna for 2-bitstream operation are shown in Fig. 10 at three frequencies. It can be observed that at the lower frequencies the efficiency is very low in some directions, whereas it is more homogeneous at the higher frequencies. Comparing this to the plots in Fig. 8, we can observe how this corresponds to the lower 2-bitstream MIMO efficiency at the lower frequencies.

Dual-polarized antennas provide orthogonal and equal-amplitude far-field patterns only in limited directions. It is well known that with orthogonal and amplitude-balanced antenna ports, it is possible to combine the channels on the ports of a dual-polarized antenna in such a way that the polarization mismatch can be completely compensated between the receiver and transmitter ends of the link [9, Sec. 3.10]. However, the presence of polarization deficiencies between the two ports of the antenna impairs this capability and leads to a degradation in MIMO efficiency. Assuming that the far-field functions of the two receiving antenna ports are defined as G_1(θ, φ) and G_2(θ, φ) at any direction (θ, φ) in space, two types of polarization deficiency, namely amplitude imbalance (I_a) and polarization non-orthogonality (I_p), are defined in [15] as:

I_a(θ, φ) = max{|G_1|, |G_2|} / min{|G_1|, |G_2|},   (4)

I_p(θ, φ) = |G_1 · G_2*| / (|G_1| |G_2|).   (5)
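These definitions can be evaluated directly from the two far-field vectors at each direction. The sketch below uses illustrative values (not from the paper) and reports I_a in dB, so that an ideal port pair scores 0 dB.

```python
import math

def pol_deficiencies(g1, g2):
    # Amplitude imbalance I_a (in dB) and polarization non-orthogonality
    # I_p, per (4)-(5); g1, g2 are complex 2-vectors (E_theta, E_phi).
    a1 = math.sqrt(abs(g1[0]) ** 2 + abs(g1[1]) ** 2)
    a2 = math.sqrt(abs(g2[0]) ** 2 + abs(g2[1]) ** 2)
    ia_db = 20 * math.log10(max(a1, a2) / min(a1, a2))
    inner = g1[0] * g2[0].conjugate() + g1[1] * g2[1].conjugate()
    ip = abs(inner) / (a1 * a2)
    return ia_db, ip

# Ideal orthogonal, amplitude-balanced ports: I_a = 0 dB, I_p = 0.
ia0, ip0 = pol_deficiencies((1 + 0j, 0j), (0j, 1 + 0j))
# Unbalanced, non-orthogonal ports: both metrics grow.
ia1, ip1 = pol_deficiencies((1 + 0j, 0j), (0.5 + 0j, 0.5 + 0j))
```

In the second case the ports differ by 3 dB in amplitude and have a normalized inner product of about 0.71, i.e., a clearly deficient direction.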
In the presence of high polarization deficiency, more power is required at the transmitter side in order to acheive 95% PoD, which means reduced efficiency. −60 • −30 • 0 • 30 • 60 • −60 • −30 • 0 • 30 • 60 • φ θ −20 −15 −10 −5 0 5 −60 • −30 • 0 • 30 • 60 • −60 • −30 • 0 • 30 • 60 • φ θ −20 −15 −10 −5 0 5 −60 • −30 • 0 • 30 • 60 • −60 • −30 • 0 • 30 • 60 • B. Dual-Polarized 4-port and 8-port MIMO Antenna As mentioned earlier, in addition to the 2-port mode, the planar Eleven antenna can be operated in 4-port and 8-port modes. In principle, more antenna ports can improve the MIMO efficiency by providing more diversity. Of course, this improvement is dependent on the correlation between the antenna ports, mutual coupling and embedded antenna efficiency. 1-and 2-bitstream MIMO efficiency of the 4-port antenna in Random-LOS environment are plotted in Fig. 12 to Fig. 8, it is evident that the MIMO efficiency improves by employing the antenna in 4-port mode of operation. Here, the total embedded antenna efficiency, including the reflection coefficient at the antenna ports is used to determine the MIMO efficiency. The total embedded efficiencies of ports on low and high branches, are plotted in Fig. 13 for further reference. 1-and 2-bitstream MIMO efficiency of the 8-port antenna in Random-LOS environment are plotted in Fig. 14. Compared to Fig. 8, it can be observed that the MIMO efficiency is generally degraded over the bandwidth of operation. This degradation is largely due to low total embedded efficiencies of the antenna ports. The total embedded efficiencies of ports on low and high branches in 8-port mode, are plotted in Fig. 15 which clearly illustrates the effect of the antenna total embedded efficiency on the MIMO efficiency. The total embedded efficiency and MIMO efficiency of the 8-port antenna can be improved by addition of a proper matching circuit. 
In comparison of the different operation modes of the antenna, we can conclude that the antenna performs best in the 4-port mode. Also in the 2-port mode the performance of the antenna is acceptable, but using the present planar Eleven antenna in the 8-port mode is not recommended.

V. CONCLUSION

A novel planar-type Eleven antenna is designed for micro base-stations working in the 1.6 GHz to 2.8 GHz frequency band. The flat structure of the antenna makes the manufacturing process simple, and the low profile makes it a suitable candidate for wall-mounted applications. The antenna can be operated in 2-, 4-, and 8-port modes, with small modifications. For micro base-stations, Random-LOS is more pronounced compared to RIMP, due to the use of lower power and smaller cell sizes. The performance of the antenna is evaluated in the Random-LOS and RIMP environments for an intended coverage of 120° in both the elevation and azimuth planes. The MIMO efficiency of the antenna has small variation over the frequency band. The spatial MIMO coverage of the antenna and the effect of polarization deficiencies are studied in the 2-port mode of operation. The 4-port mode provides higher MIMO efficiency due to increased diversity. However, the 8-port mode's MIMO efficiency is severely impaired due to sub-optimal matching and embedded antenna efficiency.

Fig. 1. CST model of the present planar MIMO Eleven antenna with a size of 337 × 337 × 37 mm³.
Fig. 2. Illustration of the ideal threshold receiver model for the i.i.d. multipath case: (a) the CDF, and (b) the PoD.
Fig. 3. Detail of the CST model of the planar Eleven MIMO antenna.
Fig. 4. Illustration of the design parameters.
Fig. 5. The fabricated prototype of the planar Eleven antenna.
Fig. 6. The reflection coefficient (a) simulated with differential feeding, and (b) measured with the wideband hybrid junction.
Fig. 7. Simulated and measured radiation patterns of the 2-port antenna at (top) 1.6 GHz, (center) 2.2 GHz, and (bottom) 2.8 GHz, in the (left) φ = 0 and (right) φ = 90° planes. Here, the high branch is oriented along the x-axis and the low branch along the y-axis.
Fig. 9. 1-bitstream and 2-bitstream MIMO efficiency of the two-port antenna in RIMP.
Fig. 10. 2-bitstream MIMO coverage plots of the two-port antenna in Random-LOS at (a) 1.6 GHz, (b) 2.2 GHz, and (c) 2.8 GHz.
Fig. 11. Spatial distribution of the polarization deficiencies I_a and I_p of the present 2-port antenna. A higher value means worse performance.
Fig. 13. Total embedded efficiency of the ports on the low and high branches, 4-port antenna.
Fig. 15. Total embedded efficiency of the ports on the low and high branches, 8-port antenna.

TABLE I
DESCRIPTION AND VALUES OF THE GEOMETRIC PARAMETERS

Par.   Description                                                  Low branch   High branch
k      scaling factor                                               1.2739       1.2757
k_d    scaling factor of horizontal distance between dipole pairs   0.7218       0.7661
k_da   scaling factor of dipole width                               0.0097       0.0093
k_dc   scaling factor of gap width in dipole pair                   0.0349       0.0283
kw_a   scaling factor of dipole arm width                           0.0213       0.0216
k_l    scaling factor of dipole arm length                          0.2137       0.2066
s      separation between transmission lines [mm]                   3.1613       2.9210
w_t    width of the transmission line [mm]                          1.2999       1.3149

(a) Transmission line part. (b) Side view.

Fig. 12. 1-bitstream 4 × 1 and 2-bitstream 4 × 2 MIMO efficiency of the 4-port antenna in Random-LOS.
Fig. 14. 1-bitstream 8 × 1 and 2-bitstream 8 × 2 MIMO efficiency of the 8-port antenna in Random-LOS.
Automated Polysomnography Analysis for Detection of Non-Apneic and Non-Hypopneic Arousals using Feature Engineering and a Bidirectional LSTM Network

Ali Bahrami Rad, Morteza Zabihi, Zheng Zhao, Moncef Gabbouj, Aggelos K. Katsaggelos, Simo Särkkä

arXiv:1909.02971 (PDF: https://arxiv.org/pdf/1909.02971v1.pdf)

Abstract: Objective: The aim of this study is to develop an automated classification algorithm for polysomnography (PSG) recordings to detect non-apneic and non-hypopneic arousals. Our particular focus is on detecting the respiratory effort-related arousals (RERAs), which are very subtle respiratory events that do not meet the criteria for apnea or hypopnea and are more challenging to detect. Methods: The proposed algorithm is based on a bidirectional long short-term memory (BiLSTM) classifier and 465 multi-domain features, extracted from multimodal clinical time series. The features consist of a set of physiology-inspired features (n = 75), obtained by multiple steps of feature selection and expert analysis, and a set of physiology-agnostic features (n = 390), derived from the scattering transform. Results: The proposed algorithm is validated on the 2018 PhysioNet challenge dataset. The overall performance in terms of the area under the precision-recall curve (AUPRC) is 0.50 on the hidden test dataset. This result is tied for the second-best score during the follow-up and official phases of the 2018 PhysioNet challenge. Conclusions: The results demonstrate that it is possible to automatically detect subtle non-apneic/non-hypopneic arousal events from PSG recordings. Significance: Automatic detection of subtle respiratory events such as RERAs together with other non-apneic/non-hypopneic arousals will allow detailed annotations of large PSG databases. This contributes to a better retrospective analysis of sleep data, which may also improve the quality of treatment.
Index Terms: Polysomnography, clinical time series, sleep arousal, respiratory effort-related arousal, feature engineering, scattering transform, classification, recurrent neural network (RNN), long short-term memory (LSTM).
I. INTRODUCTION

Sleep disorders fall into several categories, including sleep-related breathing disorders, sleep-related movement disorders, hypersomnias of central origin, parasomnias, and circadian rhythm sleep disorders [2]. In this study, we pay special attention to sleep arousals induced by sleep-related breathing disorders. However, sleep arousals, which are identified by transitions from deeper sleep states to lighter ones, can also occur either spontaneously or in association with other sleep disorders and/or environmental stimuli. Sleep arousals are characterized by sudden shifts in electroencephalography (EEG) frequency [3]. However, depending on the type of sleep disorder, arousals may also manifest in other biosignals. For example, sleep-related breathing disorders, which are characterized by respiratory or ventilatory disturbance during sleep [4], lead to arousals detectable from biosignals such as airflow, the respiratory effort signals (chest and abdominal), and arterial oxygen saturation (SaO2) along with EEG. Furthermore, bruxism, defined as unconscious clenching, grinding, or bracing of the teeth during sleep [5], is a type of sleep-related movement disorder which leads to arousals observable from chin EMG and EEG [6]. Therefore, analysis of the patterns of the aforementioned clinical time series, together with other biosignals such as electrooculography (EOG) and electrocardiography (ECG) recorded during a typical polysomnography (PSG) test, provides important information for sleep arousal detection. Despite recent attempts to automate PSG-based sleep analysis [7]-[13], arousal detection is still done manually by expert sleep technologists.
Typical contemporary PSG datasets can consist of hundreds to thousands of cases, and each case contains more than a dozen clinical time series, each about eight hours long. Manual analysis of such datasets is a labor-intensive and time-consuming process, which highly depends on the sleep technologist's experience and skill [14], and consequently limits PSG-based sleep-related studies. Our aim is to develop a machine learning algorithm to automatically detect arousal events in PSG recordings. We use the same objective as appointed by the PhysioNet/Computing in Cardiology (CinC) Challenge 2018 [15], [16]. According to the PhysioNet challenge rules, the target arousals are those which are neither apneic nor hypopneic. This excludes all apnea types, including obstructive, central, and mixed events [17], as well as hypopnea from our analysis. Our particular focus is on detecting the respiratory effort-related arousals (RERAs), which account for 99.6% of all target arousals available in the PhysioNet training dataset [16]. A RERA is a sequence of breaths lasting at least 10 seconds, characterized by an extended inspiratory phase, paradoxical movement of the chest and abdomen, and/or flattening of the inspiratory airflow, that leads to an arousal from sleep [18], [19]. RERAs are very subtle respiratory events which do not meet the criteria for apnea or hypopnea and are more challenging to detect [20]. Despite their subtle nature and moderate manifestation on biosignals, RERAs can cause fatigue and daytime sleepiness [21]; moreover, an excessive number of RERAs is also associated with hypertension [22] and car accidents [23]. Aside from RERAs, the remaining 0.4% of the target arousals in this study consist of other types of sleep-related breathing disorders, sleep-related movement disorders, environmental stimuli, and spontaneous arousals.
The current study is a continuation of our prior work [24] in the sense that it is developed for the follow-up phase of the 2018 PhysioNet challenge and then assessed on the same dataset with the same evaluation criteria. However, it is a thoroughly independent body of research by virtue of the following facts. First, in our prior work, we proposed an automatic feature learning procedure based on a 2D convolutional neural network (CNN) [25] and a state distance representation [26] of clinical time series. In the current study, by contrast, we extract hand-engineered features from various time series based on a combination of expert knowledge and feature selection techniques. Second, in our previous study, we used a limited number of PSG channels (only 3 biosignals), whereas here we use all available PSG data (13 biosignals). Third, the development of the previous algorithm involved minimal physiological knowledge; the currently proposed method is developed based on prior knowledge of the physiological process during arousals. Fourth, we also extract an alternative semi-automatic set of features using the state-of-the-art scattering transform [27] and investigate ways to increase its performance for PSG classification. Fifth, we utilize a recurrent neural network (RNN) based on long short-term memory (LSTM) units [28] for sequence modeling of sleep microstructures and transient events. The developed software is available in the PhysioNet system and will be released under an open-source license, according to the PhysioNet timeline.

II. DATASET AND EVALUATION CRITERIA

We use the same dataset and scoring mechanism as provided by the 2018 PhysioNet/CinC challenge. The dataset comprises 1983 cases of in-laboratory PSG recordings.
The data were recorded by Massachusetts General Hospital's (MGH) Sleep Lab in the Sleep Division, together with the Computational Clinical Neurophysiology Laboratory and the Clinical Data Animation Center, according to the American Academy of Sleep Medicine (AASM) practice standards [16]. The recordings consist of 13 biosignals as follows:
• six EEG channels for recording the cortical activity of three brain regions, based on the International 10-20 System: frontal: F3-M2 and F4-M1 (PSG channels 1, 2); central: C3-M2 and C4-M1 (PSG channels 3, 4); occipital: O1-M2 and O2-M1 (PSG channels 5, 6);
• the left-side EOG for recording eye movements (PSG channel 7);
• chin EMG for measuring chin muscle activity (PSG channel 8);
• two respiratory effort signals for recording thoracoabdominal movements: abdominal (PSG channel 9) and chest (PSG channel 10);
• respiratory airflow (PSG channel 11);
• arterial oxygen saturation (SaO2) (PSG channel 12);
• ECG for measuring heart activity (PSG channel 13).
All biosignals except SaO2 were sampled at 200 Hz; SaO2 was upsampled to 200 Hz for convenience. All recordings were annotated according to the AASM standard by seven clinical experts, with one expert per recording. The recordings were scored for sleep stages and then annotated into three classes: non-target arousal, target arousal, and non-arousal events. The non-target arousals are those regions in the PSG recordings with apneic or hypopneic arousals, and the target arousals are the regions which meet either of the following conditions: (i) 2 seconds before the onset of a RERA to 10 seconds after its ending; (ii) 2 seconds before the onset of a non-RERA, non-apneic, and non-hypopneic arousal to 2 seconds after its ending. As stated earlier, 99.6% of the target arousals in the training dataset are related to RERAs.
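The two labeling conditions above translate directly into a per-sample binary mask. The following is a minimal Python sketch under stated assumptions: interval endpoints are sample indices at 200 Hz, and the function names (`target_mask`, `paint`) are hypothetical, not taken from the authors' code.

```python
FS = 200  # sampling rate [Hz]

def target_mask(n_samples, reras, other_arousals):
    """Binary target-arousal mask following the two labeling conditions:
    (i)  RERA: 2 s before onset to 10 s after ending;
    (ii) other non-apneic/non-hypopneic arousal: 2 s before to 2 s after.
    Each event is an (onset, end) pair of sample indices."""
    mask = [0] * n_samples

    def paint(onset, end, pre_s, post_s):
        start = max(0, onset - pre_s * FS)
        stop = min(n_samples, end + post_s * FS)
        for i in range(start, stop):
            mask[i] = 1

    for onset, end in reras:
        paint(onset, end, 2, 10)
    for onset, end in other_arousals:
        paint(onset, end, 2, 2)
    return mask

# toy usage: a 10-second recording with one RERA spanning samples 800-1000
m = target_mask(10 * FS, reras=[(800, 1000)], other_arousals=[])
print(sum(m))  # 1600 samples labeled as target (samples 400-1999)
```

With the 2 s pre-padding the mask starts at sample 400, and the 10 s post-padding is clipped at the end of the recording.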
The remaining 0.4% are distributed among arousals related to snoring, partial airway obstruction, Cheyne-Stokes breathing, hypoventilation, bruxism, periodic leg movement, noise, and spontaneous arousals. The dataset is divided into two disjoint subsets for training (n = 994 subjects) and testing (n = 989). The labels of the testing dataset are hidden and are reserved for the PhysioNet challenge organizers to evaluate the performance of the submitted algorithms. The performance is assessed using the area under the precision-recall curve (AUPRC) for binary classification between target-arousal and non-arousal regions. The non-target arousal regions are not considered for evaluation. More information on the evaluation criteria is available in [15] and [16]. In addition to AUPRC, which is the primary evaluation criterion, we calculated the area under the receiver operating characteristic curve (AUROC) as a secondary evaluation criterion.

III. FEATURE ENGINEERING

After preprocessing the PSG recordings as described in Section III-A, we extract 465 features from each 5-second analysis window. The features are categorized into two groups: physiology-informed and physiology-agnostic features. The physiology-informed features are extracted based on our physiological knowledge of sleep arousal and its manifestations on biosignals. However, this set of features is not solely based on physiology: we extract a large number of hand-engineered features based on our prior knowledge of sleep arousals and then, during multiple steps of feature selection and expert judgment, remove the irrelevant and/or redundant ones (see Section III-B). On the other hand, the physiology-agnostic features are derived entirely from our knowledge of signal processing and machine learning, without any physiological consideration (see Section III-C).

A. Preprocessing and Data Preparation

The 60 Hz powerline artifact is removed using a bandstop filter.
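To make the evaluation criterion concrete, here is a minimal pure-Python sketch of average precision, the step-wise estimate of the area under the precision-recall curve; the toy labels and scores are hypothetical, and the function name is illustrative.

```python
def average_precision(y_true, scores):
    """AP = sum_n (R_n - R_{n-1}) * P_n, sweeping the decision threshold
    over each score in descending order (assumes distinct scores)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(y_true)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if y_true[i] == 1:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# toy example: AP = 0.5 * 1.0 + 0.5 * (2/3) ≈ 0.833
print(average_precision([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```

A perfect ranking (all positives scored above all negatives) yields AP = 1.0, while a degenerate ranking drives AP toward the positive-class prevalence.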
Moreover, an inspection of the spectral content of the biosignals indicates the presence of an extra 80 Hz artifact in some recordings. This might be related to the second harmonic of the powerline artifact (120 Hz), which, due to aliasing, presents itself as a spurious 80 Hz frequency component. The 80 Hz artifact is filtered out as well. Then, the high-amplitude muscle-generated artifacts due to body movements are removed by simple thresholding: if the instantaneous magnitude of the biosignal exceeds 8 times the interquartile range of its amplitude, it is replaced by a zero value. Furthermore, the dynamic range of the signal amplitude is normalized by dividing the instantaneous amplitude by 8 times the interquartile range. The last two steps (i.e., high-amplitude artifact removal and dynamic range normalization) are applied to all biosignals except SaO2 and ECG. Finally, each PSG recording is segmented into 5-second non-overlapping triangular windows. From now on, all analyses are done on these 5-second windows.

B. Physiology Informed Features

In the initial phase of this study, we extracted more than 900 features from all biosignals. The extracted features were from various domains such as time, frequency (or spectral), time-frequency, and phase space. The number of features is then reduced through multiple steps of feature selection methods and expert judgment. In the first step, 250 features are removed after a feature ranking procedure using a random forest classifier, similarly to [29] and [30]. Then, the features derived by nonlinear analysis of biosignals in the reconstructed phase space [31] are removed to speed up the feature extraction process. Although these features improve the classification result by ~0.02 points in terms of AUPRC, we remove them from our analysis due to the run-time constraint applied by the PhysioNet challenge organizers.
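The IQR-based artifact removal, dynamic-range normalization, and 5-second segmentation steps can be sketched in NumPy as follows; the notch/bandstop filtering is omitted, the synthetic signal is a placeholder, and the helper names are not from the authors' code.

```python
import numpy as np

FS = 200          # sampling rate [Hz]
WIN = 5 * FS      # 5-second analysis window (1000 samples)

def iqr_clean_normalize(x):
    """Zero out high-amplitude movement artifacts and normalize the
    dynamic range, both using 8x the interquartile range (IQR)."""
    q75, q25 = np.percentile(x, [75, 25])
    scale = 8.0 * (q75 - q25)
    x = np.where(np.abs(x) > scale, 0.0, x)   # artifact removal
    return x / scale                          # dynamic-range normalization

def segment(x):
    """Split a recording into non-overlapping 5-second windows."""
    n = len(x) // WIN
    return x[: n * WIN].reshape(n, WIN)

# toy usage: a synthetic 20-second signal containing one large spike
rng = np.random.default_rng(0)
sig = rng.standard_normal(20 * FS)
sig[123] = 100.0                  # simulated movement artifact
windows = segment(iqr_clean_normalize(sig))
print(windows.shape)              # (4, 1000)
```

Because every surviving sample has magnitude at most 8 × IQR, the normalized values are bounded by 1 in absolute value.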
In the next step, all time-frequency features obtained from the ordinary discrete wavelet transform (DWT) are removed and replaced by features derived from the scattering transform [27]. The problem with the DWT is that it is covariant to translation, and one needs to extract ad hoc translation-invariant features, similarly to [32]. Since the calculation of scattering transform features involves no physiological knowledge, we treat them as physiology-agnostic features, discussed separately in Section III-C. In the last step, we applied our proposed heuristic feature selection method (see Section IV) to the remaining 500 features to derive the final set of 75 physiology-informed features. In the following, we describe these 75 features, which can be further categorized into two subgroups: respiratory-related and non-respiratory-related features. The respiratory-related features, described in Section III-B1, are extracted from biosignals related to the respiratory process, such as abdominal, chest, airflow, and SaO2. The non-respiratory-related features, described in Section III-B2, are extracted from the EEGs, EOG, chin EMG, and ECG.

1) Respiratory-related features: Monitoring respiratory activity using relevant biosignals such as airflow, abdominal and chest, as well as oxygen saturation (SaO2), reveals abnormalities and/or complications related to breathing [33]. For example, SaO2 indicates changes in the blood oxygen level, which is an important marker for the detection of sleep apnea and other respiratory problems. The respiratory-related biosignals also capture information about snoring, respiratory rate, airway obstruction, and the strength of inhalation and expiration [34]. For instance, the morphology and movement patterns of the chest and abdomen (e.g., biphasic, paradoxical, and in-phase) and/or the shape of the airflow signal (flattened vs. normal) are important indicators for the detection of RERAs [35], [36].
Furthermore, snoring can be derived from the high-frequency periodic oscillation of the airflow [37], or it might even appear as an artifact on the non-respiratory-related chin EMG biosignal [18]. In the following, we describe the selected 41 respiratory-related features; among them, there are 13 cross-channel and 28 isolated-channel features.

(i) Thirteen cross-channel features are extracted from the abdominal, chest, and airflow signals (PSG channels 9-11) using correlation analysis, hypothesis testing, and multichannel signal decomposition. The first six features are the Pearson correlation coefficients and the p-values for testing the hypothesis that there is no relationship between each pair of signals (null hypothesis). The next seven cross-channel features are extracted after factorization of the matrix X formed by these signals, each of length N:

    X = [ x^9_1   x^9_2   ...  x^9_N
          x^10_1  x^10_2  ...  x^10_N
          x^11_1  x^11_2  ...  x^11_N ].   (1)

We consider the singular value decomposition (SVD) of X [38], that is,

    X = U [Σ 0] V^T,   (2)

where U and V are 3 × 3 and N × N orthogonal matrices, respectively, and [Σ 0] is a 3 × N block matrix in which Σ is a 3 × 3 diagonal matrix with the singular values σ1, σ2, and σ3 on the diagonal,

    Σ = [ σ1  0   0
          0   σ2  0
          0   0   σ3 ],   (3)

and 0 is a 3 × (N − 3) zero matrix. By convention, U, V, and Σ are organized such that σ1 ≥ σ2 ≥ σ3 ≥ 0. The singular values σ1, σ2, σ3, their arithmetic and geometric means, their standard deviation (STD), and the ratio σ1/σ3 are then the next seven cross-channel features.

(ii) Six features are extracted from the abdominal signal. The first two features are the STD and the root mean square (RMS) values of the signal.
Then the signal is modeled as an order-10 autoregressive (AR) process, that is,

    x_n = - sum_{k=1}^{10} a_k x_{n-k} + v_n,   (4)

where x_n and v_n are the n-th samples of the signal and the input white noise, respectively, and a_1, a_2, ..., a_10 are the parameters of the model, estimated by Burg's method [39]. The third feature is the ninth parameter (a_9) of the above-mentioned AR model. It is worth mentioning that there are various approaches for choosing a good value for the order of an AR model, such as minimizing either the Akaike or the Bayesian information criterion [40], [41]. However, in this work our purpose is not to design an optimum model for signal representation; we are merely looking for those parameters (i.e., features) that are informative enough to be used for discrimination between the arousal and non-arousal classes. Therefore, instead of being preoccupied with optimum model selection, we choose a model order with a moderate value (e.g., 10) and, during a feature selection procedure, choose the discriminative parameters. Furthermore, the respiratory-related abdominal spectrum (i.e., the low-frequency interval of the abdominal spectrum) is divided into the following five frequency bands: 0.01-0.4 Hz, 0.4-0.75 Hz, 0.75-1.2 Hz, 1.2-1.6 Hz, and 1.6-3 Hz. The signal power in the frequency band between f_1 and f_2 Hz, P(f_1, f_2), is estimated by the area under the power spectral density curve P̂(f), that is,

    P(f_1, f_2) = ∫_{f_1}^{f_2} P̂(f) df.   (5)

The last feature is (STD(ẍ) × STD(ẋ))/STD(x), in which x, ẋ, and ẍ are the airflow signal and its first and second forward differences, respectively, that is,

    ẋ_n = x_{n+1} - x_n,    ẍ_n = x_{n+2} - 2 x_{n+1} + x_n.   (6)

(v) We extract 5 features from SaO2, namely the mean, STD, RMS, and mean frequency of the power spectrum of the signal, as well as the STD of the signal's first difference.
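A NumPy sketch of two of the feature computations above: the seven singular-value features of Eqs. (1)-(3) and the airflow forward-difference feature of Eq. (6). The random signals stand in for the abdominal, chest, and airflow channels, and the function names are illustrative, not from the authors' code.

```python
import numpy as np

def svd_cross_channel_features(abd, chest, flow):
    """Seven cross-channel features from the singular values of the
    3 x N matrix stacking abdominal, chest, and airflow signals."""
    X = np.vstack([abd, chest, flow])
    s = np.linalg.svd(X, compute_uv=False)   # sigma1 >= sigma2 >= sigma3 >= 0
    return np.array([
        s[0], s[1], s[2],
        s.mean(),                 # arithmetic mean
        s.prod() ** (1.0 / 3.0),  # geometric mean
        s.std(),                  # standard deviation
        s[0] / s[2],              # ratio sigma1 / sigma3
    ])

def airflow_diff_feature(x):
    """(STD(d2x) * STD(dx)) / STD(x) using forward differences, Eq. (6)."""
    dx = x[1:] - x[:-1]
    d2x = x[2:] - 2 * x[1:-1] + x[:-2]
    return d2x.std() * dx.std() / x.std()

# toy usage with 5-second (1000-sample) placeholder signals
rng = np.random.default_rng(1)
abd, chest, flow = rng.standard_normal((3, 1000))
feats = svd_cross_channel_features(abd, chest, flow)
print(feats.shape)  # (7,)
```

For highly correlated chest and abdominal movements (in-phase breathing) the matrix is nearly rank-deficient and the ratio σ1/σ3 grows large, which is one intuition for why these features carry respiratory information.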
2) Non-respiratory-related features: AASM guidelines define arousal as an abrupt shift in EEG frequency, including alpha (8-13 Hz), theta (4-8 Hz), and frequencies above 16 Hz, lasting at least 3 seconds and preceded by at least 10 seconds of stable sleep [42]. Moreover, during the rapid eye movement (REM) stage, this EEG frequency shift needs to be accompanied by a concurrent increase in submental (chin) EMG amplitude to be recognized as an arousal. On the other hand, non-rapid eye movement (NREM) arousals may occur without the aforementioned increase in chin EMG. We extract various features from different EEG frequency bands, including delta (0.1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), sigma (13-16 Hz), and beta (16-25 Hz). Moreover, EOG- and EMG-based features are extracted to differentiate between EEG arousals during REM and NREM, and ECG-based features are extracted to provide complementary information about sleep-related breathing disorders as well as autonomic arousals [43]. In the following, the selected 34 non-respiratory-related features are described.

(i) Seven features are extracted from each frontal EEG and the EOG (PSG channels 1, 2, 7) as follows: the RMS, STD, skewness, and kurtosis of the biosignals, together with the a_3 and a_5 parameters of the AR model in (4).

(iv) The following two features are extracted from the ECG signal: P(7.5, 12)/P(12, 16) and P(12, 16)/(P(7.5, 12) + P(16, 25)).

C. Physiology Agnostic Features

One of the challenges in classification is handling the substantial amount of intra-class variability which is not helpful for discrimination between different classes. Removing or minimizing this irrelevant information while preserving useful inter-class variability may significantly increase the classifier's performance.
The scattering transform proposed by Mallat [27] is a systematic approach to address this problem by building locally invariant, stable, and informative representations while preserving the signal norm and most of the inter-class variability. The scattering transform is a deep representation which mimics a CNN in the sense that it propagates the input signal across a sequence of linear filters followed by pooling and nonlinearities [44]. However, contrary to a CNN, in which the filters have adaptive weights obtained through a gradient-based learning strategy and error back-propagation [45], the scattering transform is derived by cascading predefined filters, namely wavelets. To be more precise, the scattering transform is a deep signal representation derived by cascading wavelet transform moduli followed by an averaging operator (i.e., low-pass filtering) [27]. The logic behind this transformation is to derive a translation-invariant representation of the original signal which is also stable to small deformations like time warping. In this work, we design a 2-layer scattering network with the corresponding filter banks illustrated in Fig. 1. We use Gabor wavelets (i.e., approximately analytic wavelets constructed by frequency modulation of Gaussian windows) [46], whose mother-wavelet central frequencies in the first and second filter banks are calculated as follows:

    ω^1_0 = (1 + 2^{-1/Q_1})/2 × f_s/2 = 85.35 Hz,   (7)
    ω^2_0 = (1 + 2^{-1/Q_2})/2 × f_s/2 = 75.00 Hz.   (8)

Here, the quality factors Q_1 = 2 and Q_2 = 1 are the numbers of wavelets per octave for the first and second filter banks, and f_s = 200 Hz is the sampling frequency. We design this scattering network such that the resulting representation is invariant to 5-second translation, which leads to J_1 = 13 and J_2 = 8 wavelet scales in the first and second filter banks.
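Eqs. (7)-(8) are easy to verify numerically; a one-function sketch (the function name is illustrative):

```python
def mother_center_freq(Q, fs=200.0):
    """Central frequency of the mother wavelet, Eqs. (7)-(8):
    omega_0 = (1 + 2**(-1/Q)) / 2 * (fs / 2)."""
    return (1.0 + 2.0 ** (-1.0 / Q)) / 2.0 * (fs / 2.0)

print(round(mother_center_freq(2), 2))  # first filter bank, Q1 = 2: 85.36 (85.35 in the text, rounded differently)
print(mother_center_freq(1))            # second filter bank, Q2 = 1: 75.0
```

The factor (1 + 2^{-1/Q})/2 places the mother wavelet midway between the Nyquist frequency and the center of the next (dilated) wavelet in the bank.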
The other wavelets ψ^k_{j_k}(t) in the filter banks are derived by dilating the mother wavelets ψ^k_0(t) by a factor of 2^{1/Q_k}:

    ψ^k_{j_k}(t) = 2^{-j_k/Q_k} ψ^k_0(2^{-j_k/Q_k} t),   (9)

where k ∈ {1, 2} indicates the layer index in the scattering network and j_k ∈ {0, 1, 2, ..., J_k − 1} indicates the scale index. In the Fourier domain, these filter banks can be represented by

    ψ̂^k_{j_k}(ω) = ψ̂^k_0(2^{j_k/Q_k} ω),   (10)

whose magnitudes are demonstrated in Fig. 1. If the central frequency of ψ̂^k_0(·) is ω^k_0, then the central frequency of ψ̂^k_{j_k}(·) is 2^{-j_k/Q_k} ω^k_0. In other words, the frequency axis is divided in a (base-two) logarithmic manner. However, for Q > 1, in order to cover the entire frequency spectrum, the first J filters (i.e., ψ_0, ψ_1, ..., ψ_{J−1}) cover the higher-frequency interval in a logarithmic manner, and the lower-frequency interval is covered by P equally-spaced filters with the same bandwidth as ψ_{J−1}. This is due to the fact that the filter ψ_{J−1} has the smallest bandwidth in frequency and the largest time support, which should be smaller than the predefined 5-second translation-invariance scale. Although these P filters are not dilations of ψ_{J−1}, for simplicity they are still called wavelets [47]. In this work, for the first filter bank J = 13 and P = 1, and for the second filter bank J = 8 and, since Q = 1, P = 0 (see Fig. 1). Zooming in on the 0-5 Hz frequency interval of the first filter bank, Fig. 2 shows that ψ_13 has the same bandwidth as ψ_12, whereas the bandwidth of the other filters increases exponentially. The time-domain representations of the complex wavelets corresponding to the analytical filters in Fig. 2 are demonstrated in Fig. 3.

Fig. 3. Re{ψ_{j_1}} and Im{ψ_{j_1}} are the real and imaginary parts of the complex wavelet functions corresponding to the analytical filters shown in Fig. 2. φ is the approximation function, whose corresponding low-pass filter is not shown in Fig. 2.
The 2-layer scattering network used in this work can be summarized as follows:

    S_0 x = x ⋆ φ,   (11)
    U_1 x = |x ⋆ ψ_{j_1}|,   (12)
    S_1 x = |x ⋆ ψ_{j_1}| ⋆ φ,   (13)
    U_2 x = ||x ⋆ ψ_{j_1}| ⋆ ψ_{j_2}|,   (14)
    S_2 x = ||x ⋆ ψ_{j_1}| ⋆ ψ_{j_2}| ⋆ φ,   (15)

where ⋆ denotes convolution and |·| is the complex modulus operator. In (11), the zeroth-order scattering coefficient is calculated by low-pass filtering (or weighted time-averaging) the original signal x, i.e., by convolving x with the approximation function φ. This low-pass filtering discards the high-frequency content of x, which can be recovered by the wavelet transform. Thus, in (12) the variation of the signal x at the different scales j_1 is calculated by convolving x with the wavelets ψ_{j_1}. At first glance, it seems that the complex modulus operator |·| in (12) also causes information loss, but it can be shown that, at least for a specific family of wavelets, x can be reconstructed from |x ⋆ ψ_j| up to a global phase (i.e., up to multiplication by a unitary complex number), and the reconstruction operator is continuous (though not uniformly continuous) [48]. Hence, the main source of information loss is the low-pass filtering, which is needed for generating shift-invariant features. In (13), the first-order scattering coefficients are calculated by low-pass filtering the first-order wavelet scattering modulus U_1 x, and the lost information is again recovered in (14), in which the next wavelet scattering modulus is calculated by convolving U_1 x with the second-layer wavelets ψ_{j_2}. Finally, in (15) the second-order scattering coefficients are calculated. This process can be repeated an arbitrary number of times to generate more and more shift-invariant features. However, we stop after generating the second-order scattering coefficients, since the higher-order coefficients have very low energy, which can be neglected in the analysis [49], and they do not contribute towards improving the classification results [50].
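Equations (11)–(15) can be sketched in a few lines of NumPy. This is an illustrative toy version (ours, not the authors' MATLAB implementation), with simple one-sided Gaussian frequency bumps standing in for the Gabor wavelets and a Gaussian low-pass standing in for φ:

```python
import numpy as np

def gauss_bump(n, center, width):
    """Frequency response centered at FFT bin `center`."""
    k = np.arange(n)
    return np.exp(-0.5 * ((k - center) / width) ** 2)

def scatter2(x, centers1, centers2, width=8.0, lp_width=4.0):
    """Toy 2-layer scattering cascade following Eqs. (11)-(15)."""
    n = len(x)
    X = np.fft.fft(x)
    phi = gauss_bump(n, 0, lp_width)                        # low-pass phi
    S0 = np.real(np.fft.ifft(X * phi))                      # Eq. (11)
    S1, S2 = [], []
    for c1 in centers1:
        U1 = np.abs(np.fft.ifft(X * gauss_bump(n, c1, width)))    # Eq. (12)
        S1.append(np.real(np.fft.ifft(np.fft.fft(U1) * phi)))     # Eq. (13)
        for c2 in centers2:
            U2 = np.abs(np.fft.ifft(np.fft.fft(U1) *
                                    gauss_bump(n, c2, width)))    # Eq. (14)
            S2.append(np.real(np.fft.ifft(np.fft.fft(U2) * phi))) # Eq. (15)
    return S0, S1, S2

x = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)
S0, S1, S2 = scatter2(x, centers1=[25, 50, 100], centers2=[10, 20])
```

Each of the three first-layer channels produces one S_1 vector and two S_2 vectors, mirroring the cascade in (11)–(15).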
This structure mimics a CNN in the sense that the convolutional layers (i.e., wavelet transforms x ⋆ ψ) are followed by nonlinearities (i.e., modulus operations |·|), which in turn are followed by average pooling (i.e., low-pass filtering |·| ⋆ φ). However, it differs from a CNN mainly because the filters are predefined rather than data-driven, and there is no weight sharing among different scales. In this work, we extract wavelet scattering coefficients for 6 biosignals: EOG, abdominal, chest, airflow, SaO2, and ECG (PSG channels 7, 9, 10, 11, 12, and 13). Since non-orthogonal Gabor wavelets have significant overlap in the frequency domain (see Figs. 1 and 2), the resulting scattering coefficients are redundant. In order to speed up the analysis and decrease the memory usage, we downsample the features by a factor of 4. The final number of features for each biosignal is 65, resulting in a total of 390 features. The last point to discuss here is that one should not misinterpret our so-called physiology agnostic feature extraction as a domain-agnostic method. We use the term "physiology agnostic" to highlight the fact that these features are not inspired by physiological knowledge of the biosignals. However, the scattering transform is not a truly domain-agnostic method, since the discovery of invariants and stability conditions to deformations, which has a pivotal role in the success of this transformation, is domain-dependent. It is obvious that invariants and stability conditions for image and texture data, such as spatial translation, rotation, scaling, and partial occlusion [51], [52], differ from those for audio and speech signals, such as time shifting, time-warping deformation, frequency transposition, and frequency warping [50], [53].

IV. CLASSIFICATION

Fig. 4. LSTM memory cell with forget gate f_t, as proposed in [54].

In the intermediate phase of this work, after feature engineering, we relied on the sliding window method [55] to classify
each 5-second segment of the PSG data using a random forest classifier [56]. On average, the best achieved result was 0.18 in terms of AUPRC, with high variance among different chunks of the data. The main reason for this low performance is that the temporal information and dependencies among different segments of the time series are lost. To address this shortcoming, we use an LSTM network [28], which is a type of RNN with a gating mechanism that controls the flow of information [57]. Contrary to a "vanilla" RNN, which suffers from the vanishing and exploding gradient problem [58] and consequently does not capture long-range dependencies, an LSTM addresses this problem and captures richer contextual information of the time series, thanks to the gating mechanism. In this work, we analyze the PSG recordings retrospectively, and since the past, present, and future information of the time series is available at analysis time, we can use a bidirectional LSTM (BiLSTM) variant. Each BiLSTM layer consists of two layers of LSTMs: causal and anticausal counterparts. A single unit of a causal LSTM, which processes the time series forward in time, is illustrated in Fig. 4. It consists of four gates that control the flow of information (in the original LSTM, proposed in [28], there was no forget gate) through the following equations:

    i_t = σ(W_i x_t + U_i h_{t−1} + b_i),   (16)
    f_t = σ(W_f x_t + U_f h_{t−1} + b_f),   (17)
    g_t = tanh(W_g x_t + U_g h_{t−1} + b_g),   (18)
    o_t = σ(W_o x_t + U_o h_{t−1} + b_o),   (19)
    c_t = f_t ⊗ c_{t−1} + i_t ⊗ g_t,   (20)
    h_t = o_t ⊗ tanh(c_t).   (21)

Here, i, f, g, and o are the vectors related to the input gate, forget gate, candidate cell gate, and output gate, respectively, for the entire layer of units or memory cells.
Moreover, the vectors c and h are the cell and hidden states, respectively, and W_*, U_*, and b_* are, respectively, the input weight matrix, recurrent weight matrix, and bias vector for the gate denoted by * ∈ {i, f, g, o}. σ(·) and tanh(·) denote the sigmoid and hyperbolic tangent activation functions, respectively, and ⊗ is the Hadamard product. The anticausal LSTM, which processes the time series backward in time, is similar to the forward LSTM with reversed time order, which leads to similar equations with different weights and biases (W′_*, U′_*, b′_*); moreover, h_{t−1} and c_{t−1} are replaced by h_{t+1} and c_{t+1}, respectively. The outputs of the two LSTMs are then concatenated to capture the contextual information of the whole time series. The architecture of the proposed BiLSTM network is illustrated in Fig. 5, in which 3 layers of BiLSTMs with 400 hidden units per layer (200 for each LSTM) are followed by a leaky rectified linear unit (Leaky ReLU) layer, a fully connected layer, and a softmax layer. We have scrutinized and evaluated several different combinations to empirically identify the best architecture. To name a few, we have examined different numbers of BiLSTM layers, different numbers of memory cells per layer, multiple activation functions (e.g., linear, ReLU, Leaky ReLU, and sigmoid), different numbers of fully connected layers, and different parameters for the Leaky ReLU layer. The Leaky ReLU is defined as

    f(x) = x for x > 0, and f(x) = ax for x ≤ 0,   (22)

where typically a is a small number (e.g., 0.01). However, we have obtained the best result with a = 0.5. The theoretical reason behind this observation is not clear, which is not an uncommon situation in the field of deep learning. Although there are studies which discuss the effect of different nonlinearities [59]-[61], they mainly focus on CNNs and suffer from a lack of mathematical rigor.
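The LSTM update of Eqs. (16)–(21) and the Leaky ReLU of Eq. (22) with the slope a = 0.5 used here can be written out in a few lines; this is a toy NumPy version with random weights (ours, not the paper's MATLAB implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step; W, U, b are dicts keyed by gate: 'i','f','g','o'."""
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # Eq. (16)
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # Eq. (17)
    g = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])   # Eq. (18)
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # Eq. (19)
    c = f * c_prev + i * g                                 # Eq. (20)
    h = o * np.tanh(c)                                     # Eq. (21)
    return h, c

def leaky_relu(x, a=0.5):
    return np.where(x > 0, x, a * x)                       # Eq. (22)

rng = np.random.default_rng(0)
d, n = 5, 4                                   # input size, memory cells
W = {k: rng.standard_normal((n, d)) for k in 'ifgo'}
U = {k: rng.standard_normal((n, n)) for k in 'ifgo'}
b = {k: np.zeros(n) for k in 'ifgo'}
h, c = lstm_step(rng.standard_normal(d), np.zeros(n), np.zeros(n), W, U, b)
```

Since o ∈ (0, 1) and |tanh(c)| < 1, the hidden state h always stays inside (−1, 1), regardless of the weights.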
We have also applied the dropout mechanism [62], [63] between different layers of the network, but the classification accuracy declined. We implemented our proposed method in MATLAB R2018b, which only has an input/output dropout layer; for RNNs, however, there is a more effective type of dropout mechanism which is applied to the recurrent layers [64]. In fact, since the employed dropout mechanism was not useful, we decided not to use it and instead selected a set of discriminative features before feeding them to the BiLSTM network. For the physiology informed features, we proposed a heuristic feature selection method as follows. At first, we pretrain the BiLSTM network with 500 features and rank them using the following ad hoc score:

    S_k = Σ_{* ∈ {i,f,g,o}} Σ_{j=1}^{N} ( |W_*(j, k)| + |W′_*(j, k)| ),   (23)

and then select the 75 top-ranked features. In (23), W_*(j, k) and W′_*(j, k) are the weights of the connections between the k-th feature and the j-th memory cell of the forward and backward LSTMs in the first BiLSTM layer, respectively, |·| is the absolute value, and N = 200 is the number of memory cells of each LSTM. For the physiology agnostic features, we do not apply any feature selection and feed them (390 features) directly to the network. Before feeding the network with training data, all PSG segments with non-target arousal labels are removed. Then, the recordings are sorted based on the feature sequence length. The sorted data are further divided into mini-batches with a size of 20 subjects. Feature sequences inside each mini-batch are zero-padded in order to have the same length. The network is trained on these mini-batches to obtain the weights and biases which minimize the cross-entropy loss function using the Adam optimization algorithm [65]. In order to address the class imbalance problem, we use a weighted cross-entropy loss function with 0.9 and 0.1 weights for the target arousal and non-arousal classes, respectively.
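The ranking score of Eq. (23) amounts to summing absolute first-layer input weights over gates, memory cells, and both LSTM directions; a minimal sketch (function and variable names are ours) with random weight matrices:

```python
import numpy as np

def feature_scores(Wf, Wb):
    """Eq. (23): Wf/Wb hold the forward/backward input weights of the
    first BiLSTM layer, one (N x K) matrix per gate, for K features."""
    K = next(iter(Wf.values())).shape[1]
    S = np.zeros(K)
    for gate in ('i', 'f', 'g', 'o'):
        S += np.abs(Wf[gate]).sum(axis=0) + np.abs(Wb[gate]).sum(axis=0)
    return S

rng = np.random.default_rng(1)
N, K = 200, 500                     # memory cells per LSTM, features
Wf = {g: rng.standard_normal((N, K)) for g in 'ifgo'}
Wb = {g: rng.standard_normal((N, K)) for g in 'ifgo'}
S = feature_scores(Wf, Wb)
top75 = np.argsort(S)[::-1][:75]    # indices of the 75 top-ranked features
```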
Moreover, we use a learning rate of 0.005, which is 5 times larger than the default value of the Adam algorithm. By choosing this value, we obtain a better result and a faster training phase. Other important parameters, such as the exponential decay rates of the first and second moment estimates, are set to their default values: 0.9 and 0.999. The training is done over 30 epochs, but after every 10 epochs the learning rate drops to 70% of its previous value. Finally, we also employ the gradient norm clipping technique [66] by putting a further constraint on the gradient norm ‖g‖ not to be larger than 1: if ‖g‖ > 1, the gradient g is replaced by g/‖g‖. The reason for using this technique is that if the gradient has a very large value, then the update term in a gradient descent-based algorithm may cause the parameters to jump to a point far from their current position, increasing the value of the loss function and thus wasting most of the effort made so far to reach the current point [57]. To prevent this issue, we move a smaller distance in the gradient direction.

V. RESULTS

The performance of our proposed method, consisting of 465 features and a BiLSTM network, for classification of PSG data for sleep arousal detection is assessed on the training dataset using a 10-fold cross-validation procedure. The proposed method achieves average scores of 0.49 and 0.90 for AUPRC and AUROC, respectively. Moreover, to evaluate the performance on the test dataset with hidden labels, the ensemble of the BiLSTM networks, trained on the aforementioned 10-fold cross-validation committee, is submitted to the PhysioNet system. The ensemble classifier achieves the state-of-the-art AUPRC score of 0.50, which is the second-best score during the follow-up and official phases of the 2018 PhysioNet challenge. This result is also 0.31 points better than our prior work [24]. Table I shows the classification performance of different sets of features using the same BiLSTM architecture.
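The two training details described above, gradient norm clipping and the stepwise learning-rate decay, amount to a few lines of code; this sketch is ours, and the epoch indexing is one plausible reading of the schedule:

```python
import numpy as np

def clip_gradient(g, max_norm=1.0):
    """Replace g by g/||g|| (scaled to max_norm) whenever ||g|| > max_norm."""
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

def learning_rate(epoch, lr0=0.005, drop=0.7, every=10):
    """Start at lr0 and multiply by `drop` after every `every` epochs."""
    return lr0 * drop ** (epoch // every)

g = clip_gradient(np.array([3.0, 4.0]))        # norm 5 -> rescaled to norm 1
lrs = [learning_rate(e) for e in (0, 10, 20)]  # 0.005, 0.0035, 0.00245
```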
The set of physiology informed features achieves the best average scores of 0.51 AUPRC and 0.92 AUROC, even better than our submitted method. This result is the same as that of the winning algorithm of the 2018 PhysioNet challenge [67] on the training dataset. The performance of the physiology agnostic features (i.e., scattering transform features) is worse than that of the physiology informed features by 0.05 and 0.03 points in terms of AUPRC and AUROC, respectively. However, it is still among the top 5 PhysioNet algorithms on the training dataset. The performance of the physiology informed features may raise a question concerning our submitted method. The reason that the method with the inferior result (0.49 vs. 0.51; see Table I) was submitted for evaluation on the test dataset is that, in the intermediate stage of this work, in order to speed up the experiments, the performances of different methods were assessed by a holdout validation strategy, and the proposed method with 465 features achieved the best results. However, when we evaluate the models using the 10-fold cross-validation assessment technique, we notice that the 75 physiology informed features outperform our proposed method by 0.02 points in terms of the AUPRC score. Since we only had one submission for the proposed algorithm, we could not evaluate the performance of our physiology informed features on the test dataset. Table II shows the detailed performance of different types of features for sleep arousal detection using 10-fold cross-validation on the training dataset. Although limited in scope, for the sake of simplicity we use the same BiLSTM network architecture for all experiments. It is clear that, for a more reliable comparison, the network architecture and parameters would need to be optimized for each set of features. The only parameter which is altered for different sets of features is the learning rate of the Adam optimization algorithm. The last points to discuss are two technicalities.
First, the time resolution for the analysis of the PSG data is 5 seconds. In other words, for each 5-second window our classification algorithm generates only one label (probability of arousal); in order to obtain the sample-wise probability of arousals demanded by PhysioNet, we repeat the value 1000 (= 5 × 200) times. Second, in Tables I and II, for training the different folds, whenever the optimization algorithm gets stuck at a local minimum or, much more probably, at a saddle point [68], [69], we rerun the training phase with a different network initialization.

VI. DISCUSSION

In this study, we investigate a comprehensive set of hand-engineered features for retrospective analysis of PSG data using a BiLSTM classifier for non-apneic/non-hypopneic arousal detection. We extract multi-domain features from different modalities. Through multiple steps of feature selection techniques and expert judgment, the irrelevant and/or redundant features are eliminated to obtain a set of 75 physiology informed features. The final set of 465 features is built upon these 75 and an additional set of 390 features derived using a state-of-the-art scattering transform. The features are then fed into a BiLSTM network to classify the PSG data. Our proposed method achieves the second-best score of 0.50 AUPRC on the hidden test dataset of the 2018 PhysioNet challenge. In this section, we scrutinize the results and discuss ways to further improve them.

A. Comparative Evaluations of Selected Features

The best single type of features in Table II are the cross-channel features, which achieve average scores of 0.42 and 0.88 in terms of AUPRC and AUROC, respectively. To the best of our knowledge, this is the first time that p-values and SVD-based features are proposed for the analysis of respiratory effort signals (chest and abdominal) alongside respiratory airflow. The next best single type of features are the ones extracted from the abdominal-only signal, with an average score of 0.38 AUPRC.
The features extracted from the chest, airflow, SaO2, and EEG signals have nearly 0.29 AUPRC. EOG and chin EMG also have the same performance of 0.22 AUPRC. However, since the numbers of features extracted from chin EMG and ECG are low, they are combined together, resulting in 0.25 AUPRC. The respiratory-related features, obtained by combining feature types 1-5, have a high AUPRC score of 0.46 for arousal detection. This is not surprising, considering that most of the arousals are RERAs, and for detecting them respiratory-related biosignals such as airflow, chest, and abdominal play a pivotal role. However, the interesting observation is that the performance of SaO2 is as good as that of EEG, although the degree of oxygen desaturation is not a requirement for RERA detection [70]. If the non-respiratory-related features, obtained by combining feature types 6-8, with a 0.34 AUPRC score, are added to the aforementioned respiratory-related features, the resulting physiology informed features have average scores of 0.51 and 0.92 in terms of AUPRC and AUROC, respectively. This is the best-achieved result among all combinations of features. Regarding the selected EEG-based features, although the AASM guidelines define arousals as abrupt EEG frequency shifts toward rhythms such as alpha, theta, and/or beta above 16 Hz [42], our experiments show that only delta (0.1-4 Hz) power is chosen as a discriminative feature for arousal detection (see Section III-B2). This is an intriguing observation, not expected a priori. However, some studies support the hypothesis of a continuum in arousal activities, starting from delta and K-complex bursts and progressing toward EEG arousals and full awakening [71], [72]. More specifically, an increase in delta power can be a pre-arousal activity which may or may not culminate in an EEG arousal [71]. Furthermore, in [73] and [74] the association between arousals and the K-complexes or delta bursts preceding the events is confirmed.
This occurs for arousals in the NREM sleep stage but not during REM. Besides, in both upper airway resistance syndrome (UARS) and obstructive sleep apnea syndrome (OSAS), airway opening is associated with an increase in delta power, which can be followed by an EEG arousal [75]-[77]. Since RERAs are increased in both UARS and OSAS, this might be one reason that delta power is among the selected features, especially because we use a BiLSTM classifier capable of analyzing the sequence of transient events. However, we stress that at this point we cannot identify the causes of this observation with certainty, and it requires further investigation. On top of that, we do not claim that alpha and beta powers are not important, but rather that their information may be covered by other features. For example, the RMS and STD of the signal amplitude can partially retain alpha and beta information; recall that alpha and beta are low-amplitude, high-frequency EEG rhythms. Another interesting observation is that, contrary to the AASM guidelines, which recommend central and occipital EEG channels as the primary signals for detecting EEG arousals, in our experiments most of the EEG-based features are selected from frontal channels. This may be partly explained by the fact that delta activity and K-complexes predominate in the frontal lobe of the brain [78]-[80]. In addition, in [81] the authors show that for in-home sleep stage scoring the analysis of either frontal or central EEG channels leads to similar results when the recordings are scored by the automatic Michele Sleep Scoring (MSS) system [82], [83]. Table II also shows that the performance of the physiology agnostic features based on the scattering transform is below that of the physiology informed features by 0.05 and 0.03 points in terms of AUPRC and AUROC, respectively. This may have the following possible causes.
First, in order to decrease the computational complexity of calculating the scattering transform, only 6 PSG time series are used (i.e., the remaining 7 time series, including 6 EEGs and 1 chin EMG, are not used). Second, to expedite the analysis and decrease the memory usage, the extracted features are downsampled by a factor of 4. Third, we believe that treating the scattering transform as a physiology agnostic method restricts the performance of this set of features by not including any prior physiological information in designing the filter banks; we use the same filter banks for all biosignals (see Fig. 1). However, we already saw in Section III-B that different clinical time series carry important information in different frequency bands. Fourth, as discussed earlier in Section III-C, the invariants and stability conditions for different biosignals need to be explored to achieve the optimal performance of the scattering transform. Although our first motivation for using the scattering transform was to semi-automatically derive a set of informative features with minimum expert intervention, a more efficient approach is to include a minimum of physiological information, at least in designing the filter banks for the different biosignals. Despite the aforementioned constraint we imposed on the scattering transform, we again underline that, even with this suboptimal handling of the method, the result based solely on the scattering transform is still among the top 5 algorithms on the training dataset. Future developments may include the design of optimal filter banks for each biosignal separately.

B. Adaptation for Real-Time Classification

Although the proposed algorithm is designed for retrospective analysis of PSG data, with minor adaptation it can be used for real-time classification, possibly with only a 5-second delay.
It is worth mentioning that all top-ranked algorithms for sleep arousal detection in the 2018 PhysioNet challenge [84]-[87] use longer analysis windows. The 5-second analysis window used by our algorithm makes it a potential candidate for real-time classification if needed. For real-time classification, we need to design the artifact removal filter based on the information in the current and/or previous time windows. To be more precise, we need to remove all future information which is available in the present version of our algorithm; in the current setup, the threshold of the artifact removal filter for each biosignal is calculated from the information of the entire time series (see Section III-A). After this stage, we only need to replace the BiLSTM layers by LSTM ones in the classifier (see Fig. 5). In that case, our algorithm can classify the PSG data in real time. However, its performance degrades drastically, by 0.13 and 0.05 points, to 0.36 and 0.85 in terms of AUPRC and AUROC, respectively. This is to be expected, since future information of the time series is not utilized in the LSTM network; it may also be due to the non-optimized network architecture and parameters for the new setup.

C. Study Limitations

The main limitation of the study is the annotation process of the PhysioNet dataset. During the labeling process, seven sleep technologists annotated the dataset. However, given the burden involved in manual annotation, each PSG recording was annotated by only one sleep technologist [16]. This calls into question both the consistency and the reliability of the annotations. Although the performance of our proposed method indicates that our algorithm can replicate the experts' annotations accurately, its medical significance is limited by the labeling process. It is clear that high inter- and intra-rater agreement between sleep technologists would lead to a more reliably annotated dataset, which would consequently result in greater clinical significance.
Moreover, the number of submissions and the run-time constraint imposed by the PhysioNet challenge rules are further limitations of this study.

VII. CONCLUSION

We have designed and implemented an automated PSG-based classification algorithm to detect non-apneic and non-hypopneic arousals. We have demonstrated and validated its performance using the 2018 PhysioNet challenge dataset, which is the newest and largest publicly available PSG dataset. Our proposed algorithm tied for the second-best score during the follow-up and official phases of the 2018 PhysioNet challenge, achieving state-of-the-art performance of 0.50 in terms of AUPRC. In this study, we have also paid special attention to extracting features based on the physiological processes during RERAs, which is missing in typical end-to-end deep learning algorithms. We have investigated and evaluated the importance of different types of features for automatic arousal detection. We have had interesting findings regarding the selected features, which were not expected a priori and may contribute to a better understanding of RERAs, helping us to develop new automated algorithms. Besides, we have developed an alternative semi-automatic PSG-based feature extraction method using the scattering transform and discussed possible directions for improving its performance.

VIII. ACKNOWLEDGMENT

Fig. 1. Magnitudes of the frequency spectra of the wavelets in the two filter banks. In the first filter bank Q = 2, J = 13, and P = 1; thus the number of wavelets is 14 (= J + P). In the second filter bank J = 8 and, since Q = 1, P = 0; thus the number of wavelets is 8 (= J + P).
Fig. 2. Magnitudes of the frequency spectra of the wavelets in the first filter bank of Fig. 1, shown in the interval from 0 to 5 Hz. ψ_13 has the same bandwidth as ψ_12, but the bandwidths of ψ_12 to ψ_0 increase exponentially.

Fig. 3. Time-domain representation of the complex wavelets corresponding to the analytical filters in Fig. 2.

Fig. 5. The architecture of the proposed BiLSTM network.
TABLE I
THE CLASSIFICATION PERFORMANCE OF ALL FEATURES (SUBMITTED FOR THE PHYSIONET CHALLENGE) TOGETHER WITH PHYSIOLOGY INFORMED AND PHYSIOLOGY AGNOSTIC FEATURES

                 All features (n = 465)    Physiology informed (n = 75)   Physiology agnostic (n = 390)
Training data    AUPRC       AUROC         AUPRC       AUROC              AUPRC       AUROC
fold 1           0.46        0.90          0.50        0.92               0.42        0.89
fold 2           0.48        0.89          0.47        0.92               0.53        0.90
fold 3           0.50        0.91          0.56        0.93               0.49        0.89
fold 4           0.48        0.90          0.55        0.92               0.46        0.89
fold 5           0.43        0.91          0.51        0.92               0.46        0.88
fold 6           0.55        0.91          0.53        0.91               0.50        0.90
fold 7           0.54        0.90          0.51        0.92               0.43        0.90
fold 8           0.52        0.91          0.44        0.90               0.45        0.88
fold 9           0.52        0.91          0.49        0.91               0.44        0.90
fold 10          0.42        0.89          0.50        0.92               0.40        0.87
Mean (STD)       0.49 (0.04) 0.90 (0.01)   0.51 (0.04) 0.92 (0.01)        0.46 (0.04) 0.89 (0.01)
Test data        0.50        -             -           -                  -           -

TABLE II
THE CLASSIFICATION RESULTS OF DIFFERENT SETS OF FEATURES ON THE TRAINING DATASET

Feature type                # of features   PSG channel   AUPRC         AUROC
1) cross-channel            13              9-11          0.42 (0.04)   0.88 (0.01)
2) abdominal                6               9             0.38 (0.04)   0.86 (0.01)
3) chest                    5               10            0.29 (0.03)   0.82 (0.01)
4) airflow                  12              11            0.29 (0.03)   0.79 (0.02)
5) SaO2                     5               12            0.28 (0.05)   0.82 (0.04)
6) EEGs                     22              1-6           0.28 (0.04)   0.82 (0.01)
7) EOG                      7               7             0.22 (0.04)   0.77 (0.02)
8) chin EMG + ECG           5               8,13          0.25 (0.03)   0.80 (0.01)
Respiratory-related         41              9-12          0.46 (0.04)   0.90 (0.01)
Non-respiratory-related     34              1-8,13        0.33 (0.03)   0.85 (0.01)
Physiology informed         75              1-13          0.51 (0.04)   0.92 (0.01)
Physiology agnostic         390             7,9-13        0.46 (0.04)   0.89 (0.01)
All                         465             1-13          0.49 (0.04)   0.90 (0.01)
Apple Academic Press, 2016, pp. 83-98. M Bonnet, EEG arousals: scoring rules and examples. A preliminary report from sleep disorders atlas task force of the American Sleep Disorders Association. 15M. Bonnet et al., "EEG arousals: scoring rules and examples. A preliminary report from sleep disorders atlas task force of the American Sleep Disorders Association," Sleep, vol. 15, no. 2, pp. 173-184, 1992. Sleep medicine in neurology. D Kirsch, John Wiley & SonsD. Kirsch, Sleep medicine in neurology. John Wiley & Sons, 2013. Epidemiology of sleep disorders. I Trosman, A Ivanenko, Synopsis of Sleep Medicine. Apple Academic PressI. Trosman and A. Ivanenko, "Epidemiology of sleep disorders," in Synopsis of Sleep Medicine, S. R. Pandi-Perumal, Ed. Apple Academic Press, 2016, pp. 65-82. Sleep-related movement disorders and their unique motor manifestations. R E Salas, Principles and Practice of Sleep Medicine. ElsevierR. E. Salas et al., "Sleep-related movement disorders and their unique motor manifestations," in Principles and Practice of Sleep Medicine. Elsevier, 2017, pp. 1020-1029. Large-scale automated sleep staging. H Sun, Sleep. 4010H. Sun et al., "Large-scale automated sleep staging," Sleep, vol. 40, no. 10, 2017. Expert-level sleep scoring with deep neural networks. S , Journal of the American Medical Informatics Association. 2512S. Biswal et al., "Expert-level sleep scoring with deep neural networks," Journal of the American Medical Informatics Association, vol. 25, no. 12, pp. 1643-1650, 2018. Neural network analysis of sleep stages enables efficient diagnosis of narcolepsy. J B Stephansen, Nature Communications. 91J. B. Stephansen et al., "Neural network analysis of sleep stages enables efficient diagnosis of narcolepsy," Nature Communications, vol. 9, no. 1, 2018. SeqSleepNet: End-to-end hierarchical recurrent neural network for sequence-to-sequence automatic sleep staging. H Phan, IEEE Transactions on Neural Systems and Rehabilitation Engineering. 273H. 
Phan et al., "SeqSleepNet: End-to-end hierarchical recurrent neu- ral network for sequence-to-sequence automatic sleep staging," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 400-410, March 2019. Joint classification and prediction CNN framework for automatic sleep stage classification. H Phan, IEEE Transactions on Biomedical Engineering. 665H. Phan et al., "Joint classification and prediction CNN framework for automatic sleep stage classification," IEEE Transactions on Biomedical Engineering, vol. 66, no. 5, pp. 1285-1296, May 2019. Detection of REM sleep behaviour disorder by automated polysomnography analysis. N Cooray, Clinical Neurophysiology. 1304N. Cooray et al., "Detection of REM sleep behaviour disorder by automated polysomnography analysis," Clinical Neurophysiology, vol. 130, no. 4, pp. 505-514, 2019. Automatic human sleep stage scoring using deep neural networks. A Malafeev, Frontiers in neuroscience. 12A. Malafeev et al., "Automatic human sleep stage scoring using deep neural networks," Frontiers in neuroscience, vol. 12, 2018. Digital analysis and technical specifications. T , Journal of clinical sleep medicine. 302T. Penzel et al., "Digital analysis and technical specifications," Journal of clinical sleep medicine, vol. 3, no. 02, pp. 109-120, 2007. You snooze, you win: the PhysioNet/Computing in Cardiology Challenge. M M Ghassemi, M. M. Ghassemi et al. (2018) You snooze, you win: the PhysioNet/Computing in Cardiology Challenge 2018. [Online]. You snooze, you win: the physionet/computing in cardiology challenge 2018. M M Ghassemi, 2018 Computing in Cardiology (CinC). M. M. Ghassemi et al., "You snooze, you win: the physionet/computing in cardiology challenge 2018," in 2018 Computing in Cardiology (CinC). . IEEE. IEEE, 2018. The American Academy of Sleep Medicine inter-scorer reliability program: respiratory events. R S Rosenberg, S Van Hout, Journal of clinical sleep medicine. 1004R. S. Rosenberg and S. 
Van Hout, "The American Academy of Sleep Medicine inter-scorer reliability program: respiratory events," Journal of clinical sleep medicine, vol. 10, no. 04, pp. 447-454, 2014. Scoring of sleep stages, breathing, and arousals. M D L Santos, M Hirshkowitz, Oxford Textbook of Sleep Disorders, S. Chokroverty and L. Ferini-StrambiOxford University PressM. D. L. Santos and M. Hirshkowitz, "Scoring of sleep stages, breathing, and arousals," in Oxford Textbook of Sleep Disorders, S. Chokroverty and L. Ferini-Strambi, Eds. Oxford University Press, 2017. The AASM manual for the scoring of sleep and associated events: Rules, terminology and technical specifications. R B Berry, American Academy of Sleep MedicineR. B. Berry et al., "The AASM manual for the scoring of sleep and associated events: Rules, terminology and technical specifications," American Academy of Sleep Medicine, 2012. Characterization of obstructive nonapneic respiratory events in moderate sleep apnea syndrome. C Cracowski, American journal of respiratory and critical care medicine. 1646C. Cracowski et al., "Characterization of obstructive nonapneic respi- ratory events in moderate sleep apnea syndrome," American journal of respiratory and critical care medicine, vol. 164, no. 6, pp. 944-948, 2001. A cause of excessive daytime sleepiness: the upper airway resistance syndrome. C Guilleminault, Chest. 1043C. Guilleminault et al., "A cause of excessive daytime sleepiness: the upper airway resistance syndrome," Chest, vol. 104, no. 3, pp. 781-787, 1993. Upper airway resistance syndrome, nocturnal blood pressure monitoring, and borderline hypertension. C Guilleminault, Chest. 1094C. Guilleminault et al., "Upper airway resistance syndrome, nocturnal blood pressure monitoring, and borderline hypertension," Chest, vol. 109, no. 4, pp. 901-908, 1996. Habitually sleepy drivers have a high frequency of automobile crashes associated with respiratory disorders during sleep. 
J F Masa, American Journal of respiratory and critical care medicine. 1624J. F. Masa et al., "Habitually sleepy drivers have a high frequency of automobile crashes associated with respiratory disorders during sleep," American Journal of respiratory and critical care medicine, vol. 162, no. 4, pp. 1407-1412, 2000. Automatic sleep arousal detection using state distance analysis in phase space. M Zabihi, 2018 Computing in Cardiology (CinC). M. Zabihi et al., "Automatic sleep arousal detection using state distance analysis in phase space," in 2018 Computing in Cardiology (CinC). . IEEE. IEEE, 2018. Deep learning. Y Lecun, Nature. 5217553436Y. LeCun et al., "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015. Classification of time-series images using deep convolutional neural networks. N Hatami, Tenth International Conference on Machine Vision (ICMV 2017). 10696N. Hatami et al., "Classification of time-series images using deep convolutional neural networks," in Tenth International Conference on Machine Vision (ICMV 2017), vol. 10696, 2018. Group invariant scattering. S Mallat, Communications on Pure and Applied Mathematics. 6510S. Mallat, "Group invariant scattering," Communications on Pure and Applied Mathematics, vol. 65, no. 10, pp. 1331-1398, 2012. Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997. Detection of atrial fibrillation in ECG hand-held devices using a random forest classifier. M Zabihi, 2017 Computing in Cardiology (CinC). IEEEM. Zabihi et al., "Detection of atrial fibrillation in ECG hand-held de- vices using a random forest classifier," in 2017 Computing in Cardiology (CinC). IEEE, 2017. ECG rhythm analysis during manual chest compressions using an artefact removal filter and random forest classifiers. I Isasi, 2018 Computing in Cardiology (CinC). IEEEI. 
Isasi et al., "ECG rhythm analysis during manual chest compressions using an artefact removal filter and random forest classifiers," in 2018 Computing in Cardiology (CinC). IEEE, 2018. Detecting strange attractors in turbulence. F Takens, Dynamical systems and turbulence. WarwickSpringerF. Takens, "Detecting strange attractors in turbulence," in Dynamical systems and turbulence, Warwick 1980. Springer, 1981, pp. 366-381. ECG-based classification of resuscitation cardiac rhythms for retrospective data analysis. A B Rad, IEEE Transactions on Biomedical Engineering. 6410A. B. Rad et al., "ECG-based classification of resuscitation cardiac rhythms for retrospective data analysis," IEEE Transactions on Biomed- ical Engineering, vol. 64, no. 10, pp. 2411-2418, 2017. The parasomnias and other sleep-related movement disorders. M J Thorpy, G Plazzi, Cambridge University PressM. J. Thorpy and G. Plazzi, The parasomnias and other sleep-related movement disorders. Cambridge University Press, 2010. . J F Pagel, S R Pandi-Perumal, Primary Care Sleep Medicine: A Practical Guide. SpringerJ. F. Pagel and S. R. Pandi-Perumal, Primary Care Sleep Medicine: A Practical Guide. Springer, 2014. Assessment of thoracoabdominal bands to detect respiratory effort-related arousal. J Masa, European Respiratory Journal. 224J. Masa et al., "Assessment of thoracoabdominal bands to detect res- piratory effort-related arousal," European Respiratory Journal, vol. 22, no. 4, pp. 661-667, 2003. Asynchronous thoracoabdominal movements in chronic airflow obstruction (CAO)," in Modeling and Control of Ventilation. M Goldman, SpringerM. Goldman et al., "Asynchronous thoracoabdominal movements in chronic airflow obstruction (CAO)," in Modeling and Control of Venti- lation. Springer, 1995, pp. 95-100. Atlas of Sleep Medicine. L E Krahn, CRC PressL. E. Krahn et al., Atlas of Sleep Medicine. CRC Press, 2010. An introduction to frames and Riesz bases. O Christensen, Springer2nd edO. 
Christensen, An introduction to frames and Riesz bases, 2nd ed. Springer, 2016. A new analysis technique for time series data. J P Burg, NATO Advanced Study Institute of Signal Processing with emphasis on Underwater Acoustics. New YorkIEEE PressJ. P. Burg, "A new analysis technique for time series data," in NATO Advanced Study Institute of Signal Processing with emphasis on Under- water Acoustics. New York: IEEE Press, 1968. A new look at the statistical model identification. H Akaike, IEEE Transactions on Automatic Control. 196H. Akaike, "A new look at the statistical model identification," IEEE Transactions on Automatic Control, vol. 19, no. 6, pp. 716-723, 1974. Estimating the dimension of a model. G Schwarz, The Annals of Statistics. 62G. Schwarz et al., "Estimating the dimension of a model," The Annals of Statistics, vol. 6, no. 2, pp. 461-464, 1978. The AASM manual for the scoring of sleep and associated events: rules, terminology and technical specifications. R B Berry, American Academy of Sleep Medicine. R. B. Berry et al., The AASM manual for the scoring of sleep and associ- ated events: rules, terminology and technical specifications. American Academy of Sleep Medicine, 2018. Polysomnography and other investigations for sleep disorders. Z Zaiwalla, R Killick, Oxford Textbook of Clinical Neurophysiology. Oxford Univ. Press187Z. Zaiwalla and R. Killick, "Polysomnography and other investigations for sleep disorders," in Oxford Textbook of Clinical Neurophysiology. Oxford Univ. Press, 2017, vol. 187. Understanding deep convolutional networks. S Mallat, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 37420150203S. Mallat, "Understanding deep convolutional networks," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engi- neering Sciences, vol. 374, no. 2065, p. 20150203, 2016. Learning representations by back-propagating errors. D E Rumelhart, Nature. 3239D. E. 
Rumelhart et al., "Learning representations by back-propagating errors," Nature, vol. 323, no. 9, 1986. S Mallat, A Wavelet Tour of Signal Processing. Orlando, FL, USAAcademic Press, IncThe Sparse Way. 3rd ed.S. Mallat, A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way, 3rd ed. Orlando, FL, USA: Academic Press, Inc., 2008. Multiscale scattering for audio classification. J Andén, S Mallat, ISMIR. Miami, FL. J. Andén and S. Mallat, "Multiscale scattering for audio classification." in ISMIR. Miami, FL, 2011, pp. 657-662. Phase retrieval for the Cauchy wavelet transform. S Mallat, I Waldspurger, Journal of Fourier Analysis and Applications. 216S. Mallat and I. Waldspurger, "Phase retrieval for the Cauchy wavelet transform," Journal of Fourier Analysis and Applications, vol. 21, no. 6, pp. 1251-1309, 2015. Exponential decay of scattering coefficients. I Waldspurger, 2017 International Conference on Sampling Theory and Applications (SampTA). IEEEI. Waldspurger, "Exponential decay of scattering coefficients," in 2017 International Conference on Sampling Theory and Applications (SampTA). IEEE, 2017, pp. 143-146. Deep scattering spectrum. J Andén, S Mallat, IEEE Transactions on Signal Processing. 6216J. Andén and S. Mallat, "Deep scattering spectrum," IEEE Transactions on Signal Processing, vol. 62, no. 16, pp. 4114-4128, 2014. Invariant scattering convolution networks. J Bruna, S Mallat, IEEE transactions on pattern analysis and machine intelligence. 35J. Bruna and S. Mallat, "Invariant scattering convolution networks," IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 1872-1886, 2013. Rotation, scaling and deformation invariant scattering for texture discrimination. L Sifre, S Mallat, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionL. Sifre and S. 
Mallat, "Rotation, scaling and deformation invariant scattering for texture discrimination," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 1233- 1240. Classification with joint time-frequency scattering. J Andén, arXiv:1807.08869arXiv preprintJ. Andén et al., "Classification with joint time-frequency scattering," arXiv preprint arXiv:1807.08869, 2018. Learning to forget: Continual prediction with LSTM. F A Gers, Neural Computation. 1210F. A. Gers et al., "Learning to forget: Continual prediction with LSTM," Neural Computation, vol. 12, no. 10, pp. 2451-2471, 2000. Machine learning for sequential data: A review. T G Dietterich, Joint IAPR international workshops on statistical techniques in pattern recognition (SPR) and structural and syntactic pattern recognition (SSPR). SpringerT. G. Dietterich, "Machine learning for sequential data: A review," in Joint IAPR international workshops on statistical techniques in pattern recognition (SPR) and structural and syntactic pattern recognition (SSPR). Springer, 2002, pp. 15-30. Random forests. L Breiman, Machine learning. 451L. Breiman, "Random forests," Machine learning, vol. 45, no. 1, pp. 5-32, 2001. I Goodfellow, Deep Learning. MIT PressI. Goodfellow et al., Deep Learning. MIT Press, 2016. Learning long-term dependencies with gradient descent is difficult. Y Bengio, IEEE transactions on neural networks. 52Y. Bengio et al., "Learning long-term dependencies with gradient descent is difficult," IEEE transactions on neural networks, vol. 5, no. 2, pp. 157-166, 1994. Fast and accurate deep network learning by exponential linear units (ELUs). C Djork-Arné, Proceedings of the International Conference on Learning Representations (ICLR). the International Conference on Learning Representations (ICLR)6C. 
Djork-Arné et al., "Fast and accurate deep network learning by exponential linear units (ELUs)," in Proceedings of the International Conference on Learning Representations (ICLR), vol. 6, 2016. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. K He, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionK. He et al., "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026-1034. Empirical evaluation of rectified activations in convolutional network. B Xu, arXiv:1505.00853arXiv preprintB. Xu et al., "Empirical evaluation of rectified activations in convolu- tional network," arXiv preprint arXiv:1505.00853, 2015. Improving neural networks by preventing coadaptation of feature detectors. G E Hinton, arXiv:1207.0580arXiv preprintG. E. Hinton et al., "Improving neural networks by preventing co- adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012. Dropout: a simple way to prevent neural networks from overfitting. N Srivastava, The Journal of Machine Learning Research. 151N. Srivastava et al., "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014. A theoretically grounded application of dropout in recurrent neural networks. Y Gal, Z Ghahramani, Advances in neural information processing systems. Y. Gal and Z. Ghahramani, "A theoretically grounded application of dropout in recurrent neural networks," in Advances in neural information processing systems, 2016, pp. 1019-1027. Adam: A method for stochastic optimization. D P Kingma, J Ba, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsD. P. Kingma and J. 
Ba, "Adam: A method for stochastic optimization," in Proceedings of the International Conference on Learning Represen- tations, 2015. On the difficulty of training recurrent neural networks. R Pascanu, International Conference on Machine Learning. R. Pascanu et al., "On the difficulty of training recurrent neural net- works," in International Conference on Machine Learning, 2013, pp. 1310-1318. Automated detection of sleep arousals from polysomnography data using a dense convolutional neural network. M Howe-Patterson, 2018 Computing in Cardiology (CinC). IEEEM. Howe-Patterson et al., "Automated detection of sleep arousals from polysomnography data using a dense convolutional neural network," in 2018 Computing in Cardiology (CinC). IEEE, 2018. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. Y N Dauphin, Advances in Neural Information Processing Systems. Y. N. Dauphin et al., "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization," in Advances in Neural Information Processing Systems, 2014, pp. 2933-2941. The loss surfaces of multilayer networks. A Choromanska, Artificial Intelligence and Statistics. A. Choromanska et al., "The loss surfaces of multilayer networks," in Artificial Intelligence and Statistics, 2015, pp. 192-204. Upper airway resistance syndrome. M A C Bornemann, Geriatric Otolaryngology. CRC PressM. A. C. Bornemann et al., "Upper airway resistance syndrome," in Geriatric Otolaryngology. CRC Press, 2006, pp. 437-448. The nature of arousal in sleep. P Halász, Journal of Sleep Research. 131P. Halász et al., "The nature of arousal in sleep," Journal of Sleep Research, vol. 13, no. 1, pp. 1-23, 2004. Cardiac activation during arousal in humans: further evidence for hierarchy in the arousal response. E Sforza, Clinical Neurophysiology. 1119E. 
Sforza et al., "Cardiac activation during arousal in humans: further evidence for hierarchy in the arousal response," Clinical Neurophysiol- ogy, vol. 111, no. 9, pp. 1611-1619, 2000. Quantitative analysis of sleep EEG microstructure in the time-frequency domain. F. De Carli, Brain Research Bulletin. 635F. De Carli et al., "Quantitative analysis of sleep EEG microstructure in the time-frequency domain," Brain Research Bulletin, vol. 63, no. 5, pp. 399-405, 2004. CAP and arousals in the structural development of sleep: an integrative perspective. M G Terzano, Sleep medicine. 33M. G. Terzano et al., "CAP and arousals in the structural development of sleep: an integrative perspective," Sleep medicine, vol. 3, no. 3, pp. 221-229, 2002. Upper airway resistance syndrome: central electroencephalographic power and changes in breathing effort. J E Black, American Journal of Respiratory and Critical Care Medicine. 1622J. E. Black et al., "Upper airway resistance syndrome: central elec- troencephalographic power and changes in breathing effort," American Journal of Respiratory and Critical Care Medicine, vol. 162, no. 2, pp. 406-411, 2000. Arousal, EEG spectral power and pulse transit time in UARS and mild OSAS subjects. D Poyares, Clinical Neurophysiology. 11310D. Poyares et al., "Arousal, EEG spectral power and pulse transit time in UARS and mild OSAS subjects," Clinical Neurophysiology, vol. 113, no. 10, pp. 1598-1606, 2002. Within-night variation in respiratory effort preceding apnea termination and EEG delta power in sleep apnea. R B Berry, Journal of Applied Physiology. 854R. B. Berry et al., "Within-night variation in respiratory effort preceding apnea termination and EEG delta power in sleep apnea," Journal of Applied Physiology, vol. 85, no. 4, pp. 1434-1441, 1998. Topographical distribution of spindles and Kcomplexes in normal subjects. L Mccormick, Sleep. 2011L. 
McCormick et al., "Topographical distribution of spindles and K- complexes in normal subjects," Sleep, vol. 20, no. 11, pp. 939-941, 1997. Clinical correlates of frontal intermittent rhythmic delta activity (FIRDA). E A Accolla, Clinical Neurophysiology. 1221E. A. Accolla et al., "Clinical correlates of frontal intermittent rhythmic delta activity (FIRDA)," Clinical Neurophysiology, vol. 122, no. 1, pp. 27-31, 2011. Atlas of brain mapping: topographic mapping of EEG and evoked potentials. K Maurer, T Dierks, Springer Science & Business MediaK. Maurer and T. Dierks, Atlas of brain mapping: topographic mapping of EEG and evoked potentials. Springer Science & Business Media, 2012. Accuracy of automatic polysomnography scoring using frontal electrodes. M Younes, Journal of Clinical Sleep Medicine. 1205M. Younes et al., "Accuracy of automatic polysomnography scoring using frontal electrodes," Journal of Clinical Sleep Medicine, vol. 12, no. 05, pp. 735-746, 2016. Performance of an automated polysomnography scoring system versus computer-assisted manual scoring. A Malhotra, Sleep. 364A. Malhotra et al., "Performance of an automated polysomnography scoring system versus computer-assisted manual scoring," Sleep, vol. 36, no. 4, pp. 573-582, 2013. Utility of technologist editing of polysomnography scoring performed by a validated automatic system. M Younes, Annals of the American Thoracic Society. 128M. Younes et al., "Utility of technologist editing of polysomnography scoring performed by a validated automatic system," Annals of the American Thoracic Society, vol. 12, no. 8, pp. 1206-1218, 2015. Automatic detection of target regions of respiratory effort-related arousals using recurrent neural networks. H M Þráinsson, 2018 Computing in Cardiology (CinC). IEEEH. M. Þráinsson et al., "Automatic detection of target regions of respiratory effort-related arousals using recurrent neural networks," in 2018 Computing in Cardiology (CinC). IEEE, 2018. 
Identification of arousals with deep neural networks (DNNs) using different physiological signals. R He, 2018 Computing in Cardiology (CinC). IEEER. He et al., "Identification of arousals with deep neural networks (DNNs) using different physiological signals," in 2018 Computing in Cardiology (CinC). IEEE, 2018. Using auxiliary loss to improve sleep arousal detection with neural network. B Varga, 2018 Computing in Cardiology (CinC). IEEEB. Varga et al., "Using auxiliary loss to improve sleep arousal detection with neural network," in 2018 Computing in Cardiology (CinC). IEEE, 2018. Automated recognition of sleep arousal using multimodal and personalized deep ensembles of neural networks. A Patane, 2018 Computing in Cardiology (CinC). IEEEA. Patane et al., "Automated recognition of sleep arousal using multi- modal and personalized deep ensembles of neural networks," in 2018 Computing in Cardiology (CinC). IEEE, 2018.
[]
[ "Lee", "Lee" ]
[ "Yang Wang ", "Jaap H Abbring \nCentER\nDepartment of Econometrics & OR\nSchool of Business\nTilburg University\nP.O. Box 901535000 LETilburgThe Netherlands; and CEPR\n\nUniversity of Chicago\n5807 South Woodlawn Avenue60637ChicagoILUSA\n", "Øystein Daljord [email protected]:[email protected]. " ]
[ "CentER\nDepartment of Econometrics & OR\nSchool of Business\nTilburg University\nP.O. Box 901535000 LETilburgThe Netherlands; and CEPR", "University of Chicago\n5807 South Woodlawn Avenue60637ChicagoILUSA" ]
[ "Norets and Tang" ]
The recent literature often citesFang and Wang (2015)for analyzing the identification of time preferences in dynamic discrete choice under exclusion restrictions (e.g.Gayle et al., 2018). Fang and Wang's Proposition 2 claims generic identification of a dynamic discrete choice model with hyperbolic discounting. This claim uses a definition of "generic" that does not preclude the possibility that a generically identified model is nowhere identified.To illustrate this point, we provide two simple examples of models that are generically identified in Fang and Wang's sense, but that are, respectively, everywhere and nowhere identified. We conclude that Proposition 2 is void: It has no implications for identification of the dynamic discrete choice model. We show that its proof is incorrect and incomplete and suggest alternative approaches to identification. * This comment incorporates material from Appendix B of the August 2018 version of Abbring and Daljord (2019) (arXiv:1808.10651v1 [econ.EM]), which we have deleted from the current draft of that paper. Thanks to Hanming Fang, Christian Hansen, and Eduardo Souza-Rodrigues for helpful comments and discussion.
10.1111/iere.12434
[ "https://arxiv.org/pdf/1905.07048v2.pdf" ]
158,047,040
1905.07048
d44c17d0212d5ef248a3aaad65bf06c524604d6a
Keywords: discount factor; dynamic discrete choice; generic identification; transversality theorem. JEL codes: C25, C61.

1 Introduction

Fang and Wang (2015) studied the identification of an infinite-horizon, stationary dynamic discrete choice model with partially naive hyperbolic time preferences. In each period, the agent chooses an action i from I ≡ {0, 1, . . .
, I}, I ∈ N, after she observes that period's Markov state (x, ε), where x takes values in a finite set X and ε = (ε_0, ε_1, . . . , ε_I) ∈ R^(I+1).[1] This returns instantaneous utility u*_i(x, ε) = u_i(x) + ε_i. It also controls the evolution of x: given choice i in state (x, ε), it takes the value x' ∈ X in the next period with probability π(x'|x, i). In contrast, the components of ε are mutually independent with type-1 extreme value distributions, independently of x', (x, ε), and choice i. The agent has rational expectations; in particular, she believes x to evolve according to the controlled Markov transition distribution π. She discounts future utility with a standard factor δ and present bias factor β, and perceives future selves to have present bias factor β̃. With a normalization u_0(x) = 0 for all x ∈ X, the model's unknown primitives are an IX-vector u with the values of u_i(x) for i ∈ I\{0} and x ∈ X, the discount function parameters (β, β̃, δ), and a matrix Π with the state transition probabilities π(x'|x, i) for i ∈ I, x ∈ X, and x' ∈ X. Here, X = |X| is the number of elements of X.

[Footnote 1: This paper's footnotes document various minor errors and inconsistencies in Fang and Wang that are not central to our comments, but that we have corrected in the main text to ensure clarity and consistency. Here, for example, we have included ε_0 in ε. Fang and Wang (p. 568) specified ε = (ε_1, . . . , ε_I) ∈ R^I and only assumed u*_i(x, ε) = u_i(x) + ε_i for i ∈ I\{0}. However, Fang and Wang subsequently used u*_0(x, ε) = u_0(x) + ε_0, with ε_0, . . . , ε_I independent with type-1 extreme value distributions, to get logit choice probabilities P_i(x).]

The econometrician's data are the state transition probabilities Π and a matrix P that collects the conditional probabilities P_i(x) that the agent chooses i in state x, i ∈ I and x ∈ X. Because probabilities sum to one, the (I+1)X + (I+1)X^2 choice and transition probabilities in (P, Π) can be represented by a vector that stacks IX + (I+1)X(X−1) of them. We adopt this representation and take (P, Π) ∈ [0, 1]^(IX+(I+1)X(X−1)) ⊂ R^(IX+(I+1)X(X−1)).[2]

[Footnote 2: Fang and Wang's online Appendix C instead specifies (P, Π) ∈ (Δ^(I+1))^X × (Δ^X × · · · × Δ^X)^(I+1) ⊂ R^(IX+(I+1)X(X−1)), with X copies of Δ^X in the second factor, without defining Δ. We guess that, for J ∈ N, Δ^J ≡ {(p_1, . . . , p_J) ∈ R^J : p_1 ≥ 0, . . . , p_J ≥ 0; p_1 + · · · + p_J = 1} denotes a probability simplex, but then (Δ^(I+1))^X × (Δ^X × · · · × Δ^X)^(I+1) lies in an (IX + (I+1)X(X−1))-dimensional linear subspace of R^((I+1)X+(I+1)X^2) rather than in R^(IX+(I+1)X(X−1)). However, all that matters for the reading of Fang and Wang's Proposition 2 and our comments is that Fang and Wang use Lebesgue measure on R^(IX+(I+1)X(X−1)) to decide between generic and exceptional sets of data; see Footnote 10.]

The transition probabilities Π directly identify the agent's (rational) beliefs. The conditional choice probabilities P are linked to the model's primitives by the assumption that the agent's actions follow a stationary perception-perfect strategy profile of the decision problem with beliefs Π and some utilities u* and discount factors (β*, β̃*, δ*). The extreme-value assumption ensures that the conditional choice probabilities P_i(x) have the logit form. As in the special case with geometric discounting (β = β̃ = 1), an application of Hotz and Miller's (1993) choice probability inversion gives IX equations that relate the IX + 3 parameters (u, β, β̃, δ) to the data (P, Π), one for each log choice probability contrast ln P_i(x) − ln P_0(x), i ∈ I\{0} and x ∈ X.[3]

For its main identification result (Proposition 2), Fang and Wang (p. 579) assumed that the observed state can be partitioned as x = (x_r, x_e), where x_r takes values in X_r, x_e takes values in X_e, X = X_r × X_e, and |X_e| ≥ 2; and that its Assumption 5 holds for all (x_r, x_e) ∈ X and (x_r, x_e') ∈ X.
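The logit form and Hotz and Miller's log-odds inversion discussed above can be sketched numerically. The sketch below is ours, not code from Fang and Wang; the sizes I and X and the value array v are illustrative assumptions.

```python
import numpy as np

# Logit choice probabilities and the Hotz-Miller log-odds inversion, as a
# numerical sketch. The sizes and the value array v are illustrative.
rng = np.random.default_rng(0)
I, X = 2, 3                       # actions 0, 1, 2 and three states
v = rng.normal(size=(I + 1, X))   # hypothetical choice-specific values v_i(x)

# Type-1 extreme value shocks imply P_i(x) = exp(v_i(x)) / sum_j exp(v_j(x)).
P = np.exp(v) / np.exp(v).sum(axis=0)

# Inversion: each log choice probability contrast recovers a value difference,
# giving the IX equations ln P_i(x) - ln P_0(x) = v_i(x) - v_0(x).
contrast = np.log(P[1:]) - np.log(P[0])
assert np.allclose(contrast, v[1:] - v[0])
```

The IX contrasts equal the choice-specific value differences exactly, which is why the inversion yields IX equations linking the data to the primitives.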
The first part of its Assumption 5 then requires that

    u_i(x_r, x_e) = u_i(x_r, x_e')  for all i ∈ I\{0}, x_r ∈ X_r, and (x_e, x_e') ∈ X_e × X_e,    (1)

which are I(|X_e| − 1)|X_r| different and nontrivial exclusion restrictions, one for each i ∈ I\{0}, each x_r ∈ X_r, and each of the |X_e| − 1 distinct pairs of subsequent x_e and x_e' in the (arbitrarily) ordered set X_e.[6] Taken together, the IX constraints resulting from Hotz and Miller's choice probability inversion and those in (1) implied by the exclusion restrictions form a system of IX + I(|X_e| − 1)|X_r| nonlinear equations in the IX + 3 parameters (u, β, β̃, δ) and the data (P, Π). Fang and Wang denoted this system with

    G(u, β, β̃, δ; P, Π) = 0.    (2)

[Footnote 3: Fang and Wang provided the analysis leading to these equations, but not the final equations themselves.]

[Footnote 4: Fang and Wang used the same notation for random variables and their realizations and, in Assumption 5, incorrectly referred to the state's values x_1 and x_2 as "state variables."]

[Footnote 5: The second part of Assumption 5 requires that the transition probabilities π(·|x, i) for some choice i differ between the same (x_r, x_e) and (x_r, x_e'). This condition cannot possibly be necessary for Fang and Wang's Proposition 2 to be true, as it holds generically according to its definition of "generic" (see Section 2).]

[Footnote 6: Fang and Wang's online Appendix C instead states that "the data must also satisfy the additional I × |X_e| × |X_r| equations requiring that u_i(x_r, x_e) = u_i(x_r) for each i ∈ I\{0}, each x_e ∈ X_e and each x_r ∈ X_r." Its subsequent analysis fails to appreciate that these I × |X_e| × |X_r| equations come with I × |X_r| additional parameters u_i(x_r), i ∈ I\{0}, x_r ∈ X_r (it concludes that the exclusion restrictions yield a system of "I × X + I × |X_e| × |X_r| equations in I × X + 3 unknowns (u, β, β̃, δ)"). Clearly, on balance, these I|X_e||X_r| equations only introduce I|X_e||X_r| − I|X_r| = I(|X_e| − 1)|X_r| additional restrictions, as in our representation. Of course, these restrictions are simply the equalities in (1) that can be derived by differencing Fang and Wang's equations u_i(x_r, x_e) = u_i(x_r) and u_i(x_r, x_e') = u_i(x_r).]

The system of equations (2) contains all the information linking the unknown parameters (u, β, β̃, δ) to the data (P, Π) under the assumed exclusion restrictions in (1). Therefore, Fang and Wang studied their model's identification by analyzing whether (2) uniquely determines (u, β, β̃, δ) for given data. It claimed the following result:[7]

Proposition 2. Consider the space of data sets that can be generated by the assumed data generating process for some primitives (u*, β*, β̃*, δ*). Suppose that there exist state variables that satisfy Assumption 5. Then, all the model parameters are generically identified if I(|X_e| − 1)|X_r| ≥ 4.

Fang and Wang does not formally define "generic identification," but paraphrases Proposition 2 as giving identification "for almost all data sets generated by the assumed hyperbolic discounting model" (p. 579). Fang and Wang's proof of Proposition 2, in its online Appendix C, further defines "almost all" and therewith "generic." The proof of Proposition 2 applies the transversality theorem to Fang and Wang's model to demonstrate that there are generically no parameters that are consistent with any given data. Next, it notes that since the model generated the data by assumption, there must exist some parameters consistent with the data. It concludes that these parameters are therefore generically the only parameters that are consistent with such data.

Section 2 uncovers Fang and Wang's definition of "generic" from this proof.
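The count of equations versus unknowns behind Proposition 2's condition I(|X_e| − 1)|X_r| ≥ 4 can be checked in a few lines. The helper `counts` and the toy set sizes below are our own illustrative assumptions, not code from Fang and Wang:

```python
# Counting equations versus unknowns in the system G(u, β, β̃, δ; P, Π) = 0.
# I is the number of actions besides 0; n_xr = |X_r| and n_xe = |X_e| are
# illustrative numbers of values of x_r and x_e.
def counts(I, n_xr, n_xe):
    X = n_xr * n_xe                               # total number of states
    n_equations = I * X + I * (n_xe - 1) * n_xr   # Hotz-Miller plus exclusions (1)
    n_unknowns = I * X + 3                        # u plus (β, β̃, δ)
    return n_equations, n_unknowns

# With I = 2, |X_r| = 2, |X_e| = 2: I(|X_e| - 1)|X_r| = 4 extra restrictions,
# so the system has one more equation (12) than unknowns (11).
eqs, unk = counts(2, 2, 2)
assert (eqs, unk) == (12, 11)
```

Whenever I(|X_e| − 1)|X_r| ≥ 4 > 3, the system has more equations than unknowns; this overdetermination is what the proof's transversality argument exploits.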
Then, it demonstrates that Proposition 2 is void: the model may be generically identified, in the sense of Fang and Wang, independently of whether any data sets that can be generated by the model correspond to a unique parameter vector. It then shows that the proof is incorrect. Finally, it notes that the proof is incomplete, as it fails to verify the rank condition for the transversality theorem that it invokes. It is shown that, independently of whether this rank condition holds, the proof has no implications for the model's identification. Section 3 concludes with a brief discussion of alternative approaches to identification in dynamic discrete choice models.

[Footnote 7: We quote Fang and Wang's Proposition 2 verbatim, except that we have replaced its condition I|X_e||X_r| ≥ 4 with the stronger condition I(|X_e| − 1)|X_r| ≥ 4. Proposition 2's proof in Fang and Wang's online Appendix C relies on the fact that "I × X + I × |X_e| × |X_r| ... is larger than the number of unknowns I × X + 3 under our identifying assumption that I × |X_e| × |X_r| ≥ 4." However, as we have explained in Footnote 6, the number of equations equals IX + I(|X_e| − 1)|X_r|, not IX + I|X_e||X_r|, so that I(|X_e| − 1)|X_r| ≥ 4 is required to ensure that there are more equations than unknowns. Note that this correction changes neither the substance of Fang and Wang's proof, which simply relies on having more equations than unknowns, nor that of our comment.]

2 A void generic identification result

The proof of Proposition 2 in Fang and Wang's online Appendix C first presents the following transversality theorem (Mas-Colell, 1985, Proposition 8.3.1):[8]

Theorem 1. Let F : A × B → R^m, A ⊂ R^n, B ⊂ R^s, be C^r with r > max{n − m, 0}. Suppose that 0 is a regular value of F; that is, F(a, b) = 0 implies rank ∂F(a, b) = m. Then, except for a set of b ∈ B of Lebesgue measure zero, F_b : A → R^m has 0 as a regular value.
Here, ∂F is the Jacobian of F with respect to (a, b); F_b is such that F_b(a) = F(a, b) for all a, b; and ∂F_b is the Jacobian with respect to a only. To prove Proposition 2, it applies this transversality theorem to the system of equations (2), with the following mapping of notation:⁹

Transversality theorem | Fang and Wang
F(a, b) ∈ R^m | G̃(u, β, β̃, δ; P, Π) ∈ R^{IX + I(|X_e| − 1)|X_r|}
m | number of equations in G̃: IX + I(|X_e| − 1)|X_r|
a ∈ A ⊂ R^n | unknown parameters (u, β, β̃, δ) ∈ R^{IX} × (0, 1]^3 ⊂ R^{IX + 3}
n | number of unknown parameters: IX + 3
b ∈ B ⊂ R^s | vector of probabilities in [0, 1]^{IX + (I+1)X(X−1)} ⊂ R^{IX + (I+1)X(X−1)} that represents the data (P, Π)
s | IX + (I + 1)X(X − 1)
a ↦ F_b(a) | (u, β, β̃, δ) ↦ G̃(u, β, β̃, δ; P, Π)

That is, Fang and Wang studied the generic identification of the vector a ∈ A ⊂ R^n of unknown parameters (u, β, β̃, δ) from the choice and transition probabilities b ∈ B ⊂ R^s by applying the transversality theorem to the system F(a, b) = 0 of m smooth equality constraints. This implicitly defines "for almost all data sets generated by the assumed hyperbolic discounting model" (and therewith "generically" in Proposition 2) to mean for all data b ∈ B in the model's range (the set of b ∈ B such that F(a, b) = 0 has at least one solution a ∈ A) outside a set of Lebesgue measure zero in R^s.¹⁰

Footnote 8. To avoid confusion with Fang and Wang's use of x for states, we slightly deviate from Mas-Colell's and Fang and Wang's notation and use a instead of x and A instead of N here.

Footnote 9. This mapping corrects two minor problems with Fang and Wang's mapping at the top of page 3 of its online Appendix. See Footnotes 2 and 6.

Footnote 10. Recall from Footnote 2 that it is not completely clear how Fang and Wang represent the choice and transition probability data, but that it is clear that they think of the data as living in R^s = R^{IX + (I+1)X(X−1)}.
A key problem with Fang and Wang's Proposition 2 is that its proof, given that the assumed rank condition holds (we return to this at the end of this section), establishes that the model's range has Lebesgue measure zero in R^s. Because the transversality theorem, as applied in Fang and Wang's proof, only has implications for data outside a set of Lebesgue measure zero, it has no consequences for identification from data in the model's range.

To be precise, suppose that the rank condition for the transversality theorem holds: rank ∂F(a, b) = m if F(a, b) = 0. Then, the transversality theorem implies that, for all b ∈ B outside a set of Lebesgue measure zero in R^s, rank ∂F_b(a) = m if F(a, b) = 0. Moreover, because a ∈ R^n, rank ∂F_b(a) ≤ n < m. Taken together, this implies that F(a, b) = 0 has no solutions a ∈ A, except for b ∈ B in a set of Lebesgue measure zero in R^s.

Consequently, given that the rank condition holds, Proposition 2 is void. It claims that F(a, b) = 0 has a unique solution a ∈ A for all b ∈ B in the model's range. Since the model's range has zero measure, it is excepted from the claim. Proposition 2 therefore makes no claim about the number of solutions in the range of the model. We note that Proposition 2 is not false. Formally, Proposition 2 is vacuously true, because it is a statement about a property of the elements of an empty set.¹¹ Moreover, its proof cannot easily be adapted to establish a more substantive identification result, for some or all data in the model's range, because its application of the transversality theorem has no implications for the number of parameters a ∈ A that solve F(a, b) = 0 for data b ∈ B in the model's zero-measure range. We illustrate these two points with two simple examples.
We first note that Fang and Wang's proof does not use the particular structure of the dynamic discrete choice model, but applies to any model that can be represented by a system of equations with more equations than unknowns under the regularity conditions stated above. Our examples therefore use highly stylized, linear models that allow easy and direct verification of the rank condition and the conclusions of the transversality theorem. Like Fang and Wang's model under the conditions of Proposition 2, both examples have models with more equations than unknowns (m > n). Their ranges have Lebesgue measure zero in R^s, so that generic identification vacuously holds. However, in the first example, a is uniquely determined from F(a, b) = 0 for b ∈ B in the model's range; in the second example, it is not.

Example 1 (Everywhere point identified). Suppose that the data are b = (b_1, b_2) ∈ B = R², the parameter is a ∈ A = R, and the model is F : R × R² → R², with

0 = F(a; b) = (b_1 − a, b_2 − a)′, ∂F(a; b) = [−1 1 0; −1 0 1], and ∂F_b(a) = (−1, −1)′.

Note that n = 1, s = 2, and m = 2. In this example, rank ∂F(a, b) = 2 always. The transversality theorem gives that F_b(a) = 0 implies rank ∂F_b(a) = 2 for almost all b ∈ R². Now, rank ∂F_b(a) ≤ n = 1, so F_b(a) ≠ 0 for all a ∈ A, for almost all b ∈ R². The model is linear, so we can do without the transversality theorem and directly observe that the model can only generate data b such that b_1 = b_2, which is nongeneric in B = R². Data b ∈ B that can be generated by this model uniquely determine a.

Footnote 10 (continued). The exact way the data are represented in R^s is irrelevant, because Lebesgue measure is invariant under affine transformations with determinant 1 or −1. In particular, Fang and Wang's representation and ours both assign zero measure to the same sets of choice and transition probabilities.

Footnote 11. It is vacuously true since any statement about a property of elements of an empty set is formally true.
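Example 1 can be checked mechanically. The snippet below is our own illustration (not code from the paper): randomly drawn data b are almost never rationalized by the model, while data in the measure-zero range b_1 = b_2 identify a uniquely.

```python
# Example 1 from the text: F(a; b) = (b1 - a, b2 - a) with one unknown a and
# two equations.  A solution exists only on the measure-zero line b1 = b2,
# where it is unique (a = b1).  Illustrative check, stdlib only.
import random

def solutions(b1, b2, tol=1e-12):
    """Return the set of a with F(a; b) = 0 (empty, or a single point)."""
    return [b1] if abs(b1 - b2) < tol else []

random.seed(1)
unsolvable = sum(not solutions(random.random(), random.random())
                 for _ in range(100_000))
print(unsolvable)           # 100000: almost no b in R^2 is rationalized
print(solutions(0.7, 0.7))  # [0.7]: in-range data pin down a uniquely
```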
In this example, the transversality theorem tells us that there are zero (not one) parameters that rationalize the data, for almost all data in R², and that data that are in the model's range, excepted from the transversality theorem, always identify the unknown parameter.

Example 2 (Nowhere point identified). Now suppose we have data b = (b_1, b_2, b_3) ∈ B = R³, a pair of parameters a = (a_1, a_2) ∈ A = R², and the model F : R² × R³ → R³, with

0 = F(a; b) = (b_1 − (a_1 + a_2), b_2 − (a_1 + a_2), b_3 − (a_1 + a_2))′,

∂F(a; b) = [−1 −1 1 0 0; −1 −1 0 1 0; −1 −1 0 0 1], and ∂F_b(a) = [−1 −1; −1 −1; −1 −1].

Note that n = 2, s = 3, and m = 3. In this example, rank ∂F(a, b) = 3 always. The transversality theorem gives that F_b(a) = 0 implies that rank ∂F_b(a) = 3 for almost all b ∈ R³. Since rank ∂F_b(a) ≤ n = 2, F_b(a) ≠ 0 for all a ∈ A, for almost all b. This again makes sense: the model, which requires F(a, b) = 0, can only generate data b such that b_1 = b_2 = b_3, which is nongeneric in R³. Data from the range of the model only identify a_1 + a_2 and never a_1 and a_2 separately. So, this is another example where transversality tells us there are zero (not one) parameters that match the data for almost all data. However, unlike in the previous example, data in the model's range never identify the parameters.

Together, these examples show that the transversality theorem, as applied in Fang and Wang's proof, has no implications for identification. Given that the rank condition for its application of the transversality theorem holds, Fang and Wang's proof is correct up to its last half sentence. The first half of the proof's last sentence correctly concludes that, for all data b ∈ B outside a set of Lebesgue measure zero, there exist no parameters a ∈ A that solve F(a, b) = 0. However, the last half sentence qualifies this conclusion with "except the true primitives (u*, β*, β̃*, δ*)... that generated the data."
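Example 2, discussed above, admits the same mechanical check (again our own illustration): data in the model's range b_1 = b_2 = b_3 are rationalized by a continuum of parameter vectors sharing the same sum a_1 + a_2, so the parameters are never point identified.

```python
# Example 2 from the text: F(a; b) = (b_i - (a1 + a2)), i = 1, 2, 3.  The
# system is solvable only if b1 = b2 = b3, and then every (a1, a2) on the
# line a1 + a2 = b1 solves it.  Illustrative check, stdlib only.

def solves(a, b, tol=1e-12):
    """True if F(a; b) = 0, i.e. every b_i equals a1 + a2."""
    return all(abs(bi - (a[0] + a[1])) < tol for bi in b)

b_in_range = (0.8, 0.8, 0.8)
print(solves((0.3, 0.5), b_in_range))        # True
print(solves((0.1, 0.7), b_in_range))        # True: a second, distinct solution
print(solves((0.3, 0.5), (0.8, 0.8, 0.9)))   # False: data outside the model's range
```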
This qualification does not follow from the preceding mathematical arguments. In particular, we have shown that Fang and Wang's application of the transversality theorem implies that zero, not one, parameters solve the model for all data outside a set of measure zero. The transversality theorem has no implications for the number of parameters that solve the model for data in an exceptional set, which includes the model's range. So, this last half sentence of Fang and Wang's proof is incorrect.

Finally, Fang and Wang's proof is also incomplete, because it fails to verify the key rank condition for its application of the transversality theorem: rank ∂F(a, b) = m if F(a, b) = 0. Instead, Fang and Wang noted that "this can be verified in the same way that we verify [a similar condition in the proof of Proposition 1]," but did not verify the latter condition either (p. 578). The incompleteness of the proof is, however, immaterial for the conclusion that can be drawn from it. If the rank condition holds, we know that the model's range has Lebesgue measure zero and is excepted from the transversality theorem. If the rank condition is violated, the transversality theorem does not apply. Either way, the proof has no implications for identification.

Discussion

The source of the problems with Fang and Wang's Proposition 2 is its focus on identification that is generic in the data space, rather than the parameter space. This is nonstandard and complicates the analysis in two ways. First, the specification of an appropriate measure directly on the data requires knowledge of the model's empirical content, i.e. the range of data that can be generated by varying the model parameters on their domain. Our discussion of Proposition 2 highlights the problems of ignoring the model's empirical content. Second, it is unclear how the concept of generic identification in the data space corresponds to the concept of generic identification in the parameter space.
The two concepts are generally not interchangeable, as the following stylized example illustrates. Consider a model that maps a parameter θ ∈ R to a choice probability p = p(θ) ∈ [0, 1]. Define "for almost all" θ (or p) to mean for all θ (or p) outside a set of Lebesgue measure 0. If p(θ) = 1/(1 + exp(θ)), then θ is identified for almost all p and almost all θ. If instead p(θ) equals 0 if θ ≤ 0, θ if θ ∈ (0, 1), and 1 if θ ≥ 1, then θ is identified for almost all p, but not for almost all θ.

One could possibly derive an identification result for the case with more equations than free parameters that is generic in the parameter space instead, following e.g. Sargan (1983), McManus (1992), and Ekeland et al. (2004). One would also have to choose between a measure-theoretic definition of "genericity," like Fang and Wang's, and a topological one. McManus (1992) and Ekeland et al. (2004) provide discussion. Generic identification, however, is a weak concept of identification, and particularly so if the exceptional set cannot be characterized. The very small subsets where identification fails may happen to contain economically important models. One example is Ekeland et al. (2004), which shows that the generic identification of the hedonic model does not cover the linear-quadratic special case that is at the center of most applied work.

Abbring and Daljord (2019) offers an alternative approach that dispenses with the concept of generic identification. It instead exploits the specific structure of the dynamic discrete choice model to analyze identification of a special case of Fang and Wang's model, with exponential discounting (β = β̃ = 1). It shows that each exclusion restriction in (1) for some x_r ∈ X_r and distinct x_e ∈ X_e and x_e′ ∈ X_e gives a single moment condition that relates the discount factor δ to the choice and transition probabilities.¹² These moment conditions contain all the information in the data about δ, and therewith u.¹³
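The stylized mapping p(θ) above can be written out explicitly. The snippet below is our own illustration (helper names are ours): it returns the preimage of a choice probability p under the piecewise model, showing that point identification fails only at p ∈ {0, 1}, a null set in the data space, while it fails on the half-lines θ ≤ 0 and θ ≥ 1, a positive-measure set in the parameter space.

```python
# Piecewise model from the text: p(theta) = 0 for theta <= 0, theta on (0, 1),
# and 1 for theta >= 1.  The preimage of p is a singleton iff p is in (0, 1).
import math

def p_logistic(theta):
    return 1.0 / (1.0 + math.exp(theta))   # strictly decreasing: invertible everywhere

def preimage_piecewise(p):
    """Describe {theta : p(theta) = p} for the piecewise model."""
    if p == 0.0:
        return "half-line theta <= 0"      # not point identified
    if p == 1.0:
        return "half-line theta >= 1"      # not point identified
    return ("point", p)                    # unique theta = p for p in (0, 1)

# Identified for almost all p: only p in {0, 1} (measure zero in [0, 1]) fails.
print(preimage_piecewise(0.4))             # ('point', 0.4)
print(preimage_piecewise(0.0))             # half-line theta <= 0
# But not for almost all theta: every theta <= 0 or theta >= 1 (positive
# measure) lands in a non-singleton preimage.
```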
The analysis shows that each single exclusion restriction in general gives set identification, where the identified set is finite. For important special cases, such as models with one-period finite dependence (e.g. Rust, 1987; De Groote and Verboven, 2018), the exclusion restriction gives point identification. Abbring et al. (2018) showed that it is similarly possible to concentrate the identification analysis of a model with sophisticated present-biased preferences (β̃ = β) on a small number of moment conditions derived directly from equally many exclusion restrictions. However, this approach does not extend to Fang and Wang's partially naive case. The identification of partially naive time preferences seems to require an analysis of the full system of equations and remains an open question.

Footnote 12. The exclusion restrictions in (1) are special cases of the ones considered in Abbring and Daljord (2019).

Footnote 13. Abbring and Daljord's Section 4 noted that a version of Magnac and Thesmar's Proposition 2 holds: there exist unique (up to a standard utility normalization) values of the primitives (notably, u) that rationalize the data for any given discount factor δ ∈ [0, 1). The joint identification of a non-parametric utility function and the discount factor is therefore reduced to the conditions on δ derived from the exclusion restriction.

References

Abbring, J. H. and Ø. Daljord (2019). Identifying the discount factor in dynamic discrete choice models. Working Paper 2018-55 (revised), Becker Friedman Institute, Chicago. arXiv:1808.10651 [econ.EM].
Abbring, J. H., Ø. Daljord, and F. Iskhakov (2018). Identifying present-biased discount functions in dynamic discrete choice models. Mimeo, CentER, Tilburg.
Bajari, P., C. S. Chu, D. Nekipelov, and M. Park (2016). Identification and semiparametric estimation of a finite horizon dynamic discrete choice model with a terminating action. Quantitative Marketing and Economics 14(4), 271-323.
Chan, M. K. (2017). Welfare dependence and self-control: An empirical analysis. The Review of Economic Studies 84(4), 1379-1423.
Ching, A., T. Erdem, and M. P. Keane (2013). Learning models: An assessment of progress, challenges, and new developments. Marketing Science 32(6), 931-938.
De Groote, O. and F. Verboven (2018). Subsidies and myopia in technology adoption: Evidence from solar photovoltaic systems. Technical report, forthcoming in American Economic Review.
Dubé, J.-P., G. J. Hitsch, and P. Jindal (2014). The joint identification of utility and discount functions from stated choice data: An application to durable goods adoption. Quantitative Marketing and Economics 12(4), 331-377.
Ekeland, I., J. J. Heckman, and L. Nesheim (2004). Identification and estimation of hedonic models. Journal of Political Economy 112(1), S60-S109.
Fang, H. and Y. Wang (2015). Estimating dynamic discrete choice models with hyperbolic discounting, with an application to mammography decisions. International Economic Review 56(2), 565-596.
Gayle, G.-L., L. Golan, and M. A. Soytas (2018). Estimation of dynastic life-cycle discrete choice models. Quantitative Economics 9(3), 1195-1241.
Gordon, B. R. and B. Sun (2015). A dynamic model of rational addiction: Evaluating cigarette taxes. Marketing Science 34(3), 452-470.
Hotz, V. J. and R. A. Miller (1993). Conditional choice probabilities and the estimation of dynamic models. Review of Economic Studies 60(3), 497-529.
Lee, R. (2013). Vertical integration and exclusivity in platform and two-sided markets. American Economic Review 103(7), 2960-3000.
Magnac, T. and D. Thesmar (2002). Identifying dynamic discrete choice processes. Econometrica 70, 801-816.
Mas-Colell, A. (1985). The Theory of General Economic Equilibrium. Cambridge University Press.
McManus, D. A. (1992). How common is identification in parametric models? Journal of Econometrics 53(1), 5-23.
Norets, A. and X. Tang (2014). Semiparametric inference in dynamic binary choice models. Review of Economic Studies 81(3), 1229-1262.
Rust, J. (1987). Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica 55, 999-1033.
Sargan, J. D. (1983). Identification and lack of identification. Econometrica 51(6), 1605-1633.
Yao, S., C. F. Mela, J. Chiang, and Y. Chen (2012). Determining consumers' discount rates with field studies. Journal of Marketing Research 49(6), 822-841.
Multi-parton interactions and rapidity gap survival probability in jet-gap-jet processes

Izabela Babiarz (Faculty of Mathematics and Natural Sciences, University of Rzeszów, ul. Pigonia 1, 35-310 Rzeszów, Poland), Rafał Staszewski (Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskiego 152, PL-31-342 Kraków, Poland; University of Rzeszów, PL-35-959 Rzeszów, Poland), and Antoni Szczurek (Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskiego 152, PL-31-342 Kraków, Poland; University of Rzeszów, PL-35-959 Rzeszów, Poland)

Abstract: We discuss an application of a dynamical multi-parton interaction model, tuned to measurements of underlying event topology, to the description of the destruction of rapidity gaps in jet-gap-jet processes at the LHC. We concentrate on the dynamical origin of the mechanism destroying the rapidity gap. The cross section for jet-gap-jet production is calculated within the LL BFKL approximation. We discuss the topology of final states without and with the MPI effects. We discuss some examples of selected kinematical situations (fixed jet rapidities and transverse momenta) as well as distributions averaged over the dynamics of the jet-gap-jet scattering. The colour-singlet ladder exchange amplitude for the partonic subprocess is implemented into the PYTHIA 8 generator, which is then used for hadronisation and for the simulation of the MPI effects. Several differential distributions are shown and discussed. We present the ratio of cross sections calculated with and without MPI effects as a function of the rapidity gap between the jets.
doi:10.1016/j.physletb.2017.05.095
arXiv:1704.00546
(Dated: 3 Apr 2017)

I. INTRODUCTION

Diffraction, i.e. strong interaction involving the exchange of the vacuum quantum numbers (the pomeron),¹ is a very broad field of research.
In recent years the understanding of diffraction and its connection to the microscopic picture of strong interactions has been greatly improved thanks to studies of hard diffraction, i.e. diffraction involving a hard scale, like high-p_T jets. The jet-gap-jet process is an example of diffractive jet production, in which the pomeron is exchanged between the produced jets. Contrary to other types of diffractive jet processes (e.g. single diffractive jets), the absolute value of the four-momentum carried by the pomeron is large. This provides a unique possibility to apply perturbative calculation methods to fully describe the diffractive exchange. The jet-gap-jet processes were measured at the Tevatron [2] and recently at the LHC [3].

One important ingredient in calculations of the hard diffractive cross section is the rapidity gap survival probability. In many calculations, see e.g. [4,5], the gap survival factor is assumed to be constant with respect to the kinematics of the event, depending only on the centre-of-mass energy. Recently, more detailed analyses were performed, in which the kinematic dependence was taken into account. The calculations were done for exclusive processes (see e.g. [6] and references therein) and for single diffraction [7]. Recently the gap survival in single diffractive processes was calculated dynamically by including MPIs [8]. In the present paper we study this topic for a different class of processes, the jet-gap-jet production. What process(es) are responsible for destroying the rapidity gap obtained in the pQCD calculation of colour-singlet exchange? In the present study we explore the role of multi-parton interactions, which are the main mechanism responsible for the understanding of underlying event topology, see e.g. [9-12]. In particular, we wish to address the problem of how much the dynamical calculation changes the somewhat academic BFKL result.

Footnote 1. The spin structure of the pomeron is a matter of current discussions [1].

II. PARTICLE PRODUCTION IN JET EVENTS

The difference in the underlying mechanisms of non-diffractive jet and jet-gap-jet production, the details of which are discussed in Section IV, affects not only the cross section and angular distribution of jets, but also the distributions of particles produced in the events. This difference originates from a different flow of colour charges in the events, which affects the hadron formation process. These effects can be studied using Monte Carlo event generators that simulate the hadronization process. The following results were obtained with PYTHIA 8.

First we wish to illustrate the situation for a selected kinematical configuration. Fig. 1 presents the rapidity distribution of particles produced in a pp interaction at √s = 7 TeV for events obtained with the gg → gg hard subprocess, where the gluons are scattered with fixed transverse momentum p_T = 50 GeV at rapidities of y = ±3. Two cases are studied: non-diffractive jets, where colour charges are exchanged between the scattered gluons, and jet-gap-jet production, in which the interacting gluons keep their colours. One can clearly see that the rapidity density of produced particles is highest around the rapidities of the scattered gluons, which reflects the jet structure of the events. In this region one does not see a large difference between the two cases (colour structures). On the other hand, in the region between the jets the difference is quite dramatic. When no colour charge is transferred between the gluons, the density of produced particles is reduced by two orders of magnitude. The particles produced at rapidities outside the jet system originate
1 The spin structure of the pomeron is a matter of current discussions [1]. 2 II. PARTICLE PRODUCTION IN JET EVENTS The difference in the underlying mechanism of the non-diffractive jet and jet-gap-jet production, the details of which are discussed in Section IV, affect not only the cross section and angular distribution of jets, but also the distributions of particles produces in the events. This difference originate from a different flow of the colour charges in the events, which affect the hadron formation process. These effects can be studied using Monte Carlo event generators that simulate the hadronization process. The following results were obtained with PYTHIA 8. First we wish to illustrate the situation for a selected kinematical situation. Fig. 1 presents the rapidity distribution of particles produced in a pp interaction at √ s = 7 TeV for events obtained with the gg → gg hard subprocess, where the gluons are scattered with fixed transverse momentum p T = 50 GeV at rapidities of y = ±3. Two cases are studied: nondiffractive jets, when colour charges are exchanged between the scattered gluons, and jet-gap-jet production, in which interacting gluons keep their colours. One can clearly see that the rapidity density of produced particles is highest around rapidities of scattered gluons, which reflects the jet structure of the events. In this region one does not see a large difference between the two cases (colour structures). On the other hand, in the region between the jets the difference is quite dramatic. When no colour charge is transferred between the gluons, the density of produced particles is reduced by two orders of magnitude. The particles produced at rapidities outside the jet system originate 3 from the hadronization of the proton remnants and from the fact that there is an colour transfer between the remnants and the scattered gluons. 
The suppression of particles production in the region between the jets will lead to large rapidity regions devoid of particles -rapidity gaps. However, the actual values of particles rapidities, and thus the size of the rapidity gap, is to some extend random and will fluctuate from event to event. This is true both for for non-diffractive jets as well as for jet-gap-jet events. On average, for the former ones one expects rather small gaps, and much bigger gaps for the latter ones. This is illustrated in Fig. 2, where the distributions of gap size are shown for jets with p T = 50 GeV and y = ±3. One can see that even though the distributions are well separated, their tails are rather long and they have a non-negligible overlap. This shows that, even neglecting other effects discussed later, these two processes cannot be fully separated experimentally (at least based solely on the rapidity gap size). III. MULTIPLE PARTON INTERACTIONS An additional complication to the picture presented in the previous section comes from the fact that hadrons are complicated objects that consist of many partons. In a single hadron-hadron collision more than one parton-parton interaction can take place. This phenomenon is known as the multiple parton interactions (MPI) or the underlying 4 event (UE) activity and it was extensively studied at Tevatron and the LHC [9,11,12]. The MPIs are modeled in PYTHIA with the help of minijets calculated in collinear factorization approach with a special treatment at low transverse momenta of minijets by multiplying standard cross section by a somewhat arbitrary suppression factor [13] F sup (p t ) = p 4 t (p 2 t0 + p 2 t ) 2 θ(p t − p t,cut ) . (3.1) Typically, MPI effects are responsible for increasing the particle production in the events, but for diffraction they have particularly important consequences. 
If the gg → gg or another partonic subprocess with a colour-singlet exchange is accompanied by another independent parton-parton interaction, additional particles can be produced in the region where a gap was expected. This is presented in Fig. 3, where rapidity distributions of particles produces in jet-gap-jet events are shown for the MPI effects in PYTHIA turned off (black) and turned on (red). It is crucial that even though the particle density y particle rapidity, in the region between the jets is greatly increased, it is possible to observe events with very large gap sizes. This is contrary to the case of non-diffractive jets, and it originates from the fact that it is possible to have events with no additional parton-parton interaction, in which a large gap can survive. The distributions of the gap size for events with and without MPI effects are presented in Fig. 4. For events with MPI effects the gap distribution consists of two parts. The distributions for low gap sizes is steeply falling, similarly to the non-diffractive events. This comes from the fact that additional interactions produce particles between the jets. The large-gap part originates from events where no additional interactions occurred. The distribution is similar to the one obtained without MPI effects, but reduced by a factor of about one order of magnitude. Fig. 4 shows also the gap size distribution plotted with MPI effects included, but only for events that do not contain any additional interactions. For very large gap sizes this distribution agrees with the one for all events. However, for medium gap sizes it is not. On the other hand, the shape of this distribution is the same as for the distribution without MPI effects, but scaled down. The difference between the red and the blue curves comes from events in which the additional interactions produce very few particles. In these events the initial rapidity gap is reduced, but not completely filled. 
Since experimentally the jet-gap-jet events are distinguished by the presence of large rapidity gaps, the MPI effects lead to a reduction of the measured cross sections with respect to the cross section of the actual colour-singlet exchange. This phenomenon is often referred to as absorptive corrections, and the corresponding probability as the gap survival probability. In order to estimate its magnitude one can perform event simulation with PYTHIA and count the fraction of events in which no additional parton interactions are present.

For the jet-gap-jet processes it is possible to use the existing Monte Carlo generators that take into account the modelling of multi-parton interactions. If such a generator correctly describes the MPI effects for standard jet production, it should also provide a correct
The observed behaviour can be qualitatively explained by energy conservation: when a bigger part of the proton energy is carried by the parton participating in the hard subprocess, less energy is available for additional interactions and they become less likely.

IV. HARD COLOUR-SINGLET EXCHANGE

The calculation of the jet-gap-jet process is based on QCD collinear factorisation, where the cross section for the hard subprocess, σ̂, is convoluted with the appropriate parton densities. An example diagram of the full process is presented in Fig. 6. The cross section can be written in the simple form

  dσ/dp_T = ∫∫ g_eff(x_1, μ_F²) g_eff(x_2, μ_F²) (dσ̂/dp_T) dx_1 dx_2 ,

where

  g_eff(x_k, μ_F²) = g(x_k, μ_F²) + (16/81) Σ_f [ q_f(x_k, μ_F²) + q̄_f(x_k, μ_F²) ] ,  k = 1, 2.

The jet-gap-jet process differs from typical jet production in the colour structure of the subprocess: here a colour-singlet ladder is exchanged between the partons. To a first approximation, the colour-singlet exchange can be described in perturbative QCD as an exchange of a pair of gluons that carry opposite colour charges [14]. A better approach is to describe it as a gluon ladder, which can be done within the BFKL framework. The first calculations of this type were performed in [15], followed by further studies, see e.g. [16-18]. In the present paper we use for illustration the LL BFKL formalism used previously e.g. in [17, 18]:

  A(Δη, p_T²) = (16 N_c π α_s² C_F / p_T²) Σ_{p=−∞}^{∞} ∫ (dγ / 2iπ) [p² − (γ − 1/2)²] exp(ᾱ χ_eff[2p, γ, ᾱ] Δη)
                / { [(γ − 1/2)² − (p − 1/2)²] [(γ − 1/2)² − (p + 1/2)²] } ,   (4.1)

where p_T is the jet transverse momentum and Δη is the rapidity distance between the partonic jets. The integral runs along the imaginary axis from 1/2 − i∞ to 1/2 + i∞, and only even conformal spins contribute. The LL kernel reads

  χ_LL = 2ψ(1) − ψ(1 − γ + |p|/2) − ψ(γ + |p|/2) ,   (4.2)

where ψ(γ) = d log Γ(γ)/dγ. In the LL BFKL approach a constant value of α_s is used [17].
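The kernel (4.2) is easy to evaluate numerically. The sketch below uses our own helper names; the digamma routine is a standard shift-plus-asymptotic-series implementation, included only to keep the snippet dependency-free. A useful sanity check is the well-known value χ_LL = 4 ln 2 at γ = 1/2, p = 0.

```python
import math

def digamma(x):
    """Digamma (psi) function for real x > 0, via the recurrence
    psi(x+1) = psi(x) + 1/x and an asymptotic expansion for large x."""
    result = 0.0
    while x < 10.0:                 # shift the argument upward
        result -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    # truncated asymptotic series with Bernoulli-number coefficients
    series = inv2 * (1.0/12.0 - inv2 * (1.0/120.0 - inv2 * (1.0/252.0 - inv2 / 240.0)))
    return result + math.log(x) - 0.5 / x - series

def chi_LL(gamma, p):
    """LL BFKL eigenvalue, Eq. (4.2):
    chi = 2 psi(1) - psi(1 - gamma + |p|/2) - psi(gamma + |p|/2)."""
    return (2.0 * digamma(1.0)
            - digamma(1.0 - gamma + abs(p) / 2.0)
            - digamma(gamma + abs(p) / 2.0))
```

At γ = 1/2 the p = 0 term gives the familiar 4 ln 2 that controls the LL pomeron intercept, while higher conformal spins (|p| ≥ 2) are shifted downward, e.g. χ_LL(1/2, 2) = 4 ln 2 − 4.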
In Eq. (4.1) ᾱ = α_s N_c / π. In Fig. 7 we show the LL BFKL amplitude as a function of the rapidity distance between the jets for a few values of jet transverse momenta. The increase at large rapidity distance is the typical BFKL growth, while the increase at low rapidity distances is caused by the presence of higher conformal spins. The calculation sketched above does not include the gap survival factor, which is a very important ingredient, as discussed in the next section. The leading-logarithm approximation of BFKL may not be sufficient to provide a reasonable description of the absolute cross section normalisation and the shapes of distributions, but it is sufficient for the presentation of the MPI effects in suppressing rapidity gaps discussed in the present paper, as will be explained below.

Here we wish to discuss our results for a broad range of jet rapidities, imposing only a minimal lower cut on jet transverse momenta. We fix the transverse momenta of the jets to be in the interval 40 GeV < p_{1T}, p_{2T} < 200 GeV and impose no cuts on jet rapidities at all. The calculations presented in this section have been performed using the framework of the PYTHIA 8 Monte Carlo event generator. The BFKL leading-logarithm amplitude for the hard subprocess has been calculated following [17, 18] and implemented into PYTHIA as new gg → gg, qg → qg, qq → qq, qq̄ → qq̄ subprocesses with the appropriate colour flow. With a realistic description of the jet-gap-jet dynamics (BFKL) and the MPI modeling by PYTHIA it is possible to study the rapidity gap differential cross sections. In this way the kinematics-dependent rapidity gap survival probability is properly averaged over different kinematic configurations. In addition, the effect of rapidity gap reduction (see Fig. 4) is also taken into account.
This is understandable, since here the events with additional interactions are not considered at all; previously they were included, but migrated to smaller values of Δη, which resulted in large ratio values in that region. It is interesting to compare not only the shape, but also the value of the gap survival probability. This is best done in the region of large Δη, where both definitions give an approximately flat behaviour. The definition that takes into account all events results in a gap survival factor of about 10%. On the other hand, the definition that counts only the events with no additional MPIs gives a value close to 6%. The difference is of the order of 40%, which is rather significant. It shows that the latter definition may be too simplistic to provide a precise description of jet-gap-jet processes. In addition, we also compare the situation where the jets can be created in the scattering of quarks and gluons with the situation where only the gg → gg process is included. The resulting gap survival probability is, within our present accuracy, the same in both cases. In the bottom panels of Fig. 9 we show for comparison also the result obtained for the two-gluon exchange model with the regularisation parameter m_g = 0.75 GeV (see [14] for the cross section formula and [19] for the value of the nonperturbative parameter). The gap survival factor for the case n_MPI = 0 is (at √s = 7 TeV) about 6-7%, independent of the modeling of the colour-singlet exchange.

In the present paper we have performed detailed studies of the role of multi-parton interactions in the reduction of the theoretical cross section and of the differential distributions for jet-gap-jet processes. The cross section and the differential distributions for jet-gap-jet processes have been calculated for illustration in the LL BFKL framework.
We have also tried to use a simple two-gluon exchange model regularised by an effective gluon mass to describe the jet-gap-jet process. The subprocess amplitudes for the colour-singlet exchange (BFKL ladder or two-gluon exchange) were implemented in the PYTHIA 8 generator, which was then used to simulate multi-parton interactions and hadronisation of the generated events. The parameters of the multi-parton interaction models in PYTHIA are tuned to measurements of observables related to the underlying event; in this sense we have no freedom to modify the MPIs. For pedagogical reasons we first studied particle (hadron) final states for the jet-gap-jet process at fixed kinematic configurations (fixed rapidities and transverse momenta of the jets). After inclusion of MPI effects we showed fractions of events with no extra activity (in addition to the hard jets) or no activity in some rapidity interval. Those fractions depend, though rather smoothly, on kinematic variables. Finally, we showed similar results when imposing only a cut on jet transverse momenta and integrating over almost the whole phase space (full range of jet rapidities). Again, the gap survival factor was shown as a function of the gap size and the jet transverse momenta. We found an interesting dependence on the size of the rapidity gap and almost no dependence on jet transverse momentum. A simple explanation of the first dependence has been offered: the MPIs suppress production of events with large rapidity gaps but create events with smaller rapidity gaps, so the resulting rapidity gap survival factor depends on the gap size. On the other hand, when imposing the requirement that no additional MPIs occur, the corresponding gap survival factor is almost constant, independent of the gap size. However, there is a sizeable dependence on the collision energy.
The ratios obtained for colour-singlet two-gluon exchange and for the BFKL ladder are almost the same.

FIG. 1: Rapidity distributions of particles produced in non-diffractive jet (black) and jet-gap-jet (red, curve with a dip at y = 0) events, for the selected kinematical situation.

FIG. 2: Rapidity gap distributions for non-diffractive jet (red, with maximum at Δη ≈ 0) and jet-gap-jet events (black, with maximum at Δη ≈ 5), for our selected kinematical configuration. No MPI effects are included here.

FIG. 3: Rapidity distributions of produced particles for jet-gap-jet events without (black) and with (red) multi-parton interactions, for our selected kinematical configuration.

FIG. 4: Rapidity gap distributions for jet-gap-jet events without MPI effects (black, the highest curve at Δη = 3-6), with MPI effects (red), and with MPI effects included but only for events in which no MPIs occurred, i.e. n_MPI = 0 (blue, the lowest curve).

Fig. 5 presents the dependence of the gap survival probability, defined as the fraction of events that do not contain any additional parton-parton interactions apart from the hard one, on a few kinematical variables. Since PYTHIA assumes the initial partons to be collinear with the protons, the kinematics of an event can be described by four parameters. A sensible choice is: the centre-of-mass energy √s, the invariant mass of the hard subprocess M_gg, the difference of rapidities of the scattered gluons Δy, and the rapidity of the gluon-gluon system y_gg. Fig. 5c presents the dependence of the survival probability on Δy. Fig. 5d presents the dependence on the rapidity of the gluon-gluon system. The dependence is rather flat for central values of rapidity and grows rapidly for |y_gg| > 3. For such strongly boosted events one of the incoming partons carries a large energy and the other one a very small one. In this situation the possibility of additional interactions is also reduced, because additional partons from both protons are needed for an extra MPI to occur.

FIG. 5: Kinematic dependence of the gap survival probability as a function of: a) centre-of-mass energy, b) invariant mass of the hard subprocess, c) rapidity distance between the scattered gluons, d) rapidity of the digluon system.

In summary, there is a strong dependence of the gap survival probability on kinematical variables. However, not all kinematic configurations are equally probable. For example, the parton distributions are larger at small values of Bjorken x, which favours small masses of the system. In addition, the dynamics of the colour-singlet exchange also plays some role.

FIG. 6: A schematic QCD diagram of colour-singlet exchange for the jet-gap-jet process in a pp collision. Only the gg-initiated process is shown explicitly.

FIG. 7: Subprocess amplitude in the LL BFKL approach as a function of the rapidity distance between jets for selected transverse momenta of the jets. In this calculation α_s = 0.17.

Fig. 8 presents such distributions for events with and without MPI effects. Both distributions fall rapidly, which originates predominantly from the shape of the parton densities. In the large-gap region the distribution with MPI effects is reduced with respect to the one without MPI. However, for small Δη the situation is the opposite. This comes from the fact that the integrated cross section is the same in both cases, since it is given only by the hard partonic mechanism: the occurrence of MPI effects does not change the normalisation of the distributions, but shifts events to smaller rapidity gap sizes.

Fig. 9 presents the ratio of the differential gap distributions with and without MPI effects. This ratio can be treated as an effective gap survival factor, including all effects previously discussed and averaged over all configurations of the dynamics of the BFKL colour-singlet exchange.
The occurrence of additional MPIs destroys large rapidity gaps and simultaneously increases the number of events with small rapidity gaps (see the left panels). Therefore the gap survival factor, defined in this way, depends on Δη (the gap size), see the left panels of Fig. 9. It is worth considering a different definition of the survival factor, namely the ratio obtained from the events in which no additional interactions occur. This definition is similar to the one typically used in the literature, where it is assumed that any additional interaction destroys the rapidity gap; that assumption leads to a flat dependence of the gap survival on Δη, as seen in Fig. 9 (right panels).

In Fig. 10 we show similar ratios, but now as a function of jet transverse momentum for a few selected rapidity gap intervals. No obvious dependence on the jet transverse momentum can be observed in the left panel, where we show the ratio of the distribution with MPI effects included to the corresponding distribution without MPI effects, in contrast to the dependence on rapidity gap size observed in the previous figure. For the smallest gaps (0 ≤ Δη < 1) the ratio is, to a good precision, equal to 1. This seems accidental and is connected with the bin size; it can be better understood by inspection of the left panels of Fig. 9 at Δη ≈ 0. In the right panel we show the ratio with the extra academic condition n_MPI = 0. Here the results for all rapidity gap intervals coincide within the limited Monte Carlo statistics. The ratios in the right panel are clearly smaller than those in the left panel.

FIG. 8: Rapidity gap distributions for jet-gap-jet events generated with and without MPI effects as a function of rapidity gap size. All parton combinations are included here. The left panel shows the rapidity gap distribution when MPI effects are included, while the right panel shows the result with the extra requirement n_MPI = 0.

FIG. 9: Ratio of rapidity gap distributions for jet-gap-jet events generated with and without MPI effects as a function of rapidity gap size. All partons are included here. For comparison the result for only gg → gg is shown by the dark blue line (a, b).
Results for colour-singlet two-gluon exchange are shown in panels (c, d).

FIG. 10: Ratio of rapidity gap distributions for jet-gap-jet events generated with and without MPI effects (left panel) and with the extra requirement n_MPI = 0 (right panel) as a function of jet transverse momentum for different intervals of rapidity gaps.

Summarising in one sentence, the MPI effects lead to a dependence of the so-called gap survival factor on kinematical variables, in contrast to what is usually assumed. (This statement is not necessarily true for other diffractive jet processes, where the production mechanism is somewhat different and not so well understood.)

Acknowledgments

This study was partially supported by the Polish National Science Center grant DEC-2014/15/B/ST2/02528 and by the Center for Innovation and Transfer of Natural Sciences and Engineering Knowledge in Rzeszów. We are indebted to Cyrille Marquet for a discussion of their BFKL calculations.

References

P. Lebiedowicz, O. Nachtmann and A. Szczurek, Phys. Rev. D93 (2016) 054015, arXiv:1601.04537 [hep-ph].
C. Ewerz, P. Lebiedowicz, O. Nachtmann and A. Szczurek, Phys. Lett. B763 (2016) 382-387, arXiv:1606.08067 [hep-ph].
C. Ewerz, M. Maniatis and O. Nachtmann, arXiv:1309.3478 [hep-ph].
D0 Collaboration, B. Abbott et al., Phys. Lett. B440 (1998) 189.
CMS Collaboration, Dijet production with a large rapidity gap between the jets, CMS Physics Analysis Summary FSQ-12-001.
M. Łuszczak, R. Maciuła and A. Szczurek, Phys. Rev. D91 (2015) 054024.
A. Chuinard, C. Royon and R. Staszewski, J. High Energ. Phys. (2016) 2016:92, doi:10.1007/JHEP04(2016)092.
P. Lebiedowicz and A. Szczurek, Phys. Rev. D92 (2015) 054001.
M. Łuszczak, R. Maciuła, A. Szczurek and M. Trzebiński, JHEP 02 (2017) 089.
C. O. Rasmussen and T. Sjöstrand, JHEP 1602 (2016) 142, doi:10.1007/JHEP02(2016)142, arXiv:1512.05525 [hep-ph].
V. Khachatryan et al., Eur. Phys. J. C76 (2016) 155.
K. Akiba et al., LHC forward physics, J. Phys. G: Nucl. Part. Phys. 43 (2016) 110201.
Proceedings of the Sixth International Workshop on Multiple Partonic Interactions at the Large Hadron Collider, Krakow, Poland, 3-7 November 2014, arXiv:1506.05829.
R. Field, Lecture presented at XXI Physics in Collisions (PIC2011), Vancouver, BC, Canada, August 28 - September 1, 2011, arXiv:1202.0901.
T. Sjöstrand et al., Comput. Phys. Commun. 191 (2015) 159, doi:10.1016/j.cpc.2015.01.024, arXiv:1410.3012 [hep-ph].
V. Barone and E. Predazzi, "High-Energy Particle Diffraction", Springer, Berlin 2002.
A. H. Mueller and W. K. Tang, Phys. Lett. B 284 (1992) 123, doi:10.1016/0370-2693(92)91936-4.
L. Motyka, A. D. Martin and M. G. Ryskin, Phys. Lett. B 524 (2002) 107, doi:10.1016/S0370-2693(01)01380-6, hep-ph/0110273.
O. Kepka, C. Marquet and C. Royon, Phys. Rev. D83 (2011) 034036.
F. Chevallier, O. Kepka, C. Marquet and C. Royon, Phys. Rev. D 79 (2009) 094019, doi:10.1103/PhysRevD.79.094019, arXiv:0903.4598 [hep-ph].
E. Meggiolaro, Phys. Lett. B 451 (1999) 414.
Robustification of Elliott's on-line EM algorithm for HMMs

Christina Erlwein, Peter Ruckdeschel
Fraunhofer ITWM, Department of Financial Mathematics, Fraunhofer-Platz 1, D-67663 Kaiserslautern

arXiv:1304.2069 (https://arxiv.org/pdf/1304.2069v1.pdf)

Abstract: In this paper, we establish a robustification of an on-line algorithm for modelling asset prices within a hidden Markov model (HMM). In this HMM framework, parameters of the model are guided by a Markov chain in discrete time; parameters of the asset returns are therefore able to switch between different regimes. The parameters are estimated through an on-line algorithm, which utilizes incoming information from the market and leads to adaptive optimal estimates. We robustify this algorithm step by step against additive outliers appearing in the observed asset prices, with the rationale to better handle possible peaks or missings in asset returns.
May 10, 2014

Keywords: robustness, HMM, additive outlier, asset pricing

1 Introduction

Realistic modelling of financial time series from various markets (stocks, commodities, interest rates, etc.) is nowadays often achieved through hidden Markov or regime-switching models. One major advantage of regime-switching models is their flexibility to capture switching market conditions or switching behavioural aspects of market participants, resulting in switches of the volatility or mean value. Regime-switching models were first applied to issues in financial markets by Hamilton [1989], who established a Markov-switching AR-model to model the GNP of the U.S. His results show promising effects of including possible regime switches in the characterisation of a financial time series. A lot of further approaches to use regime-switching models for financial time series followed, e.g.
switching ARCH or switching GARCH models (see, for example, Cai [1994] and Gray [1996]), amongst many others. Various algorithms and methods for statistical inference are applied within these model set-ups, including such famous ones as the Baum-Welch algorithm and Viterbi's algorithm for estimating the optimal state sequence. HMMs in finance, both in continuous and in discrete time, often utilise a filtering technique developed by Elliott [1994]: adaptive filters are derived for processes of the Markov chain (jump process, occupation time process and auxiliary processes), which are in turn used for recursive optimal estimates of the model parameters. This filter-based Expectation-Maximization (EM) algorithm leads to an on-line estimation of model parameters. Our model set-up is based on Elliott's filtering framework.

This HMM can be applied to questions that arise in asset allocation problems. An investor typically has to decide how much of his wealth shall be invested into which asset or asset class, and when to optimally restructure a portfolio. Asset allocation problems were examined in a regime-switching setting by Ang and Bekaert [2002], who discovered high-volatility and high-correlation regimes of asset returns. Guidolin and Timmermann [2007] presented an asset allocation problem within a regime-switching model and found four different possible underlying market regimes. A paper by Sass and Haussmann [2004] derives optimal trading strategies and filtering techniques in a continuous-time regime-switching model set-up. Optimal portfolio choices were also discussed in Elliott and van der Hoek [1997] and Elliott and Hinz [2003], amongst others; there, Markowitz's famous mean-variance approach (see Markowitz [1952]) is transferred into an HMM and optimal weights are derived. A similar Markowitz-based approach within an HMM was developed in Erlwein et al.
[2011], where optimal trading strategies for portfolio decisions with two asset classes are derived. Trading strategies are developed therein to find optimal portfolio decisions for an investment in either growth or value stocks; Elliott's filtering technique is utilised to predict asset returns.

However, most of the optimal parameter estimation techniques for HMMs in the literature only lead to reasonable results when the market data set does not contain significant outliers. The handling of outliers is an important issue in many financial models: market data might be unreliable at times, and high peaks in asset returns, which might occur in the market from time to time, shall be considered separately and shall not negatively influence the parameter estimation method. In general, higher returns in financial time series might belong to a separate regime within an HMM; this flexibility is already included in the model set-up. However, single outliers which are not typical for any of the regimes considered shall be handled with care, since a separate regime would not reflect the abnormal data point. In this paper, we develop a robustification of Elliott's filter-based EM-algorithm. In Section 2 we set up the HMM framework, which is applied (either in a one- or multi-dimensional setting) to model asset or index returns. The general filtering technique is described in Section 3. The asset allocation problem, which clarifies the effect outliers can have on the stability of the filters, is developed in Section 4. Section 5 then states the derivation of a robustification for various steps in the filter equations: the robustification of a reference probability measure is derived as well as a robust version of the filter-based EM-algorithm. An application of the robust filters is shown in Section 6, and Section 7 finishes our work with some conclusions and possible future applications.
2 Hidden Markov model framework for asset returns

For our problem setting we first review a filtering approach for a hidden Markov model in discrete time which was developed by Elliott [1994]. The logarithmic returns of a stock or an index follow the dynamics of the observation process y_k, which can be interpreted as a discretized version of the geometric Brownian motion, a standard process for modelling stock returns. The underlying hidden Markov chain x_k cannot be observed directly. The parameters of the observation process are governed by the Markov chain and are therefore able to switch between regimes over time.

We work under a probability space (Ω, F, P) under which x_k is a homogeneous Markov chain with finite state space I = {1, …, N} in discrete time (k = 0, 1, 2, …). Let the state space of x_k be identified with the canonical basis {e_1, e_2, …, e_N} of R^N, with e_i = (0, …, 0, 1, 0, …, 0)^⊤. The initial distribution of x_0 is known, and Π = (π_{ji}) is the transition probability matrix with π_{ji} = P(x_{k+1} = e_j | x_k = e_i). Let F^{x,0}_k = σ{x_0, …, x_k} be the σ-field generated by x_0, …, x_k, and let F^x_k be the complete filtration generated by F^{x,0}_k. Under the real-world probability measure P, the Markov chain x has the dynamics

  x_{k+1} = Π x_k + v_{k+1},   (2.1)

where v_{k+1} := x_{k+1} − Π x_k is a martingale increment (see the theorem in Elliott [1994]). The Markov chain x_k is "hidden" in the log returns y_{k+1} of the stock price S_k. Our observation process is given by

  y_{k+1} = ln(S_{k+1}/S_k) = f(x_k) + σ(x_k) w_{k+1},   (2.2)

where x_k has finite state space and the w_k constitute a sequence of i.i.d. random variables independent of x. The real-valued process y can be rewritten as

  y_{k+1} = ⟨f, x_k⟩ + ⟨σ, x_k⟩ w_{k+1}.   (2.3)

Note that f = (f_1, f_2, …, f_N)^⊤ and σ = (σ_1, σ_2, …, σ_N)^⊤ are vectors; furthermore f(x_k) = ⟨f, x_k⟩ and σ(x_k) = ⟨σ, x_k⟩, where ⟨b, c⟩ denotes the Euclidean scalar product in R^N of the vectors b and c. We assume σ_i ≠ 0. Let F^y_k be the filtration generated by σ(y_1, y_2, …, y_k), and let F_k = F^x_k ∨ F^y_k be the global filtration. As shown in Elliott [1994], the dynamics of the underlying Markov chain can thus be described by martingale differences.
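A minimal simulation of the model (2.1)-(2.3) may look as follows. This is a sketch with our own names, not the authors' code; note the column convention π_{ji} = P(x_{k+1} = e_j | x_k = e_i), so the columns of Π sum to one.

```python
import random

def simulate_hmm_returns(Pi, f, sigma, n_steps, rng=None):
    """Simulate the regime-switching model (2.1)-(2.3): the chain moves
    according to column-stochastic Pi (Pi[j][i] = P(e_i -> e_j)), and the
    log return is y_{k+1} = f_i + sigma_i * w_{k+1} given x_k = e_i."""
    rng = rng or random.Random(42)
    n = len(f)
    state = 0                          # start the chain in e_1
    states, returns = [], []
    for _ in range(n_steps):
        states.append(state)
        # observation driven by the *current* state x_k, Eq. (2.2)
        returns.append(f[state] + sigma[state] * rng.gauss(0.0, 1.0))
        # transition x_k -> x_{k+1}: sample from column `state` of Pi
        u, cum, nxt = rng.random(), 0.0, n - 1
        for j in range(n):
            cum += Pi[j][state]
            if u < cum:
                nxt = j
                break
        state = nxt
    return states, returns
```

For a two-state chain with stay probabilities 0.9 and 0.8, the long-run occupation fractions approach the stationary distribution (2/3, 1/3), which gives a quick sanity check of the transition convention.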
We assume σ i = 0. Let F y k be the filtration generated by the σ(y 1 , y 2 , ..., y k ) and F k = F x k ∨ F y k is the global filtration. The following theorem (Elliott [1994]) states that the dynamics of the underlying Markov chain can be described by martingale differences. 3 Essential Steps in Elliott's Algorithm Change of Measure A widely used concept in filtering applications, going back to Zakai [1969] for stochastic filtering, is a change of probability measure technique. A measure change to a reference measureP is applied here, under which filters for the Markov chain and related processes are derived. UnderP , the underlying Markov chain still has the dynamics x k+1 = Πx k + v k+1 but is independent of the observation process and the observations y k are N (0, 1) i.i.d. random variables. Following the change of measure technique which was outlined in Elliott et al. [1995] the adaptive filters for the Markov chain and related processes are derived under this "idealised" measureP . Changing back to the real world is done by constructing P fromP through the Radon-Nikodŷm derivative dP dP F k = Λ k . To construct Λ k we define the process λ l λ l := φ σ(x l−1 ) −1 y l − f (x l−1 ) σ(x l−1 )φ(y l ) (3.1) where φ(z) is the probability density function of a standard normal random variable Z and set Λ k : = k l=1 λ l , k ≥ 1, Λ 0 = 1 . Under P the sequence of variables w 1 , w 2 , . . . , is a sequence of i.i.d. stan- dard normals, where we have w k+1 = σ(x k ) −1 (y k+1 − f (x k )) . Filtering for general adapted processes The general filtering techniques and the filter equations which were established by Elliott [1994] for Markov chains observed in Gaussian noise are stated in this subsection. This filter-based EM-algorithm is adaptive, which enables fast calculations and filter updates. Our robustification partly keeps this adaptive structure of the algorithm, although the recursivity cannot be kept completely. 
In general, filters for four types of processes related to the Markov chain are derived: the state-space process, the jump process, the occupation time process, and auxiliary processes including terms of the observation process. Information on these processes can be filtered out of our observation process and in turn used to find optimal parameter estimates. To determine the expectation of an F-adapted stochastic process H given the filtration F^y_k, consider the reference probability measure P̄, related to P via P(A) = ∫_A Λ dP̄. From Bayes' theorem, a filter for any adapted process H is given by

  E[H_k | F^y_k] = Ē[H_k Λ_k | F^y_k] / Ē[Λ_k | F^y_k].

We define η_k(H_k) := Ē[H_k Λ_k | F^y_k], so that E[H_k | F^y_k] = η_k(H_k) / η_k(1). A recursive relationship between η_k(H_k) and η_{k−1}(H_{k−1}) has to be found, with η_0(H_0) = E[H_0]. However, the recursion is obtained for the vector-valued term η_{k−1}(H_{k−1} x_{k−1}). To relate η_k(H_k) and η_k(H_k x_k) we note that, with ⟨1, x_k⟩ = 1,

  ⟨1, η_k(H_k x_k)⟩ = η_k(H_k ⟨1, x_k⟩) = η_k(H_k).   (3.2)

Therefore

  E[H_k | F^y_k] = ⟨1, η_k(H_k x_k)⟩ / ⟨1, η_k(x_k)⟩.   (3.3)

A general recursive filter for adapted processes was derived by Elliott [1994]. Suppose H_l is a scalar F-adapted process (F = σ((x_t, y_t)_t)), H_0 is F^x_0-measurable, and

  H_l = H_{l−1} + a_l + ⟨b_l, v_l⟩ + g_l f(y_l),

where a, b and g are F-predictable, f is a scalar-valued function and v_l = x_l − Π x_{l−1}. A recursive relation for η_k(H_k x_k) is given by

  η_k(H_k x_k) = Σ_{i=1}^N Γ^i(y_k) [ ⟨e_i, η_{k−1}(H_{k−1} x_{k−1})⟩ Π e_i
                 + ⟨e_i, η_{k−1}(a_k x_{k−1})⟩ Π e_i
                 + (diag(Π e_i) − (Π e_i)(Π e_i)^⊤) η_{k−1}(b_k ⟨e_i, x_{k−1}⟩)
                 + η_{k−1}(g_k ⟨e_i, x_{k−1}⟩) f(y_k) Π e_i ].   (3.4)

Here, for any column vectors z and y, zy^⊤ denotes the rank-one matrix (if z ≠ 0 and y ≠ 0).
The term Γ^i(y_k) denotes the component-wise Radon-Nikodŷm factor λ^i_k:

  Γ^i(y_k) = φ((y_k − f_i)/σ_i) / (σ_i φ(y_k)).

Now filters for the state of the Markov chain as well as for three related processes — the jump process, the occupation time process and auxiliary processes of the Markov chain — are derived. These processes are special cases of the general process H_l.

The estimator for the state x_k is obtained from η_k(H_k x_k) by setting H_k = H_0 = 1, a_k = 0, b_k = 0 and g_k = 0. This implies

  η_k(x_k) = Σ_{i=1}^N Γ^i(y_k) ⟨e_i, η_{k−1}(x_{k−1})⟩ Π e_i.   (3.5)

The first related process is the number of jumps of the Markov chain x_k from state e_r to state e_s up to time k, J^{sr}_k = Σ_{l=1}^k ⟨x_{l−1}, e_r⟩ ⟨x_l, e_s⟩. Setting H_k = J^{sr}_k, H_0 = 0, a_k = ⟨x_{k−1}, e_r⟩ π_{sr}, b_k = ⟨x_{k−1}, e_r⟩ e_s and g_k = 0 in equation (3.4), we get

  η_k(J^{sr}_k x_k) = Σ_{i=1}^N Γ^i(y_k) ⟨η_{k−1}(J^{sr}_{k−1} x_{k−1}), e_i⟩ Π e_i + Γ^r(y_k) η_{k−1}(⟨x_{k−1}, e_r⟩) π_{sr} e_s.   (3.6)

The second process O^r_k denotes the occupation time of the Markov process x, i.e. the length of time x has spent in state e_r up to time k: O^r_k = Σ_{l=1}^k ⟨x_{l−1}, e_r⟩ = O^r_{k−1} + ⟨x_{k−1}, e_r⟩. We set H_k = O^r_k, H_0 = 0, a_k = ⟨x_{k−1}, e_r⟩, b_k = 0 and g_k = 0 in equation (3.4) to obtain

  η_k(O^r_k x_k) = Σ_{i=1}^N Γ^i(y_k) ⟨η_{k−1}(O^r_{k−1} x_{k−1}), e_i⟩ Π e_i + Γ^r(y_k) ⟨η_{k−1}(x_{k−1}), e_r⟩ Π e_r.   (3.7)

Finally, consider the auxiliary processes T^r_k(g), which occur in the maximum likelihood estimation of the model parameters. Specifically, T^r_k(g) = Σ_{l=1}^k ⟨x_{l−1}, e_r⟩ g(y_l), where g is a function of the form g(y) = y or g(y) = y². We apply formula (3.4) and get

  η_k(T^r_k(g) x_k) = Σ_{i=1}^N Γ^i(y_k) ⟨η_{k−1}(T^r_{k−1}(g) x_{k−1}), e_i⟩ Π e_i + Γ^r(y_k) ⟨η_{k−1}(x_{k−1}), e_r⟩ g(y_k) Π e_r.   (3.8)

The recursive optimal estimates of J, O and T can then be calculated using equation (3.2).
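The state filter (3.5) is a short recursion. A sketch with our own names (not the authors' code), using a per-step renormalisation, which is harmless because the conditional estimate (3.3) is invariant under a common scaling of η:

```python
import math

def std_normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def gamma_factor(y, f_i, sigma_i):
    """Gamma^i(y) = phi((y - f_i)/sigma_i) / (sigma_i * phi(y))."""
    return std_normal_pdf((y - f_i) / sigma_i) / (sigma_i * std_normal_pdf(y))

def state_filter(ys, Pi, f, sigma, q0):
    """Recursion (3.5): eta_k(x_k) = sum_i Gamma^i(y_k) <e_i, eta_{k-1}> Pi e_i,
    with Pi[j][i] = pi_{ji} (so column i of Pi is Pi e_i). Each step is
    renormalised for numerical stability; the normalised vector is the
    filtered state probability from (3.3)."""
    n = len(f)
    eta = list(q0)                 # eta_0(x_0): initial distribution
    history = []
    for y in ys:
        new = [0.0] * n
        for i in range(n):
            w = gamma_factor(y, f[i], sigma[i]) * eta[i]
            for j in range(n):
                new[j] += w * Pi[j][i]
        total = sum(new)
        eta = [v / total for v in new]
        history.append(eta)
    return history
```

On a toy two-regime model (low-volatility positive-drift state vs. high-volatility negative-drift state), a large negative return immediately shifts the filtered probability mass toward the second regime, as one would expect.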
Filter-based EM-algorithm

The derived adapted filters for processes of the Markov chain can now be utilised to derive optimal parameter estimates through a filter-based EM-algorithm. The set of parameters ρ which determines the regime-switching model is

ρ = {π_{ji}, 1 ≤ i, j ≤ N; f_i, σ_i, 1 ≤ i ≤ N}.   (3.9)

Initial values for the EM algorithm are assumed to be given. Starting from these values, updated parameter estimates are derived which maximise the conditional expectation of the log-likelihoods. The M-step of the algorithm deals with maximizing the following likelihoods:

M-Step
• The likelihood in the global F-model is given by

log Λ_t(σ, f; (x_s, y_s)_{s≤t}) = −(1/2) Σ_{s=1}^t [ log⟨σ, x_{s-1}⟩ + (y_s − ⟨f, x_{s-1}⟩)² / ⟨σ, x_{s-1}⟩² ]

• In the F^y-model, where the Markov chain is not observed, we obtain

L_t(σ, f; (y_s)_{s≤t}) = E[log Λ_t(σ, f; (x_s, y_s)_{s≤t}) | F^y_t] = −(1/2) Σ_{k=1}^N [ log(σ_k) Ô^k_t + (T̂^k_t(y²) − 2 T̂^k_t(y) f_k + Ô^k_t f_k²)/σ_k² ]   (3.10)

The maximum likelihood estimates of the model parameters can be expressed through the adapted filters. Whenever new information is available on the market, the filters are updated and, correspondingly, updated parameter estimates can be obtained.

Theorem 3.1 (Optimal parameter estimates) Write Ĥ_k = E[H_k | F^y_k] for any adapted process H. With Ĵ, Ô and T̂ denoting the best estimates for the processes J, O and T, respectively, the optimal parameter estimates π̂_{ji}, f̂_i and σ̂_i are given by

π̂_{ji} = Ĵ^{ji}_k / Ô^i_k = η_k(J^{ji}_k) / η_k(O^i_k)   (3.11)

f̂_i = T̂^{(i)}_k / Ô^{(i)}_k = η(T^{(i)}(y))_k / η(O^{(i)})_k   (3.12)

σ̂²_i = [ T̂^{(i)}_k(y²) − 2 f̂_i T̂^{(i)}_k(y) + f̂²_i Ô^{(i)}_k ] / Ô^{(i)}_k.   (3.13)

Proof: The derivation of the optimal parameter estimates can be found in Elliott et al. [1995]. ////

Summary The filter-based EM-algorithm runs in batches of n data points (n typically ranging from a minimum of ten up to a maximum of fifty) over the given time series. The parameters are updated at the end of each batch.
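Given the filtered aggregates, the updates (3.11)-(3.13) are elementwise divisions. A minimal Python sketch (ours, not the paper's R code; the scale estimate is returned as a variance σ̂²):

```python
import numpy as np

def em_update(J_hat, O_hat, Ty_hat, Ty2_hat):
    """EM parameter update from filtered quantities.

    J_hat[s, r]           : estimated number of jumps from state r to state s
    O_hat[r]              : estimated occupation time of state r
    Ty_hat[r], Ty2_hat[r] : estimated sums of y and y^2 while in state r
    """
    Pi_new = J_hat / O_hat                      # (3.11), column r scaled by O_hat[r]
    f_new = Ty_hat / O_hat                      # (3.12)
    var_new = (Ty2_hat - 2.0 * f_new * Ty_hat + f_new**2 * O_hat) / O_hat  # (3.13)
    return Pi_new, f_new, var_new
```

Note that NumPy broadcasting divides each column r of J_hat by O_hat[r], which is exactly the normalisation π̂_{sr} = Ĵ^{sr}/Ô^r.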
Elliott's Algorithm comprises the following steps:

(0) Find suitable starting values for Π and f, σ.
(RN) Determine the RN-derivative for the measure change to P̄.
(E) Recursively compute the filters Ĵ^{ji}, Ô^i and T̂^i(g).
(M1) Obtain estimates for f and σ.
(M2) Obtain ML estimators for Π.
(Rec) Go to (RN) and run the algorithm on the next batch.

4 Outliers in Asset Allocation Problem

Outliers in General

In the following sections we derive a robustification of Algorithm 3.3 to stabilize it in the presence of outliers in the observation process. To this end, let us discuss what makes an observation an outlier. First of all, outliers are exceptional events, occurring rarely, say with probability 5%-10%. Rather than being captured by usual randomness, i.e., by some distributional model, they belong to what Knight [1921] refers to as uncertainty: they are uncontrollable, of unknown distribution, and unpredictable; their distribution may change from observation to observation, so they are non-recurrent and do not form an additional state, hence cannot be used to enhance predictive power; and, what makes their treatment difficult, they often cannot be told with certainty from ideal observations. Still, the majority of the observations in a realistic sample should resemble an ideal (distributional) setting closely, otherwise the modeling would be questionable. Here we understand closeness in a distributional sense, as captured, e.g., by goodness-of-fit distances like the Kolmogorov, total variation or Hellinger distance. Ideally, this closeness should be compatible with the usual convergence mode of the Central Limit Theorem, i.e., with the weak topology. In particular, closeness in moments is incompatible with this idea. Topologically speaking, one would most naturally use balls around a certain element, i.e., the set of all distributions with a suitable distance no larger than some given radius ε > 0 to the distribution assumed in the ideal model.
Conceptually, the most tractable neighborhoods are given by the so-called Gross Error Model, defining a neighborhood U about a distribution F as the set of all distributions given by

U_c(F, ε) = {G | ∃H : G = (1 − ε)F + εH}.   (4.1)

They can also be thought of as the set of all distributions of (realistic) random variables X^re constructed as

X^re = (1 − U) X^id + U X^di   (4.2)

where X^id is a random variable distributed according to the ideal distribution and U is an independent Bin(1, ε) switching variable, which in most cases lets you see X^id but in some cases replaces it by some contaminating or distorting variable X^di which has nothing to do with the original situation.

Time-dependent Context: Exogenous and Endogenous Outliers

In our time-dependent setup, in addition to the i.i.d. situation, we have to distinguish whether the impact of an outlier is propagated to subsequent observations or not. Historically there is a common terminology due to Fox [1972], who distinguishes innovation outliers (or IO's) and additive outliers (or AO's). Non-propagating AO's are added at random to single observations, while IO's denote gross errors affecting the innovations. For consistency with the literature we use the same terms, but in a wider sense: IO's stand for general endogenous outliers entering the state layer (the Markov chain in the present context), hence with propagated distortion. As the state space of our Markov chain is finite, IO's are much less threatening here than they are in general. Correspondingly, wide-sense AO's denote general exogenous outliers which do not propagate; they hence also comprise substitutive outliers or SO's, defined by a simple generalization of (4.2) to the state space context in equations (4.3)-(4.6).
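The switching mechanism (4.2) is easy to simulate. The Python snippet below (our illustration, not from the paper) draws U ~ Bin(1, ε) per observation and replaces X^id by an arbitrary distorting value:

```python
import numpy as np

def contaminate(x_id, x_di, eps, rng):
    # Gross-error switching (4.2): X_re = (1 - U) X_id + U X_di, U ~ Bin(1, eps)
    u = rng.random(x_id.shape) < eps
    return np.where(u, x_di, x_id), u

# Example: 10% contamination of N(0, 0.02^2) "returns" by a far-off distortion.
rng = np.random.default_rng(1)
x_id = rng.normal(0.0, 0.02, size=10000)
x_di = rng.normal(0.5, 0.5, size=10000)
x_re, u = contaminate(x_id, x_di, 0.1, rng)
```

The distorting law here is arbitrary; any generator for x_di would do, which is precisely the point of the neighborhood (4.1).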
Y^re = (1 − U) Y^id + U Y^di,  U ~ Bin(1, r)   (4.3)

for U independent of (X, Y^id, Y^di) and some arbitrary distorting random variable Y^di for which we assume

Y^di, X independent   (4.4)

and the law of which is arbitrary, unknown and uncontrollable. As a first step consider the set ∂U^SO(r) defined as

∂U^SO(r) = { L(X, Y^re) | Y^re according to (4.3), (4.4) }.   (4.5)

Because of condition (4.4), in the sequel we refer to the random variables Y^re and Y^di instead of their respective (marginal) distributions only, while in the common gross error model, reference to the respective distributions would suffice. Condition (4.4) also entails that in general, contrary to the gross error model, L(X, Y^id) is not an element of ∂U^SO(r), i.e., not representable itself as some L(X, Y^re) in this neighborhood. As the corresponding (convex) neighborhood we define

U^SO(r) = ∪_{0≤s≤r} ∂U^SO(s),   (4.6)

hence the symbol "∂" in ∂U^SO, as the latter can be interpreted as the corresponding surface of this ball. Of course, U^SO(r) contains L(X, Y^id). In the sequel, where clear from the context, we drop the superscript SO and the argument r.

Due to their different nature, as a rule, IO's and AO's require different policies: as AO's are exogenous, we would like to damp their effect, while when there are IO's, something has happened in the system, so the usual goal will be to track these changes as fast as possible.

Evidence for Robustness Issue in Asset Allocation

In this section we examine the robustness of the filter and parameter estimation technique. The filter technique is implemented and applied to monthly returns of the MSCI World Index between 1994 and 2009. The MSCI World Index is one of the leading indices on the stock markets and a common benchmark for global stocks. The algorithm is implemented with batches of ten data points, therefore the adaptive filters are updated whenever ten new data points are available on the market.
The recursive parameter estimates, which utilise this new information, are updated as well; the algorithm is thus self-tuning. Care has to be taken when choosing the initial values for the algorithm, since the EM-algorithm in its general form converges to a local maximum. In this implementation we choose the initial values with regard to the mean and variance of the first ten data points. Figure 1 shows the original return series, optimal parameter estimates for the index returns as well as the one-step ahead forecast. To highlight the sensitivity of the filter technique towards exogenous outliers, we plant unusually high returns within the time series. Considerable SO outliers are included at time steps t = 40, 80, 130, 140. The optimal parameter estimation through the filter-based EM-algorithm on this data set with outliers can be seen in Figure 2. The filter still finds optimal parameter estimates, although the estimates are visibly affected by the outliers. In a third step, severe outliers are planted into the observation sequence. Data points t = 40, 80, 130, 140 now show severe SO outliers, as can be seen from the first panel in Figure 3. The filters can no longer run through; optimal parameter estimates cannot be established in a setting with severe outliers. In practice, asset or index return time series can certainly include outliers from time to time. This might be due to wrong prices in the system, but also due to very unlikely market turbulence for a short period of time. It has to be noted that the type of outliers which we consider in this study does not characterise an additional state of the Markov chain. In the following, we develop robust filter equations which can handle exogenous outliers.

Robust Statistics

To overcome effects like those in Figure 3, we need more stable variants of the Elliott-type filters discussed so far. This is what robust statistics is concerned with. Excellent monographs on this topic are, e.g., Huber [1981], Hampel et al.
[1986], Rieder [1994], and Maronna et al. [2006]. This section provides the necessary concepts and results from robust statistics needed to obtain the optimally-robust estimators used in this article.

Concepts of Robust Statistics

The central mathematical concepts of continuity, differentiability, or closeness to singularities may in fact serve to operationalize stability quite well already. To make these available in our context, it helps to consider a statistical procedure, i.e., an estimator, a predictor, a filter, or a test, as a function of the underlying distribution. In a parametric context, this amounts to considering functionals T mapping distributions to the parameter set Θ. An estimator will then simply be T applied to the empirical distribution F̂_n. For filtering or prediction, the range of such a functional will rather be the state space, but otherwise the arguments run in parallel. For a notion of continuity, we have to specify a topology, and as in the case of outliers, we use topologies essentially compatible with the weak topology. With these neighborhoods, we may now easily translate the notions of continuity, differentiability and closest singularity to this context: (equi-)continuity is then called qualitative robustness [Hampel et al., 1986, Sec. 2.2 Def. 3]; a differentiable functional with a bounded derivative is called locally robust, and its derivative is called the influence function (IF), compare Hampel et al. [1986]. The IF reflects the infinitesimal influence of a single observation on the estimator. Under additional assumptions, many of the asymptotic properties of an estimator are expressions in the IF ψ. E.g., the asymptotic variance of the estimator in the ideal model is the second moment of ψ. Infinitesimally, i.e., for ε → 0, the maximal bias on U is just sup|ψ|, where |·| denotes the Euclidean norm. sup|ψ| is then also called the gross error sensitivity (GES), [Hampel et al., 1986, (2.1.13)].
Seeking robust optimality hence amounts to finding optimal IFs. To grasp the maximal bias of a functional T on a neighborhood U = U(F; ε) of radius ε, one considers the max-bias curve ε → sup_{G∈U(F;ε)} |T(G) − T(F)|. The singularity of this curve closest to 0 (i.e., to the ideal situation of no outliers) captures its behavior under massive deviations, or its global robustness. In robust statistics, this is called the breakdown point: the maximal radius ε the estimator can cope with without producing an arbitrarily large bias; see [Hampel et al., 1986, Sec. 2.2 Def.'s 1, 2] for formal definitions. Usually, the classically optimal estimators (the MLE in many circumstances) are non-robust, both locally and globally. Robust estimators, on the other hand, pay a certain price for this stability, as expressed by an asymptotic relative efficiency (ARE) strictly lower than 1 in the ideal model, where the ARE is the ratio of the two asymptotic (co)variances of the classically optimal estimator and its robust alternative. To rank various robust procedures among themselves, other quality criteria are needed, though, summarizing the behavior of the procedure on a whole neighborhood as in (4.1). A natural candidate for such a criterion is the maximal MSE (maxMSE) on some neighborhood U around the ideal model and, in the estimation context, the maximal bias (maxBias) on the respective neighborhood, or, referring to the famous [Hampel, 1968, Lemma 5], the trace of the ideal variance subject to a bias bound on this neighborhood. In the estimation context, the respective solutions are usually called OMSE (optimal MSE estimator), MBRE (most bias robust estimator), and OBRE (optimally bias robust estimator). In our context we encounter two different situations where we want to apply robust ideas: (recursive) filtering in the (E)-step and estimation in the (M)-step.
While in the former situation we only add a single new observation, which precludes asymptotic arguments, in the (M)-step the preceding application of Girsanov's theorem turns our situation into an i.i.d. setup, where each observation becomes (uniformly) asymptotically negligible and asymptotics apply in standard form.

Our Robustification of the HMM: General Strategy

As a robustification of the whole estimation process in this only partially observed model would (a) lead to computationally intractable terms and (b) drop the key feature of recursivity, we instead propose to robustify each of the steps in Elliott's algorithm separately. Doing so, the whole procedure will be robust, but in general will lose robust optimality; i.e., contrary to multiparameter maximum likelihood, the Bellman principle does not hold for optimal robust multi-step procedures, simply because optimal clipping in several steps is not the same as joint optimal clipping of all steps. Table 1 lists all essential steps in Elliott's algorithm together with our proposed robustification approach.

Table 1: Steps of the classical algorithm and their robustified versions.

Initialization: find suitable starting values for Π, f, σ and x_0.
  Classical setting: build N clusters on the first batches; use the first and second moments of each cluster as initial values for f and σ; choose Π and x_0 according to the cluster probabilities.
  Robust version: build N+1 clusters on the first batches, distributing the points of the outlier cluster randomly on the other clusters; use the median and MAD of the clusters for f and σ; choose Π and x_0 according to the cluster probabilities.

M-step 1: obtain estimates for f and σ.
  Classical setting: MLE estimates for f and σ through the recursive filters O^i_k and T^i_k(g); the recursive filters are substituted into the likelihood; estimates updated after each batch.
  Robust version: the likelihoods are re-stated as weighted sums of the observations y_k; robustified version of the MLE through asymptotically linear estimators; estimates updated after each batch, though recursivity cannot be preserved completely.

M-step 2: obtain ML estimators for Π.
  Classical setting: MLE estimation; the recursive filters J^{ji}_k and O^i_k are substituted into the likelihood.
  Robust version: robustification through robust versions of the filters J^{ji}_k and O^i_k; no further observation y_k has to be considered.

Rec: the algorithm runs on the next batch.
  Classical setting: go to (RN) to compute the next batch.
  Robust version: go to (RN) to compute the next batch.

Robustification of Step (0)

So far, little has been said about the initialization, even in the non-robustified setting. Basically, all we have to do is to make sure that the EM algorithm converges. In prior applications of this algorithm (Mamon et al. [2008] and Erlwein et al. [2011] amongst others), one approach was to fill Π with entries 1/N, i.e., with a uniform (and hence non-informative) distribution over all states, independent of the state x_0. As to f_i and σ_i, an ad hoc approach would estimate the global mean and variance over all states and then, again in a non-informative way, jitter the state-individual moments by adding independent noise. In our preliminary experiments, it turned out that this naïve approach could fail drastically in the presence of outliers, so we instead propose a more sophisticated approach which can also be applied in a classical (i.e., non-robust) setting: in a first step we ignore the time dynamics and interpret our observations as realizations of a Gaussian mixture model, for which we use the R package mclust (Fraley and Raftery [2002], Fraley et al. [2012]) to identify the mixture components, and for each of these we individually determine the moments f_i and σ_i. As to Π, we again assume independence of x_0, but fill the columns according to the estimated frequencies of the mixture components. In the non-robust setting we would use N mixture components and for each of them determine f_i and σ_i by their ML estimators (assuming independent observations).
For a robust approach, we use N+1 mixture components, one of them, the one with the lowest frequency, being a pure noise component capturing outliers. For each non-noise component we retain the ML estimates for f_i and σ_i. The noise component is then randomly distributed amongst the remaining components, respecting their relative frequencies prior to this redistribution. We are aware of the fact that reassigning the putative outliers at random could be misleading in ideal situations (with no outliers), where one cluster could be split off into two but not necessarily so; then, in our strategy, the smaller offspring of this cluster would in part be reassigned to wrong other clusters, so this could still be worked on. On the other hand, this choice often works reasonably well, and as more sophisticated strategies are questions of model selection, we defer them to further work.

Based on the f_i and σ_i, for each observation j and each state i, we get weights 0 ≤ w_{i,j} ≤ 1 with Σ_i w_{i,j} = 1 for each j, representing the likelihood that observation j is in state i. For each i, we then determine robustified moment estimators f_i, σ_i as weighted medians and scaled weighted MADs (medians of absolute deviations).

Weighted Medians and MADs: For weights w_j ≥ 0 and observations y_j, the weighted median m = m(y, w) is defined as

m = argmin_f Σ_j w_j |y_j − f|,

and, with ỹ_j = |y_j − m|, the scaled weighted MAD s = s(y, w) is defined as

s = c⁻¹ argmin_t Σ_j w_j |ỹ_j − t|,

where c is a consistency factor to warrant consistent estimation of σ in the case of Gaussian observations, i.e., c = argmin_t E Σ_j w_j | |ỹ_j| − t | for ỹ_j i.i.d.

As to the (finite sample) breakdown point FSBP of the weighted median (and at the same time of the scaled weighted MAD), we define w⁰_j = w_{i,j}/Σ_j w_{i,j} and, for each i, order the weights decreasingly, w⁰_(1) ≥ w⁰_(2) ≥ … ≥ w⁰_(k). Then the FSBP in both cases is

k⁻¹ min{ j₀ ∈ {1, …, k} | Σ_{j=1}^{j₀} w⁰_(j) ≥ 1/2 },

which (for equivariant estimators) can be shown to be the largest possible value. So, using weighted medians and MADs, we achieve a decent degree of robustness against outliers. E.g., assume we have 10 observations with weights 5 × 0.05; 3 × 0.1; 0.2; 0.25. Then we need at least three outliers (placed at the weights 0.1, 0.2, 0.25, respectively) to produce a breakdown.

Robustification of the E-step

As indicated, in this step we cannot recur to asymptotics, but rather have to appeal to a theory particularly suited for this recursive setting. In particular, the SO-neighborhoods introduced in (4.3) turn out to be helpful here.

Crucial Optimality Theorem: Consider the following optimization problem of reconstructing the ideal observation Y^id by means of the realistic, possibly contaminated Y^re on an SO-neighborhood.

Minimax-SO problem: Minimize the maximal MSE on an SO-neighborhood, i.e., find a Y^re-measurable reconstruction f₀ for Y^id such that

max_U E_re |Y^id − f(Y^re)|² = min_f !   (5.1)

The solution is given by

f₀(y) := E Y^id + H_ρ(D(y)),  H_b(z) = z min{1, b/|z|}   (5.2)

P^{Y^di}_0(dy) := ((1−r)/r) (|D(y)|/ρ − 1)₊ P^{Y^id}(dy)   (5.3)

where ρ > 0 ensures that ∫ P^{Y^di}_0(dy) = 1 and

D(y) = y − E Y^id.   (5.4)

The value of the minimax risk of Problem (5.1) is

tr Cov(Y^id) − (1 − r) E_id min{|D(Y^id)|, ρ}².   (5.5)

Proof: See Appendix 7. ////

The optimal procedure in equation (5.2) has an appealing interpretation: it is a compromise between the (unobservable) situations that (a) one observes the ideal Y^id, in which case one would use it unchanged, and (b) one observes Y^di, i.e., something completely unrelated to Y^id, in which case one would use the best prediction for Y^id (in the MSE sense) without any information, hence the unconditional expectation E Y^id. The decision on how much to tend to case (a) and how much to case (b) is taken according to the length of the discrepancy D(Y^re) between the observed signal Y^re and E Y^id.
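The weighted median, the scaled weighted MAD and the breakdown-point formula above can be sketched as follows (a Python illustration of ours; the consistency factor uses c = Φ⁻¹(3/4) ≈ 0.6745, so that the MAD is consistent for σ at the normal distribution):

```python
import numpy as np

def weighted_median(y, w):
    # argmin_f sum_j w_j |y_j - f|: smallest order statistic at which
    # the cumulative weight reaches half of the total weight
    idx = np.argsort(y)
    cw = np.cumsum(w[idx])
    return y[idx][np.searchsorted(cw, 0.5 * cw[-1])]

def scaled_weighted_mad(y, w, c=0.6744898):
    # s = c^{-1} * weighted median of |y_j - m|, with c = Phi^{-1}(3/4)
    m = weighted_median(y, w)
    return weighted_median(np.abs(y - m), w) / c

def fsbp(w):
    # k^{-1} * min{ j0 : sum of the j0 largest normalised weights >= 1/2 }
    w0 = np.sort(np.asarray(w, dtype=float) / np.sum(w))[::-1]
    j0 = np.searchsorted(np.cumsum(w0), 0.5) + 1
    return j0 / len(w0)
```

On the 10-observation example above, `fsbp` returns 3/10: the three largest weights 0.25, 0.2, 0.1 are needed to accumulate half of the total mass.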
If this length is smaller than ρ, we keep Y^re unchanged; otherwise we modify E Y^id by adding a clipped version of D(Y^re).

Robustification of Steps (RN), (E)

For the Girsanov / change-of-measure step, we recall that the corresponding likelihood ratio here is just

λ_s := σ⁻¹(x_{s−1}) φ(σ⁻¹(x_{s−1})(y_s − f(x_{s−1}))) / φ(y_s).   (5.6)

Apparently λ_s can take values close to 0 and, more dangerously, in particular for small values of σ(x_{s−1}), very large values. So bounding the λ_s is crucial to avoid effects like those in Figure 3. A first (non-robust) remedy uses a data-driven reference measure: instead of N(0, 1) we use N(0, σ̄²), where σ̄ is a global scale measure taken over all observations, ignoring time dependence and state-varying σ's. A robust proposal would take σ̄ to be the MAD of all observations (tuned for consistency at the normal distribution). This leads to

λ̄_s := σ⁻¹(x_{s−1}) φ(σ⁻¹(x_{s−1})(y_s − f(x_{s−1}))) / (σ̄⁻¹ φ(σ̄⁻¹ y_s)).   (5.7)

Eventually, in both estimation and filtering/prediction, σ̄ cancels out as a common factor in numerator and denominator, so it is irrelevant in the subsequent steps; its mere purpose is to stabilize the terms numerically. To take the time dynamics into account in our robustification, we want to use Theorem 5.1, but to this end we need second moments, which for λ_s need not exist. So instead we apply the theorem to Y^id = √λ_s, which means that λ_s = (Y^id)² is robustified by

λ̄_s = (E_id √λ_s + H_b(√λ_s − E_id √λ_s))²   (5.8)

for H_b(x) = x min{1, b/|x|}. The clipping height b in turn is chosen such that E λ̄_s = α, e.g., α = 0.95. As in the ideal situation E λ_s = 1, in a last step, with a consistency factor c_s determined similarly to c_i in the initialization step for the weighted MADs, we pass over to λ̄⁰_s = c_s λ̄_s such that E λ̄⁰_s = 1.
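The clipping step (5.8) can be sketched as follows (a Python illustration of ours; the empirical mean of √λ_s stands in for E^id, and the final rescaling mimics the consistency correction c_s, both assumptions of this sketch):

```python
import numpy as np

def H(z, b):
    # H_b(z) = z * min{1, b/|z|}: clip z to the ball of radius b
    return np.where(np.abs(z) <= b, z, b * np.sign(z))

def robustify_lambda(lam, b):
    # (5.8): lambda-bar_s = (E sqrt(lambda_s) + H_b(sqrt(lambda_s) - E sqrt(lambda_s)))^2,
    # followed by a rescaling so the ratios average to 1 (E lambda_s = 1 ideally)
    s = np.sqrt(lam)
    m = s.mean()                      # empirical stand-in for E^id sqrt(lambda_s)
    lam_bar = (m + H(s - m, b)) ** 2
    return lam_bar / lam_bar.mean()
```

A single huge likelihood ratio (e.g., caused by a tiny state variance) is thereby damped, while moderate ratios pass essentially unchanged.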
Similarly, in the remaining parts of the E-step, for each of the filtered processes, generically denoted by G with filtered version Ĝ, we could replace Ĝ by

Ḡ = E_id Ĝ + H_b(Ĝ − E_id Ĝ)   (5.9)

for G any of J^{ji}_k, O^i_k, and T^i_k(f), and again a suitably chosen b. It turns out, though, that it is preferable to pursue another route. The aggregates T^i_k(f) are used in the M-step in (3.10), but for a robustification of this step it is crucial to be able to attribute individual influence to each of the observations. So instead we split up the terms of the filtered negative log-likelihood into summands

w_{i,j}/Ô^i_k [ (y_j − f_i)²/σ_i² + log σ_i ],  j = 1, …, k,

and hence we may skip a robustification of T^i_k(f). Similarly, as J^{ji}_t and O^i_t are filtered observations of multinomial-like variables, a robustification is of limited use, as any contribution of a single observation to these variables can at most be of absolute value 1, so it is bounded anyway. Hence, in the E-step we only robustify λ_s.

Splitting up the aggregates T^i_k(f) into summands amounts to giving up strict recursivity: for k observations in one batch, one now has to store the values w_{i,j}/Ô^i_k for j = 1, …, k, and, building up from j = 1, at observation time j = j₀ within the batch we construct w_{i,j;j₀}, j = 1, …, j₀, from the values w_{i,j;j₀−1}, j = 1, …, j₀−1, so we have a growing triangle of weight values. This would lead to increasing memory requirements had we not chosen to work in batches of fixed length k, which puts an upper bound on the memory needed.

Robustification of the (M)-Step

As mentioned before, in contrast to the (E)-step, in this estimation step we may work with classical gross error neighborhoods (4.1) and with the standard i.i.d. setting.

Shrinking Neighborhood Approach

By Bienaymé, the variance of an estimator usually is O(1/n) for sample size n, while for robust estimators the maximal bias is proportional to the neighborhood radius ε.
Hence, unless ε is appropriately scaled in n, bias will dominate variance eventually for growing n. This is avoided in the shrinking neighborhood approach, which sets ε = ε_n = r/√n for some initial radius r ∈ [0, ∞); compare Rieder [1994] and Kohl et al. [2010]. One could see this shrinking as indicating that with growing n diligence is increasing, so the rate of outliers is decreasing. This is perhaps overly optimistic. Another interpretation is that the severity of the robustness problem with 10% outliers at sample size 100 should not be compared with the one with 10% outliers at sample size 10000, but rather with the one with 1% outliers at this sample size. In this shrinking neighborhood setting, with mathematical rigor, optimization of the robust criteria can be deferred to the respective IFs, i.e., instead of determining the IF of a given procedure, we construct a procedure to a given (optimally-robust) IF. This is achieved by the concept of asymptotically linear estimators (ALEs), as it arises canonically in most proofs of asymptotic normality: in a smooth (L₂-differentiable) parametric model P = {P_θ, θ ∈ Θ} for i.i.d. observations X_i ~ P_θ with open parameter domain Θ ⊂ R^d, based on the scores Λ_θ and its finite Fisher information I_θ = E_θ Λ_θ Λ_θ^τ, we define the set Ψ₂(θ) of influence functions as the subset of L₂^d(P_θ) consisting of square-integrable functions ψ_θ with d coordinates such that E_θ ψ_θ = 0 and E_θ ψ_θ Λ_θ^τ = I_d, where I_d is the d-dimensional unit matrix. Then a sequence of estimators S_n = S_n(X_1, …, X_n) is called an ALE if

S_n = θ + (1/n) Σ_{i=1}^n ψ_θ(X_i) + o_{P_θ^n}(n^{−1/2})   (5.10)

for some influence function ψ_θ ∈ Ψ₂(θ). In the sequel we fix the true θ ∈ Θ and suppress it from the notation where clear from the context.
In particular, the MLE usually has influence function ψ^MLE = I⁻¹Λ, while most other common estimators also have a representation (5.10) with a different ψ. For a given IF ψ we may construct an ALE θ̂_n with ψ as IF by a one-step construction, often called one-step reweighting: for a given starting estimator θ_n⁰ such that R_n⁰ = θ_n⁰ − θ = o_{P_θ^n}(n^{−1/4+0}), we define

θ̂_n = θ_n⁰ + (1/n) Σ_{j=1}^n ψ_{θ_n⁰}(X_j).   (5.11)

Then indeed θ̂_n = θ + (1/n) Σ_{j=1}^n ψ_θ(X_j) + R_n with R_n = o_{P_θ^n}(n^{−1/2}); i.e., θ̂_n forgets about θ_n⁰ as to its asymptotic variance and GES. Its breakdown point, however, is inherited from θ_n⁰ once Θ is unbounded and ψ is bounded. Hence for the starting estimator we seek a θ_n⁰ with high breakdown point. For a more detailed account of this approach, see Rieder [1994].

Shrinking Neighborhood Approach with Weighted Observations

As mentioned, coming from the E-step, not all observations y_j are equally likely to contribute to state i; hence we are in a situation with weighted observations, where we may pass over to normed weights w⁰_{i,j} = w_{i,j}/Σ_j w_{i,j} summing up to 1. Suppressing the state index i in the notation and taking θ_n⁰ as the vector of the weighted median and the scaled weighted MAD for state i, (5.11) becomes

θ̂_n = θ_n⁰ + Σ_{j=1}^n w⁰_j ψ_{θ_n⁰}(y_j),   (5.12)

where θ̂_n is again a two-dimensional ALE with location and scale coordinates and ψ_θ(y) = σ ψ((y − f)/σ) at θ = (f, σ)^τ, with ψ the IF of the MBRE in the one-dimensional Gaussian location and scale model at N(0, 1), i.e.,

ψ(y) = b Y(y)/|Y(y)|,  Y(y) = (y, A(y² − 1) − a),   (5.13)

with numerical values for A, a, b, up to four digits, taken from the R package RobLox, Kohl [2012], being

A = 0.7917,  a = −0.4970,  b = 1.8546.   (5.14)

The influence function ψ is illustrated in Figure 4.
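With the constants (5.14), the influence function (5.13) and the weighted one-step construction (5.12) can be sketched as below (a Python illustration of ours; the scale coordinate is updated multiplicatively on the log scale, the exponential form used in (5.15) below). Note that |ψ(y)| = b for all y, the defining property of the most bias robust estimator:

```python
import numpy as np

A, a, b = 0.7917, -0.4970, 1.8546   # constants (5.14), taken from the R package RobLox

def psi(y):
    # MBRE influence function (5.13) in the Gaussian location-scale model at N(0,1):
    # psi(y) = b * Y(y)/|Y(y)|, with Y(y) = (y, A*(y^2 - 1) - a)
    y = np.asarray(y, dtype=float)
    Y = np.stack([y, A * (y ** 2 - 1.0) - a])
    return b * Y / np.sqrt((Y ** 2).sum(axis=0))

def one_step(y, w0, f0, s0):
    # Weighted one-step ALE: location update as in (5.12),
    # scale update in the asymptotically equivalent exponential form
    p_loc, p_scale = psi((y - f0) / s0)
    f1 = f0 + s0 * np.sum(w0 * p_loc)
    s1 = s0 * np.exp(np.sum(w0 * p_scale))
    return f1, s1
```

Since Σ_l w⁰_l = 1 and |ψ| = b, a single outlier with weight w⁰ can shift the location estimate by at most s0·b·w⁰, in contrast to the unbounded influence of the weighted mean.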
To warrant positivity of σ and to maintain a high breakdown point even in the presence of inliers (driving σ essentially to 0), for the scale component, instead of (5.12), we use the asymptotically equivalent form

σ̂_n = σ_n⁰ exp( Σ_{j=1}^n w⁰_j ψ_scale((y_j − μ_n⁰)/σ_n⁰) ).   (5.15)

Using the MBRE-ψ from (5.13) and (5.14) could on first glance be seen as overly cautious. Detailed simulation studies, compare e.g. Kohl and Deigner [2010], show that for our typical batch lengths of 10-20 the MBRE is also near to optimal in the sense of Rieder et al. [2008] in the situation where nothing is known about the true outlier rate (including, of course, the situation where no outliers occur at all).

Robustification of Steps (M1) & (M2)

We now derive robust estimators of the model parameters f_i and σ_i, i.e., we justify the passage to weighted ALEs as in (5.12). In particular, we specify the weights w⁰_j = w⁰_{i,j} therein. Recall that the M1-step in the classical algorithm gives the optimal parameter estimates stated in Theorem 3.1. We now build ALEs, which can be achieved when the MLEs of the parameters f_i and σ_i are stated as weighted sums of the observations y_k.

Theorem 5.2 With

w⁰_{i,l} = ⟨x̂_{l−1}, e_i⟩ / η(O^{(i)})_k   (5.16)

the optimal parameter estimates f̂_i and σ̂_i are given by

f̂_i = Σ_{l=1}^k w⁰_{i,l} y_l   (5.17)

σ̂²_i = Σ_{l=1}^k w⁰_{i,l} (y_l − f̂_i)².   (5.18)

Proof: To find the optimal estimate for f, consider Λ*_k := ∏_{l=1}^k λ*_l with

λ*_l := exp( [ (y_l − ⟨f, x_{l−1}⟩)² − (y_l − ⟨f̂, x_{l−1}⟩)² ] / (2 ⟨σ, x_{l−1}⟩²) ).
Up to constants irrelevant for the optimization, the filtered log-likelihood is then

E[ln Λ*_k | F^y_k] = Σ_{i=1}^N Σ_{l=1}^k w⁰_{i,l} (y_l − f_i)².   (5.19)

Maximising the log-likelihood E[ln Λ*_k | F^y_k] in f_i hence leads to the optimal parameter estimate:

Σ_{l=1}^k ⟨x̂_{l−1}, e_i⟩ (2 y_l f_i − f_i²) = 0  ⟹  f̂_i(k) = Σ_{l=1}^k w⁰_{i,l} y_l.

In an analogous way, for σ_i we define Λ⁺_k := ∏_{l=1}^k λ⁺_l with

λ⁺_l := (⟨σ, x_{l−1}⟩ / ⟨σ̂, x_{l−1}⟩) exp( (y_l − ⟨f, x_{l−1}⟩)² / (2⟨σ, x_{l−1}⟩²) − (y_l − ⟨f, x_{l−1}⟩)² / (2⟨σ̂, x_{l−1}⟩²) ),

and hence, again up to irrelevant terms,

E[ln Λ⁺_k | F^y_k] = Σ_{i=1}^N Σ_{l=1}^k ⟨x̂_{l−1}, e_i⟩ [ (y_l − f_i)²/(2σ_i²) + log σ_i ].   (5.20)

From this term, which has to be minimised in σ_i, we get

Σ_{l=1}^k ⟨x̂_{l−1}, e_i⟩ [ −(y_l − f_i)²/σ_i³ + 1/σ_i ] = 0  ⟹  σ̂²_i = Σ_{l=1}^k w⁰_{i,l} (y_l − f̂_i)².

Note that (5.20) takes its minimum at the same place as

ln Λ⁺⁺_k = Σ_{i=1}^N Σ_{l=1}^k w⁰_{i,l} [ (y_l − f_i)² − σ_i² ]².   (5.21)

////

For the robustification of the parameter estimation (step M1) we now distinguish two approaches. The first robustification is utilized in the first run over the first batch of data and is therefore called the initialization step M1. The robust estimates of the parameters from the second batch onwards are then achieved through a weighted ALE:

1. In the initialization step, f_i and σ_i are estimated by the weighted median and the scaled weighted MAD of the observations, with weights w⁰_{i,l}.

2. For further batches, the weighted MBRE is obtained as a one-step construction with the parameter estimate (f_i⁰, σ_i⁰) from the previous batch as starting estimator and with IF ψ = (ψ_loc, ψ_scale) from (5.13), (5.14), i.e.,

f̂_i = f_i⁰ + σ_i⁰ Σ_{l=1}^k w⁰_{i,l} ψ_loc((y_l − f_i⁰)/σ_i⁰)   (5.22)

σ̂_i = σ_i⁰ exp( Σ_{l=1}^k w⁰_{i,l} ψ_scale((y_l − f_i⁰)/σ_i⁰) )   (5.23)

Proof: 1. Initialization: With absolute values instead of squares, (5.19) becomes

f̂_i = argmin_{f_i} Σ_{l=1}^k w⁰_{i,l} |y_l − f_i|.

Now, if w⁰_{i,l} is constant in l, this leads to the empirical median as the unique minimizer, justifying the name. For the scaled weighted MAD, the argument parallels the previous one, leading to the consistency factor c_i = Φ⁻¹(3/4), for Φ the cdf of N(0, 1).

2.
M1 in further batches: Apparently, by definition, (f̂_i, σ̂_i) is an ALE once we show that ψ is square-integrable, E(ψ) = 0, and E(ψΛ^τ) = I₂. The latter two properties can be checked numerically, while square-integrability is obvious by boundedness. In addition, it has the necessary form of an MBRE in the i.i.d. setting as given in [Rieder, 1994, Thm 5.5.1]. To show that this also gives the MBRE in the context of weighted observations, we would need to develop the theory of ALEs for triangular schemes similar to the one in the Lindeberg-Feller theorem. This has been done, to some extent, in [Ruckdeschel, 2001, Section 9]. In particular, for each state i, we have to assume a Noether condition excluding observations that are overly influential for the parameter estimation in this particular state. ////

Consider again our filtering algorithm and recall that the filter runs over the data set in batches of roughly ten to fifty data points. To determine the ALE for our parameters, we have to calculate the weights w⁰_{i,l} = ⟨x̂_{l−1}, e_i⟩/Ô^i_k. Therefore, our algorithm has to know all values of x̂_l from 1 to k in each batch. With this, our robustification of the algorithm cannot retain the same recursiveness as the classical algorithm. However, since we only have to determine and save the estimates of x̂_l in each batch, the algorithm is still numerically efficient; the additional costs are low. In general, ALEs are fast-to-compute robust estimators, which lead in our case to a fast and, over batches, recursive algorithm.

The additional computational burden of storing all the weights w⁰_{i,l} arising in the robustification of the M1-step is more than paid off by the additional benefits they offer for diagnostic purposes beyond the mere EM-algorithm: they tell us which of the observations, due to their likelihood of being in state i, carry more information on the respective parameters f_i and σ_i than others.
The same goes for the terms $\psi_\theta(y_l)$, which capture the individual information of observation $y_l$ for the respective parameters. Moreover, the coordinates of $\psi_\theta(y_l)/|\psi_\theta(y_l)|$ tell us how much of the information in observation $y_l$ is used for estimating $f_i$ and how much for $\sigma_i$. In addition, the function $y\mapsto w^0_{i,l}\,\psi_\theta(y)$ can be used for sensitivity analysis, telling us what happens to the parameter estimates under small changes in observation $y$. Finally, using the unclipped, classically optimal IF of the MLE, but evaluated at the robustly estimated parameters, we may identify outliers not fitting the "usual" states.

Implementation and Simulation

The classical algorithm as well as the robust version are implemented in R; we plan to release the code in the form of a contributed package on CRAN at a later stage. The implementation builds on the contributed packages RobLox and mclust. At the time of writing we are preparing a thorough simulation study to explore our procedure in detail and in a quantitative way. For the moment, we restrict ourselves to assessing the procedure qualitatively, illustrating how it can cope with a situation like the one in Figure 3. In Figure 5, we see the paths of the robust parameter estimates for $f$, $\sigma$, and $\Pi$; due to the new initialization procedure, the estimates, in particular those for $\Pi$, differ a little from those of Figure 1. Still, all the estimators behave very reasonably and are not too far from the classical ones. In the outlier situations from Figures 2 and 3, illustrated in Figure 6, the estimates for $f$ and $\sigma$ remain stable at large, as desired. The estimates for $\Pi$, however, do get irritated, essentially flagging one state as an outlier state. Some more work remains to be done to better understand this and to see how to avoid it. Aside from this, our algorithm already achieves its goals; in particular, our procedure never breaks down, contrary to the classical one.
Conclusion

In financial applications, we often have to consider the case of outliers in our data set, which can occur from time to time, e.g., due to either wrong values in the financial database or unusual peaks or lows in volatile markets. Conventional parameter estimation methods cannot handle these specific data characteristics well.

Contribution of this paper: Our contribution to this issue is twofold. First, we analyse step by step the general filter-based EM-algorithm for HMMs by Elliott [1994] and highlight which problems can occur in the case of extreme values. We extend the classical algorithm by a new technique for finding initial values, taking into account the $N$-state setting of the HMM. In addition, for numerical reasons, we use a data-driven reference measure instead of the standard normal distribution. Second, we have proposed a full robustification of the classical EM-algorithm. Our robustified algorithm is stable w.r.t. outliers in the observation process and is still able to estimate the processes of the Markov chain as well as optimal parameter estimates with acceptable accuracy. The robustification builds on concepts from robust statistics such as SO-optimal filtering and asymptotically linear estimators. Due to the non-i.i.d. nature of the observations, as apparent from the non-uniform weights $w^0_{i,l}$ attributed to the observations, these concepts had to be generalized for this situation, leading to weighted medians, weighted MADs, and weighted ALEs. Similarly, SO-optimal filtering (with focus on state reconstruction) is not directly applicable for robustifying the Radon-Nikodym terms $\lambda_s$, where we (a) had to clean the "observations" themselves and (b) had to pass over to $\sqrt{\lambda_s}$ for integrability reasons. Our robust algorithm is computationally efficient. Although complete recursivity cannot be obtained, the algorithm runs over batches and keeps its recursivity there, additionally storing the filtered values of the Markov chain.
This additional burden is outweighed by the benefits of these weights and influence-function terms for diagnostic purposes. As in the original algorithm, the model parameters, which are guided by the state of the Markov chain, are updated after each batch, now using a robust ALE. The robustification therefore keeps the characteristic of the algorithm that new information arising in the observation process is included in the most recent parameter update; there is no forward-backward loop. The forecasts of asset prices obtained through the robustified parameter estimates can be utilized to make investment decisions in asset allocation problems. To sum up, our forecasts are robust against additive outliers in the observation process and able to handle switching regimes occurring in financial markets.

Outlook: It is fairly obvious how to generalize our robustification to a multivariate setting: the E-step is not affected by multivariate observations, and the initialization technique using Gaussian mixture model ideas is already available in multivariate settings. Respective robust multivariate scale and location estimators for weighted situations still have to be implemented, though; a candidate is a weighted variant of the (fast) MCD estimator, compare Rousseeuw and Leroy [1987], Rousseeuw and van Driessen [1999]. Future work will hence translate our robustification to a multivariate setting to directly apply the algorithm to asset allocation problems for portfolio optimisation. Furthermore, investment strategies shall be examined within this robust HMM setting to give investors a view on their portfolio which includes possible outliers or extreme events. The implementation of the algorithms shall be part of an R package, including a thorough simulation study of the robustified algorithm and its application in portfolio optimisation.
Finally, an automatic selection criterion for the number of states to retain would be desirable; this is a question of model selection, where criteria like BIC still have to be adapted for robustness.

Acknowledgement

Financial support for C. Erlwein from Deutsche Forschungsgemeinschaft (DFG) within the project "Regimeswitching in zeitstetigen Finanzmarktmodellen: Statistik und problemspezifische Modellwahl" (RU-893/4-1) is gratefully acknowledged.

A Proofs

Proof of Theorem 5.1. (1) Let us solve $\max_{\partial U}\min_f[\dots]$ first, which amounts to $\min_{\partial U}E_{re}\big[\,|E_{re}[Y^{id}\mid Y^{re}]|^2\,\big]$. For a fixed element $P^{Y^{di}}$ assume a dominating $\sigma$-finite measure $\mu$, i.e., $\mu\succ P^{Y^{di}}$, $\mu\succ P^{Y^{id}}$; this gives us a $\mu$-density $q(y)$ of $P^{Y^{di}}$. Determining the joint (real) law $P^{Y^{id},Y^{re}}(d\tilde y,dy)$ as
$$P(Y^{id}\in A,\,Y^{re}\in B)=\iint I_A(\tilde y)\,I_B(y)\,\big[(1-r)\,I(\tilde y=y)+r\,q(y)\big]\,p^{Y^{id}}(\tilde y)\,\mu(d\tilde y)\,\mu(dy)\qquad(A.1)$$
we deduce that, $\mu(dy)$-a.e.,
$$E_{re}[Y^{id}\mid Y^{re}=y]=\frac{r\,q(y)\,E\,Y^{id}+(1-r)\,y\,p^{Y^{id}}(y)}{r\,q(y)+(1-r)\,p^{Y^{id}}(y)}=:\frac{a_1q(y)+a_2(y)}{a_3q(y)+a_4(y)}\qquad(A.2)$$
Hence we have to minimize
$$F(q):=\int\frac{|a_1q(y)+a_2(y)|^2}{a_3q(y)+a_4(y)}\,\mu(dy)$$
in $M_0=\{q\in L_1(\mu)\mid q\ge0,\ \int q\,d\mu=1\}$. To this end, we note that $F$ is convex on the non-void, convex cone $M=\{q\in L_1(\mu)\mid q\ge0\}$, so, for some $\tilde\rho\ge0$, we may consider the Lagrangian $L_{\tilde\rho}(q):=F(q)+\tilde\rho\int q\,d\mu$; its pointwise minimization in $y$ yields a minimiser $q_s$ whose mass $H(s)=\int q_s\,d\mu$ is antitone and continuous in $s$. So by continuity, there is some $\rho\in(0,\infty)$ with $H(\rho)=1$. On $M_0$, $\int q\,d\mu=1$, but $\bar q_\rho=q_{s=\rho}\in M_0$ and is optimal on $M\supset M_0$, hence it also minimizes $F$ on $M_0$. In particular, we get representation (5.3) and note that, independently of the choice of $\mu$, the least favorable $P_0^{Y^{di}}$ is dominated according to $P_0^{Y^{di}}\prec P^{Y^{id}}$; non-dominated $P^{Y^{di}}$ are even easier to deal with.

(2) As the next step we show that $\max_{\partial U}\min_f[\dots]=\min_f\max_{\partial U}[\dots]$, i.e., (A.3). To this end we first verify (5.2), determining $f_0(y)$ as $f_0(y)=E_{re;P}[X\mid Y^{re}=y]$.
Writing a sub/superscript "re; P" for evaluation under the situation generated by $P=P^{Y^{di}}$, and $\bar P$ for $P_0^{Y^{di}}$, we obtain the risk for general $P$ as
$$\mathrm{MSE}_{re;P}\big[f_0(Y^{re,P})\big]=(1-r)\,E_{id}\big|Y^{id}-f_0(Y^{id})\big|^2+r\,\mathrm{tr}\,\mathrm{Cov}\,Y^{id}+r\,E_P\,\min\big(|D(Y^{di;q})|^2,\rho^2\big)\qquad(A.4)$$
This is maximal for any $P$ that is concentrated on the set $|D(Y^{di;q})|>\rho$, which is true for $\bar P$. Hence (A.3) follows, as for any contaminating $P$,
$$\mathrm{MSE}_{re;P}\big[f_0(Y^{re;P})\big]\le\mathrm{MSE}_{re;\bar P}\big[f_0(Y^{re;\bar P})\big]$$
Finally, we pass over from $\partial U$ to $U$: let $f_r$, $\bar P_r$ denote the components of the saddle-point for $\partial U(r)$, $\rho(r)$ the corresponding Lagrange multiplier, and $w_r$ the corresponding weight, i.e., $w_r=w_r(y)=\min(1,\rho(r)/|D(y)|)$. Let $R(f,P,r)$ be the MSE of procedure $f$ at the SO model $\partial U(r)$ with contaminating $P^{Y^{di}}=P$. As can be seen from (5.3), $\rho(r)$ is antitone in $r$; in particular, as $\bar P_r$ is concentrated on $\{|D(Y)|\ge\rho(r)\}$, which for $r\le s$ is a subset of $\{|D(Y)|\ge\rho(s)\}$, we obtain
$$R(f_s,\bar P_s,s)=R(f_s,\bar P_r,s)\quad\text{for }r\le s$$
Note that $R(f_s,P,0)=R(f_s,Q,0)$ for all $P,Q$; hence the passage to $\tilde R(f_s,P,r)=R(f_s,P,r)-R(f_s,P,0)$ is helpful. Note also that
$$\mathrm{tr}\,\mathrm{Cov}\,Y^{id}=E_{id}\big[\mathrm{tr}\,\mathrm{Cov}_{id}[Y^{id}\mid Y^{id}]+|D(Y^{id})|^2\big]\qquad(A.5)$$
Abbreviate $\tilde w_s(Y^{id})=1-\big(1-w_s(Y^{id})\big)^2\ge0$ to see that
$$\tilde R(f_s,P,r)=r\,E_{id}\big[|D(Y^{id})|^2\,\tilde w_s(Y^{id})\big]+E_P\,\min\big(|D(Y^{id})|,\rho(s)\big)^2\le r\,E_{id}\big[|D(Y^{id})|^2\,\tilde w_s(Y^{id})\big]+\rho(s)^2=\tilde R(f_s,\bar P_r,r)<\tilde R(f_s,\bar P_s,s)$$
Hence the saddle-point extends to $U(r)$; in particular, the maximal risk is never attained in the interior $U(r)\setminus\partial U(r)$. (5.5) follows by plugging in the results. ////

Figure 1: Optimal parameter estimates for monthly MSCI returns between 1994 and 2009.

Figure 2: Optimal parameter estimates for monthly MSCI returns with planted outliers.

Figure 3: Filter-based EM-algorithm for an observation sequence with severe outliers.

The consistency factor for the weighted MAD is obtained empirically for a sufficiently large sample size $M$, e.g., $M=10000$, setting $c=\frac1M\sum_{k=1}^M c_k$, $c_k=\operatorname{argmin}_t\sum_j w_j\,\big|\,|y_{j,k}|-t\,\big|$, with $y_{j,k}$ i.i.d. $\sim N(0,1)$.
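The clipping operation at the heart of the saddle-point solution, namely the correction term $D(y)\,\min(1,\rho/|D(y)|)$ with weight $w_r$ appearing in the SO-optimal filter $f_0$ of Theorem 5.1, can be sketched as follows (a sketch of the clipping step only, not of the full filter):

```python
import numpy as np

def clipped_correction(D, rho):
    """Huberised correction D(y) * min(1, rho/|D(y)|): the correction is
    passed through unchanged while |D| <= rho and is projected onto the
    sphere of radius rho otherwise. Works for scalar or vector-valued D."""
    D = np.asarray(D, dtype=float)
    norm = np.linalg.norm(D)
    if norm <= rho or norm == 0.0:
        return D            # no clipping needed
    return D * (rho / norm) # clip to length rho, keep direction
```

This is exactly the mechanism by which an outlying observation, however large, can move the robust filter by at most $\rho$.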
Figure 4: Influence function of the MBRE at $N(0,1)$; left panel: location part; right panel: scale part.

Theorem 5.3. The robust parameter estimates for the model parameters $f_i$ and $\sigma_i$ in (1) the initialization and (2) all following batches are given by:
1. Replacing, for initialization, the squares by absolute values in (5.19) and in (5.21), $\hat f_i$ and $\hat\sigma_i$ are the weighted median and scaled weighted MAD, respectively, of the $y_l$, $l=1,\dots,k$, with weights $(w^0_{i,l})_l$.

The Noether condition reads
$$\lim_{k\to\infty}\;\max_{l=1,\dots,k}\,(w^0_{i,l;k})^2\Big/\sum_{j=1}^k(w^0_{i,j;k})^2=0\qquad(5.24)$$
We do not work this out in detail here, though.

Figure 5: Robust parameter estimates for monthly MSCI returns between 1994 and 2009, analogue to Figure 1.

Figure 6: Robust parameter estimates for monthly MSCI returns with planted outliers, analogue to Figures 2 and 3.

The Lagrangian is $L_{\tilde\rho}(q):=F(q)+\tilde\rho\int q\,d\mu$ for some positive Lagrange multiplier $\tilde\rho$. Pointwise minimization in $y$ of $L_{\tilde\rho}(q)$ gives
$$q_s(y)=\frac{1-r}{r}\Big(\frac{D(y)}{s}-1\Big)_+\,p^Y(y)$$
for some constant $s=s(\tilde\rho)=\big(|E\,Y^{id}|^2+\tilde\rho/r\big)^{1/2}$. Pointwise in $y$, $q_s$ is antitone and continuous in $s\ge0$ and $\lim_{s\to0[\infty]}q_s(y)=\infty[0]$; hence, by monotone convergence, $H(s)=\int q_s(y)\,\mu(dy)$, too, is antitone and continuous, and $\lim_{s\to0[\infty]}H(s)=\infty[0]$.

Table 1: Classical algorithm setting and robustified version of each step. … $\hat O^i_t$, and $\hat T^i_t(g)$. (M1) Obtain ML-estimators $\hat f=(\hat f_1,\dots,\hat f_N)$ and $\hat\sigma=(\hat\sigma_1,\dots,\hat\sigma_N)$. (M2) Obtain ML-estimators $\hat\Pi$. (Rec) Go to (RN) to compute the next batch.

As the next step we show that
$$\max_{\partial U}\min_f[\dots]=\min_f\max_{\partial U}[\dots]\qquad(A.3)$$

In mathematical rigor, the IF, when it exists, is the Gâteaux derivative of the functional $T$ in the direction of the tangent $\delta_x-F$. For certain properties, this notion is in fact too weak, and one has to require stronger notions like Hadamard or Fréchet differentiability; for details, see Fernholz [1983] or [Rieder, 1994, Ch. 1]. For the terms OBRE and MBRE, see Hampel et al. [1986], while for OMSE see Ruckdeschel and Horbenko [2010].
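The weighted median and scaled weighted MAD of Theorem 5.3 can be sketched in a few lines. This is an illustrative Python sketch; the consistency factor is hard-coded to the uniform-weight normal value $1/\Phi^{-1}(3/4)\approx1.4826$, whereas the text calibrates the factor per state by Monte Carlo.

```python
import numpy as np

def weighted_median(y, w):
    """Minimiser of sum_l w[l] |y[l] - t| for non-negative weights summing
    to 1, i.e. the weighted 50%-quantile of y."""
    order = np.argsort(y)
    y_s, w_s = np.asarray(y, float)[order], np.asarray(w, float)[order]
    cdf = np.cumsum(w_s)
    return y_s[np.searchsorted(cdf, 0.5)]

def weighted_mad(y, w, c=1.4826):
    """Scaled weighted MAD: weighted median of absolute deviations from the
    weighted median, times a consistency factor c (here the fixed normal
    value 1/Phi^{-1}(3/4), an assumption for this sketch)."""
    med = weighted_median(y, w)
    return c * weighted_median(np.abs(np.asarray(y, float) - med), w)
```

On the sample $[1,2,3,4,100]$ with uniform weights the single gross outlier leaves both estimates essentially untouched, which is the point of the initialization step.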
Usually $\Lambda_\theta$ is the logarithmic derivative of the density w.r.t. the parameter, i.e., $\Lambda_\theta(x)=\partial/\partial\theta\,\log p_\theta(x)$.

References

Ang, A. and Bekaert, G. (2002): International asset allocation with regime shifts. Review of Financial Studies 15: 1137-1187.

Cai, J. (1994): A Markov model of switching-regime ARCH. Journal of Business & Economic Statistics 12: 309-316.

Elliott, R.J., Aggoun, L. and Moore, J.B. (1995): Hidden Markov Models: Estimation and Control. Applications of Mathematics, vol. 29. Springer: New York.

Elliott, R.J. (1994): Exact adaptive filters for Markov chains observed in Gaussian noise. Automatica 30: 1399-1408.

Elliott, R.J. and van der Hoek, J. (1997): An application of hidden Markov models to asset allocation problems. Finance and Stochastics 1: 229-238.

Elliott, R.J. and Hinz, J. (2003): A method for portfolio choice. Applied Stochastic Models in Business and Industry 19: 1-11.

Erlwein, C., Mamon, R. and Davison, M. (2009): An examination of HMM-based investment strategies for asset allocation. Applied Stochastic Models in Business and Industry 27(3): 204-221.

Fernholz, L.T. (1983): Von Mises Calculus for Statistical Functionals. Lecture Notes in Statistics, vol. 19. Springer.

Fox, A.J. (1972): Outliers in time series. J. R. Stat. Soc., Ser. B 34: 350-363.

Fraley, C. and Raftery, A.E. (2002): Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association 97: 611-631.

Fraley, C., Raftery, A.E., Murphy, T.B. and Scrucca, L. (2012): mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

Gray, S.F. (1996): Modeling the conditional distribution of interest rates as a regime-switching process. Journal of Financial Economics 42: 27-62.

Guidolin, M. and Timmermann, A. (2007): Asset allocation under multivariate regime switching. Journal of Economic Dynamics and Control 31: 3503-3544.

Hamilton, J.D. (1989): A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57: 357-384.

Hampel, F.R. (1968): Contributions to the theory of robust estimation. Dissertation, University of California, Berkeley, CA.

Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J. and Stahel, W.A. (1986): Robust Statistics. The Approach Based on Influence Functions. Wiley.

Huber, P.J. (1981): Robust Statistics. Wiley.

Kohl, M. (2012): RobLox: Optimally robust influence curves and estimators for location and scale. R package version 0.8.2. URL http://robast.r-forge.r-project.org/.

Kohl, M. and Deigner, H.P.: Preprocessing of gene expression data by optimally robust estimators. BMC Bioinformatics 11: 583.

Kohl, M., Rieder, H. and Ruckdeschel, P. (2010): Infinitesimally robust estimation in general smoothly parametrized models. Stat. Methods Appl. 19: 333-354.

Knight, F.H. (1921): Risk, Uncertainty, and Profit. Boston: Houghton Mifflin.

Mamon, R., Erlwein, C. and Gopaluni, B. (2008): Adaptive signal processing of asset price dynamics with predictability analysis. Information Sciences 178: 203-219.

Markowitz, H. (1952): Portfolio selection. The Journal of Finance 7(1): 77-91.

Maronna, R.A., Martin, R.D. and Yohai, V.J. (2006): Robust Statistics: Theory and Methods. Wiley.

Rieder, H. (1994): Robust Asymptotic Statistics. Springer.

Rieder, H., Kohl, M. and Ruckdeschel, P. (2008): The cost of not knowing the radius. Statistical Methods & Applications 17(1): 13-40.

Rousseeuw, P.J. and Leroy, A.M. (1987): Robust Regression and Outlier Detection. Wiley.

Rousseeuw, P.J. and van Driessen, K. (1999): A fast algorithm for the minimum covariance determinant estimator. Technometrics 41: 212-223.

Ruckdeschel, P. (2001): Ansätze zur Robustifizierung des Kalman-Filters. Bayreuther Mathematische Schriften, vol. 64.

Ruckdeschel, P. and Horbenko, N. (2010): Robustness properties of estimators in generalized Pareto models. Technical Report ITWM No. 182, http://www.itwm.fraunhofer.de/fileadmin/ITWM-Media/Zentral/Pdf/Berichte_ITWM/2010/bericht_182.pdf.

Sass, J. and Haussmann, U.G. (2004): Optimizing the terminal wealth under partial information: the drift process as a continuous time Markov chain. Finance and Stochastics 8: 553-577.

Zakai, M. (1969): On the optimal filtering of diffusion processes. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 11: 230-243.
A Gluing Construction for Prescribed Mean Curvature

Adrian Butscher

arXiv:0902.3499 (https://arxiv.org/pdf/0902.3499v1.pdf), doi:10.2140/pjm.2011.249.257

Abstract. The gluing technique is used to construct hypersurfaces in Euclidean space having approximately constant prescribed mean curvature. These surfaces are perturbations of unions of finitely many spheres of the same radius assembled end-to-end along a line segment. The condition on the existence of these hypersurfaces is the vanishing of the sum of certain integral moments of the spheres with respect to the prescribed mean curvature function.
20 Feb 2009 (revised February 23, 2009)

Introduction

In the recent paper [1], Butscher and Mazzeo constructed constant mean curvature surfaces in a Riemannian manifold M whose almost-spherical components are positioned along a geodesic γ; we say that these surfaces condense to the appropriate subset of γ. Such surfaces cannot exist in Euclidean space, and their existence relies on the fact that the gradient of the ambient scalar curvature of M acts as a 'friction term' which permits the usual analytic gluing construction (akin to the classical gluing constructions pioneered by Kapouleas [2], [4]) to be carried out. The purpose of this paper is to show that the same techniques used in [1] can be adapted in a straightforward manner to show that a similar construction is possible in a much simpler yet fairly general context: that of hypersurfaces having near-constant prescribed mean curvature in Euclidean space. The essence of the gluing construction carried out herein therefore lies in identifying and appropriately exploiting the analogous 'friction term' appearing in this setting.

Let $F:\mathbb R^{n+1}\times T\mathbb R^{n+1}\to\mathbb R$ be a given, fixed smooth function. For simplicity, and to maintain the parallel with [1], we will assume that $F$ has cylindrical symmetry in the following sense. Endow $\mathbb R^{n+1}$ with coordinates $(x^0,x^1,\dots,x^n)$ and let $G\subseteq O(n+1)$ be the set of orthogonal transformations that fix the $x^0$-axis. Each rotation $R\in G$ acts on $T\mathbb R^{n+1}$ via the differential $R_*:T\mathbb R^{n+1}\to T\mathbb R^{n+1}$.
We will now demand that $F(R(p),R_*V_p)=F(p,V_p)$ for all $(p,V_p)\in\mathbb R^{n+1}\times T\mathbb R^{n+1}$. The prescribed mean curvature problem that will be solved in this paper is to find, for every sufficiently small $r\in\mathbb R^+$, a $G$-invariant hypersurface $\Sigma_r$ which satisfies
$$H[\Sigma_r](p)=2+r^2\,F(p,N_{\Sigma_r}(p))\qquad\forall\,p\in\Sigma_r\qquad(1)$$
where $H[\Sigma_r]$ is the mean curvature of $\Sigma_r$ and $N_{\Sigma_r}$ is the unit normal vector field of $\Sigma_r$. Furthermore, this hypersurface will be constructed by gluing together a finite number $K$ of spheres of radius one (and thus of mean curvature exactly equal to two) whose centres lie on the $x^0$-axis, using small catenoidal necks having the $x^0$-axis as their axes of symmetry. In order to state the Main Theorem properly, we must make the following definition, which is meant to capture the most important effect of the prescribed mean curvature function $F$ on the surface that we are attempting to construct in this paper.

Definition 1. Let $S$ be a compact surface in $\mathbb R^{n+1}$. The $F$-moment of $S$ is the quantity
$$\mu_F(S):=\int_S F(x,N_S(x))\,J\;d\mathrm{Vol}_S$$
where $N_S$ is the unit normal vector field of $S$ and $d\mathrm{Vol}_S$ is the induced volume form of $S$, while $J:S\to\mathbb R$ is defined by $J(x):=\langle\partial/\partial x^0,N_S(x)\rangle$ for $x\in S$.

Now let $p^0_k(s):=(s+2(k-1),0,\dots,0)$ and consider the spheres $S_k(s):=\partial B_1(p^0_k(s))$. These spheres are positioned along the $x^0$-axis in such a way that each $S_k(s)$ makes tangential contact with $S_{k\pm1}(s)$. The following theorem will be proved in this paper.

Main Theorem. Suppose that there is $s_0\in\mathbb R$ such that
• the $F$-moments of the spheres $S_k(s_0)$ satisfy $\sum_{k=1}^K\mu_F(S_k(s_0))=0$;
• the function $s\mapsto\sum_{k=1}^K\mu_F(S_k(s))$ has non-vanishing derivative at $s=s_0$.
Then for all sufficiently small $r>0$ there is a smooth, embedded hypersurface $\Sigma_r$ which is a small perturbation of $\bigcup_{k=1}^K S_k(s_0)$ and satisfies the prescribed mean curvature equation (1).

It is easy to find a situation in which the conditions of the Main Theorem hold.
For example: if $F(\cdot,\cdot)$ is such that $\mu_F(\partial B_1(x^0,x^1,\dots,x^n))$ is negative whenever $x^0$ is sufficiently negative and positive whenever $x^0$ is sufficiently positive, then the intermediate value theorem asserts that a zero of the function $s\mapsto\sum_{k=1}^K\mu_F(S_k(s))$ can be found. And if, in addition, $F(x,\cdot)$ is monotone as a function of $x^0$, then this function has non-zero derivative. An application of the Main Theorem, and indeed an inspiration for it, is the earlier work by Kapouleas on slowly rotating assemblies of water droplets [3]. In this case, the prescribed mean curvature function $F:\mathbb R^{n+1}\times T\mathbb R^{n+1}\to\mathbb R$ takes the form $F(p,N_{\Sigma_r}(p)):=C(\omega)\,(p^0)^2$, where $p:=(p^0,p^1,\dots,p^n)$ and $C(\omega)$ depends on the angular velocity $\omega$. The prescribed mean curvature equation now approximates the effect of centrifugal force on the surface $\Sigma_r$ when $\omega$ is small. One of the assemblies of water droplets that Kapouleas constructs is exactly as described in the Main Theorem. (He constructs many other, more complex and less symmetric assemblies as well.) Another application of the Main Theorem is to the understanding of the possible shapes an electrically charged soap film can adopt in the presence of a weak, axially symmetric electric field. In this case, the equation satisfied by the surface adopted by the soap film is exactly (1), where the prescribed mean curvature function $F:\mathbb R^{n+1}\times T\mathbb R^{n+1}\to\mathbb R$ takes the form $F(p,N_{\Sigma_r}(p)):=-C\,\langle\nabla\phi(p),N_{\Sigma_r}(p)\rangle$, while $\phi:\mathbb R^{n+1}\to\mathbb R$ is the electric potential and $C$ is a constant. We can see why this is so by writing the total energy of the soap film as the sum of a surface area term and a term proportional to the surface integral of $\phi$, and then computing the Euler-Lagrange equation for the variation of this energy subject to the constraint that the volume enclosed by the surface remains constant.
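The variational computation just described can be sketched as follows. This is only an outline under simplifying assumptions (a smooth compact surface, normal variations with speed $f$, and an illustrative proportionality constant $C'$); it is not taken verbatim from the paper.

```latex
% Energy of the charged film: area plus a term proportional to the
% surface integral of the potential \phi:
E[\Sigma] \;=\; \mathrm{Area}(\Sigma) \;+\; C'\!\int_\Sigma \phi \, d\mathrm{Vol}_\Sigma .
% Standard first variations under a normal variation with speed f:
\delta\,\mathrm{Area}(\Sigma)\cdot f = \int_\Sigma H f \, d\mathrm{Vol}_\Sigma ,
\qquad
\delta\Big(\int_\Sigma \phi\Big)\cdot f
   = \int_\Sigma \big( \langle \nabla\phi, N\rangle + \phi H \big) f \, d\mathrm{Vol}_\Sigma ,
\qquad
\delta\,\mathrm{Vol}\cdot f = \int_\Sigma f \, d\mathrm{Vol}_\Sigma .
% Euler--Lagrange equation with Lagrange multiplier \lambda for the
% enclosed-volume constraint:
H\,(1 + C'\phi) + C'\langle \nabla\phi, N\rangle \;=\; \lambda .
% In the weak-field regime |C'\phi| \ll 1 this reads, to leading order,
H \;\approx\; \lambda - C'\langle \nabla\phi, N\rangle ,
% which has the form (1) with F(p, N) = -C\,\langle \nabla\phi(p), N\rangle.
```

The weak-field approximation in the last step is the analogue of the small parameter $r^2$ multiplying $F$ in equation (1).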
If we now assume that $\phi$ is such that the existence conditions of the Main Theorem hold, then the Main Theorem asserts that $K$ spherical, electrically charged soap films connected by small catenoidal necks can be held in equilibrium at special points in space by the electric field.

The Approximate Solution

To construct an approximate solution for the Main Theorem, we use essentially the same procedure as in [1, §3.1]. This will be outlined here only briefly, for the convenience of the reader. The presentation is given in dimension $n=2$ for simplicity; everything that follows can easily be adapted to the $(n+1)$-dimensional setting. Endow $\mathbb R^{n+1}$ with coordinates $(x^0,x^1,\dots,x^n)$ and let $\gamma$ be the arc-length parametrization of the $x^0$-axis with $\gamma(0)=(0,0,\dots,0)$. We will construct an approximate solution for the Main Theorem out of $K$ spheres of radius one as follows. Choose a localization parameter $s\in\mathbb R$ and small separation parameters $\sigma_1,\dots,\sigma_{K-1}\in\mathbb R^+$. Define $s_1:=s$ and $s_k:=s+2(k-1)+\sum_{l=1}^{k-1}\sigma_l$ for $k=2,\dots,K$, and set $p_k:=\gamma(s_k)$ and $p^\pm_k:=\gamma(s_k\pm1)$. Define the spheres $S_k:=\partial B_1(p_k)$. These spheres will now be joined together according to the following three steps.

Step 1. The first step is to replace each $S_k$ with the surface $\tilde S_k$ obtained by taking the normal graph of a specially chosen function $G_k$ over $S_k\setminus[B_{\rho_k}(p^+_k)\cup B_{\rho_k}(p^-_k)]$, where $\rho_k\in(0,1)$ is a small radius yet to be determined. The functions we use for this purpose satisfy the equations
$$L(G_k)=\varepsilon^+_k\,\delta(p^+_k)+\varepsilon^-_k\,\delta(p^-_k)+A_kJ_k\qquad\text{if }k=2,\dots,K-1$$
$$L(G_1)=\varepsilon^+_1\,\delta(p^+_1)+A_1J_1,\qquad L(G_K)=\varepsilon^-_K\,\delta(p^-_K)+A_KJ_K$$
where $L:=\Delta_{S^2}+2$ is the linearized mean curvature operator of the unit sphere, the small scale parameters $\varepsilon^\pm_k$ are yet to be determined and $\delta(q)$ is the Dirac $\delta$-function centred at $q$, while $J_k:=\langle\partial/\partial x^0,N_{S_k}\rangle$ is the sole $G$-invariant function in the kernel of $L$, normalized to have unit $L^2$-norm, and $A_k$ is chosen to ensure $L^2$-orthogonality to $J_k$. Of course $J_k=x^0\big|_{S_k}$, the restriction of the $x^0$ coordinate function to $S_k$.

Step 2. Let $W$ be the catenoid, i.e. the unique complete minimal surface of revolution whose axis of symmetry is $\gamma$ and whose waist lies in the $(x^1,x^2)$-plane. The next step is to find the truncated and re-scaled catenoidal neck of the form $W_k:=B_{\rho'_k}(p^\flat_k)\cap\big(\varepsilon_kW+p^\flat_k+(\delta_k,0,0)\big)$ that fits optimally into the space between $\tilde S_k$ and $\tilde S_{k+1}$ for $k=2,\dots,K-1$. Here $\varepsilon_k>0$ is a small scale parameter and $p^\flat_k$ is a point between $p^+_k$ and $p^-_{k+1}$, both determined by the optimal fitting procedure, while $\delta_k$ is a small displacement parameter that moves $W_k$ away from its optimal location and $\rho'_k$ is a small radius yet to be determined. The optimal fit is obtained by matching the asymptotic expansions of the functions giving $\tilde S_k\cap B_{\rho'_k}(p^\flat_k)$, $\tilde S_{k+1}\cap B_{\rho'_k}(p^\flat_k)$ and $W_k$ as graphs over the translate of the $(x^1,x^2)$-plane passing through $p^\flat_k$, exactly as in [1, §3.1]. One particularly important outcome of the matching is that the parameters $\varepsilon^\pm_k$ from the previous step, as well as $\varepsilon_k$ and $p^\flat_k$, are all uniquely determined by $\sigma_k$. In fact, an invertible relationship of the form $\sigma_k:=\Lambda_k(\varepsilon_k)$ holds, with $\Lambda_k(\varepsilon_k)=O(\varepsilon_k|\log(\varepsilon_k)|)$. Finally, we find that we must choose $\rho_k,\rho'_k=O(\varepsilon_k^{3/4})$ to ensure the optimal fit between the necks and the perturbed spheres.

Step 3. The final step is to use cut-off functions to smoothly glue the neck $W_k(\sigma_k)$ into the space between $\tilde S_k$ and $\tilde S_{k+1}$.
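The matching in Step 2 that produces the relationship $\sigma_k=\Lambda_k(\varepsilon_k)=O(\varepsilon_k|\log\varepsilon_k|)$ can be sketched heuristically as follows (in $n=2$, with inessential constants suppressed; the normalizations below are illustrative and not the precise expansions of [1]).

```latex
% Rescaled catenoid of waist \varepsilon_k as a graph over the plane through
% p^\flat_k, with \rho the distance to the axis:
u_{\mathrm{neck}}(\rho)
  = \pm\,\varepsilon_k\,\operatorname{arccosh}(\rho/\varepsilon_k)
  = \pm\,\varepsilon_k \log\!\big(2\rho/\varepsilon_k\big) + O(\varepsilon_k^3/\rho^2) .
% The perturbed sphere \tilde S_k near p^\pm_k: a spherical cap plus the
% Green's-function singularity of G_k (the operator \Delta_{S^2}+2 has a
% logarithmic singularity in dimension two):
u_{\mathrm{sph}}(\rho)
  = \mp\,\tfrac12\rho^2 \;\pm\; \varepsilon^\pm_k \log\rho \;+\; O(\varepsilon^\pm_k) .
% Equating the coefficients of \log\rho forces \varepsilon^\pm_k \propto \varepsilon_k,
% while equating the constant terms fixes the axial displacement of the neck,
% of size \varepsilon_k\log(1/\varepsilon_k); adding the displacements on the
% two sides of the neck gives the separation
\sigma_k \;=\; \Lambda_k(\varepsilon_k) \;=\; O\!\big(\varepsilon_k\,|\log\varepsilon_k|\big) ,
% which is invertible in \varepsilon_k for \varepsilon_k small.
```

The invertibility of $\Lambda_k$ is what allows the separation parameters $\sigma_k$, rather than the neck scales $\varepsilon_k$, to be used as the free parameters of the construction.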
The interpolating region is the annulus $B_{\rho'_k}(p^\flat_k)\setminus B_{\rho'_k/2}(p^\flat_k)$. In this way we obtain a family of surfaces $\tilde\Sigma_r(\sigma,\delta,s)$ depending on the $\sigma$, $\delta$ and $s$ parameters.

Solving the Projected Problem

We now proceed to solve equation (1) up to a finite-dimensional error term by perturbing the approximate solution constructed in the previous section. The required analysis is in most respects identical to, or less involved than, the analysis found in [1, §4-§6] and will thus again only be abbreviated here for the sake of the reader. The outcome will be a surface $\Sigma^\sharp_r(\sigma,\delta,s)$ satisfying
$$H[\Sigma^\sharp_r(\sigma,\delta,s)]-2-r^2\,F\big|_{\Sigma^\sharp_r(\sigma,\delta,s)}\in\tilde W,$$
where $\tilde W$ is a finite-dimensional space of functions that will be defined precisely below. It arises because the linearized mean curvature operator, which governs the solvability of (2), possesses a finite-dimensional approximate kernel consisting of eigenfunctions corresponding to small eigenvalues. These small eigenvalues make it impossible to implement a convergent algorithm for prescribing the components of the mean curvature of the approximate solution lying in $\tilde W$.

Function spaces. We first define the weighted Hölder spaces in which the analysis will be carried out. These are essentially the same weighted spaces as in [1, §4], namely the spaces $C^{k,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))$ consisting of all $C^{k,\alpha}_{loc}$ functions on $\tilde\Sigma_r(\sigma,\delta,s)$ whose rate of growth in the neck regions of $\tilde\Sigma_r(\sigma,\delta,s)$ is controlled by the parameter $\nu$. Choose some fixed, small $0<R\ll1$ and define a weight function $\zeta_r:\tilde\Sigma_r(\sigma,\delta,s)\to\mathbb R$ as
$$\zeta_r(p):=\begin{cases}\|x\|&p=(x^0,x)\in B_{R/2}(p^\flat_k)\ \text{for some }k\\ \text{interpolation}&p\in B_R(p^\flat_k)\setminus B_{R/2}(p^\flat_k)\ \text{for some }k\\ 1&\text{elsewhere}\end{cases}$$
where the interpolation is such that $\zeta_r$ is smooth and monotone in the region of interpolation, has appropriately bounded derivatives, and is $G$-invariant.
Now for any open set $U\subseteq\tilde\Sigma_r(\sigma,\delta,s)$, define
$$|f|_{C^{k,\alpha}_\nu(U)}:=\sum_{i=0}^k\big|\zeta_r^{\,i-\nu}\nabla^if\big|_{0,U}+\big[\zeta_r^{\,k+\alpha-\nu}\nabla^kf\big]_{\alpha,U}$$
where $|\cdot|_{0,U}$ is the supremum norm on $U$ and $[\,\cdot\,]_{\alpha,U}$ is the $\alpha$-Hölder coefficient on $U$. This is the norm that will be used in the $C^{k,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))$ spaces.

The equation to solve. Let $\mu:C^{2,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))\to\mathrm{Emb}(\tilde\Sigma_r(\sigma,\delta,s),\mathbb R^{n+1})$ be the exponential map of $\tilde\Sigma_r(\sigma,\delta,s)$ in the direction of the unit normal vector field of $\tilde\Sigma_r(\sigma,\delta,s)$. Hence $\mu_f(\tilde\Sigma_r(\sigma,\delta,s))$ is the normal deformation of $\tilde\Sigma_r(\sigma,\delta,s)$ generated by $f\in C^{2,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))$. The equation
$$H\big[\mu_f(\tilde\Sigma_r(\sigma,\delta,s))\big]=2+r^2\,F\circ\big(\mu_f\times N_{\mu_f(\tilde\Sigma_r(\sigma,\delta,s))}\big)\qquad(2)$$
selects $f\in C^{2,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))$ so that $\mu_f(\tilde\Sigma_r(\sigma,\delta,s))$ satisfies equation (1). In addition, the function $f$ will be assumed $G$-invariant. Define the operator
$$\Phi_{r,s,\sigma,\delta}:C^{2,\alpha}_\nu(\tilde\Sigma_r(\sigma,\delta,s))\to C^{0,\alpha}_{\nu-2}(\tilde\Sigma_r(\sigma,\delta,s)),\qquad \Phi_{r,s,\sigma,\delta}(f):=H\big[\mu_f(\tilde\Sigma_r(\sigma,\delta,s))\big]-2-r^2\,\mathcal F(f)$$
where $\mathcal F(f):=F\circ\big(\mu_f\times N_{\mu_f(\tilde\Sigma_r(\sigma,\delta,s))}\big)$. The linearization of $\Phi_{r,\sigma,\delta,s}$ at zero is given by
$$\mathcal L:=D\Phi_{r,\sigma,\delta,s}(0)=\Delta+\|B\|^2+r^2\Big(D_1F\big(\mu_0,N_{\tilde\Sigma_r(\sigma,\delta,s)}\big)\cdot fN_{\tilde\Sigma_r(\sigma,\delta,s)}-D_2F\big(\mu_0,N_{\tilde\Sigma_r(\sigma,\delta,s)}\big)\cdot\nabla f\Big)$$
where $D_1F$ and $D_2F$ are the derivatives of $F$ in its first and second slots and $B:=B[\tilde\Sigma_r(\sigma,\delta,s)]$ is the second fundamental form of $\tilde\Sigma_r(\sigma,\delta,s)$.
, δ K−1 } and we will assume that ε = O(r 2 ) and δ = O(r), which will be justified a posteriori. Theorem 3. If r is sufficiently small, then there exists f := f r (σ, δ, s) ∈ C 2,α ν (Σ r (σ, δ, s)) with ν ∈ (1, 2) so that Φ r,σ,δ,s (f ) ∈W . (3) The estimate |f | C 2,α ν ≤ Cr 2 holds for the function f , where the constant C is independent of r. Finally, the mapping (σ, δ, s) → f r (σ, δ, s) is smooth in the sense of Banach spaces. Proof. As in [1], we will use a fixed-point argument to solve the equation Φ r,σ,δ,s (f ) ∈W for a function f ∈ C 2,α ν (Σ r (σ, δ, s)) with ν ∈ (1, 2). The fixed-point argument follows from three steps: an estimate of the size of Φ r,s,σ,δ,s (0); the construction of a bounded parametrix R satisfying L • R = id + E where E : C 0,α ν−2 (Σ r (σ, δ, s)) →W; and an estimate of the non-linear part of the operator Φ r,σ,δ,s . Each of these steps is given in great detail in [1]. Thus all that is needed here is to point out how the analysis of [1] applies to the present situation. Step 1. We begin with the estimate of |Φ r,σ,δ,s (0)| C 0,α ν−2 , which is the amount that the approximate solutionΣ r (σ, δ, s) deviates from being an actual solution of equation (2). This is done by adapting [1,Prop. 13]. In fact, by combining Steps 1, 2 and 4 of the estimate of H Σ r (σ, δ, s) − 2 in the C 0,α ν−2 for ν ∈ (1, 2) found in [1, Prop. 13], with a straightforward estimate for the C 0,α ν−2 norm of the r 2 F term, we find that |Φ r,σ,δ,s (0)| C 0,α ν−2 ≤ C max r 2 , ε 3/2−3ν/4 , δε 1−3ν/4 ≤ Cr 2 for some constant C independent of r. Step 2. We now find a parametrix R : C 0,α ν−2 (Σ r (σ, δ, s)) → C 2,α ν (Σ r (σ, δ, s)) satisfying L • R = id + E where E : C 0,α ν−2 (Σ r (σ, δ, s)) →W. As in [1,Prop. 
15], this is done by first constructing an approximate parametrix by patching together parametrices for the linearized mean curvature operator of each sphere with parametrices for the linearized mean curvature operator of each neck; and then iterating to produce an exact parametrix plus an error term inW in the limit. The difference here is that the terms coming from the non-Euclidean background metric in [1,Prop. 15] must be replaced by the r 2 F term. The same result holds because this term can easily be shown to satisfy the right estimates. In fact, R and E satisfy the estimate |R(w) | C 2,α ν +|E(w)| C 2,α 0 ≤ C|w| C 0,α ν−2 for all w ∈ C 0,α ν−2 (Σ r (σ, δ, s)), where C is a constant independent of r. Step 3. We define the quadratic and higher remainder term of the operator Φ r,σ,δ,s as Q : C 2,α ν (Σ r (σ, δ, s)) → C 0,α ν−2 (Σ r (σ, δ, s)) Q(f ) := Φ r,σ,δ,s (f ) − Φ r,σ,δ,s (0) − L(f ) The estimates for the C 0,α ν norm of Q can be found exactly as in [1,Prop. 18] with the terms coming from the non-Euclidean background metric replaced by the r 2 F term. Then there exists M > 0 so that if f 1 , f 2 ∈ C 2,α ν (Σ r (σ, δ, s)) for ν ∈ (1, 2) and satisfying |f 1 | C 2,α ν + |f 2 | C 2,α ν ≤ M , then |Q(f 1 ) − Q(f 2 )| C 0,α ν−2 ≤ C|f 1 − f 2 | C 2,α ν max |f 1 | C 2,α ν , |f 2 | C 2,α ν where C is a constant independent of r. Once again, this works because the r 2 F term can easily be shown to satisfy the right estimates. Step 4. We can now solve the CMC equation up to a finite-dimensional error term by implement- N r,σ,δ,s : C 0,α ν−2 (Σ r (σ, δ, s)) → C 0,α ν−2 (Σ r (σ, δ, s)) N r,σ,δ,s (w) := −Q • R(w − E) + r 2 F • R(w − E) The estimates that have been established up to now give us the estimate |N r (w 1 ) − N r (w 2 )| C 0,α ν−2 ≤ Cr 2 |w 1 − w 2 | C 0,α ν−2 for w belonging to a ball of radius O(r 2 ) about zero in C 0,α ν−2 (Σ r (σ, δ, s)), where C is independent of r. 
Hence N r is a contraction mapping on this ball if r is sufficiently small, and a solution of (3) satisfying the desired estimate can be found. The smooth dependence of this solution on the parameters (σ, δ, s) is a consequence of the fixed-point process. Force Balancing Arguments and the Proof of the Main Theorem When r is sufficiently small, we have now found a function f r (σ, δ) ∈ C 2,α * (Σ r (σ, δ, s)) for each (σ, δ, s) so that H µ fr(σ,δ) Σ r (σ, δ, s) − 2 − r 2 F(f r (σ, δ, s)) = E r (σ, δ, s) where E r (σ, δ, s) is an error term belonging to the finite-dimensional spaceW depending on the free parameters (σ, δ, s). The corresponding surface that satisfies the prescribed mean curvature condition up to finite-dimensional error is Σ ♯ r (σ, δ, s) := µ fr(σ,δ,s) (Σ r (σ, δ, s)). To complete the proof of the main theorem, we must show that it is possible to find a value of (σ, δ, s) for which these error terms vanish identically. As in [1, §7.2], we consider the balancing map B r : R 2K−1 → R 2K−1 defined by B r (σ, δ, s) := π 1 E r (σ, δ, s) , . . . , π 2K−1 E r (σ, δ, s) where π 2k+1 :W → R and π 2k :W → R are the L 2 projection operators given by π 2k (e) := Σ ♯ r (σ,δ,s) e · χ neck ,k I k and π 2k+1 (e) := Σ ♯ r (σ,δ,s) e · χ ′ ext,k J k where χ ′ ext,k is a cut-off function supported on the k th spherical region, χ neck,k is a cut-off function supported on the k th neck and transition region, and I k the Jacobi field of the neck coming from translation along the neck axis. This is an odd, bounded function with respect to the centre of the neck. Note that B r is a smooth map between finite-dimensional vector spaces by virtue of the fact that the dependence of the solution f r (σ, δ, s) on (σ, δ, s) is smooth and the mean curvature operator is a smooth map of the Banach spaces upon which it is defined. The following lemma proves that π(e) = 0 implies that e = 0, and its proof is a straightforward computation. 
(In order to make this work out, we must choose the cut-off functions properly: we must have overlap between the supports of χ neck,k , χ ext,k and χ ext,k+1 and no overlap between the supports of χ ′ ext,k and η k .) Lemma 4. Choose e ∈W as e = K k=1 a k χ ext,k J k + K−1 k=1 b k χ ext,k L k (η k ) for a k , b k ∈ R. Then π 2k (e) = C 1 b k + C ′ 1 (ε 3/2 k+1 a k+1 − ε 3/2 k a k ) π 2k+1 (e) = C 2 a k where C 1 , C ′ 1 , C 2 are independent of r and (σ, δ, s). We must now show that B r (σ, δ, s) can be controlled by the initial geometry ofΣ r (σ, δ, s), at least to lowest order in r. The calculations are similar to those found in [1, §7.2] except with the contributions from the ambient background geometry replaced by a contribution from the prescribed mean curvature in the form of the F -moments of the spheres making upΣ r (σ, δ, s). The highest-order part of E r (σ, δ, s) involves the F -moments of the spherical constituents S k ofΣ r (σ, δ, s) as follows. Set µ k (σ, s) := µ F (S k ) -this depends on s and σ 1 , . . . , σ k because the location of the centre of S k is determined by these parameters. Let us continue to assume that ε k = O(r 2 ) and δ k = O(r) for each k. This will be justified shortly. Lemma 5. The quantity E r (σ, δ, s) satisfies the formulae π 2k E r (σ, δ, s) = C 1 δ k ε 3/2 k + O(r 2+2ν ) (5a) π 2k+1 E r (σ, δ, s) =          C 2 ε 1 − r 2 µ 1 (σ, s) + O r 4 k = 0 C 2 ε k+1 − ε k − r 2 µ k+1 (σ, s) + O r 4 0 < k < K − 1 −C 2 ε K − r 2 µ K (σ, s) + O r 4 k = K − 1 (5b) where C 1 , C 2 are constants independent of r, σ, δ, s. Proof. Set Σ ♯ r := Σ ♯ r (σ, δ, s) and Σ :=Σ r (σ, δ, s) for convenience. Consider first equation (5b) with 0 < k < K − 1. By the first variation formula and estimates of the size of the perturbation generating Σ ♯ r fromΣ r (σ, δ, s), and calculating as in [1,Prop. 
27], we have π 2k+1 E r (σ, δ, s) = Σ ♯ r H[Σ ♯ r ] − 2 − r 2 F(f r (σ, δ, s)) χ ext,k J k = ∂Σ ♯ ∩supp(χ ext ,k ) ∂ ∂x 0 , ν k − r 2 S k F (x, N S k (x))) J k + O(r 4 ) = C 2 ε k+1 − ε k − r 2 µ k (s, σ) + O(r 4 ) where ν k is the unit normal vector field of ∂Σ ♯ ∩ supp(χ ext,k ) in Σ ♯ . Consider now equation (5b). In the neck we have H[Σ r (σ, δ, s)] = 0. Using similar estimates, π 2k E r (σ, δ, s) = Σ ♯ r H[Σ ♯ r ] − 2 − r 2 F(f r (σ, δ, s)) χ neck ,k I k = −2 Σ∩supp(χ neck ,k ) χ neck ,k I k + O(r 2+2ν ) = C 1 δ k ε 3/2 k + O(r 2+2ν ) where δ k is the displacement parameter of the k th neck. This is because I k is an odd function with respect to the neck having δ k = 0, whereas the integral is being taken over the neck with δ k = 0. Hence the integral Σ∩supp(χ neck ,k ) χ neck ,k I k picks up the displacement of the k th neck from its position at δ k = 0. This same phenomenon arises in [1,Prop. 27]. Proof of the Main Theorem It remains to find a value of the parameters (σ, δ, s) so that E r (σ, δ, s) = 0. As shown in Lemma 4, this is equivalent to find a solution of the equation B r (σ, δ, s) = 0. In what follows, we will continue to assume that ε = O(r 2 ) and δ = O(r) and this will be justified shortly. As a consequence of Lemma 5, the equations that we must solve are as follows: C 1 δ 1 = E 1 (σ, δ, s) . . . C K−1 δ K−1 = E K−1 (σ, δ, s) and C 2 ε 1 = r 2 µ 1 (σ, s) + E ′ 1 (σ, δ, s) C 2 (ε 2 − ε 1 ) = r 2 µ 2 (σ, s) + E ′ 2 (σ, δ, s) . . . C 2 (ε K−1 − ε K−2 ) = r 2 µ K−1 (σ, s) + E ′ K−1 (σ, δ, s) −C 2 ε K−1 = r 2 µ K (σ, s) + E ′ K (σ, δ, s) where ε k depends on σ k in an invertible manner as indicated in Step 2 of the construction of the approximate solution, and E k , E ′ k are error quantities satisfying the bounds |E k | = O(r −1+2ν ) and |E ′ k | = O(r 4 ). We can abbreviate these equations by introducing the matrix M : = I 0 0 J where I is the (K − 1) × (K − 1) identity matrix and J is the K × (K − 1) matrix J :=   1 −1 1 . . . −1 1 −1   . 
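The two-step strategy that follows hinges on the cokernel of M: every column of J sums to zero, which is what forces the compatibility relation (0, e) · M(v, w) = 0. This bookkeeping is easy to verify numerically; the sketch below (the choice K = 5 is purely illustrative) builds J in pure Python:

```python
def build_J(K):
    """K x (K-1) bidiagonal matrix with +1 on the diagonal and -1 just
    below it, i.e. J = [[1], [-1, 1], ..., [-1, 1], [-1]] as in the text."""
    J = [[0.0] * (K - 1) for _ in range(K)]
    for j in range(K - 1):
        J[j][j] = 1.0
        J[j + 1][j] = -1.0
    return J

def apply_J(J, w):
    """Matrix-vector product J w."""
    return [sum(row[j] * w[j] for j in range(len(w))) for row in J]

# Each column of J sums to zero, so for M = diag(I, J) and e = (1, ..., 1)
# the last K components of M(v, w) are annihilated by e: e . (J w) = 0.
J = build_J(5)
```

Since e spans the complement of the image of M, projecting with ρ removes exactly one scalar equation, which is then recovered as the moment condition on s.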
The equations become M (δ, ε) t = (E, r 2 µ + E ′ ) t(6) where δ := (δ 1 , . . . , δ K−1 ), ε := (ε 1 , . . . , ε K−1 ) and so on for E, E ′ and µ. We will solve these equations in two steps as follows. Note first that the matrix M is injective can now be solved using the implicit function theorem. Thus the matrix ρM : R 2K−2 → R 2K−2 is invertible and the equation at r = 0 is just ρM (ε, δ) = 0 which has the solution (ε, δ) = 0. Hence the solution persists for small r. We now have a solution ε := ε r (s) and δ r (s) of (6) for all sufficiently small r and depending implicitly on the one remaining free parameter s. Moreover, we see that ε = O(r 2 ) and δ = O(r −1+2ν ) = O(r) since ν ∈ (1, 2). It remains to solve (7) and we proceed as follows. Once (ε, δ) satisfy (7), then (6) becomes equivalent to 0 = (0, e) · M (ε, δ) = r 2 e · µ + e · E ′ or simply has non-vanishing derivative at s = s 0 . If these conditions are satisfied, then the implicit function theorem implies that for r sufficiently small, there is a solution s(r) of (8). This completes the proof of the Main Theorem. and Mazzeo have constructed examples of constant mean curvature (CMC) hypersurfaces in a Riemannian manifold M with axial symmetry by gluing together small spheres positioned end-to-end along a geodesic γ. The examples they have constructed have very large mean curvature 2/r and lie within a distance O(r) of either a segment or a ray of γ; hence Definition 2 . 2Let K be given. The approximate solution with parameters σ := {σ 1 , . . . , σ K−1 } and δ := {δ 1 , . . . , δ K−1 } and s is the surface given bỹ Σ r (σ, δ, s) ing a fixed-point argument based on the parametrix constructed in Step 2 as well as the estimates we have computed so far. 
Let E := Φ r,σ,δ,s (0) and use the Ansatz f := R(w − E) to convert the equation Φ r,σ,δ,s (f ) ∈W into the fixed point problem w − N r,σ,δ,s (w) ∈W where but not surjective, with vectors in the image of M satisfying the relation (0, e) · M (v, w) = 0 for all (v, w) ∈ R 2K−1 , where e := (1, 1, . . . , 1). Let ρ : R 2K−1 → R 2K−2 be the orthogonal projection onto the image of M . The equation ρM (ε, δ) = ρ(E, r 2 µ + E ′ ) (σ r (s), s) + E ′′ (σ r (s), δ r (s), s) = 0 (8) where the error quantity satisfies the estimate |E ′′ | = O(r 2 ). Equation (8) may or may not have a solution, depending on the nature of the function k µ k , which in turn depends on the specific nature of the prescribed mean curvature function F . However, if the following two conditions are met, then the implicit function theorem guarantees the existence of a solution. First, it must be the case that the equation at r = 0 has a solution, in other words if the F -moments of the spheres S 1 , . . . , S K satisfy K k=1 µ F (∂B 1 (p 0 k (s))) = 0 for some s, where p 0 k (s) := (s + 2(k − 1), 0, . . . , 0). Second, if s 0 is the solution of this equation, then it must also be the case that the mapping s → K k=1 µ F (∂B 1 (p 0 k (s))) A Butscher, R Mazzeo, arXiv:0812.3133Constant mean curvature hypersurfaces condensing to geodesic segments and rays in riemannian manifolds. preprintA. Butscher and R. Mazzeo, Constant mean curvature hypersurfaces condensing to geodesic segments and rays in riemannian manifolds, preprint: arXiv:0812.3133. Complete constant mean curvature surfaces in Euclidean three-space. Nikolaos Kapouleas, Ann. of Math. 2Nikolaos Kapouleas, Complete constant mean curvature surfaces in Euclidean three-space, Ann. of Math. (2) 131 (1990), no. 2, 239-330. Slowly rotating drops. Comm. Math. Phys. 1291, Slowly rotating drops, Comm. Math. Phys. 129 (1990), no. 1, 139-159. Compact constant mean curvature surfaces in Euclidean three-space. J. Differential Geom. 
Compact constant mean curvature surfaces in Euclidean three-space, J. Differential Geom. 33 (1991), no. 3, 683-715.
[]
[ "Comparative study of density functional theories of the exchange-correlation hole and energy in silicon" ]
[ "A C Cancio \nSchool of Physics\nGeorgia Institute of Technology\n30332-0430AtlantaGA\n", "M Y Chou \nSchool of Physics\nGeorgia Institute of Technology\n30332-0430AtlantaGA\n", "Randolph Q Hood \nLawrence Livermore National Laboratory\n94551LivermoreCA\n" ]
[ "School of Physics\nGeorgia Institute of Technology\n30332-0430AtlantaGA", "School of Physics\nGeorgia Institute of Technology\n30332-0430AtlantaGA", "Lawrence Livermore National Laboratory\n94551LivermoreCA" ]
[]
We present a detailed study of the exchange-correlation hole and exchange-correlation energy per particle in the Si crystal as calculated by the Variational Monte Carlo method and predicted by various density functional models. Nonlocal density averaging methods prove to be successful in correcting severe errors in the local density approximation (LDA) at low densities where the density changes dramatically over the correlation length of the LDA hole, but fail to provide systematic improvements at higher densities where the effects of density inhomogeneity are more subtle. Exchange and correlation considered separately show a sensitivity to the nonlocal semiconductor crystal environment, particularly within the Si bond, which is not predicted by the nonlocal approaches based on density averaging. The exchange hole is well described by a bonding orbital picture, while the correlation hole has a significant component due to the polarization of the nearby bonds, which partially screens out the anisotropy in the exchange hole.
10.1103/physrevb.64.115112
[ "https://arxiv.org/pdf/cond-mat/0101363v2.pdf" ]
119,349,796
cond-mat/0101363
7897aaaf3c26af6e98a3d42f3d73bd2bda2f3a7f
Comparative study of density functional theories of the exchange-correlation hole and energy in silicon 2001 A C Cancio School of Physics Georgia Institute of Technology 30332-0430AtlantaGA M Y Chou School of Physics Georgia Institute of Technology 30332-0430AtlantaGA Randolph Q Hood Lawrence Livermore National Laboratory 94551LivermoreCA Comparative study of density functional theories of the exchange-correlation hole and energy in silicon 2001(Submitted to Physical Review B)numbers: 7115Mb7145Gm0270Ss We present a detailed study of the exchange-correlation hole and exchange-correlation energy per particle in the Si crystal as calculated by the Variational Monte Carlo method and predicted by various density functional models. Nonlocal density averaging methods prove to be successful in correcting severe errors in the local density approximation (LDA) at low densities where the density changes dramatically over the correlation length of the LDA hole, but fail to provide systematic improvements at higher densities where the effects of density inhomogeneity are more subtle. Exchange and correlation considered separately show a sensitivity to the nonlocal semiconductor crystal environment, particularly within the Si bond, which is not predicted by the nonlocal approaches based on density averaging. The exchange hole is well described by a bonding orbital picture, while the correlation hole has a significant component due to the polarization of the nearby bonds, which partially screens out the anisotropy in the exchange hole. I. INTRODUCTION Density functional theory (DFT) is the leading theoretical tool for the study of the material properties of solids. It is based on the characterization of the groundstate energy of the inhomogeneous electron gas as a functional of the density, whose optimization condition can be expressed in terms of the self-consistent single-particle Kohn-Sham equation. 
1 The success of the method lies in the ease of use and surprising effectiveness of the standard approximation, the local density approximation (LDA). This models the key component of the functional, the exchange-correlation energy E xc [n], and the exchange-correlation potential used in the Kohn-Sham equation with the assumption that the inhomogeneous electron gas at any location in space behaves like the homogeneous electron gas at a density equal to the local density. As a result, the formidably complex and nonlocal relationship between the energy and the density due to interparticle correlations in the inhomogeneous environment can be converted into a local functional dependent on input from the comparatively simpler homogeneous electron gas. This approximation has been quite successful at obtaining useful and suprisingly accurate estimates for bulk ground-state properties of many solids and the basic ground-state structure of solids, surfaces and molecules. However, there remains a significant need for the development of more accurate density functionals in several areas. Applications in molecular and solid-state chemistry require the calculation of total energies to at least an order of magnitude greater precision than the LDA can provide. Excited state properties, in particular the band-gap of semiconductors, are not typically obtainable with quantitative accuracy. In addition, in materials in which electron correlations are important, the LDA often fails to predict qualitative ground-state properties. There have been numerous attempts in recent years to develop exchange-correlation functionals beyond the LDA, including generalized gradient approximations (GGA's) 3,4,5,6,7 based on gradient expansions about the LDA, model nonlocal potentials, 8,9 self-interaction corrections, 10 hybrids with Hartree-Fock 11 and configuration interaction 12 or other many-body theories, 13,14 and orbital-dependent functionals. 
15,16,17,18 These have led to significant improvements in accuracy of DFT calculations; but a functional of systematically quantitative accuracy for many quantum chemistry and solid-state applications has yet to be achieved. A key relation that has been particularly fruitful in the development of density functional theory is the adiabatic connection formula 19 which relates the exchangecorrelation energy to a coupling-constant integrated exchange-correlation hole. The exchange-correlation hole, n xc , measures the change in density from its mean value at each point in a system, given the observation of an electron at one particular point, providing a simple visualization of the effects of electron correlation in an inhomogeneous material. By integrating this quantity over a family of systems characterized by interaction coupling constant λ and kept at fixed density, the kinetic energy cost to create the hole at full coupling, λ = 1, is incorporated into it. As a result, the Coulomb interaction energy of the integrated hole reflects the total energy associated with creating the hole. The adiabatic connection formula, by making explicit the relation between the exchange-correlation energy and the exchange-correlation hole has been the impetus, both explicitly and implicitly, to many proposed corrections to the LDA. The average density approximation 8 (ADA) and weighted density approximation 8,9 (WDA) make explicit use of this relation to construct nonperturbative nonlocal density functionals. WDA in particular has attracted attention as a potential tool with applications for quantum chemistry, 20 metallic surfaces, 2 and bulk solid-state structure. 21,22,23,24 It is apparently limited by a lack of consistency in its behavior 21,22 and typically requires the use of various context-dependent models or extensions to achieve optimal results for different applications. 
20 The PW91 6 and PBE 7 forms of the GGA explicitly build in a number of exact and approximate properties of the system average of the exchange-correlation hole and energy using a gradient expansion of n xc 25 as a framework. Hybrid models, 11,12 which have been particularly successful in achieving quantitative improvements of binding and total energies in molecules, may also be motivated as approximations to the coupling constant integration of the adiabatic connection formula. 26 In developing many of these approaches, many-body calculations of the exchange-correlation hole and related quantities in real systems, particularly in atoms and simple molecules, have played a useful role, helping to confirm assumptions of approximate functionals or point out specific areas for improvement. 27,28,29 There have been few such studies for real solids, however. 30,31 A second fruitful decomposition of E xc has been to separate it into exchange and correlation components. The exchange energy, which contains the correlation due to Fermi statistics that occurs for the noninteracting or λ = 0 limit, may be explicitly written in terms of the occupied single-particle orbitals obtained in this limit, and thus it and its functional derivative with density can in principle be obtained exactly and self consistently. Applications along this line include the optimized effective potential (OEP) method, 15 an implementation of the exact exchange potential which has been limited in practice to free atoms and other simple geometries, and the method of Krieger, Li and Iafrate (KLI) 16 which provides an approximate evaluation of the exchange functional by means of a simplified single-particle Green's function. Recently, an exact-exchange formalism applicable to solids has been developed by Städele et al. 17,18 These methods are of particular interest because they are self-interaction free and are promising candidates for the quantitative treatment of excited states. 
18,32 On the other hand, these advances expose an important problem: many traditional methods such as LDA rely for their success on the close relation between exchange and correlation, in which the correlation hole tends to correct for much of the nonlocal behavior in the exchange hole. 19 As a result, the error for the sum of exchange and correlation energies is typically signif-icantly smaller than for either separately. 33 It is of interest, therefore, to study how the correlation hole alone behaves in real materials and how current functionals for correlation fare with respect to accurate many-body calculations. Or from the complementary point of view, it becomes necessary to consider to what extent modeling orbital effects in the correlation hole become important once they are included in exchange. 14 Variational Quantum Monte Carlo (VMC) calculations of the coupling-constant pair correlation function for the Si crystal have been reported previously, 30,31 including an analysis in depth of the exchange-correlation energy density obtained by the adiabatic connection theory. In this work we discuss in more detail the exchange-correlation hole, its decomposition into exchange and correlation and the relation between the angle-averaged hole and the energy density. We present analysis along two lines: first a detailed comparison of the VMC exchange-correlation hole and that of several DFT models as a function of position in the crystal, in order to search for systematic trends in how these functionals describe the exchangecorrelation hole of a prototypical semiconductor material. Secondly, we look at how the exchange and correlation holes separately behave in the bonding region of Si, and observe how orbital effects in the exchange generate related effects in the correlation hole. The paper is organized as follows: Sec. II provides theoretical background on the exchange-correlation hole, and the various models used to describe it. 
We briefly discuss the method used to obtain computational results in Sec. III. Results for spherically averaged holes are presented in Sec. IV, and a detailed analysis of the correlation and exchange holes in Sec. V. In Sec. VI we discuss issues raised by our data analysis concerning the improved treatment of nonlocal density information in DFT's, as well as orbital effects in the Si correlation and exchange holes that may be of relevance in improving upon exact exchange methods. Our conclusions are presented in Sec. VII. II. THE EXCHANGE-CORRELATION HOLE IN DENSITY FUNCTIONAL THEORY A. Basic theory We consider a family of systems parameterized by a coupling-constant λ H(λ) = i − ∇ 2 i 2 + V ext (r i ) + V λ (r i ) + i>j λ |r i − r j | (1) where V ext is the external potential and V λ a potential added to keep the ground-state density invariant. The units here and elsewhere in the paper are in Hartree atomic units unless otherwise indicated. Two limits of interest include the noninteracting system, λ = 0, in which H reduces to the standard Kohn-Sham equation, with the adjustable potential equal to the Kohn-Sham potential. The second is at λ = 1, where V λ vanishes and one recovers the original fully interacting many-body system. The ground-state wave function is a Slater determinant of single-particle orbitals in the first case and the true many-body ground state in the second. The exchange-correlation hole is defined for a given value of λ as the change in the ground-state expectation of the density at one point in the system r + u given the observation of an electron at some other point r: n xc (r, r + u; λ) = 1 n(r) i δ(r − r i ) j =i δ(r + u − r j ) λ − n(r + u).(2) Here λ indicates an average over the ground-state wave function of H(λ) and n(r) = i δ(r − r i ) λ is the λinvariant ground-state density. 
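One consequence of the definition in Eq. (2) is worth making concrete: integrating the conditional density over r + u counts the N − 1 remaining particles, so the hole integrates to −1 regardless of what correlations are present. The toy estimator below (independent uniform particles in one dimension, a purely illustrative stand-in for sampled many-body configurations) shows that this is pure bookkeeping:

```python
import random

def hole_integral(configs, nbins=20, ref_bin=0):
    """Histogram estimate of the integral over r' of n_xc(r, r') for r in a
    reference bin, following Eq. (2): conditional partner density minus the
    full density. Positions live on [0, 1); the integral reduces to counting."""
    width = 1.0 / nbins
    n_events = 0                # (configuration, particle-in-ref-bin) events
    partner_hist = [0] * nbins  # partners of those particles, binned in r'
    N = len(configs[0])
    for conf in configs:
        for i, x in enumerate(conf):
            if int(x / width) == ref_bin:
                n_events += 1
                for j, y in enumerate(conf):
                    if j != i:
                        partner_hist[int(y / width)] += 1
    # integral of (pair density / n(r)) minus integral of n(r'):
    return sum(partner_hist) / n_events - N

random.seed(1)
N, M = 10, 2000
configs = [[random.random() for _ in range(N)] for _ in range(M)]
```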
A related quantity, the pair correlation function g(r, r + u), is a measure of the pair density relative to that expected for uncorrelated electrons with the same density distribution. The exchange-correlation hole is expressed in terms of g as n xc (r, r + u; λ) = n(r + u) [ g(r, r + u, λ) − 1 ]. (3) The importance of the exchange-correlation hole to density functional theory lies in its connection 19,26 to the exchange-correlation energy E xc : E xc [n] = 1 2 d 3 r n(r) d 3 r ′ 1 0 dλ n xc (r, r ′ , λ) |r − r ′ | .(4) E xc includes all the contributions to the energy due to correlations, that is, beyond the noninteracting and Hartree energy. The exchange-correlation energy E xc [n] is analyzed in density functional theory in terms of the exchange-correlation energy density e xc (r) and the exchange-correlation energy per particle ǫ xc (r) E xc = d 3 r n(r)ǫ xc (r) = d 3 r e xc (r).(5) Relating this expression to that for E xc in terms of the exchange-correlation hole n xc one has ǫ xc (r) = 1 2 d 3 u n xc (r, r + u) u ,(6) where n xc (r, r + u) is the coupling-constant integrated hole, 1 0 dλ n xc (r, r + u, λ). The exchange-correlation energy per particle thus has the natural interpretation as the interaction energy of the particle with its exchangecorrelation hole. The exchange-correlation hole and energy are frequently decomposed according to separate exchange and correlation components. The exchange hole n x is that of the noninteracting or λ = 0 ground state [Eq. (2)] and contains correlations due solely to Fermi statistics. The remainder, n c , of the contributions to the total n xc are those induced by introducing pair correlations into the ground-state wave function, representing density fluctuations caused by the Coulomb interaction. 
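The interpretation of ǫ_xc(r) in Eq. (6) as the interaction energy of an electron with its hole can be exercised on a toy angle-averaged hole. For a uniform model hole n̄_xc(u) = −3/(4πu₀³) for u ≤ u₀ (a hypothetical shape chosen only because it is normalized to −1 and gives the closed form ǫ_xc = −3/(4u₀)), the quadratures are easy to check:

```python
import math

def radial_integral(f, umax, steps=200000):
    """Midpoint-rule quadrature of the integral of f(u) over [0, umax]."""
    du = umax / steps
    return sum(f((i + 0.5) * du) for i in range(steps)) * du

u0 = 2.0  # toy hole radius in bohr (arbitrary)
nbar = lambda u: -3.0 / (4.0 * math.pi * u0**3) if u <= u0 else 0.0

# Particle content of the hole: 4*pi * integral of u^2 * nbar(u)
norm = radial_integral(lambda u: 4.0 * math.pi * u * u * nbar(u), 3.0)

# Eq. (6): eps_xc = (1/2) * integral of nbar(u)/u over d^3u
#                 = 2*pi * integral of u * nbar(u) du
eps = radial_integral(lambda u: 2.0 * math.pi * u * nbar(u), 3.0)
```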
The two components satisfy important sum rules which reflect the conservation of particle number: d 3 u n x (r, r + u) = −1,(7)d 3 u n c (r, r + u) = 0.(8) A key to the practical exploitation of the relation between n xc and E xc is that, since the Coulomb interaction depends only on interparticle distance, much of the information contained in n xc is not needed and may be ignored in the development of models. In particular, to obtain the energy per particle ǫ xc (r) at a given point r the angular variation in n xc may be integrated out to leave a function depending only on the distance u from r, n xc (r, u) = dΩ u 4π n xc (r, r + u),(9) with ǫ xc (r) in Eq. 6 given in terms of the integral over u of the weighted angle-averaged hole, 2πu n xc (r, u) . The DFT models we study in this paper can be viewed as models of this angle-averaged hole. B. Models for the exchange-correlation hole The LDA model for n xc may be obtained by replacing the true hole about r with that of the homogeneous electron gas at the local density, n(r): n LDA xc (r, u) = n(r){g heg [u, r s (r)] − 1}.(10) The factor r s = (3/4πn) 1/3 is the average distance between electrons in the homogeneous gas of density n. It provides a useful measure of the correlation length for the exchange-correlation hole, as the width of the exchange hole scales linearly with r s and that of the correlation hole shows similar behavior for the range of densities 1.4 < r s < 4.4 valid for the Si crystal valence. 34 The WDA 2 attempts to account for the inhomogeneity of the density in realistic systems by incorporating it in the density prefactor of n xc n W DA xc (r, u) = dΩ u 4π n(r + u){g model [u,r s (r)] − 1}.(11) This form of the exchange-correlation hole treats the hole at r + u as the modification of the density at that point rather than that at the center of the hole. 
This is a property of the exact hole that leads to a significant departure from the homogeneous case when the density varies considerably within the effective range of the hole. The major approximation made is the use of a scaling form for the pair correlation function where for each r an averaged length scale r s (r) is determined to select g from a family of suitably chosen pair correlation functions g(u, r s ). This length scale is fixed at each point fixed by the particle sum rule d 3 u n(r + u){g model [u,r s (r)] − 1} = −1.(12) A natural choice for g(u, r s ) in an extended system is that of the homogeneous electron gas; this form will be considered in this paper. The WDA scaling approximation has the ability to fit certain properties of the true n xc unattainable by the LDA; in particular it significantly improves on the description of the exchange-correlation energy per particle of atoms when a single electron is removed, obtaining the correct limiting value of 1/2R versus distance from the atom R. 8 The ADA 8 is another method to incorporate into a model of the exchange-correlation hole information about the nonlocal density variation in the vicinity of the hole. It preserves the homogeneous gas model for n xc but determines the r s factor at which it is evaluated from an average of the density in the neighborhood of the reference point: n ADA xc (r, u) =n(r){g heg [u,r s (r)] − 1}(13) withn (r) = d 3 r ′ n(r ′ ) w(|r−r ′ |,r s ). Herer s = (3/4πn) 1/3 and w is a normalized weighting factor. The weight w is chosen to obtain the correct E xc in the linear response limit and has an effective range set byr s . The averaging procedure can in this sense be understood as the selecting of a physically reasonable correlation length for the hole from an average of the density over the physical range of the hole, particularly of value in cases where the density varies rapidly within this range. 
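The sum rule (12) determines r̄_s(r) implicitly at every point. Below is a minimal sketch of that normalization step for a homogeneous density, using the toy pair correlation g(u, r_s) = 1 − e^(−u/r_s) — an illustrative stand-in for the electron-gas g, chosen because the hole charge then has the closed form −8πn r_s³, so the bisection result can be checked against r̄_s = (8πn)^(−1/3):

```python
import math

def hole_charge(n, rs, steps=20000):
    """Integral over d^3u of n * (g(u, rs) - 1) for constant density n and the
    toy model g(u, rs) = 1 - exp(-u/rs); analytically equal to -8*pi*n*rs**3."""
    umax = 40.0 * rs               # integrand is negligible beyond a few rs
    du = umax / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        total -= 4.0 * math.pi * u * u * n * math.exp(-u / rs) * du
    return total

def solve_wda_rs(n, lo=1e-3, hi=50.0, tol=1e-8):
    """Bisect on rs until the hole holds exactly one electron (Eq. 12).
    hole_charge decreases monotonically in rs, so bisection is safe."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hole_charge(n, mid) + 1.0 > 0.0:   # hole still holds less than one electron
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```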
Unlike the WDA, the ADA does not try to provide a correction to the shape of n_xc. Note that with the use of a radially isotropic g, only a spherically averaged density enters into the definition of, and sum rule for, the WDA exchange-correlation hole:

n_xc^WDA(r, u) = ⟨n(r, u)⟩{g_heg[u, r̄_s(r)] − 1},  (15)

where

⟨n(r, u)⟩ = ∫ (dΩ_u/4π) n(r + u).  (16)

A similar result holds for the ADA, as the density averaging function w is a function only of interparticle distance. This averaging reduces the information about the nonlocal density actually used by these methods from the full complexity of the density variation about each reference point to a simpler and often slowly varying radial function ⟨n(r, u)⟩. Knowledge of this function at each point in space, along with the model used for g in the WDA and w in the ADA, specifies E_xc within each approximation. Another line of attack in improving the exchange-correlation hole and energy, the GGA, has been to design corrections to the LDA hole using information about system inhomogeneity contained in the local density gradient. One implementation of this approach 7 has led to the construction of a GGA model for n_xc. 25 In this and other GGA models the measure of inhomogeneity is the absolute value of the gradient, |∇n(r)|/n(r). Second-order effects proportional to the Laplacian of the density, ∇²n(r)/n(r), are mapped to a gradient form by an integration by parts. This transformation has no effect on system-averaged quantities, namely E_xc and the average over r of n_xc. However, the local quantities n_xc(r, r + u) and ǫ_xc(r) in the GGA are as a result no longer defined in terms of the adiabatic formulae of Eqs. (2) and (6) and are not directly comparable with our VMC data. 35

III. SYSTEM AND COMPUTATIONAL DETAILS

A description of the computational method used in obtaining numerical many-body expectations of the pair correlation function and related quantities in Si has been presented in a previous paper.
31 We present here a short summary of the details relevant to our discussion. For V_ext in Eq. (1) we use a norm-conserving nonlocal LDA pseudopotential which replaces the electrons in the atom core. The system and expectations involve valence electrons and their correlations. A simulation cell consisting of a 3 × 3 × 3 lattice of fcc primitive cells of the diamond lattice, containing 216 valence electrons, is used in obtaining expectations. Finite size effects in the electron-electron interaction were taken into account by replacing the Coulomb interaction in Eq. (1) with a truncated potential into which the long-range Hartree interaction is incorporated. This form has proved to be successful in reducing finite size errors in the Coulomb energy as compared to the conventional Ewald interaction. 36 To describe the ground state for a given value of the coupling constant, a Slater-Jastrow type wave function was used, consisting of a product of Slater determinants multiplied by a trial many-body correlation factor:

ψ_λ = exp[ −Σ_{i<j} u_λ(|r_i − r_j|) + Σ_i χ_λ(r_i) ] D↑ D↓.  (17)

The orbitals used in the Slater determinants for up and down spins D_σ are determined from an LDA calculation and include all the valence-band Bloch orbitals of the primitive cell periodic on the supercell. The Jastrow factor introduces variationally explicit electron-electron correlations, with one-body corrections to optimize the density. It is described in detail in the paper by Hood et al. 31 A variational ground-state wave function optimized over 22 variational parameters was obtained for five different values of the coupling constant, and recovered 88% of the correlation energy as compared to nearly exact diffusion Monte Carlo calculations for the same system. The pair correlation function g(r, r′) [Eq. (3)] was obtained from the ground-state wave function for each λ in order to obtain coupling-constant integrated exchange-correlation holes and energies.
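The structure of Eq. (17), a symmetric Jastrow factor multiplying Slater determinants, can be illustrated with a deliberately tiny analogue: two same-spin electrons in one dimension, with assumed Gaussian orbitals and an assumed short-ranged Jastrow term (neither taken from the paper). The determinant enforces antisymmetry under exchange of the same-spin electrons, which the snippet verifies.

```python
import numpy as np

# Minimal two-electron, one-dimensional sketch of a Slater-Jastrow
# wave function of the form of Eq. (17). Orbitals and u(r) are
# illustrative choices only.

def orbitals(x):
    """Two single-particle orbitals: a Gaussian and x times a Gaussian."""
    return np.array([np.exp(-x**2), x * np.exp(-x**2)])

def u_jastrow(r):
    """A short-ranged two-body pseudopotential (assumed form)."""
    return 0.5 * r / (1.0 + r)

def psi(x1, x2):
    jastrow = np.exp(-u_jastrow(abs(x1 - x2)))        # symmetric factor
    det = np.linalg.det(np.column_stack([orbitals(x1), orbitals(x2)]))
    return jastrow * det                              # antisymmetric product

a, b = 0.4, -1.1
print(psi(a, b), -psi(b, a))   # antisymmetry: psi(x1, x2) = -psi(x2, x1)
```

Swapping the two coordinates flips the sign of the determinant while leaving the Jastrow factor unchanged, so the full trial state stays antisymmetric however the correlation factor is optimized.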
The pair correlation function was expanded in symmetrized plane-waves, taking full advantage of the symmetries of the pair correlation function in the crystal to reduce the basis set to a reasonable number. Expectations for the coefficients of the symmetrized plane-waves were evaluated statistically using the Monte Carlo method. The resulting data are limited by plane-wave cutoff and statistical errors, which can be checked by comparing the numerically calculated and exact values of the exchange hole. The resulting variation in g_x was between 1% and 6%. The error due to basis set cutoff is most noticeable near the atom core where, despite the pseudopotential treatment, the rapid variation of the density and the valence orbitals leads to large wave-numbers in the exchange hole.

IV. ANGLE-AVERAGED EXCHANGE-CORRELATION HOLE

Figure 1 shows, as a function of interelectron distance u, the angle-averaged correlation, exchange and exchange-correlation holes, weighted by 2πu, for the VMC calculation of Hood et al. 30,31 and several DFT models at several representative points in the Si crystal. The pair correlation function of the homogeneous electron gas used to generate the DFT holes is obtained from an accurate model form. 37 The distance weighting factor 2πu is chosen so that the integral of the plotted function yields the exchange-correlation energy per particle at the reference point. These values are listed separately in Table I. In order to give an idea of the environment surrounding each reference point and to compare with the WDA and ADA forms for n_xc, the angle-averaged density ⟨n(r, u)⟩, the average density at a distance u from the reference point r, is shown at the top of each plot. The density is to be compared to the average over the unit cell of 0.0296 a.u. The angle-averaged density for each of the four reference points shown indicates four different environments typical of the pseudopotential model Si crystal.
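The angle-averaged density ⟨n(r, u)⟩ of Eq. (16) is just a spherical average and can be estimated by sampling directions on the sphere. The sketch below does this by Monte Carlo for an assumed density that is linear in position; for any linear density the gradient averages out on the sphere, so the angle average must return the value at the center, a convenient correctness check.

```python
import numpy as np

# Monte Carlo estimate of the angle-averaged density <n(r, u)>, Eq. (16).
# The linear model density n(r) = a + b*z is an illustrative assumption.

rng = np.random.default_rng(0)

def density(r):
    a, b = 1.0, 0.5
    return a + b * r[..., 2]

def angle_averaged_density(r0, u, nsamples=40000):
    """Average the density over a sphere of radius u centered at r0."""
    v = rng.normal(size=(nsamples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform random directions
    return density(r0 + u * v).mean()

r0 = np.array([0.3, -0.2, 0.1])
avg = angle_averaged_density(r0, u=1.0)

print(avg, density(r0))   # the two agree to within sampling error
```

For a realistic crystal density the same average would instead vary with u, and it is exactly this radial profile that is plotted at the top of each panel in Fig. 1.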
In the first, (a), the reference electron is at a density considerably higher than the unit-cell average, and the density decreases significantly over the length scale of the VMC exchange-correlation hole. The antibond position (b) is typical of intermediate density points, where the local density is close to the unit-cell average and the angle-averaged density in the vicinity of the reference point shows only modest variations over long distances. The atom center (c) is typical of the "pathological" situation inside the pseudopotential core. Here the density increases by an order of magnitude, from roughly 10% to 200% of the unit-cell average, over roughly one-third of the Si bond length of 4.44 a.u., before slowly decaying back to the average. Such an environment of dramatic density change over the length scale of the hole indicates the likely failure of the LDA in this region, which has in fact been observed in the comparison of LDA and VMC energy densities. 30 Finally, the interstitial point (d) shows the generic features at low density outside both valence and core. Here the angle-averaged density also changes considerably from the local density, but at a much more gradual rate. In the high-density case of the bond center, Fig. 1(a), the exchange-correlation hole closely follows that determined by the local density, with r_s = 1.397 a.u. The average density within the effective range of the hole is lower than the local density, leading to a slightly larger r̄_s for the ADA and an overall better fit of the hole. The WDA hole [Eq. (15)] shows the effect of two competing factors. The value of r̄_s used in the scaled pair correlation function g_heg is significantly larger than the local r_s. The resulting pair correlation function is wider and deeper than that of the LDA. However, the density prefactor ⟨n(r, u)⟩ in Eq. (15) drops rather quickly and "quenches" the exchange-correlation hole at larger u.
Eventually, as the density settles to its asymptotic value of the unit cell average, the hole has a long-range tail again dominated by the shape of the pair correlation function. The net result is a considerable overcorrection of the LDA model with respect to the VMC data. At the antibond point, the hole is wider and less deep than at the bond center, consistent with the variation of the homogeneous electron gas hole with respect to density. Neither form of nonlocal averaging results in a significant change from the LDA form of n_xc, since the angle-averaged density is almost constant in the vicinity of the reference electron. The exchange-correlation energy per particle, determined from the integral of the curves in Fig. 1, has roughly the same modest deviation from the VMC data in each case, as shown in Table I. This leads to a significant difference in the energy density because the density is still large at this point. The holes at the bond center and antibond point are fairly representative of those at other points of intermediate-to-high density in the Si crystal. The corrections introduced by the nonlocal DFT's are quite small because of the relatively small variation in the angle-averaged density within the effective range of the hole. These high density corrections to n_xc and ǫ_xc have a sensitive dependence on position in the crystal, which appears to lack an obvious relation to the positional trends in the VMC data. As a result, plots of the deviation of the ADA and WDA energy density from the VMC data show a similarly unsystematic behavior.

TABLE I. Exchange-correlation energy per particle ǫ_xc(r) in hartrees at various points in the Si crystal discussed in the text. Label VMC refers to raw data, VMC-I includes a correction to obtain the correct particle sum rule, VMC-II includes a correction for the plane-wave cutoff (see text) and corresponds to the integral over the angle-averaged holes shown in Fig. 1.
For the two low-density cases, however, the nonlocal density averaging techniques have a considerable impact on the shape and quality of the exchange-correlation hole. In these cases the shape of the hole is dominated by the rapid increase of the density with u. At the atom center, Fig. 1(c), the rapid change in density leads to a far more compact exchange-correlation hole than that predicted by the LDA at the low local density (r_s = 4.12 a.u.). The minimum in the weighted hole, where it contributes the most to the exchange-correlation energy, is 1.6 a.u. from the reference point, less than half that of the LDA hole. Roughly 90% of the total contribution to ǫ_xc(r) comes from within a radius of 3.0 a.u., significantly less than a bond length (4.44 a.u.). In this case, both the ADA and WDA do significantly better at determining the overall depth and length scale of the hole over the energetically important region of 0.5 to 3.5 a.u. In addition, the WDA clearly matches the shape of the VMC hole quite closely. As a result, the ADA and WDA obtain fairly accurate estimates of the energy of the exchange-correlation hole, as shown in Table I, while the LDA obtains only half the VMC value. The interstitial point, Fig. 1(d), has low-density features qualitatively similar to the atom center but, consistent with a more gradual change in density, is considerably shallower and more spread out. The WDA has a particularly good overall agreement with the VMC hole on a point-by-point basis (although the ADA has a total energy slightly closer to the VMC value). However, possibly due to the more gradual change in density within the vicinity of the hole, the LDA does a much better job of matching the energy of the hole than near the atom center. The large differences in the shape of the hole largely cancel out in the integral, so that the energy of the hole is only 20% smaller than the VMC value.
The error in the LDA energy density, which is obtained by weighting the energy of the LDA exchange-correlation hole by the density, is thus fairly insignificant at this point; 30 in contrast, the LDA energy density in the atom core suffers from considerably larger deviations from the VMC. Figure 1 also shows results for angle-averaged correlation and exchange holes at each point. In general, the exchange hole is the dominant contribution to the total hole. The correlation hole provides a correction to the exchange hole, reducing electron density near the reference electron and enhancing it towards the outside of the exchange hole, thereby making the total hole slightly deeper and more compact. At high densities this correlation hole is weaker and shorter-ranged than the LDA hole, a result in agreement with the prediction of gradient based models. 25 This trend however does not carry over to low densities, particularly in the atomic core [Fig. 1(c)]. The correlation hole here, while clearly shorter-ranged than the LDA hole, is also significantly larger and makes a much larger relative contribution to ǫ_xc(r). The WDA and ADA both do moderately well in predicting the magnitude and length scale of the correlation hole in this case. In general the nonlocal density averaging methods fare less well at predicting correlation or exchange alone than the combination of the two. For the WDA in particular, satisfying the sum rule for exchange-correlation [Eq. (12)] does not ensure that the sum rules for the correlation hole and the exchange hole are satisfied separately, with the correlation hole influenced more by the short-range behavior of the angle-averaged density and the exchange hole more by its long-range behavior. This may contribute to the larger errors seen in n_c and n_x than in n_xc, observable in the cases with significant density variation.
Some sense of the general trends resulting from the different strategies for choosing the pair correlation function can be obtained from plotting the solution for the average interelectron distance r̄_s used by the nonlocal methods versus the r_s obtained from the local density. The result for sample points at various densities throughout the unit cell is shown in Fig. 2. In the case of the ADA, for r_s lower than the unit-cell average of 2.01 a.u., r̄_s hews closely to the local density value. It deviates from it at lower densities, either dramatically at the atom center (the one point below the average r_s curve), or more gradually as one heads towards the interstitial point (the other low-density points plotted). The WDA values in contrast are all grouped about the unit-cell average value, regardless of location and density. In other words, the variation in the WDA exchange-correlation hole is primarily derived from variation in the nonlocal angle-averaged density, and very little from the variation of ḡ. These results reflect the character of the criteria used to determine the average density in each case. The weighting function of the ADA is an oscillatory function in real space, with a peak contribution at roughly 0.5 r_s from the reference electron, so that a significant effect is observed only where the density variation about the electron is sufficiently rapid. In contrast, the weight implied by the WDA sum rule condition, u²[1 − g_heg(u, r_s)], is peaked near r_s and has significant contributions out to 2 r_s, a distribution broad enough to pick up the average density at almost every point in the crystal.

V. EXCHANGE AND CORRELATION HOLES

As discussed earlier, the trends in the comparison of the DFT and VMC models for ǫ_xc(r) are complex and difficult to characterize.
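The claim above about the WDA sum-rule weight, that u²[1 − g_heg(u, r_s)] peaks near r_s, is easy to verify numerically. The sketch below uses the exchange-only homogeneous-gas pair correlation function as a stand-in for the full g_heg (an assumption; the correlation part shifts the details but not the length scale).

```python
import numpy as np

# Locate the peak of the weight implied by the WDA sum rule,
# w(u) ~ u^2 [1 - g(u, r_s)], with the exchange-only HEG g.

def j1(x):
    return np.sin(x) / x**2 - np.cos(x) / x

rs = 2.0
kf = (9.0 * np.pi / 4.0) ** (1.0 / 3.0) / rs        # kF at this r_s
u = np.linspace(1e-3, 5.0 * rs, 20000)
w = u**2 * 4.5 * (j1(kf * u) / (kf * u)) ** 2       # 1 - g_x = (9/2)[j1/x]^2

u_peak = u[np.argmax(w)]
print(u_peak / rs)   # close to 1: the weight peaks near r_s
```

Because this weight samples the density out to roughly 2 r_s, the r̄_s selected by the WDA ends up reflecting the broad neighborhood average, consistent with the clustering of WDA values about the unit-cell mean seen in Fig. 2.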
Some insight into what is going on can be obtained by decomposing the exchange-correlation hole into exchange and correlation components, and particularly by considering the response not only as a function of distance from the reference electron but including the complete information, n_xc(r, r + u), of the second electron's position in the crystal. The exchange hole is of use particularly in that it can be derived exactly and thus provide an unambiguous test of density functionals. Further, it is often the dominant part of the total exchange-correlation hole. It may be formally obtained as the exchange-correlation hole associated with the Slater determinant ground-state wave function of the noninteracting (λ = 0) limit of Eq. (1). By applying this wave function in Eq. (2) one has

n_x(r, r + u) = −(1/n(r)) Σ_σ |Σ_α^{N_σ} ψ_α(r) ψ*_α(r + u)|²,  (18)

where σ is a spin index and N_σ the number of electrons with spin σ. In practice n_x is obtained from LDA valence orbitals periodic on the 3 × 3 × 3 simulation cell used in our VMC calculation. These produce a density that is indistinguishable from the VMC density within the statistical limitations of the Monte Carlo sampling. At u = 0 the exchange hole reduces to −n(r)/2, which reflects the Pauli exclusion prohibiting the occupation of the same point in space by two particles of the same spin. In addition, the exchange hole is strictly negative, as may easily be seen in the case of Si where the orbitals may be taken as real. In Fig. 3 we show results for the exchange hole and the angle-averaged hole weighted by distance for three positions near the bond center of the Si crystal. In the bottom half of the figure is a contour plot of n_x(r, r + u): the exchange hole at r + u given a reference electron at the position r. The plots cut through a chain of silicon atoms in the (110) plane of the crystal; the reference electron's position is shown as a white dot.
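The two exact properties of Eq. (18) quoted above, the on-top value −n(r)/2 and the particle sum rule, can be made concrete with a toy system: doubly occupied plane waves in a one-dimensional periodic box (an assumption for illustration, not the LDA orbitals of the paper).

```python
import numpy as np

# One-dimensional illustration of the exchange hole of Eq. (18) built
# from doubly occupied plane-wave orbitals psi_k(x) = exp(ikx)/sqrt(L).

L = 10.0                       # box length (assumed)
n_orb = 4                      # orbitals per spin channel
ks = 2.0 * np.pi * np.arange(n_orb) / L
N = 2 * n_orb                  # total electron number (both spins)
n = N / L                      # uniform density

u = np.linspace(0.0, L, 2001)[:-1]                       # one period
dm = np.sum(np.exp(-1j * np.outer(ks, u)), axis=0) / L   # one-body density matrix
n_x = -(1.0 / n) * 2.0 * np.abs(dm) ** 2                 # Eq. (18), spin-summed

du = u[1] - u[0]
print(n_x[0], -n / 2)            # on-top value equals -n/2
print(np.sum(n_x) * du)          # particle sum rule: integrates to -1
```

For real crystal orbitals the same construction is carried out on the supercell grid, with the k-sum replaced by the sum over occupied Bloch states; the on-top and sum-rule checks carry over unchanged.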
The shading represents a change in value ranging from just less than zero for the white region to just above the absolute minimum of minus one-half the peak density of 0.087 a.u. in the black regions. The thick solid curve in the upper plots shows the result of spherically averaging the exchange hole to obtain a function of distance from the reference point. The same weighting factor of 2πu as that of Fig. 1 is used. This result is compared to the various DFT models discussed previously. In the case of the reference electron at the bond center, the exchange hole is focused in the bonding region immediately surrounding the reference electron, with the bulk of the hole contained in the region between the atoms on either side of the bond, and a negligible weight on the nearest neighbor bonds. The electron density has been reduced by roughly 45% within the bond, i.e., in a diamond-shaped region extending 1.7 a.u. along the bond axis in either direction and 1.2 a.u. perpendicular to it. This accounts for 90% of the density of electrons with the same spin as the reference one. As one moves the reference electron either along the bond axis (c) or perpendicular (a) to it, the hole tends to remain focused on the nearby bond center, despite a small shift in weight towards the reference point. As before, more than 90% of the same-spin density is removed in the bond. This stiffness, or insensitivity to the position of the reference electron, is gradually lost as the reference electron is moved, say, to the atom center or towards the antibond point equidistant from three bonds. In the latter case, the exchange hole is again centered on the reference point and resembles an sp³-hybrid atomic orbital. The effect of the stiffness of the exchange hole on its angle average can be seen in the difference between the LDA and exact cases (thin and thick solid lines). In the bond center, Fig. 3(b), the LDA result closely matches the exact value.
At position (a) off the bond axis, the bond center is oriented tangentially with respect to an angle average about the reference point, and the exchange hole has become deeper and narrower than that obtained by the LDA. In contrast, in case (c), where the bond center is oriented radially out from the reference electron and the electron is near to an atomic core, the hole has become shallower and wider than in the LDA. Although the two positions have similar densities and density gradients, the resulting deviations from the LDA are qualitatively different and point to a genuinely nonlocal behavior in the exchange hole. Nonetheless, neither the WDA nor the ADA is particularly sensitive to the position of the bond center with respect to the reference electron, and neither provides a systematic improvement (or disimprovement) over the LDA fit. The correlation holes corresponding to each case in Fig. 3 are shown in Fig. 4. Note the order of magnitude smaller range of density changes involved. The correlation hole n_c takes on both negative values, in regions where electrons are repulsed by the Coulomb potential of the reference electron, and positive values, representing regions where electron density has increased. The net electron density change is zero. At the bond center, Fig. 4(b), the correlation hole contour plot can be roughly separated into two regimes: a deep and narrow well in the vicinity of the reference electron and a shallow longer-ranged response that may be described as a polarization of the nearest neighbor bonds. In the region of the pseudopotential core the correlation response is suppressed. The resulting angle-averaged hole has a complex and unusual structure as a function of distance. In comparison, the LDA and other DFT holes show a relative lack of structure and greatly overestimate the magnitude of the correlation energy per particle at the bond center.
As the position of the reference electron is moved either off the bond axis (a) or along the axis (c), the correlation hole demonstrates a marked sensitivity to its position relative to the bond center. In either case the minimum is slightly offset in the direction of steepest increase in density. This is consistent with g_c being nearly isotropic near the electron, as expected from the cusp condition. Outside this short-range region the hole undergoes strong distortions in shape from that of the hole about the bond center. These longer wavelength features, as in the bond-center case, are well characterized in terms of the polarization of the nearby bonds. Bond polarization is particularly noticeable for the electron fixed on the bond axis (c), where there is a large shift of the electron density from the atom nearest the electron to the other side of the bond, with a peak on the bond axis opposite the reference electron. Nearest neighbor bonds are polarized in a similar fashion. In (a), the nearby bonds are polarized with density shifted from the nearer side of the bond with respect to the electron position to the farther, with the node of the polarization approximately along the bond axis. Consideration of the corresponding pair correlation function shows that these features in the correlation hole constitute a significant departure from an isotropic form of pair correlation function such as is assumed by the DFT models discussed in this paper. Despite the reduction of information engendered by averaging over angle, the dramatic changes in the shape of n_c with position show up in the angle-averaged hole, giving rise to nonlocal corrections to the LDA hole. The LDA model depicts a trend in n_c to a broader and shallower hole as the density decreases from case (b) to (c) to (a) in the upper half of Fig. 4. The VMC hole in case (a) retains the character of the bond-center hole, being narrower and weaker than that predicted by the LDA.
The overall disagreement with the LDA is less pronounced, and some of the detailed structure is lost. As one moves along the bond axis to case (c), however, a significantly different trend occurs: the hole in the region of density reduction becomes deeper and broader relative to the LDA, and in particular now matches the width of the LDA hole. In addition to the stronger response in this region, a large peak appears at the hole edge, in keeping with the zero sum rule of the correlation hole. This peak, at about 3.5 a.u., correlates with the position of electron density peaks on the nearest neighbor bonds in the contour plot. As with the case of exchange, these trends in n_c correlate poorly with the changes in density or density gradient, and show a sensitivity to the larger nonlocal crystal environment. Though the exchange and correlation holes both show significant and nonlocal deviations from the isotropic form typical of the homogeneous electron gas, these deviations are highly correlated with each other and thus tend to cancel in their sum. As the exchange hole tends to stay on the bond center, the bond polarization in the correlation hole is oriented so as to shift the center of the exchange-correlation hole towards the reference electron. A simple picture of this feature in the correlation hole is that it represents a response to the anisotropic distribution of the exchange hole, by which some of the deviations of the exchange hole from an isotropic form are screened out in the correlation hole. This behavior may help to explain the relative success of the LDA in describing the exchange-correlation hole. Neither the LDA nor the nonlocal DFT's model with much fidelity the variation in the VMC correlation hole, aside from the general broadening of the hole with lower density. In particular the corrections of the WDA lead to a worse fit of both the correlation hole and its energy.
The differences in exchange and correlation between the WDA and VMC do tend to balance each other, but not consistently: for example they cancel nicely in (a) and (b) but add in case (c). The net effect of these trends on the exchange-correlation energy is shown in Table II, where the total E_xc, E_x and E_c for several methods are compared. In addition to the methods discussed in detail in this paper, we present results from the PW91 version of the GGA 6 and those of a diffusion Monte Carlo (DMC) calculation. 38,39 The DMC method removes nearly all of the variational bias in the VMC correlation energy at a considerably larger computational cost. With the exception of the GGA, none of the density functional methods obtains a good estimate for E_x or E_c as compared to DMC, with errors of roughly 1 to 2 eV per atom. Estimates for E_xc are much closer, especially for the WDA.

VI. DISCUSSION

A. Limits of density averaging

The clear success of density averaging occurs at low densities, where the ADA and WDA obtain excellent exchange-correlation energy densities. The WDA in particular predicts the shape of the exchange-correlation hole with exceptional fidelity, even in the extreme situation inside the atomic core. At high density, the small level of variation in the nonlocal angle-averaged density [Eq. (16)] limits the effects of density averaging to subtle alterations of the hole which do not provide a systematic improvement with respect to the LDA. More importantly, the discrepancies between the density-averaged holes and the VMC hole (much less the exact hole) are difficult to correlate with any known quantity. However, density averaging must lead to a global and systematic improvement over the unit cell, or else the quality of the total result may be disappointing. This is particularly the case for the ADA, which returns values very close to the VMC values at low density and slightly underestimates the magnitude of E_xc at high density.
Unfortunately, once the variational bias of the VMC energy is removed by a DMC calculation, as shown in Table II, this result proves to be a significant disimprovement with respect to the LDA. Moreover, although the different prescriptions for the WDA and LDA holes lead to extreme differences in the degree of nonlocality they incorporate, both end up with quite similar, reasonable predictions for the total E_xc. Apparently, the large error at low densities of replacing n(r + u) with the local density n(r) in the definition of the LDA exchange-correlation hole is offset by correspondingly large changes in g, which are suppressed by density averaging in the WDA. It is interesting to compare the density averaging approach to the gradient expansion used as the basis for the GGA. A notable result of the PW91 version of the GGA is that it not only provides a significant improvement of the LDA exchange-correlation energy, but of the exchange and correlation energies as well (Table II). Thus, it alone among the functionals we have studied shows promise to be a reasonable candidate as a correlation-only density functional for this material. In comparison, the WDA, although it returns a very good E_xc, has the worst estimate for E_c. However, the local density and its Laplacian, the formal equivalent of the information about the surroundings of an electron used in obtaining a GGA hole, constitute but a small part of what is contained in the angle-averaged density ⟨n(r, u)⟩. The limitation of the WDA is the restriction to a pair correlation function form with only one variable parameter, one to which the hole in our case proves largely insensitive. It would be interesting if more of the information contained in ⟨n(r, u)⟩ could be used as input into a more flexible model for g, particularly for correlation or exchange separately. Possibly, the rate at which the density changes over the length scale of the hole, a factor which, as illustrated in Fig.
1, is clearly important in determining the overall shape of the WDA hole, could be used to influence the form of g in analogy to the GGA. Furthermore, ⟨n(r, u)⟩ contains truly nonlocal information not accessible to GGA's. A salient feature of the system in our study is the long-range limit of the angle-averaged density, which tends to a finite constant for a crystal and to zero for an atom or molecule. It is thus possible that low density points in these two systems have similar local or semilocal environments (and thus the same GGA holes) but significantly different exchange-correlation holes because of different long-range boundary conditions. Because of the nonlocal character of the WDA, these boundary conditions have a strong influence on the WDA hole, causing the length scale r̄_s to be nearly constant for the crystal but, for the atom, to range from a finite value near the valence density maximum to infinity as the reference electron is removed from the atom. The WDA produces a reasonable fit to n_xc at low density for both the Si atom and the Si crystal, despite the markedly different r̄_s factors necessary to satisfy the particle sum rule. 40 It would seem, then, that the WDA should be capable of matching or surpassing the quality of the GGA, perhaps given a more accurate and flexible model pair correlation function form.

B. The exchange-correlation hole and energy density in the GGA

As discussed in Sec. II, the GGA models gradient expansion corrections to the LDA in terms of the gradient of the density alone, mapping corrections proportional to its Laplacian to a gradient squared term by an integration by parts in Eq. (5). This transformation has significant implications for the GGA models of n_xc. For example, each position in Fig. 1 is a critical point in the density, and as such characterizes the nature of n_xc in its vicinity.
At these points, the PBE model for n xc is indistinguishable from that of the LDA, a particularly serious error in the atom core region 1(c), where effects of inhomogeneity are most apparent. At the same time, corrections to n xc from the atom core and other regions where the Laplacian is large are mapped to the n xc of regions where the gradient is large, producing misleading corrections in these regions as well. The GGA is rather designed to provide gradient corrections to the LDA on a system-averaged basis. It does in fact capture general trends in exchange and correlation as discussed briefly in the next section, but clearly not on a local or point-by-point basis. This approach is frequently justified by the observation that the energy density is not in itself a necessary component of an accurate DFT, rather only the total E xc and its functional derivative with respect to the density, the exchange-correlation potential. 2 Nonetheless, it is worth mentioning that the deviation of the LDA energy density from the VMC energy density closely follows the Laplacian of the density. For example, the minimum of the Laplacian, an hourglass-shaped region near the bond center, closely correlates with the region where we have found the largest positive deviation of the LDA energy density from the VMC value. 30 Likewise a large negative deviation in the LDA energy density occurs in the atom center where the Laplacian has a maximum, and a smaller negative deviation in the interstitial region where the Laplacian has a weak local maximum. This latter feature is clearly visible in the n xc and ǫ xc (r) data of Sec. IV. We expect that a GGA model that fits both gradient and Laplacian terms could prove very useful in improving the energy density and thus also the exchangecorrelation potential. C. 
C. Orbital effects in exchange and correlation

Insight into the comparison of our VMC results and those obtained from the various models derived from the homogeneous electron gas may be obtained from a consideration of the semiconductor environment. First of all, a distinguishing feature of systems such as molecules or semiconductors which have a finite energy gap is that the ground state may be described in terms of an exponentially decaying localized basis. In a periodic system these are the Wannier functions, which in Si can be defined by a unitary transformation from the four valence bands to four orbitals per unit cell localized about the bond centers r_I: 41,42

W_I(r − r_I) = Σ_nk U_nk,I ψ_nk(r).

The symmetry of the crystal requires that each orbital W_I must be related to the others by a space-group operation. Such orbitals are well localized on a given bond and describe well the bonding character of the valence electrons. Using Wannier functions obtained from our LDA orbitals and the method of Marzari and Vanderbilt, 43 we find that 97% of the on-top value n_x(r_I, r_I) of the exchange hole at the bond center is determined solely from the Wannier function localized on that bond site. Thus, the lack of sensitivity of the exchange hole with respect to reference position seen in Fig. 3 near the bond center is likely a reflection of the domination of the exchange hole in this region by a single Wannier function. This situation is reminiscent of that of the H_2 molecule, in which the exchange hole is constructed from a single orbital and is totally insensitive to electron position. An exponentially decaying exchange hole typical of an insulator or finite system has the effect of lowering the exchange energy with respect to that of the LDA, owing to the more localized form of the hole as compared to that of the homogeneous electron gas; imposing a finite-ranged hole has been an ingredient in constructing successful GGA methods.
25 In our case the actual E_x is 1.5 eV lower than the LDA result, in good agreement with the PW91 GGA result; likewise, the exchange energy per particle ǫ_x(r) is lower than the LDA prediction through much, but not all, of the unit cell, as shown in Fig. 5. The semiconductor environment also affects the correlation hole in several ways. First, there is a finite energy cost to correlate electrons as compared to the homogeneous electron gas, and therefore there should be a smaller correlation response and correlation energy than predicted by the LDA. This is seen in the VMC correlation energy density and energy per particle at almost all points in the unit cell, as shown in Fig. 5. Again this trend is consistent with the assumptions made in the PW91 version of the GGA but does not correlate with those made in the density-averaging methods. Secondly, as the electronic structure of the covalent bond plays an important role in exchange (in the form of a bond-orbital-like exchange hole near the bond center), it should do so as well in correlation. The signature of the existence of the covalent bond in the correlation hole is bond polarization, a clear feature of our VMC data. In the simplest bond-orbital picture, 44 this arises from the introduction into the noninteracting ground state of excited states in which a pair of electrons is excited from bonding to antibonding orbitals on the same bond site. 45 These states contribute a "left-right" correlation similar to that of H_2 and other diatomic molecules, in which the probability density of the second electron is shifted along the bond axis to the opposite side of the bond from that occupied by the first electron. A similar excitation of nearest-neighbor electrons causes van der Waals-like correlation between bonds. 45 These effects are evident in the "bond-right" case (c) of Fig. 4, where the antibond orbital has a peak.
At the same time, the correlation hole associated with this left-right polarization is negligible when the reference electron is placed on the nodal plane of the antibond which passes through the bond center normal to the bond axis: that is, the situation of cases (a) and (b) of Fig. 4. Consequently, the weakness of n_c relative to the homogeneous-electron-gas-based DFT models in these two cases and the enhancement of the correlation hole on the bond axis have a plausible explanation in the differing contribution of the antibond correlation to the holes in each case. 46 Moreover, bond polarization contributes noticeably to the functional variation of ǫ_c(r) in this bonding region. We find that the ǫ_c(r) obtained from VMC data has a maximum (that is, a minimum in the energy reduction obtained by electron correlations) on a narrow ridge centered on the antibond node passing through the center of the bond with normal parallel to the bond axis. Although some of the variation in ǫ_c(r) in the bonding region can be accounted for within the LDA, the behavior is sufficiently dissimilar to the local density variation as to lead to the largest discrepancy in the unit cell between the VMC and LDA results, as shown in Fig. 5. The significance of the antibond correlation, and of bond polarization in general, is that it provides a partial explanation of the cancellation of errors in the LDA exchange and correlation holes seen in their sum. The exchange-correlation hole of the homogeneous electron gas is essentially "dynamic" in nature, that is, responding to the position of the reference electron and largely insensitive to the details of electronic structure. The "nondynamic" or orbital-dependent features in the exchange hole, such as its insensitivity to particle position near the bond center, are a large potential source of error for the LDA and other models based on the homogeneous electron gas.
The partial cancellation of these effects by a corresponding nondynamic feature of the correlation hole should then provide a combined hole much more amenable to the LDA and related density-gradient corrections. We have made preliminary estimates of the antibond contribution to the correlation hole n_c and the correlation energy per particle ǫ_c(r) in the bonding region of Si using perturbation theory and perturbative configuration-interaction methods 45 and localized orbitals derived from pseudopotential plane-wave DFT orbitals. We reproduce the qualitative differences in ǫ_c(r) between the points studied in this paper, as well as a reasonable reproduction of the bond-polarization features of Fig. 4. Details of this calculation are to be reported in a later paper.

D. Error analysis

As discussed in Section III, the exchange-correlation hole suffers from fairly large plane-wave cutoff errors in the atomic core, where the pseudopotential orbitals vary rapidly. In order to produce a reasonable correlation hole, particularly in the core, it was necessary to correct for this error by taking the n_x and n_xc expanded to the same cutoff. A best estimate for n_xc was then obtained by adding the finite-cutoff estimate of n_c to the exact n_x, that is, an estimate of n_xc obtained from correcting the exchange component of the Monte Carlo estimate:

n_xc = n_xc^approx + n_x − n_x^approx. (21)

The change in n_xc is noticeable particularly for the atom-center case of Fig. 1(c), where the plane-wave correction results in a deeper and more localized n_xc. The resulting ǫ_xc(r), shown in Table I, varies from the raw VMC data by roughly 8%, increasing the agreement with the WDA and ADA at the expense of the LDA value. However, given the poor agreement of the LDA and VMC values, the uncertainty in the VMC value does not significantly alter the assessment of a dramatic improvement by the ADA and WDA in this region.
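Eq. (21) amounts to a simple array operation. The sketch below applies it to synthetic model holes (all profiles are invented for illustration) and checks that the correction restores the exact exchange part while leaving the finite-cutoff correlation part untouched:

```python
import numpy as np

u = np.linspace(0.0, 5.0, 100)                   # electron-electron distance
n_x_exact = -np.exp(-u)                          # stand-in "exact" exchange hole
n_x_approx = n_x_exact + 0.05 * np.cos(6 * u) * np.exp(-u)  # cutoff-limited version
n_c_approx = -0.1 * u * np.exp(-u)               # finite-cutoff correlation hole

n_xc_approx = n_x_approx + n_c_approx            # raw finite-cutoff estimate
n_xc = n_xc_approx + n_x_exact - n_x_approx      # Eq. (21)

# the correction exactly swaps in the exact exchange part:
print(np.max(np.abs(n_xc - (n_x_exact + n_c_approx))))
```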
A rather smaller change was noticeable for the angle-averaged hole for the "bond-right" position discussed in Figs. 3 and 4. In this case, the difference is noticeable on the scale of the energy differences between the various theories, with the plane-wave correction favoring the ADA case. The correction was essentially negligible for the other points studied. A second source of error in the VMC data is due to the statistical nature of the Monte Carlo estimates of expectations. Statistical sampling leads to roughly equal errors in the expectations of each measured plane-wave component of the pair correlation function, and a homogeneous background noise (up to the resolution of the plane-wave expansion) in the real-space behavior of the function. With the assumption of homogeneous and uncorrelated background noise, the weighted angle-averaged holes shown should have an error that is roughly constant at large interparticle distances. We find this to be a reasonable description of the long-range fluctuations in this quantity, resulting in a very small error for all the cases we have studied. On the other hand, the energy and particle sum rules of the exchange-correlation hole, both integrals over a large volume, suffer more serious cumulative effects from background noise, particularly at large interparticle distances where n_xc is essentially zero. As a result, the VMC particle sum rule as estimated from the plots in Fig. 1 typically varies by 1 to 3% from the correct value. The values for the exchange-correlation energy per particle and energy density reported in our previous studies 30,31 were obtained by adjusting the exchange-correlation hole at each λ to obtain the correct particle sum rule, and numerical values at the various points considered in detail here are shown in Table I. This correction turns out to be noticeable at high density relative to the small differences between VMC and the various DFT models.
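The sum-rule adjustment described above can be sketched as a global rescaling of a noisy angle-averaged hole so that ∫ 4πu² n_xc(u) du = −1. The model hole below is synthetic, constructed so that its sum rule is off by a few percent, roughly as for the VMC estimates:

```python
import numpy as np

u = np.linspace(0.0, 8.0, 401)
du = u[1] - u[0]
# model hole normalized to -1 in the continuum limit, plus short-wavelength noise
n_xc = -np.exp(-u) / (8 * np.pi) * (1.0 + 0.02 * np.sin(40 * u))

s = np.sum(4 * np.pi * u ** 2 * n_xc) * du   # current particle sum-rule integral
n_xc_fixed = n_xc * (-1.0 / s)               # rescale so the integral equals -1

s_fixed = np.sum(4 * np.pi * u ** 2 * n_xc_fixed) * du
print(s, s_fixed)   # s is off by a few percent; s_fixed is -1 to rounding
```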
A final source of error is variational bias in the trial ground-state wavefunction resulting from its limited variational flexibility. The resulting discrepancy from the true ground state can be significant for correlation: the optimized VMC correlation energy is 14% higher than that estimated by the nearly exact DMC calculation. Exchange, on the other hand, is relatively unaffected by variational bias, as the VMC density and the associated single-particle orbitals are expected to be very accurate. Convergence studies 29,47 of a Slater-Jastrow trial wavefunction for the Si atom reveal trends that we also expect to be observed in the present study. The correlation hole averaged over the position of the reference electron should be quite close to that of the true ground state, with deeper minima and sharper maxima in proportion to the change in E_c. The hole as a function of position in the system will in addition show subtle alterations in shape, while preserving basic qualitative features, for example the bond polarization in Fig. 4. The correlation hole can be expected to be more accurate at higher densities, which carry greater weight in a variational optimization of the wavefunction; however, the Slater-Jastrow wavefunction guarantees important conditions on the hole at any density, such as the cusp condition and sum rules, so a reasonable estimate of the hole should be obtained even at quite low densities. Assuming that the generic trend at high density is to deepen the correlation energy of the hole by roughly 14%, we expect the overestimate of ǫ_c(r) by the DFT models, shown for example in Fig. 5, to be reduced by between 10 and 50% in the Si bond. Qualitative trends in this region will not be changed if the error in ǫ_c(r) does not vary dramatically with position.
In general, the various errors in the VMC data (plane-wave cutoffs, sum-rule enforcement, and variational bias) are most significant for the angle-averaged n_xc and the exchange-correlation energy per particle at high density. Even though the estimated corrections are rather small, the agreement with the various DFTs studied is quite close, and even small changes in the VMC data can be significant. Consequently, the errors in our data contribute to the general difficulty we find in assessing the high-density trends of the WDA and ADA. On the other hand, the differences between the DFT theories and the VMC are larger for the angle-averaged correlation hole. The qualitative trends in n_c near the bond center discussed in this paper are accordingly unaffected by error corrections, though there are small differences in the quantitative value of ǫ_c(r) at the "bond-right" position. Finally, the improvement of the ADA and WDA over the LDA at low density is so dramatic that the observed error corrections to the VMC are small in comparison.

VII. CONCLUSION

We have carried out a detailed analysis of the exchange-correlation hole and energy per particle in the Si crystal, comparing various DFT models to accurate numerical data calculated with the VMC method. We find that the WDA and ADA help overcome the major defects of the local density approximation at low densities and especially in the pseudopotential atom core, where the rapid change in density relative to the length scale of the LDA hole dramatically affects the shape and range of the hole, and thus improve the fit with the VMC energy density. The remaining discrepancies are due to the failure of density averaging to provide significant information at intermediate and high densities, where the inhomogeneity in the density is less severe, but the contribution to the energy density of subtle effects of the inhomogeneity is noticeable because of the higher density.
These discrepancies are, at least in principle, the result of the inflexibility of the scaling form of the pair correlation function used to fit the hole, which is insensitive to subtle variations in density. The detailed investigation of exchange and correlation at high density reveals the importance of orbital correlations in both cases. We find that the exchange hole is well described by a Wannier bonding state near the bond center, and that a "left-right" correlation or bond polarization plays an important role in the spherically averaged correlation hole and has a noticeable effect on the correlation energy density. This bond polarization has the effect of countering the coarse-scaled deviation of the exchange hole from an isotropic form centered on the electron. As a result, the exchange-correlation hole and energy density are much more reliably fit by the DFT models studied than either exchange or correlation alone. Viewed in another sense, these orbital effects lead to serious defects in the correlation hole and energy of models derived from the homogeneous electron gas. The WDA and ADA conspicuously fail to improve on the LDA both in the system-averaged energies E_x and E_c and in detail, particularly in the bonding region. Here, the nonlocal corrections to the LDA employed by these models fail to account for the physical features responsible for the most significant errors in the LDA model. Of all the methods considered in this paper, the GGA alone accurately predicts E_x and E_c for Si. Despite the lack of a direct comparison between our data and current GGA models for n_xc, 25 the assumptions made in constructing the GGA exchange and correlation holes appear to be generally borne out by our results, with a notable exception in the low-density, large-inhomogeneity limit for the correlation hole. Thus we expect that the accurate GGA results for E_x and E_c are due to an improved description of n_x and n_c on average, if not locally.
Interestingly, the point-by-point trends of our data seem to be best characterized by the Laplacian of the density, in contrast to the approach taken by current gradient-only GGA models. Moreover, exchange and correlation taken alone show nonlocal qualitative features at high density that may prove difficult to describe in terms of any local density expansion. It would be of great value, with respect to augmenting exact-exchange calculations, to have a compact expression of the bond-polarization features of the correlation hole in terms of localized atomic or bond-centered orbitals, possibly with a short-range correction in LDA or GGA. We have done preliminary calculations on modeling this bond polarization in terms of excitations of electrons into antibonding states. However, a conveniently usable and compact form for the correlation hole at long range re-

FIG. 1. Distance-weighted angle-averaged exchange-correlation holes at various points in the Si crystal: (a) bond center, (b) antibond position, (c) atom center, and (d) tetrahedral interstitial site in the (110) plane. Correlation, exchange, and exchange-correlation components for VMC data (thick solid lines) and several models are shown, with the exchange set off by -0.05 a.u. and exchange-correlation by -0.15 a.u. on the vertical axis for clarity. The solid line on the top half of each subplot shows the angle-averaged density ⟨n(r, u)⟩ as a function of distance from the reference electron. All quantities plotted are in atomic units.

FIG. 2. Correlation lengths of WDA and ADA exchange-correlation holes. The correlation length r̄_s for the equivalent homogeneous-electron-gas hole obtained from the nonlocal DFT models discussed in the paper is plotted versus the correlation length r_s = (3/(4πn(r)))^(−1/3) derived from the local density, for holes at representative points throughout the unit cell. The dotted line gives the LDA approximation; the dashed line gives the r̄_s factor obtained from using the unit-cell average.
FIG. 3. Exchange hole near the bond center: (a) off bond, perpendicular to the bond axis; (b) bond center; (c) off bond center along the [111] axis. The top half shows the angle-averaged hole n_x(u, r) weighted by 2πu versus distance u from the reference electron, for VMC data and several models. The bottom half shows a contour plot of n_x(r, r + u) in the (110) plane, with the location of the reference electron marked by a white dot. The gray scale is in increments of 0.005 a.u., with the white region between -0.005 and 0.000 a.u.

FIG. 4. Correlation hole near the bond center: (a) off bond, perpendicular to the bond axis; (b) bond center; (c) off bond center along the [111] axis. The top half shows the angle-averaged hole weighted by 2πu versus distance u from the reference electron, for VMC data and several models. The bottom half shows a contour plot of n_c in the (110) plane, with the location of the reference electron marked by a white dot. The gray scale is in increments of 0.001 a.u., with the white region between 0.001 and 0.002 a.u.

FIG. 5. VMC exchange and correlation energy per particle in the (110) plane of the Si crystal, relative to the LDA value. The contours show increments of 0.01 a.u., with the thicker contour showing where the VMC and LDA values are equal. White dots show the locations of the reference electrons about which exchange and correlation holes are shown in Figs. 3 and 4.

The exchange hole is most clearly represented in terms of these local functions and deviates strongly from the homogeneous electron gas model when these functions have little overlap. The exchange hole in terms of a Wannier basis is

n_x(r, r + u) = −(1/n(r)) | Σ_I W_I(r − r_I) W*_I(r + u − r_I) |².
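Because the Wannier orbitals are a unitary mixing of the occupied orbitals, the one-particle density matrix, and hence an exchange hole of the above form, is invariant under the transformation. A toy numerical check, with synthetic orthonormal orbitals and a random unitary standing in for ψ_nk and U_nk,I:

```python
import numpy as np

rng = np.random.default_rng(0)
npts, norb = 200, 4
# columns of psi: orthonormal "Bloch-like" orbitals on a discrete grid
psi, _ = np.linalg.qr(rng.standard_normal((npts, norb)))

# random orthogonal matrix playing the role of U_{nk,I}
U, _ = np.linalg.qr(rng.standard_normal((norb, norb)))
w = psi @ U                               # "Wannier-like" orbitals

density_psi = np.sum(psi ** 2, axis=1)    # sum_n |psi_n|^2
density_w = np.sum(w ** 2, axis=1)        # sum_I |W_I|^2
print(np.max(np.abs(density_psi - density_w)))  # ~0: density is invariant
```

Since w wᵀ = psi U Uᵀ psiᵀ = psi psiᵀ, any quantity built from the density matrix alone is unchanged by the choice of unitary; only the degree of localization of the individual orbitals differs.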
TABLE I. Exchange-correlation energy per particle ǫ_xc (top) and correlation energy per particle ǫ_c (bottom) at the points considered in the text, in atomic units.

ǫ_xc             VMC      VMC-I    VMC-II   LDA      WDA      ADA
Bond Center      -0.3743  -0.3706  -0.3724  -0.3804  -0.3657  -0.3671
Bond Axis Right  -0.3427  -0.3425  -0.3357  -0.3803  -0.3657  -0.3671
Antibond Point   -0.2947  -0.2954  -0.2957  -0.3090  -0.3117  -0.3068
Atom Center      -0.2574  -0.2557  -0.2783  -0.1432  -0.3011  -0.2885
Interstitial     -0.1654  -0.1655  -0.1650  -0.1348  -0.1801  -0.1772

ǫ_c              VMC-II   LDA      WDA      ADA
Bond Center      -0.0317  -0.0525  -0.0679  -0.0513
Bond Axis Right  -0.0374  -0.0504  -0.0610  -0.0500
Off Bond Axis    -0.0317  -0.0489  -0.0585  -0.0494
Antibond Point   -0.0389  -0.0478  -0.0551  -0.0474

feature quite complex "dapple" patterns that remain after the worst errors of the LDA model are smoothed out. 30,31

TABLE II. Exchange and correlation energies E_x and E_c in eV per atom for VMC and various density-functional methods described in the text. The results of a diffusion Monte Carlo (DMC) calculation, which removes the variational bias due to the use of a trial wave function, are also shown.

        LDA      ADA      WDA      GGA      VMC           DMC
E_x    -27.66   -27.56   -27.37   -29.10   -29.15        -29.15
E_c     -5.09    -5.11    -5.63    -3.93    -3.58         -4.08
E_xc   -32.75   -32.67   -33.00   -33.03   -32.73±0.01   -33.23±0.08

Acknowledgments. This work was supported by the Department of Energy (Grant No. DE-FG02-97ER45632) and the National Science Foundation (Grant No. DMR-9724303). It was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. We wish to thank Nicola Marzari for his kind help in providing us Si Wannier orbitals.

References

[1] W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
[2] R. O. Jones and O. Gunnarsson, Rev. Mod. Phys. 61, 689 (1989).
[3] D. C. Langreth and M. J. Mehl, Phys. Rev. B 28, 1809 (1983).
[4] C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37, 785 (1988).
[5] A. D. Becke, Phys. Rev. A 38, 3098 (1988).
[6] J. P. Perdew, in Electronic Structure of Solids '91, edited by P. Ziesche and H. Eschrig (Akademie Verlag, Berlin, 1991).
[7] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996); 78, 1396(E) (1997).
[8] O. Gunnarsson, M. Jonson, and B. I. Lundqvist, Phys. Rev. B 20, 3136 (1979).
[9] J. A. Alonso and L. A. Girifalco, Solid State Commun. 24, 135 (1977); Phys. Rev. B 17, 3735 (1978).
[10] J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
[11] A. D. Becke, J. Chem. Phys. 98, 1372 (1993); 98, 5648 (1993).
[12] A. Savin, Int. J. Quantum Chem. Symp. 22, 59 (1996); A. Savin, in Recent Developments and Applications of Modern Density Functional Theory, edited by J. M. Seminario (Elsevier, Amsterdam, 1996).
[13] D. C. Langreth and J. P. Perdew, Phys. Rev. B 21, 5469 (1980).
[14] S. Kurth and J. P. Perdew, Phys. Rev. B 59, 10461 (1999).
[15] J. D. Talman and W. F. Shadwick, Phys. Rev. A 14, 36 (1976).
[16] J. B. Krieger, Y. Li, and G. J. Iafrate, Phys. Rev. B 44, 10437 (1991); 46, 5453 (1992); 47, 165 (1993).
[17] M. Städele, J. A. Majewski, P. Vogl, and A. Görling, Phys. Rev. Lett. 79, 2089 (1997).
[18] M. Städele, M. Moukara, J. A. Majewski, P. Vogl, and A. Görling, Phys. Rev. B 59, 10031 (1999).
[19] J. Harris and R. O. Jones, J. Phys. F 4, 1170 (1974); D. C. Langreth and J. P. Perdew, Solid State Commun. 17, 1425 (1975); O. Gunnarsson and B. I. Lundqvist, Phys. Rev. B 13, 4274 (1976).
[20] J. A. Alonso and N. A. Cordero, in Recent Developments and Applications of Modern Density Functional Theory, edited by J. M. Seminario (Elsevier, Amsterdam, 1996).
[21] D. J. Singh, Phys. Rev. B 48, 14099 (1993).
[22] J. P. A. Charlesworth, Phys. Rev. B 53, 12666 (1995).
[23] I. I. Mazin and D. J. Singh, Phys. Rev. B 57, 6879 (1998).
[24] M. S. Hybertson and S. G. Louie, Solid State Commun. 51, 451 (1984).
[25] J. P. Perdew, K. Burke, and Y. Wang, Phys. Rev. B 54, 16533 (1996).
[26] M. Levy, in Recent Developments and Applications of Modern Density Functional Theory, edited by J. M. Seminario (Elsevier, Amsterdam, 1996).
[27] M. Ernzerhof, J. P. Perdew, and K. Burke, in Density Functional Theory, edited by R. Nalewajski (Springer-Verlag, Berlin, 1996).
[28] K. Burke, J. P. Perdew, and M. Ernzerhof, J. Chem. Phys. 109, 3760 (1998).
[29] A. C. Cancio, C. Y. Fong, and J. S. Nelson, Phys. Rev. A 62, 062507 (2000), physics/0004073.
[30] R. Q. Hood, M. Y. Chou, A. J. Williamson, G. Rajagopal, R. J. Needs, and W. M. C. Foulkes, Phys. Rev. Lett. 78, 3350 (1997).
[31] R. Q. Hood, M. Y. Chou, A. J. Williamson, G. Rajagopal, and R. J. Needs, Phys. Rev. B 57, 8972 (1998).
[32] D. M. Bylander and L. Kleinman, Phys. Rev. Lett. 74, 3660 (1995); Phys. Rev. B 55, 9432 (1997).
[33] M. Ernzerhof, K. Burke, and J. P. Perdew, in Recent Developments and Applications of Modern Density Functional Theory, edited by J. M. Seminario (Elsevier, Amsterdam, 1996).
[34] G. Ortiz and P. Ballone, Phys. Rev. B 50, 1391 (1994).
[35] K. Burke, F. G. Cruz, and K. C. Lam, J. Chem. Phys. 109, 8161 (1998).
[36] A. J. Williamson, G. Rajagopal, R. J. Needs, L. M. Fraser, W. M. C. Foulkes, Y. Wang, and M. Y. Chou, Phys. Rev. B 55, R4851 (1997).
[37] J. P. Perdew and Y. Wang, Phys. Rev. B 46, 12947 (1992); J. P. Perdew, K. Burke, and Y. Wang, ibid. 54, 16533 (1996).
[38] D. Ceperley, G. V. Chester, and M. H. Kalos, Phys. Rev. B 16, 3081 (1977).
[39] B. L. Hammond, W. A. Lester, Jr., and P. J. Reynolds, Monte Carlo Methods in ab initio Quantum Chemistry (World Scientific, Singapore, 1994).
[40] A. Puzder, M. Y. Chou, and R. Q. Hood, unpublished.
[41] J. Des Cloizeaux, Phys. Rev. 129, 554 (1963).
[42] J. Zak, Phys. Rev. Lett. 54, 1075 (1985).
[43] N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997), cond-mat/9707145.
[44] W. A. Harrison, Electronic Structure and the Properties of Solids (W. H. Freeman, San Francisco, 1980).
[45] P. Fulde, Electron Correlations in Molecules and Solids (Springer-Verlag, Berlin, 1991).
[46] It is interesting to note that even in the antibond-unfavorable cases there exist smaller polarization features, either on nearest-neighbor bonds (b) or along a different nodal plane (a). These are not so easily explained in a simple bond-orbital model but perhaps represent higher-order orbital effects.
[47] A. C. Cancio and C. Y. Fong, unpublished.
title: Fluctuations in the heterogeneous multiscale methods for fast-slow systems
author: David Kelly ([email protected]); Eric Vanden-Eijnden
authoraffiliation: Courant Institute, New York University, NY, USA
abstract: How heterogeneous multiscale methods (HMM) handle fluctuations acting on the slow variables in fast-slow systems is investigated. In particular, it is shown via analysis of central limit theorems (CLT) and large deviation principles (LDP) that the standard version of HMM artificially amplifies these fluctuations. A simple modification of HMM, termed parallel HMM, is introduced and is shown to remedy this problem, capturing fluctuations correctly both at the level of the CLT and the LDP. Similar type of arguments can also be used to justify that the τ-leaping method used in the context of Gillespie's stochastic simulation algorithm for Markov jump processes also captures the right CLT and LDP for these processes.
doi: 10.1186/s40687-017-0112-2
pdfurls: https://arxiv.org/pdf/1601.02147v1.pdf
corpusid: 37084071
arxivid: 1601.02147
pdfsha: cc425b69f23c1f39fd9c54bf30f7932a2eca8e54
Fluctuations in the heterogeneous multiscale methods for fast-slow systems

David Kelly ([email protected]) and Eric Vanden-Eijnden
Courant Institute, New York University, NY, USA

January 12, 2016

Dedicated with admiration and friendship to Bjorn Engquist on the occasion of his 70th birthday.

1 Introduction

The heterogeneous multiscale methods (HMM) [EE03, VE03, EEL+07, AEEVE12] provide an efficient strategy for integrating fast-slow systems of the type

dX^ε/dt = f(X^ε, Y^ε),  dY^ε/dt = (1/ε) g(X^ε, Y^ε).  (1.1)

The method relies on an averaging principle that holds under some assumption of ergodicity and states that as ε → 0 the slow variables X^ε can be uniformly approximated by the solution to the following averaged equation

Ẋ = F(X).  (1.2)

Here F(x) = ∫ f(x, y) μ_x(dy) is the averaged vector field, with μ_x(dy) being the ergodic invariant measure of the fast variables Y_x with the x variable frozen. This averaging principle is akin to the law of large numbers (LLN) in the present context, and it suggests simulating the evolution of the slow variables using (1.2) rather than (1.1) when ε is small.
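The averaging principle can be illustrated on a toy fast-slow pair (invented here, not taken from the paper): with f(x, y) = −y and g(x, y) = x − y, the fast variable relaxes to y = x, so the averaged equation (1.2) is simply Ẋ = −X:

```python
import numpy as np

# Euler integration of the full fast-slow system (1.1) for the toy model.
eps, dt, T = 1e-2, 1e-4, 1.0
nsteps = round(T / dt)
x, y = 1.0, 0.0
for _ in range(nsteps):
    x += dt * (-y)              # slow variable, f(x, y) = -y
    y += dt * (x - y) / eps     # fast variable, g(x, y) = x - y

x_avg = np.exp(-T)              # solution of the averaged equation X' = -X
print(x, x_avg)                 # close for small eps and dt
```

For small ε and dt the slow component of the full system tracks e^{−t} up to errors of order ε plus the discretization error.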
This requires estimating F(x), which typically has to be done on the fly given the current value of the slow variables. To this end, note that if Euler's method with time step Δt is used as integrator for the slow variables in (1.1), we can approximate X^ε(nΔt) by x_n satisfying the recurrence

x^ε_{n+1} = x^ε_n + ∫_{nΔt}^{(n+1)Δt} f(x^ε_n, Y^ε_{x^ε_n}(s)) ds,  (1.3)

where Y^ε_x denotes the solution to the second equation in (1.1) with X^ε kept fixed at the value x. If ε is small enough that Δt/ε is larger than the mixing time of Y^ε_x, the Birkhoff integral in (1.3) is in fact close to the averaged coefficient in (1.2), in the sense that

F(x) ≈ (1/Δt) ∫_{nΔt}^{(n+1)Δt} f(x, Y^ε_x(s)) ds.  (1.4)

Therefore (1.3) can also be thought of as an integrator for the averaged equation (1.2). In fact, when ε is small, one can obtain a good approximation of F(x) using only a fraction of the macro time step. In particular, we expect that

(1/Δt) ∫_{nΔt}^{(n+1)Δt} f(x, Y^ε_x(s)) ds ≈ (λ/Δt) ∫_{nΔt}^{(n+1/λ)Δt} f(x, Y^ε_x(s)) ds =: F_n(x)  (1.5)

with λ ≥ 1, provided that Δt/(ελ) remains larger than the mixing time of Y^ε_x. This observation is at the core of HMM-type methods -- in essence, they amount to replacing (1.3) by

x_{n+1} = x_n + Δt F_n(x_n).  (1.6)

Since the number of computations required to compute the effective vector field F_n(x) is reduced by a factor λ, this is also the speed-up factor for an HMM-type method. From the argument above, it is apparent that there is another, equivalent way to think about HMM-type methods, as was first pointed out in [FVE04] (see also [VE07, ERVE09, ASST12, AEK+13]). Indeed, the integral defining F_n(x) in (1.5) can be recast into an integral on the full interval [nΔt, (n+1)Δt] by a change of integration variables, which amounts to rescaling the internal clock of the variables Y^ε_x. In other words, HMM-type methods can also be thought of as approximating the fast-slow system in (1.1) by

dX̃^ε/dt = f(X̃^ε, Ỹ^ε),  dỸ^ε/dt = (1/(ελ)) g(X̃^ε, Ỹ^ε).  (1.7)
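A minimal HMM macro-solver in the spirit of (1.6), on the same kind of toy model as above (f(x, y) = −y, g(x, y) = x − y, so that F(x) = −x); all parameter values are illustrative:

```python
import numpy as np

eps, lam = 1e-3, 10
dt_macro, dt_micro, T = 0.1, 1e-4, 1.0
x, y = 1.0, 0.0
for _ in range(round(T / dt_macro)):
    window = dt_macro / lam               # micro-integration window, Δt/λ
    nmicro = round(window / dt_micro)
    f_sum = 0.0
    for _ in range(nmicro):
        y += dt_micro * (x - y) / eps     # fast dynamics with frozen x
        f_sum += -y                       # accumulate f(x, y) = -y
    F_n = f_sum / nmicro                  # Birkhoff average over the window
    x += dt_macro * F_n                   # macro Euler step, Eq. (1.6)

print(x, np.exp(-T))   # HMM tracks the averaged dynamics X' = -X
```

Only a fraction 1/λ of each macro step is spent integrating the fast variable, which is the advertised speed-up.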
If ε ≪ 1, we can reasonably replace ε with ελ, provided that this product still remains small -in particular, the evolution of the slow variables in (1.7) is still captured by the limiting equation (1.2). Hence HMM-type methods are akin to artificial compressibility [Cho67] in fluid simulations and Car-Parrinello methods [CP85] in molecular dynamics. The approximations in (1.5) or (1.7) are perfectly reasonable if we are only interested in staying faithful to the averaged equation (1.2) -that is to say, HMM-type approximations will have the correct law of large numbers (LLN) behavior. However, the fluctuations about that average will be enhanced by a factor of λ. This is quite clear from the interpretation (1.7), since in the original model (1.1), the local fluctuations about the average are of order √ ε and in (1.7) they are of order √ ελ. The large fluctuations about the average caused by rare events are similarly inflated by a factor of λ. This can be an issue, for example in metastable fast-slow systems, where the large fluctuations about the average determine the waiting times for transitions between metastable states. In particular we shall see that an HMM-type scheme drastically decreases these waiting times due to the enhanced fluctuations. In this article we propose a simple modification of HMM which corrects the problem of enhanced fluctuations. The key idea is to replace the approximation (1.5) with 1 ∆t (n+1)∆t n∆t f (x, Y ε x (s))ds ≈ λ j=1 1 ∆t (n+1/λ)∆t n∆t f (x, Y ε,j x (s))ds , (1.8) where each Y ε,j x is an independent copy of Y ε x . By comparing (1.5) with (1.8), we see that the first approximation is essentially replacing a sum of λ weakly correlated random variables with one random variable, multiplied by λ. This introduces correlations that should not be there and in particular results in enhanced fluctuations. In (1.8), we instead replace the sum of λ weakly correlated random variables with a sum of λ independent random variables.
This is a much more reasonable approximation to make, since these random variables are becoming less and less correlated as ε gets smaller. Since the terms appearing on the right hand side are independent of each other, they can be computed in parallel. Thus if one has λ CPUs available, then the real time of the computations is identical to that of HMM. For this reason, we call the modification the parallelized HMM (PHMM). Note that, in analogy to (1.7), one can interpret PHMM as approximating (1.1) by the system d X ε dt = 1 λ λ j=1 f ( X ε , Y ε,j ) (1.9) d Y ε,j dt = 1 ελ g( X ε , Y ε,j ) for j = 1, . . . , λ . It is clear that this approximation will be as good as (1.7) in terms of the LLN, but in contrast with (1.7), we will show below that it captures the fluctuations about the average correctly, both in terms of small Gaussian fluctuations and large fluctuations describing rare events. A similar observation in the context of numerical homogenization was made in [BJ11,BJ14]. The outline of the remainder of this article is as follows. In Section 2 we recall the averaging principle for stochastic fast-slow systems and describe how to characterize the fluctuations about this average, including local Gaussian fluctuations and large deviation principles. In Section 3 we recall the HMM-type methods. In Section 4 we show that they lead to enhanced fluctuations. In Section 5 we introduce the PHMM modification and in Section 6 show that this approximation yields the correct fluctuations, both in terms of local Gaussian fluctuations and large deviations. In Section 7 we test PHMM for a variety of simple models and conclude in Section 8 with a discussion. Average and fluctuations in fast-slow systems For simplicity we will from here on assume that the fast variables are stochastic.
This assumption is convenient, but not necessary, since all the averaging and fluctuation properties stated below are known to hold for large classes of fast-slow systems with deterministically chaotic fast variables [Kif92,Dol04,KMb,KMa]. The fast-slow systems we investigate are given by dX ε dt = f (X ε , Y ε ) (2.1) dY ε = 1 ε g(X ε , Y ε )dt + 1 √ ε σ(X ε , Y ε )dW , where f : R d × R e → R d , g : R d × R e → R e , σ : R d × R e → R e × R e , and W is a standard Wiener process in R e . We assume that for every x ∈ R d , the Markov process described by the SDE dY x = b(x, Y x )dt + σ(x, Y x )dW (2.2) is ergodic, with invariant measure µ x , and has sufficient mixing properties. For full details on the necessary mixing properties, see for instance [FW12]. In this section we briefly recall the averaging principle for stochastic fast-slow systems and discuss two results that characterize the fluctuations about the average, the central limit theorem (CLT) and the large deviations principle (LDP). Averaging principle As ε → 0, each realization of X ε , with initial condition X ε (0) = x, tends towards a trajectory of a deterministic system dX dt = F (X) ,X(0) = x , (2.3) where F (x) = f (x, y)µ x (dy) and µ x is the invariant measure corresponding to the Markov process dY x = b(x, Y x )dt + σ(x, Y x )dW . The convergence is in an almost sure and uniform sense: lim ε→0 sup t≤T |X ε (t) −X(t)| = 0 for every fixed T > 0, every choice of initial condition x and almost surely every initial condition Y ε (0) (a.s. with respect to µ x ) as well as every realization of the Brownian paths driving the fast variables. Details of this convergence result in the setting above are given in (for instance) [FW12, Chapter 7.2]. Small fluctuations -CLT The small fluctuations of X ε about the averaged systemX can be understood by characterizing the limiting behavior of Z ε := X ε −X √ ε , as ε → 0. 
It can be shown that the process Z ε converges in distribution (on the space of continuous functions C([0, T ]; R d ) endowed with the sup-norm topology) to a process Z defined by the SDE dZ = B 0 (X)Zdt + η(X)dV , Z(0) = 0 , (2.4) HereX solves the averaged system in (2.3), V is a standard Wiener process, B 0 := B 1 + B 2 with B 1 (x) = ∇ x f (x, y)µ x (dy) B 2 (x) = ∞ 0 dτ µ x (dy)∇ y E y f (x, Y x (τ )) ∇ x b(x, y) and η(x)η T (x) = ∞ 0 dτ E f̃ (x, Y x (0)) ⊗ f̃ (x, Y x (τ )) , where f̃ (x, y) = f (x, y) − F (x), E y denotes expectation over realizations of Y x with Y x (0) = y, and E denotes expectation over realizations of Y x with Y x (0) ∼ µ x . We include next a formal argument deriving this limit, as it will prove useful when analyzing the multiscale approximation methods. We will replicate the argument given in [BGTVE15]; a more complete and rigorous argument can be found in [FW12, Chapter 7.3]. First, we write a system of equations for the triple (X, Z ε , Y ε ) in the following approximated form, which uses nothing more than Taylor expansions of the original system in (1.1): dX dt = F (X) dZ ε dt = 1 √ ε f̃ (X, Y ε ) + ∇ x f (X, Y ε )Z ε + O( √ ε) dY ε = 1 ε b(X, Y ε )dt + 1 √ ε ∇ x b(X, Y ε )Z ε dt + 1 √ ε σ(X, Y ε )dW + O(1) . We now proceed with a classical perturbation expansion on the generator of the triple (X, Z ε , Y ε ). In particular we have L ε = 1 ε L 0 + 1 √ ε L 1 + L 2 + . . . where L 0 = b(x, y) · ∇ y + 1 2 a(x, y) : ∇ 2 y L 1 = f̃ (x, y) · ∇ z + (∇ x b(x, y)z) · ∇ y L 2 = F (x) · ∇ x + (∇ x f (x, y)z) · ∇ z and a = σσ T . Let u ε (x, z, y, t) = E (x,z,y) ϕ(X(t), Z ε (t), Y ε (t)) and introduce the ansatz u ε = u 0 + √ εu 1 + εu 2 + . . . . By substituting u ε into ∂ t u ε = L ε u ε and equating powers of ε we obtain O(ε −1 ) : L 0 u 0 = 0 O(ε −1/2 ) : L 0 u 1 = −L 1 u 0 O(1) : ∂ t u 0 = L 2 u 0 + L 1 u 1 + L 0 u 2 . From the O(ε −1 ) identity, we obtain u 0 = u 0 (x, z, t), confirming that the leading order term is independent of y.
By the Fredholm alternative, the O(ε −1/2 ) identity has a solution u 1 which has the Feynman-Kac representation u 1 (x, y, z) = ∞ 0 dτ E y f (x, Y x (τ )) · ∇ z u 0 (x, z) , where Y x denotes the Markov process generated by L 0 , i.e. the solution of (2.2). Finally, if we average the O(1) identity against the invariant measure corresponding to L 0 , we obtain ∂ t u 0 = F (x)∇ x u 0 + µ x (dy)(∇ x f (x, y)z) · ∇ z u 0 + µ x (dy) ∞ 0 dτf (x, y) ⊗ E yf (x, Y x (τ )) : ∇ 2 z u 0 + µ x (dy)(∇ x b(x, y)z) ∞ 0 dτ ∇ y E yf (x, Y x (τ ))∇ z u 0 . Clearly, this is the forward Kolmogorov equation for the Markov process (X, Z) defined by dX dt = F (X) dZ = B 0 (X)Zdt + η(X)dV with B 0 and η defined as above. Large fluctuations -LDP A large deviation principle (LDP) for the fast-slow system (2.1) quantifies probabilities of O(1) fluctuations of X ε away from the averaged trajectoryX. The probability of such events vanishes exponentially quickly and as a consequence are not accounted for by the CLT fluctuations, hence an LDP accounts for the rare events. We say that the slow variables X ε satisfy a large deviation principle (LDP) with action functional S [0,T ] if for any set Γ ⊂ {γ ∈ C([0, T ], R d ) : γ(0) = x} we have − inf γ∈Γ S [0,T ] (γ) ≤ lim inf ε→0 ε log P (X ε ∈ Γ) (2.5) ≤ lim sup ε→0 ε log P (X ε ∈ Γ) ≤ − inf γ∈Γ S [0,T ] (γ) , whereΓ andΓ denote the interior and closure of Γ respectively. An LDP also determines many important features of O(1) fluctuations that occur on large time scales, such as the probability of transition from one metastable set to another. For example, suppose that X ε is known to satisfy an LDP with action functional S [0,T ] . Let D be an open domain in R d with smooth boundary ∂D and let x * ∈ D be an asymptotically stable equilibrium for the averaged systemẊ = F (X). 
When ε ≪ 1, we expect that a trajectory of X ε that starts in D will tend towards the equilibrium x * and exhibit O( √ ε) fluctuations about the equilibrium -these fluctuations are described by the CLT. On very large time scales, these small fluctuations have a chance to 'pile up' into an O(1) fluctuation, producing behavior of the trajectory that would be considered impossible for the averaged system. Such fluctuations are not accurately described by the CLT and require the LDP instead. For example, the asymptotic behaviour of the escape time from the domain D, τ ε = inf{t > 0 : X ε (t) / ∈ D} , can be quantified in terms of the quasi-potential defined by V (x, y) = inf T >0 inf γ(0)=x,γ(T )=y S [0,T ] (γ) . (2.6) Under natural conditions, it can be shown that for any x ∈ D lim ε→0 ε log E x τ ε = inf y∈∂D V (x * , y) . Hence the time it takes to pass from the neighborhood of one equilibrium to another may be quantified using the LDP. Details on the escape time of fast-slow systems can be found in [FW12]. For the fast-slow system (2.1), the action functional can be characterized as follows. Define H : R d × R d → R by H(x, θ) = lim T →∞ 1 T log E y exp θ · T 0 f (x, Y x (s))ds , (2.7) where Y x denotes the Markov process governed by dY x = b(x, Y x )dt + σ(x, Y x )dW . Let L : R d × R d → R be the Legendre transform of H: L (x, β) = sup θ (θ · β − H(x, θ)) . (2.8) Then the action functional is given by S [0,T ] (γ) = T 0 L (γ(s), γ̇(s))ds . (2.9) It can also be shown that the function u(t, x) = inf γ(0)=x S [0,t] (γ) satisfies the Hamilton-Jacobi equation ∂ t u(t, x) = H(x, ∇u(t, x)) . (2.10) Donsker-Varadhan theory tells us that the connection between Hamilton-Jacobi equations and LDPs is in fact much deeper. Firstly, Varadhan's Lemma states that if a process X ε is known to satisfy an LDP with some associated Hamiltonian H, then for any ϕ : R d → R we have the generalized Laplace method-type result lim ε→0 ε log E x exp ε −1 ϕ(X ε (t)) = S t ϕ(x) (2.11) where S t is the semigroup associated with the Hamilton-Jacobi equation ∂ t u = H(x, ∇u).
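The quasi-potential (2.6) can be computed numerically in simple cases. The sketch below is our own illustration, not part of the paper, and it uses a scalar small-noise gradient model dX = −X dt + √ε dW rather than a fast-slow system; with U(x) = x²/2 the exact value is V(0, 1) = 2(U(1) − U(0)) = 1. Because the drift is linear, minimizing the discretized action over pinned paths is a linear least-squares problem:

```python
import numpy as np

def min_action_linear(x0, x1, T=10.0, N=400):
    """Minimize the discretized Freidlin-Wentzell action
    S_T(z) = (h/2) * sum_i ((z[i+1]-z[i])/h + z[i])**2 for b(x) = -x,
    over paths pinned at z[0] = x0 and z[N] = x1. The residuals are affine
    in the interior points, so the minimization is linear least squares."""
    h = T / N
    B = np.zeros((N, N - 1))   # residual r = B @ z_interior + c
    c = np.zeros(N)
    for i in range(N):
        for j, coeff in ((i, 1.0 - 1.0 / h), (i + 1, 1.0 / h)):
            if j == 0:
                c[i] += coeff * x0
            elif j == N:
                c[i] += coeff * x1
            else:
                B[i, j - 1] = coeff
    z_int = np.linalg.lstsq(B, -c, rcond=None)[0]
    r = B @ z_int + c
    return 0.5 * h * float(r @ r)

V_num = min_action_linear(0.0, 1.0)
# with a large horizon T this approximates the quasi-potential V(0, 1) = 1
```

Taking the horizon T large approximates the infimum over T > 0 in (2.6); the minimizing path is the time-reversed relaxation path, which formally takes infinite time.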
Conversely, if it is known that (2.11) holds for all (x, t) and a suitable class of ϕ, then the inverse Varadhan's lemma states that X ε satisfies an LDP with action functional given by (2.8), (2.9). Hence we can use (2.11) to determine the action functional for a given process. In the next few sections, we will exploit both sides of Varadhan's lemma when investigating the large fluctuations of the HMM and related schemes. More complete discussions on Varadhan's Lemma can be found in [DZ09, Chapters 4.3, 4.4]. HMM for fast-slow systems When applied to the stochastic fast-slow system (2.1), HMM-type schemes rely on the fact that the slow X ε variables, and the coefficients that govern them, converge to a set of reduced variables as ε tends to zero. We will describe a simplest version of the method below, which is more convenient to deal with mathematically. Before proceeding, we digress briefly on notation. When referring to continuous time variables we will always use upper case symbols (X ε , Y ε etc) and when referring to discrete time approximations we will always use lower case symbols (x ε n , y ε n etc). We will also encounter continuous time variables whose definition depends on the integer n for which we have t ∈ [n∆t, (n + 1)∆t). We will see below that such continuous time variables are used to define discrete time approximations. In this situation we will use upper case symbols with a subscript n (eg. X ε n ). Let us now describe a 'high-level' version of HMM. Fix a step size ∆t and define the intervals I n,∆t := [n∆t, (n + 1)∆t). On each interval I n,∆t we update x ε n ≈ X ε (n∆t) to x ε n+1 ≈ X ε ((n + 1)∆t) via an iteration of the following two steps: 1. (Micro step) Integrate the fast variables over the interval I n,∆t , with the slow variable frozen at X ε = x ε n . 
That is, the fast variables are approximated by Y ε n (t) = Y ε n (n∆t) + 1 ε t n∆t g(x ε n , Y ε n (s))ds + 1 √ ε t n∆t σ(x ε n , Y ε n (s))dW (s) (3.1) for n∆t ≤ t ≤ (n+1/λ)∆t with some λ ≥ 1 (that is, we do not necessarily integrate the Y ε n variables over the whole time window). Due to the ergodicity of Y x , the initialization of Y ε n is not crucial to the performance of the algorithm. It is however convenient to use Y ε n+1 (0) = Y ε n ((n + 1/λ)∆t), since this reinitialization leads to the interpretation of the HMM scheme given in (3.5) below. 2. (Macro step) Use the time series from the micro step to update x ε n to x ε n+1 via x ε n+1 = x ε n + λ (n+1/λ)∆t n∆t f (x ε n , Y ε n (s))ds . (3.2) Note that we do not require Y ε n over the whole ∆t time step, but only a fraction of the step large enough for Y ε n to mix. Indeed, if ε is small enough, we have the approximate equality λ ∆t (n+1/λ)∆t n∆t f (x ε n , Y ε n (s))ds ≈ 1 ∆t (n+1)∆t n∆t f (x ε n , Y ε n (s))ds since both sides are close to the ergodic mean f (x ε n , y)dµ x ε n (y). Clearly, the efficiency of the methods comes from the fact that we do not need to compute the fast variables on the whole time interval I n,∆t but only a 1/λ fraction of it. Hence λ should be considered the speed-up factor of HMM. As already stated, the algorithm above is a high-level version, in that one must do further approximations to make the method implementable. For example, one typically must specify some approximation scheme to integrate (3.1), for instance with Euler-Maruyama we compute the time series by y ε n,m+1 = y ε n,m + δt ε g(x ε n , y ε n,m ) + √ δt ε σ(x ε n , y ε n,m )ξ n,m , (3.3) where 0 ≤ m ≤ M is the index within the micro step, ξ n,m are i.i.d. standard Gaussians and the micro-scale step size δt is much smaller than the macro-scale step size ∆t. In the macro step, we would similarly have x ε n+1 = x ε n + ∆t F n (x ε n ) (3.4) where F n (x) = 1 M M m=1 f (x, y ε n,m ) and M = ∆t/(δtλ).
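One full macro step of (3.3)-(3.4) can be sketched as follows; the helper function and the linear fast-slow pair used to exercise it are illustrative choices of ours (the pair has the same form as the example in Section 7):

```python
import numpy as np

def hmm_step(x, y, f, g, sigma, eps, dt, ddt, lam, rng):
    """One HMM macro step: Euler-Maruyama for the fast variable with x frozen,
    run over a 1/lam fraction of the macro window as in (3.3), followed by an
    Euler macro step with the time-averaged force F_n(x) as in (3.4)."""
    M = round(dt / (ddt * lam))            # micro steps actually simulated
    acc = 0.0
    for _ in range(M):
        y += (ddt / eps) * g(x, y) \
             + np.sqrt(ddt / eps) * sigma(x, y) * rng.standard_normal()
        acc += f(x, y)
    return x + dt * acc / M, y

# Linear test pair: the averaged dynamics are dX/dt = (mu - 1) X, a decay.
theta, mu, sig, eps = 1.0, 0.5, 0.5, 1e-3
rng = np.random.default_rng(0)
x, y = 1.0, 0.0
for _ in range(50):                        # 50 macro steps of size dt = 0.1
    x, y = hmm_step(x, y, lambda x, y: y - x, lambda x, y: theta * (mu * x - y),
                    lambda x, y: sig, eps, dt=0.1, ddt=1e-5, lam=10, rng=rng)
# x should have decayed toward exp(-2.5) ≈ 0.08, up to O(sqrt(eps*lam)) noise
```

Carrying y over between macro steps implements the reinitialization convention Y ε n+1 (0) = Y ε n ((n + 1/λ)∆t) described above.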
The following observation, which is taken from [FVE04], will allow us to easily describe the average and fluctuations of the above method. On each interval I n,∆t , the high-level HMM scheme described above is equivalently given by x ε n+1 = X ε n ((n + 1)∆t), where X ε n solves the system dX ε n dt = f (x ε n , Y ε n ) (3.5) d Y ε n = 1 ελ b(x ε n , Y ε n )dt + 1 √ ελ σ(x ε n , Y ε n )dB , defined on the interval n∆t ≤ t ≤ (n + 1)∆t, with the initial condition X ε n (n∆t) = x ε n . This can be checked by a simple rescaling of time. It is clear that the efficiency of HMM essentially comes from saying that the fast-slow system is not drastically changed if one replaces ε with the slightly larger, but still very small ελ. Average and fluctuations in HMM methods In this section we investigate whether the limit theorems discussed in Section 2, i.e. the averaging principle, the CLT fluctuations and the LDP fluctuations, are also valid in the HMM approximation of a fast-slow system. We will see that the averaging principle is the only property that holds, and that both types of fluctuations are inflated by the HMM method. Averaging By construction, HMM-type schemes capture the correct averaging principle. More precisely, if we take ε → 0 then the sequence x ε n converges to somex n , wherex n is a numerical approximation of the true averaged systemX. If this numerical approximation is well-posed, the limits ε → 0 and ∆t → 0 commute with one another. Hence the HMM approximation x ε n is consistent, in that it features approximately the same averaging behavior as the original fast-slow system. We will argue the claim by induction. Suppose that for some n ≥ 0 we know that lim ε→0 x ε n =x n (the n = 0 claim is trivial, since they are both simply the initial condition). Then, using the representation (3.5) we know that x ε n+1 = X ε n ((n + 1)∆t) where X ε n (n∆t) = x ε n . Since (3.5) is a fast-slow system of the form (2.1) we can apply the averaging principle from Section 2.
In particular it follows that X ε n →X n uniformly (and almost surely) on I n,∆t , whereX n satisfies the averaged ODE dX n dt = f (x n , y)µx n (dy) = F (x n ) . Since the right hand side is a constant, it follows that x ε n+1 →x n+1 as ε → 0, wherex n+1 =x n + F (x n )∆t . This is nothing more than the Euler approximation of the true averaged variablesX, which completes the induction and hence the claim. Introducing an integrator into the micro-step will make things more complicated, as the invariant measures appearing will be those of the discretized fast variables. In [MSH02] it is shown that discretizations of SDEs often do not possess the ergodic properties of the original system. For those situations where no such issues arise, rigorous arguments concerning this scenario, including rates of convergence for the schemes, are given in [ELVE05]. Small fluctuations For HMM-type methods, the CLT fluctuations about the average become inflated by a factor of √ λ. That is, if we define z ε n+1 = x ε n+1 −x n+1 √ ε , then as ε → 0, the fluctuations described by z ε n+1 are not consistent with (2.4), but rather with the SDE dZ = B 0 (X)Zdt + √ λη(X)dV , Z(0) = 0 (4.1) whereX satisfies the correct averaged system. As above, by consistency we mean that when we take ε → 0, the sequence {z ε n } n≥0 converges to some well-posed discretization of the SDE (4.1). Since Z(0) = 0, it is easy to see that the solution to this equation is simply √ λ times the solution of (2.4). Hence the fluctuations of the HMM-type scheme are inflated by a factor of √ λ. It is convenient to look instead at the rescaled fluctuations ẑ ε n = z ε n / √ λ = x ε n −x n √ ελ , since this allows us to reproduce the argument from Section 2.2, with ε ′ = ελ playing the role of ε. We will again argue by induction, assuming for some n ≥ 0 that ẑ ε n → ẑ n as ε → 0 (the n = 0 case is trivial).
The rescaled fluctuations are given by ẑ ε n+1 = Ẑ ε n ((n + 1)∆t) where Ẑ ε n (t) = (X ε n (t) −X n (t))/ √ ελ and X ε n (t) is governed by the system (3.5) with initial condition X ε n (n∆t) = x ε n andX n satisfies dX n dt = F (x n ) with initial conditionX n (n∆t) =x n . We can then obtain the reduced equations for the pair (X ε n , Ẑ ε n ) by arguing exactly as in Section 2. Indeed, the triple (X n , Ẑ ε n , Y ε n ) is governed by the system dX n dt = F (x n ) d Ẑ ε n dt = 1 √ ελ f̃ (x n , Y ε n ) + ∇ x f (x n , Y ε n ) Ẑ ε n + O( √ ελ) d Y ε n = 1 ελ b(x n , Y ε n )dt + 1 √ ελ ∇ x b(x n , Y ε n ) Ẑ ε n dt + 1 √ ελ σ(x ε n , Y ε n )dW + O(1) From here on we can carry out the calculation precisely as in Section 2.2, with the added convenience of the vector fields no longer depending on x as a variable. In doing so we obtain Ẑ ε n → Ẑ n (in distribution) as ε → 0, where d Ẑ n = B 0 (x n ) Ẑ n dt + η(x n )dV , with the initial condition defined recursively by Ẑ n (n∆t) = ẑ n . Using the fact that ẑ n+1 = Ẑ n ((n + 1)∆t), we obtain ẑ n+1 = ẑ n + B 0 (x n )ẑ n ∆t + η(x n ) √ ∆tξ n where ξ n are iid standard Gaussians. Hence we obtain the Euler-Maruyama scheme for the correct CLT (2.4). However, since ẑ ε n describes the rescaled fluctuations, we see that the true fluctuations z ε n of HMM are consistent with the inflated (4.1). Large fluctuations As with the CLT, the LDP of the HMM scheme is not consistent with the true LDP of the fast-slow system, but rather with a rescaled version of the true LDP. In particular, define u λ,∆t by u λ,∆t (t, x) = lim ε→0 ε log E x exp 1 ε ϕ(x ε n+1 ) for t ∈ I n,∆t . If the O(1) fluctuations of HMM were consistent with those of the fast-slow system, we would expect u λ,∆t to converge to the solution of (2.10) as ∆t → 0. Instead, we find that as ∆t → 0, u λ,∆t (t, x) converges to the solution to the Hamilton-Jacobi equation ∂ t u λ = 1 λ H(x, λ∇u λ ) u λ (0, x) = ϕ(x) .
(4.2) In light of the discussion in Section 2.3, the reverse Varadhan lemma suggests that the HMM scheme is consistent with the wrong LDP. Before proving this claim, we first discuss some implications. The rescaled Hamilton-Jacobi equation implies that the action functional for HMM will be a rescaled version of that for the true fast-slow system. Indeed, it is easy to see that the Lagrangian corresponding to HMM simplifies to L̃ (x, β) := sup θ θ · β − 1 λ H(x, λθ) = 1 λ L (x, β) , where L is the Lagrangian for the true fast-slow system. Thus, the action of the HMM approximation is given by S̃ [0,T ] = λ −1 S [0,T ] where S is the action of the true fast-slow system. In particular, it follows immediately from the definition that the HMM approximation has quasi-potential Ṽ (x, y) = λ −1 V (x, y), where V is the true quasi-potential. As a consequence, the escape times for the HMM scheme will be drastically faster than those of the fast-slow system. In the terminology of Section 2.3, if we let τ ε,∆t be the escape time for the HMM scheme then for ε, ∆t ≪ 1 we expect Eτ ε,∆t ≍ exp 1 ελ V (x * , ∂D) , (4.3) where ≍ denotes log-asymptotic equality. Thus, the log-expected escape times are decreasing proportionally with λ. On the other hand, since the HMM action is a multiple of the true action, the minimizers will be unchanged by the HMM approximation. Hence the large deviation transition pathways will be unchanged by the HMM approximation. To justify the claim (4.2) for u λ,∆t , we first introduce some notation. Let S (α) t be the semigroup associated with the Hamilton-Jacobi equation ∂ t v(t, x) = H(α, ∇v(t, x)) , (4.4) notice that this is the same as the true Hamilton-Jacobi equation (2.10) but with the first argument of the Hamiltonian now frozen as a parameter α.
The necessity of the parameter α is due to the fact that in the system for (X ε n , Y ε n ), the x variable in the fast process is frozen to its value at the left end point of the interval, and hence is treated as a parameter on each interval. We also introduce the operator S t ψ(x) = S (α) t ψ(x)| α=x and also S λ,t = λ −1 S t (λ·). In this notation, it is simple to show that E x exp ε −1 ϕ(x ε n ) ≍ exp ε −1 (S λ,∆t ) n ϕ(x) . (4.5) We will verify (4.5) by induction, starting with the n = 1 case. Since, on the interval I 0,∆t , the pair (X ε 0 , Y ε 0 ) is a fast-slow system of the form (2.1) with ε replaced by ελ, it follows from Section 2.3 that X ε 0 satisfies an LDP with action functional derived from the Hamilton-Jacobi equation (4.4), with the parameter α set to the value of X ε 0 at the left endpoint, which is X ε 0 (0) = x. Hence, it follows from Varadhan's lemma that for any suitable ψ : R d → R E x exp (ελ) −1 ψ(X ε 0 (∆t)) ≍ exp (ελ) −1 S (α) ∆t ψ(x)| α=x . Hence, since x ε 1 = X ε 0 (∆t) with X ε 0 (0) = x, we have E x exp ε −1 ϕ(x ε 1 ) = E x exp (ελ) −1 λϕ(X ε 0 (∆t)) ≍ exp (ελ) −1 S (α) ∆t (λϕ)(x)| α=x = exp ε −1 S λ,∆t ϕ(x) as claimed. Now, suppose (4.5) holds for all k with n ≥ k ≥ 1, then E x exp ε −1 ϕ(x ε n+1 ) = E x E x ε 1 exp ε −1 ϕ(x ε n+1 ) . (4.6) By the inductive hypothesis, we have that E x ε 1 exp ε −1 ϕ(x ε n+1 ) ≍ exp ε −1 (S λ,∆t ) n ϕ(x ε 1 ) . (4.7) Applying (4.7) under the expectation in (4.6) (see Remark 4.1) we see that E x exp ε −1 ϕ(x ε n+1 ) = E x E x ε 1 exp ε −1 ϕ(x ε n+1 ) ≍ E x exp ε −1 (S λ,∆t ) n ϕ(x ε 1 ) . Now applying the inductive hypothesis with n = 1 and ψ(·) = (S λ,∆t ) n ϕ(·) E x exp ε −1 (S λ,∆t ) n ϕ(x ε 1 ) ≍ exp ε −1 S λ,∆t (S λ,∆t ) n ϕ(x) , which completes the induction. By definition, we therefore have u λ,∆t (t, x) = (S λ,∆t ) n ϕ(x) when t ∈ I n,∆t . All that remains is to argue that u λ,∆t converges to the solution of (4.2) as ∆t → 0.
But this can be seen from the expansion of the semigroup u λ,∆t (t + ∆t, x) − u λ,∆t (t, x) ∆t = (S λ,∆t ) n+1 ϕ(x) − (S λ,∆t ) n ϕ(x) ∆t (4.8) = S λ,∆t (S λ,∆t ) n ϕ(x) − (S λ,∆t ) n ϕ(x) ∆t = λ −1 H(α, λ∇(S λ,∆t ) n ϕ(x))| α=x + O(∆t) = λ −1 H(x, λ∇u λ,∆t (t, x)) + O(∆t) which yields the desired limiting equation. Remark 4.2. From the discussion above, it appears that the mean transition time can be estimated from HMM upon exponential rescaling, see (4.3). This is true, but only at the level of the (rough) log-asymptotic estimate of this time. How to rescale the prefactor is by no means obvious. As we will see below PHMM avoids this issue altogether since it does not necessitate any rescaling. Parallelized HMM There is a simple variant of the above HMM-type scheme which captures the correct average behavior and fluctuations, both at the level of the CLT and LDP. In a usual HMM-type method, the key approximation is given by (n+1)∆t n∆t f (x ε n , Y ε n (s))ds ≈ λ (n+1/λ)∆t n∆t f (x ε n , Y ε n (s))ds , (5.1) which only requires computation of the fast variables on the interval [n∆t, (n + 1/λ)∆t]. This approximation is effective at replicating averages, but poor at replicating fluctuations. Indeed, for each j, the time series Y ε n on the interval [(n + j/λ)∆t, (n + (j + 1)/λ)∆t] is replaced with an identical copy of the time series from the interval [n∆t, (n + 1/λ)∆t]. This introduces strong correlations between random variables that should be essentially independent. Parallelized HMM avoids this issue by employing the approximation (n+1)∆t n∆t f (x ε n , Y ε n (s))ds ≈ λ j=1 (n+1/λ)∆t n∆t f (x ε n , Y ε,j n (s))ds , where the Y ε,j n are, for each j, independent copies of the time series computed in (5.1). Due to their independence, each copy of the fast variables can be computed in parallel, hence we refer to the method as parallel HMM (PHMM). The method is summarized below. 1.
(Micro step) On the interval I n,∆t , simulate λ independent copies of the fast variables, each copy simulated precisely as in the usual HMM. That is, let Y ε,j n (t) = Y ε,j n (n∆t) + 1 ε t n∆t g(x ε n , Y ε,j n (s))ds + 1 √ ε t n∆t σ(x ε n , Y ε,j n (s))dW j (s) (5.2) for j = 1, . . . , λ with W j independent Brownian motions. As with ordinary HMM, we will not require the time series over the whole interval I n,∆t but only over the subset [n∆t, (n + 1/λ)∆t). 2. (Macro step) Use the time series from the micro step to update x ε n to x ε n+1 by x ε n+1 = x ε n + λ j=1 (n+1/λ)∆t n∆t f (x ε n , Y ε,j n (s))ds . (5.3) As with the HMM-type method, it will be convenient to write PHMM as a fast-slow system (when restricted to an interval I n,∆t ). Akin to (3.5), it is easy to verify that the parallel HMM scheme is described by the system dX ε n dt = 1 λ λ j=1 f (x ε n , Y ε,j n ) (5.4) d Y ε,j n = 1 ελ b(x ε n , Y ε,j n )dt + 1 √ ελ σ(x ε n , Y ε,j n )dW j , for j = 1, . . . , λ with the initial condition X ε n (n∆t) = x ε n . Average and fluctuations in parallelized HMM In this section we check that the averaged behavior and the fluctuations in the PHMM method are consistent with those in the original fast-slow system. Averaging Proceeding exactly as in Section 4.1, it follows that as ε → 0 the PHMM scheme x ε n+1 converges tox n+1 =X n ((n + 1)∆t) where dX n dt = 1 λ λ j=1 F (x n ) = F (x n ) (6.1) with initial conditionX n (n∆t) =x n . Hence, we are in the exact same situation as with ordinary HMM, so the averaged behavior is consistent with that of the original fast-slow system. Small fluctuations We now show that the fluctuations z ε n = x ε n −x n √ ε are consistent with the correct CLT fluctuations, described by (2.4). As in Section 4.2, we instead look at the rescaled fluctuations ẑ ε n = x ε n −x n √ ελ . In particular we will show that these rescaled fluctuations are consistent with d Ẑ = B 0 (X) Ẑ dt + λ −1/2 η(X)dV .
(6.2) The claim for z ε follows immediately from the claim for ẑ ε . We have thatẑ ε n+1 = Z ε n ((n + 1)∆t) where Z ε n (t) = X ε n (t) −X n (t) √ ελ with X ε n given by the system (5.4) andX n given by the averaged equation (6.1). As in Section 4.2, we derive a system for the triple (X n , Z ε n , Y ε n ), where now the fast process has λ independent components Y ε n = ( Y ε,1 n , . . . , Y ε,λ n ): dX n dt = F (x n ) (6.3) d Z ε n dt = 1 √ ελ 1 λ λ j=1f (x n , Y ε,j n ) + 1 λ λ j=1 ∇ x f (x n , Y ε,j n )ẑ n + O( √ ελ) d Y ε,j n = 1 ελ b(x n , Y ε,j n )dt + 1 √ ελ ∇ x b(x n , Y ε,j n )ẑ n dt + 1 √ ελ σ(x n , Y ε,j n )dW j + O(1) . With a modicum of added difficulty, we can now argue as in Section 2.2 with ε ′ = ελ playing the role of ε. The invariant measure µ λ x (dy) associated with the generator of Y ε n is now the product measure µ λ x (dy 1 , . . . , dy λ ) = µ x (dy 1 ) . . . µ x (dy λ ) where µ x is the invariant measure associated with L 0 from Section 2.2. This product structure simplifies the seemingly complicated expressions arising in the perturbation expansion of (6.3). In the setting of Section 2.2 we have that u 0 = u 0 (x, z, t) and u 1 (x, z, y, t) = (−L (1) 0 − · · · − L (λ) 0 ) −1 L 1 u 0 (x, z, y, t) , (6.4) where L (j) 0 = b(x n , y j )∇ y j + 1 2 σσ T (x n , y j ) : ∇ 2 y j Since L 1 u 0 (x, z, y, t) = 1 λ λ j=1f (x n , y j ) · ∇ z u 0 (x, z, t) , the Feynman-Kac representation of (6.4) yields u 1 (x, z, y, t) = 1 λ λ j=1 ∞ 0 dτ E y jf (x n , Yx n,j (τ )) · ∇ z u 0 (x, z, t) . The equation for u 0 is now given by ∂ t u 0 = F (x n )∇ x u 0 + µx n (dy 1 ) . . . µx n (dy λ )( 1 λ λ j=1 ∇ x f (x n , y j )ẑ n )∇ z u 0 (6.5) + µx n (dy 1 ) . . . µx n (dy λ ) × ∞ 0 dτ 1 λ λ j=1f (x n , y j ) ⊗ 1 λ λ k=1 E yf (x n , Y k xn (τ )) : ∇ 2 z u 0 + λ j=1 (∇ x b(x n , y j )ẑ n ) ∞ 0 dτ ∇ y j 1 λ λ k=1 E y kf (x n , Y k xn (τ ))∇ z u 0 .
By expanding the product measure, the second term on the right hand side of (6.5) becomes 1 λ λ j=1 µx n (dy j )(∇ x f (x n , y j )ẑ n ) · ∇ z u 0 = µx n (dy 1 )(∇ x f (x n , y 1 )ẑ n ) · ∇ z u 0 = (B 1 (x n )ẑ n ) · ∇ z u 0 , Likewise, using the independence of Y j x for distinct j, the third term becomes 1 λ 2 λ j,k=1 ∞ 0 dτ Ef (x n , Y j xn (0)) ⊗f (x n , Y k xn (τ )) : ∇ 2 z u 0 = 1 λ 2 λ j=1 ∞ 0 dτ Ef (x n , Y j xn (0)) ⊗f (x n , Y j xn (τ )) : ∇ 2 z u 0 = 1 λ ∞ 0 dτ Ef (x n , Y 1 xn (0)) ⊗f (x n , Y 1 xn (τ )) : ∇ 2 z u 0 = 1 λ η(x n )η(x n ) T : ∇ 2 z u 0 . where the expectation is taken over realizations of Y j x with Y j x (0) ∼ µ x . Finally, since the ∇ y j E y k term vanishes on the off-diagonal, the last term in (6.5) reduces to 1 λ λ j,k=1 ∞ 0 dτ µx n (dy j )µx n (dy k )(∇ x b(x n , y j )ẑ n ) · ∇ y j E y kf (x n , Y k xn (τ )) · ∇ z u 0 = 1 λ λ j=1 ∞ 0 dτ µx n (dy j )µx n (dy k )(∇ x b(x n , y j )ẑ n )∇ y j E y jf (x n , Y j xn (τ ))∇ z u 0 = ∞ 0 dτ µx n (dy 1 )µx n (dy k )(∇ x b(x, y 1 )ẑ n )∇ y 1 E y 1f (x n , Y 1 xn (τ ))∇ z u 0 = (B 2 (x n )ẑ n ) · ∇ z u 0 . It follows immediately that the reduced equation for the pair (X n ,Ẑ ε n ) is dX n dt = F (x n ) d Z n = B 0 (x n ) Z n dt + λ −1/2 η(x n )dV , with initial conditions Z n (n∆t) =ẑ n andX n (n∆t) =x n . Hence we see thatẑ n+1 is described byẑ n+1 =ẑ n + B(x n )ẑ n ∆t + λ −1/2 η(x n ) √ ∆t ξ n which is the Euler-Maruyama scheme for (6.2). Large fluctuations In this section we show that the LDP for PHMM is consistent with the true LDP from Section 2.3. In particular, let u λ,∆t (t, x) = lim ε→0 ε log E x exp ε −1 ϕ(x ε n ) for t ∈ I n,∆t , where x ε n is the PHMM approximation. We will argue that u λ,∆t (t, x) → u(t, x) as ∆t → 0, where u solves the correct Hamilton-Jacobi equation (2.10). The argument is a slight modification of that given in Section 4.3. Before proceeding, we recall the notation S where H is the Hamiltonian defined by (2.7). 
We also define the operator $S_{\Delta t}\varphi(x) = S^{(\alpha)}_{\Delta t}\varphi(x)\big|_{\alpha=x}$. As in Section 4.3, the claim follows from the asymptotic statement
$$\mathbb E_x \exp\big(\varepsilon^{-1}\varphi(x^\varepsilon_n)\big) \asymp \exp\big(\varepsilon^{-1}(S_{\Delta t})^n\varphi(x)\big)\,, \qquad \varepsilon \to 0\,. \qquad (6.7)$$
Given (6.7), by an identical argument to that started in Equation (4.8), it follows from (6.7) that $u^{\lambda,\Delta t}$ is indeed a numerical approximation of the solution to (6.6) and hence $u^{\lambda,\Delta t} \to u$ as $\Delta t \to 0$.

We will verify (6.7) by induction, starting with the $n = 1$ case. Since $(X^\varepsilon, Y^\varepsilon_{0,1}, \dots, Y^\varepsilon_{0,\lambda})$ is a fast-slow system of the form (2.1) with $\varepsilon$ replaced by $\varepsilon\lambda$, it follows from Section 2.3 (Varadhan's lemma) that
$$\mathbb E_x \exp\big((\varepsilon\lambda)^{-1}\psi(X^\varepsilon_1(\Delta t))\big) \asymp \exp\big((\varepsilon\lambda)^{-1}\hat S^{(\alpha)}_{\Delta t}\psi(x)\big|_{\alpha=x}\big)\,,$$
where $\hat S^{(\alpha)}_{\Delta t}$ is the semigroup associated with $\partial_t v(t,x) = \hat H(\alpha, \nabla v(t,x))$ and
$$\hat H(\alpha, \theta) = \lim_{T\to\infty} T^{-1}\log \mathbb E \exp\Big(\theta\cdot\int_0^T d\tau\, \frac{1}{\lambda}\sum_{j=1}^{\lambda} f(\alpha, Y^j_\alpha(\tau))\Big)\,.$$
Hence we have
$$\mathbb E_x \exp\big(\varepsilon^{-1}\varphi(x^\varepsilon_1)\big) = \mathbb E_x \exp\big((\varepsilon\lambda)^{-1}\lambda\varphi(X^\varepsilon_0(\Delta t))\big) \asymp \exp\big((\varepsilon\lambda)^{-1}\hat S^{(\alpha)}_{\Delta t}(\lambda\varphi)(x)\big|_{\alpha=x}\big)\,. \qquad (6.8)$$
But since the $Y^j_\alpha$ are iid for distinct $j$, the Hamiltonian $\hat H$ reduces to
$$\lim_{T\to\infty} T^{-1}\log \mathbb E \exp\Big(\theta\cdot\int_0^T d\tau\, \frac{1}{\lambda}\sum_{j=1}^{\lambda} f(\alpha, Y^j_\alpha(\tau))\Big) = \lambda \lim_{T\to\infty} T^{-1}\log \mathbb E \exp\Big(\frac{\theta}{\lambda}\cdot\int_0^T d\tau\, f(\alpha, Y^1_\alpha(\tau))\Big) = \lambda H\big(\alpha, \tfrac{\theta}{\lambda}\big)\,.$$
It follows that $\partial_t\big(\lambda^{-1}\hat S^{(\alpha)}_t(\lambda\varphi)\big) = \lambda^{-1}\hat H\big(\alpha, \nabla(\hat S^{(\alpha)}_t(\lambda\varphi))\big) = H\big(\alpha, \lambda^{-1}\nabla(\hat S^{(\alpha)}_t(\lambda\varphi))\big)$, and hence $\lambda^{-1}\hat S^{(\alpha)}_{\Delta t}(\lambda\varphi) = S^{(\alpha)}_{\Delta t}\varphi$. Combining this with (6.8) completes the claim for $n = 1$. The proof of the inductive step for arbitrary $n \ge 1$ follows identically to Section 4.3.

Numerical evidence

In this section, we investigate the performance of the standard HMM and PHMM methods for systems with well understood fluctuations and metastability properties. These simple experiments confirm that HMM amplifies fluctuations, which can drastically change the system's metastable behavior, and that PHMM succeeds in avoiding these problems. In Section 7.1 we investigate simple CLT fluctuations for a quadratic potential system, and in Section 7.2 we look at large deviation fluctuations for a quartic double-well potential.
Finally, in Section 7.3 we look at fluctuations for a non-diffusive double-well potential, which has large deviation properties that cannot be captured by a so-called 'small noise' diffusion.

Small fluctuations

We examine the small CLT-type fluctuations by looking at the following fast-slow system
$$\frac{dX}{dt} = Y - X\,, \qquad dY = \frac{\theta}{\varepsilon}(\mu X - Y)\,dt + \frac{\sigma}{\sqrt{\varepsilon}}\,dW\,.$$
It is simple to check that the averaged system is given by $d\bar X/dt = (\mu - 1)\bar X$. Hence for $\mu < 1$ the averaged system is a gradient flow in a quadratic potential centered at the origin. We will first illustrate that the HMM-type method described in Section 3 inflates the $O(\sqrt{\varepsilon})$ fluctuations about the average by a factor of $\sqrt{\lambda}$. In Figure 1 we plot histograms of the slow variable $X$ for different speed-up factors $\lambda$. It is clear that the spread of the invariant distribution increases with $\lambda$: the profile remains Gaussian but the variance is greatly inflated. In Figure 2 we plot the variance of the stationary time series for $X$ as a function of $\lambda$. The blue line is computed using HMM and the red line is computed using PHMM. As predicted by the theory in Section 4.2, in the case of HMM the variance increases linearly with $\lambda$, while in the case of PHMM the variance is approximately constant. Note that in this example, the CLT captures the large deviations as well.

Large fluctuations

To investigate the effect of parallelization on $O(1)$ deviations not captured by the CLT, we will look at a fast-slow system which exhibits metastability. Hence it is natural to take
$$\frac{dX}{dt} = Y - X^3 \qquad (7.1)$$
$$dY = \frac{\theta}{\varepsilon}(\mu X - Y)\,dt + \frac{\sigma}{\sqrt{\varepsilon}}\,dW\,.$$
It is simple to check that the averaged system is $d\bar X/dt = \mu\bar X - \bar X^3$. Hence for any $\mu > 0$ the averaged system is a gradient flow in a symmetric double-well potential, with stable equilibria at $\pm\sqrt{\mu}$ and a saddle point at the origin. The large fluctuations of the fast-slow system can be investigated by looking at the first passage time for transitions from a neighborhood of one stable equilibrium to the other.
In Figure 3 we compare the mean first passage time for HMM and PHMM as a function of $\lambda$. Even for $\lambda = 2$, the distinction between the two methods is vast, with the mean first passage time for HMM rapidly dropping off and that for PHMM staying approximately constant. In Figure 4 we compare the stationary distributions of the true fast-slow system, HMM ($\lambda = 5$), and PHMM ($\lambda = 5$). In the case of HMM, the energy barrier separating the two metastable states is now overpopulated, which explains the rapid fall in mean first passage time. In the case of PHMM, the histogram is indistinguishable from the true stationary distribution (with the exception of a slight asymmetry). In Figure 5 we plot the cumulative distribution function (CDF) of the first passage time, comparing that of the true fast-slow system with HMM ($\lambda = 5$) and PHMM ($\lambda = 5$). We see that the HMM first passage times are supported on a much faster time scale than that of the true fast-slow system. In contrast, the CDF of PHMM is almost indistinguishable from that of the true fast-slow system. Hence PHMM is not just replicating the mean first passage time, but also the entire distribution of first passage times.

Asymmetric, non-diffusive fluctuations

We now compare HMM and PHMM for a multiscale model that also displays metastability, but in which the large fluctuations cannot be characterized by a 'small noise' Ito diffusion. In particular, the Hamiltonian describing the LDP of the system is non-quadratic, as opposed to the previous systems. The system has been used [BGTVE15] to illustrate the ineffectiveness of diffusion-type approximations for fast-slow systems. The fast-slow system is given by
$$\frac{dX}{dt} = Y^2 - \nu X \qquad (7.2)$$
$$dY = -\frac{1}{\varepsilon}\gamma(X)Y\,dt + \frac{\sigma}{\sqrt{\varepsilon}}\,dW\,,$$
where $\gamma(x) = x^4/10 - x^2 + 3$. The averaged equation for this system reads
$$\frac{d\bar X}{dt} = \frac{\sigma^2}{2\gamma(\bar X)} - \nu\bar X\,.$$
For $\nu = 1$ and $\sigma = \sqrt{3}$, this averaged equation possesses two stable fixed points, at $x \approx 0.555$ and $x \approx 2.459$, and one unstable fixed point between them. The rates of transition between the stable fixed points are captured by the LDP. By an elementary calculation [BGTVE15], the Hamiltonian of this LDP is found to be non-quadratic and given by
$$H(x, \theta) = -\nu x\theta + \frac12\Big(\gamma(x) - \sqrt{\gamma^2(x) - 2\sigma^2\theta}\Big)\,.$$
The quasi-potential associated with this Hamiltonian satisfies $0 = H(x, V')$, i.e.
$$V'(x) = \frac{\nu x\gamma(x) - \frac12\sigma^2}{\nu^2 x^2}\,,$$
and is displayed in Figure 6. Whilst there is a significant barrier corresponding to left-to-right transitions, there is almost no barrier corresponding to right-to-left transitions. In Figure 7 we plot CDFs of the first passage times; due to the asymmetry we plot separately the transitions from left to right and from right to left. For left-to-right transitions, the HMM procedure drastically speeds up transitions because it enhances fluctuations: as in the previous experiment, the HMM transitions are supported on a timescale several orders of magnitude faster than those of the true fast-slow system. The PHMM method does not experience this problem, and the distribution of first passage times agrees quite well with the true model. For right-to-left transitions, PHMM shows similarly good agreement with the true fast-slow system, but in contrast HMM is not too far off either. This can be accounted for by the 'flatness' of the right potential well, meaning that increasing the amplitude of fluctuations will only decrease the escape time by a linear multiplicative factor. We note that the noise appearing in the CDF plots is due to the scarcity of transitions occurring in the model (7.2).

Discussion

We have investigated HMM methods for fast-slow systems, in particular their ability (or lack thereof) to capture fluctuations, both small (CLT) and large (LDP). We found, both theoretically (Section 4) and numerically (Section 7), that the amplitude of fluctuations is enhanced by an HMM-type method.
In particular with an HMM speed up factor λ, in the CLT the variance of Gaussian fluctuations about the average is increased by a factor λ as well. In the LDP, the quasi-potential is decreased by a factor λ, leading to the first passage times being supported on a time scale λ orders of magnitude smaller than in the true fast slow system. This inability to correctly capture fluctuations about the average suggests that HMM can be a poor approximation of fast-slow systems, particularly when metastable behavior is important. As noted in Section 4.3, although the fluctuations of HMM are enhanced, the large deviation transition pathways remain faithful to the true model. Thus we stress that HMM is a reliable method of finding transition pathways in metastable systems, but not for simulating their dynamics. We have introduced a simple modification of HMM, called parallel HMM (PHMM), which avoids these fluctuation issues. In particular, the PHMM method yields fluctuations that are consistent with the true fast slow system for any speed up factor λ (provided that we still have ελ ≪ 1), as was shown both theoretically (Section 6) and numerically (Section 7). The HMM method relies on computing one short burst of the fast variables, and inferring the statistical behavior of the fast-variables by extrapolating this short burst over a large time window. PHMM on the other hand computes an ensemble of λ short bursts, and infers the statistics of the fast variables using the ensemble. Since the ensemble members are independent, they can be computed in parallel. Hence if one has λ CPUs available, then the real computational time required in PHMM is identical to that in HMM. Interestingly, one can draw connections between the parallel method introduced here and the tau-leaping method used in stochastic chemical kinetics [Gil00]. 
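The ensemble construction just described can be made concrete with a small sketch (a schematic illustration of the idea, not the authors' code; the fast drift $\theta(\mu x - y)$ and observable $f(x,y) = y - x^3$ are borrowed from the example systems of Section 7, and all parameter values here are our own choices). HMM estimates the averaged force from one short burst of the fast process; PHMM averages the same estimate over $\lambda$ independent bursts, which is what suppresses the spurious extra noise by the factor $\lambda^{-1/2}$.

```python
import numpy as np

def fast_burst(x, y0, n_steps, dt, theta=1.0, mu=1.0, sigma=5.0, rng=None):
    """One short burst of the fast process dY = theta(mu*x - Y) dt' + sigma dW'
    at frozen slow value x (fast time already rescaled by eps)."""
    rng = rng or np.random.default_rng()
    y = np.empty(n_steps)
    yc = y0
    for i in range(n_steps):
        yc += theta * (mu * x - yc) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y[i] = yc
    return y

def hmm_force(x, y0, n_steps, dt, lam, rng):
    # HMM: a single burst, its time average then extrapolated over the whole macro step.
    y = fast_burst(x, y0, n_steps, dt, rng=rng)
    return np.mean(y - x**3)

def phmm_force(x, y0, n_steps, dt, lam, rng):
    # PHMM: lam independent bursts (embarrassingly parallel); average f over the ensemble.
    bursts = [fast_burst(x, y0, n_steps, dt, rng=rng) for _ in range(lam)]
    return np.mean([np.mean(y - x**3) for y in bursts])

rng = np.random.default_rng(1)
f_hmm = hmm_force(0.5, 0.0, 200, 0.01, 5, rng)
f_phmm = phmm_force(0.5, 0.0, 200, 0.01, 5, rng)
```

Both estimators have the same mean; the point of the analysis above is that the PHMM estimator carries roughly $\lambda$ times less variance, which is exactly the reduction needed for the CLT and LDP of the macro-scheme to come out right.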
The tau-leaping method is an approximation used to speed up simulation of stochastic fast-slow systems of the type
$$X^\varepsilon(t) = X^\varepsilon(0) + \sum_{k=1}^{m} \varepsilon\, N_k\Big(\varepsilon^{-1}\int_0^t a_k(X^\varepsilon(s))\,ds\Big)\,\nu_k\,, \qquad (8.1)$$
where the $N_k$ are independent unit rate Poisson processes, the $\nu_k$ are vectors in $\mathbb R^d$, and $a_k : \mathbb R^d \to \mathbb R$. The system (8.1) can be solved exactly by the stochastic simulation algorithm (SSA), but when $\varepsilon$ is small this can be extremely expensive, due to the Poisson clocks being reset each time a jump occurs. The tau-leaping procedure avoids this issue by chopping the simulation window into sub-intervals of size $\tau$ and, on each subinterval, fixing the Poisson clocks to their value at the left endpoint. The speed-up is a result of the fact that one can simulate the Poisson jumps in parallel, since their clocks are fixed over the $\tau$ interval. As a consequence of this analogy, one can check (using calculations similar to those found above) that the tau-leaping method also captures the fluctuations correctly, both at the level of the CLT and that of the LDP. The former observation was made in [AGK11]; to the best of our knowledge, the second one is new.

As a final note, we stress that there are non-dissipative fast-slow systems for which the PHMM will not be effective at capturing their long time scale behavior, including metastability. These are systems for which the CLT and LDP hold on $O(1)$ timescales, but either cannot be extended to longer time scales (in the case of the CLT) or lead to trivial predictions on these time scales (in the case of the LDP). To clarify this point, take for example the fast-slow Langevin system
$$\dot q_1 = p_1\,, \qquad \dot p_1 = q_1 - q_1^3 + (q_2 - q_1)\,, \qquad (8.2)$$
$$\dot q_2 = \varepsilon^{-1} p_2\,, \qquad \dot p_2 = \varepsilon^{-1}(q_1 - q_2) - \varepsilon^{-1}\gamma p_2 + \sqrt{2\varepsilon^{-1}\beta^{-1}\gamma}\,\eta\,,$$
where $\gamma > 0$ and $\beta > 0$ are parameters. For any value of $\varepsilon$ and $\gamma$, this system is invariant with respect to the Gibbs measure with Hamiltonian
$$H(q_1, q_2, p_1, p_2) = \frac12 p_1^2 + \frac12 p_2^2 + \frac14 q_1^4 - \frac12 q_1^2 + \frac12 (q_1 - q_2)^2\,.$$
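Returning to the tau-leaping construction above: its frozen-clock idea can be sketched for a toy birth-death chain (an illustrative example of our own, not one taken from the text). Over each interval of length $\tau$ the propensities $a_k$ are frozen at the left endpoint, so the jump counts become independent Poisson draws, exactly the structure exploited by (8.1).

```python
import numpy as np

def tau_leap(x0, birth, death, tau, n_steps, seed=0):
    """Tau-leaping for a birth-death chain with propensities a_1 = birth (nu_1 = +1)
    and a_2 = death*x (nu_2 = -1). Freezing the clocks over each tau-interval turns
    the jump counts into independent Poisson draws, which could be sampled in parallel."""
    rng = np.random.default_rng(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        n_birth = rng.poisson(birth * tau)
        n_death = rng.poisson(death * x * tau)
        x = max(x + n_birth - n_death, 0)  # crude guard against the negativity tau-leaping can produce
        path.append(x)
    return np.array(path)

path = tau_leap(x0=0, birth=2.0, death=0.1, tau=0.1, n_steps=2000)
```

For this chain the equilibrium mean is birth/death = 20, and the leaped trajectory fluctuates around that level; the `max(..., 0)` guard is a standard crude fix and not part of the method as stated in (8.1).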
As $\varepsilon \to 0$, it is easy to check that the slow variables $(q_1, p_1)$ converge to the averaged system
$$\dot{\bar q}_1 = \bar p_1\,, \qquad \dot{\bar p}_1 = -G'(\bar q_1)\,, \qquad (8.3)$$
where the averaged vector field is the gradient of the free energy
$$G(q_1) = \frac14 q_1^4 - \frac12 q_1^2 + \mathrm{const} = -\beta^{-1}\log\int \exp\big(-\beta U(q_1, q_2)\big)\,dq_2\,,$$
with $U(q_1, q_2) = \frac14 q_1^4 - \frac12 q_1^2 + \frac12(q_1 - q_2)^2$. Likewise, if we introduce $\eta_1 = (q_1 - \bar q_1)/\sqrt{\varepsilon}$ and $\zeta_1 = (p_1 - \bar p_1)/\sqrt{\varepsilon}$, the CLT indicates that the evolution of these variables is captured by
$$\dot\eta_1 = \zeta_1\,, \qquad d\zeta_1 = \sqrt{2\beta^{-1}\gamma}\,dB\,, \qquad (8.4)$$
and we can also derive an LDP for (8.2) with action
$$S_{[0,T]}(q_1) = \frac{\beta}{4\gamma}\int_0^T \big|\ddot q_1 - q_1 + q_1^3\big|^2\,dt\,. \qquad (8.5)$$
However, neither (8.4) nor (8.5) captures the long time behavior of the solution to (8.2). The problem stems from the fact that the averaged equation in (8.3) is Hamiltonian, hence non-dissipative. As a result, fluctuations accumulate as time goes on. Eventually, the CLT stops being valid, and the LDP becomes trivial; in particular, it is easy to see that the quasi-potential associated with the action in (8.5) is flat. For examples of this type, other techniques will have to be employed to describe their long time behavior including, possibly, their metastability (which, in the case of (8.2), is controlled by how small $\beta^{-1}$ is, rather than by $\varepsilon$). These questions will be investigated elsewhere.

Remark 4.1. Regarding the operation of taking the log-asymptotic result inside the expectation, one can find such calculations done rigorously in (for instance) [FW12, Lemma 4.3].

($S^{(\alpha)}_t$ is the semigroup associated with the Hamilton-Jacobi equation $\partial_t u(t,x) = H(\alpha, \nabla u(t,x))$. (6.6))

Figure 1: Histogram of $X$ variables. Parameters used are $\varepsilon = 10^{-2}$, $\delta t = 0.1$, $\theta = 1$, $\mu = 0.5$, $\sigma = 5$, $T = 10^4$.

Figure 2: Comparing the stationary variance of HMM and PHMM as a function of $\lambda$. Once again, we use parameters $\varepsilon = 10^{-2}$, $\delta t = 0.1$, $\theta = 1$, $\mu = 0.5$, $\sigma = 5$, $T = 10^4$.
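The non-dissipative character of the averaged system (8.3) can be seen concretely by integrating it and watching the energy stay put (a small illustrative check of our own; the symplectic Euler scheme and the initial condition are our choices, and $G'(q) = q^3 - q$ is the double-well free-energy gradient of (8.3) up to an additive constant).

```python
import numpy as np

def averaged_orbit(q0=1.5, p0=0.0, dt=1e-3, n_steps=10000):
    """Symplectic Euler for qdot = p, pdot = -G'(q) with G'(q) = q^3 - q."""
    q, p = q0, p0
    qs = np.empty(n_steps + 1)
    ps = np.empty(n_steps + 1)
    qs[0], ps[0] = q, p
    for i in range(n_steps):
        p -= (q**3 - q) * dt   # kick with the double-well force
        q += p * dt            # drift
        qs[i + 1], ps[i + 1] = q, p
    return qs, ps

def energy(q, p):
    # E = p^2/2 + G(q), dropping the additive constant in G
    return 0.5 * p**2 + 0.25 * q**4 - 0.5 * q**2

qs, ps = averaged_orbit()
```

Since the energy is (nearly) conserved, no averaged-equation trajectory ever crosses the barrier on its own; transitions in (8.2) are driven entirely by the accumulated fluctuations, which is why the $O(1)$-time CLT and LDP are silent about them.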
Figure 3: The mean first passage time as a function of the speed-up factor $\lambda$, for HMM (red dotted) and PHMM (blue dotted). We include the LDP-predicted curve for the mean first passage time of HMM, as discussed in Section 4.3. $\varepsilon = 10^{-3}$, $\delta t = 0.05$, $\theta = 1$, $\mu = 1$, $\sigma = 15$, $T = 5\times 10^4$.

Figure 4: Histogram of $X$ variables. $\varepsilon = 10^{-3}$, $\delta t = 0.05$, $\theta = 1$, $\mu = 1$, $\sigma = 15$, $T = 5\times 10^4$.

Figure 5: Cumulative distribution functions for first passage times of the true model (red) for (7.1), HMM with $\lambda = 5$ (green) and PHMM with $\lambda = 5$ (blue). The parameters used are $\varepsilon = 10^{-3}$, $\delta t = 0.05$, $\theta = 1$, $\mu = 1$, $\sigma = 15$, $T = 5$

Figure 6: The quasi-potential $V(x)$ (red curve) and the one obtained from a quadratic approximation of the Hamiltonian (orange curve). Also shown in blue is the coefficient at the right-hand side of the reduced equation.

Figure 7: Cumulative distribution functions for first passage times of the true model for (7.2) (red), HMM with $\lambda = 5$ (green) and PHMM with $\lambda = 5$ (blue). Left-to-right transitions on the left, right-to-left transitions on the right. The parameters used are $\varepsilon = 0.05$, $\delta t = 0.5$, $\nu = 1$, $\sigma = \sqrt{3}$, $T = 1\times 10^7$.

$$\partial_t\big(\lambda^{-1}\hat S^{(\alpha)}_t(\lambda\varphi)\big) = \lambda^{-1}\hat H\big(\alpha, \nabla(\hat S^{(\alpha)}_t(\lambda\varphi))\big) = H\big(\alpha, \lambda^{-1}\nabla(\hat S^{(\alpha)}_t(\lambda\varphi))\big)$$

References

A. Abdulle, W. E, B. Engquist, and E. Vanden-Eijnden. The heterogeneous multiscale method. Acta Numerica 21 (2012), 1-87.
[AEK+13] G. Ariel, B. Engquist, S. Kim, Y. Lee, and R. Tsai. A multiscale method for highly oscillatory dynamical systems using a Poincaré map type technique. Journal of Scientific Computing 54, no. 2-3 (2013), 247-268.
[AGK11] D. F. Anderson, A. Ganguly, and T. G. Kurtz. Error analysis of tau-leap simulation methods. The Annals of Applied Probability 21, no. 6 (2011), 2226-2262.
G. Ariel, J. Sanz-Serna, and R. Tsai. A multiscale technique for finding slow manifolds of stiff mechanical systems. Multiscale Modeling & Simulation 10, no. 4 (2012), 1180-1203.
[BGTVE15] F. Bouchet, T. Grafke, T. Tangarife, and E. Vanden-Eijnden. Large deviations in fast-slow systems. Preprint (2015).
G. Bal and W. Jing. Corrector theory for MsFEM and HMM in random media. Multiscale Model. Simul. 9 (2011).
G. Bal and W. Jing. Corrector analysis of a heterogeneous multi-scale scheme for elliptic equations with random potential. M2AN 48, no. 2 (2014).
A. Chorin. A numerical method for solving incompressible viscous flow problems. J. Comp. Phys. 2 (1967), 12-26.
R. Car and M. Parrinello. Unified approach for molecular dynamics and density functional theory. Phys. Rev. Lett. 55, no. 22 (1985), 2471-2475.
D. Dolgopyat. Limit theorems for partially hyperbolic systems. Transactions of the American Mathematical Society 356, no. 4 (2004), 1637-1689.
A. Dembo and O. Zeitouni. Large deviations techniques and applications, vol. 38. Springer, 2009.
W. E and B. Engquist. The heterogeneous multiscale methods. Commun. Math. Sci. 1, no. 1 (2003), 87-132.
[EEL+07] W. E, B. Engquist, X. Li, W. Ren, and E. Vanden-Eijnden. Heterogeneous multiscale methods: a review. Commun. Comput. Phys. 2, no. 3 (2007), 367-450.
W. E, D. Liu, and E. Vanden-Eijnden. Analysis of multiscale methods for stochastic differential equations. Comm. Pure Appl. Math. 58, no. 11 (2005), 1544-1585.
W. E, W. Ren, and E. Vanden-Eijnden. A general strategy for designing seamless multiscale methods. Journal of Computational Physics 228, no. 15 (2009), 5437-5453.
I. Fatkullin and E. Vanden-Eijnden. A computational strategy for multiscale systems with applications to Lorenz 96 model. J. Comput. Phys. 200, no. 2 (2004), 605-638.
[FW12] M. I. Freidlin and A. D. Wentzell. Random perturbations of dynamical systems, vol. 260. Springer, 2012.
[Gil00] D. T. Gillespie. Approximate accelerated stochastic simulation of chemically reacting systems. Journal of Chemical Physics 115, no. 4 (2000).
Y. Kifer. Averaging in dynamical systems and large deviations. Inventiones Mathematicae 110, no. 1 (1992), 337-370.
D. Kelly and I. Melbourne. Deterministic homogenization of fast-slow systems with chaotic noise. arXiv.
D. Kelly and I. Melbourne. Smooth approximations of stochastic differential equations. To appear in Annals of Probability.
J. C. Mattingly, A. M. Stuart, and D. J. Higham. Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stochastic Process. Appl. 101, no. 2 (2002), 185-232.
E. Vanden-Eijnden. Numerical techniques for multi-scale dynamical systems with stochastic effects. Commun. Math. Sci. 1, no. 2 (2003), 385-391.
E. Vanden-Eijnden. On HMM-like integrators and projective integration methods for systems with multiple time scales. Communications in Mathematical Sciences 5, no. 2 (2007), 495-505.
Two-dimensional percolation with multiple seeds

Hongting Yang (School of Science, Wuhan University of Technology, Wuhan 430070, P.R. China)
Stephan Haas (Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-0484)

arXiv:1409.5507 (PDF: https://arxiv.org/pdf/1409.5507v3.pdf)

Abstract: We study non-uniform percolation in a two-dimensional cluster growth model with multiple seeds. With increasing concentration of seeds, the percolation threshold is found to increase monotonically, while the exponents for the correlation length, the order parameter, and the average cluster size keep invariant. The scaling law for an infinite square lattice keeps working for any nonzero concentration of seeds. Abnormal finite-size scaling behaviours happen at low concentration of seeds.
arXiv:1409.5507v3 [cond-mat.dis-nn], 7 Oct 2014
PACS numbers: 64.60.ah, 05.10.Ln, 05.70.Jk

I. INTRODUCTION

In an ordinary two-dimensional percolation model, the sites or bonds of a lattice are usually distributed uniformly, whether the lattice is square, triangular, diamond, or of another form [1]. However, in nature, non-uniform distributions may be more common than uniform ones. For example, the electron density describes the non-uniform spatial distribution of an electron in materials [2], and cancer begins in particular tissues or organs of the body, not in the whole body. In view of the wide influence of percolation theory [3-9], it is essential to study non-uniform percolation models and their percolation properties. In the past decades, an important development towards non-uniform percolation has been the study of correlated percolation, examples of which are bootstrap percolation [10-13], jamming percolation [14-17], and directed percolation [18-25]. Our model is certainly some kind of correlated percolation model. However, it originates from a completely different idea.
In our model, clusters start to grow from a number of preoccupied sites. Our model is at first a cluster growth model. Except in one special case, the model displays properties similar to those of the ordinary, non-restricted percolation model; the only difference lies in the specific values of the percolation thresholds. The critical properties of the model are obtained in the same way as in an ordinary percolation model. For convenience, we summarize the formulae here; a detailed description of the method can be found elsewhere [1].

An interesting quantity of a percolation model is the correlation length, defined as
$$\xi^2 = \frac{2\sum_s R_s^2\, s^2 n_s}{\sum_s s^2 n_s}\,, \qquad (1)$$
where $n_s$ is the average number of $s$-clusters per lattice site and $2R_s^2 = \sum_{ij} |r_i - r_j|^2 / s^2$ is the average squared distance between two cluster sites. It is expected to behave as
$$\xi(p) \sim |p - p_c|^{-\nu}\,. \qquad (2)$$
In an infinite system, the wrapping probability $\Pi$ satisfies $\Pi = 1$ above and $\Pi = 0$ below $p_c$. As for percolation transitions, the value of $p_c$ alone is not enough; we have to introduce a number of other observables. On a square lattice with periodic boundary conditions, the quantity $d\Pi/dp$ gives the probability per interval $dp$, at concentration $p$, that a wrapping cluster appears for the first time. The average concentration $p_{av}$ at which a wrapping cluster appears for the first time is defined as
$$p_{av} = \int p\, \frac{d\Pi}{dp}\, dp\,. \qquad (3)$$
If we define the width $\Delta$ of the transition region as
$$\Delta^2 = \int (p - p_{av})^2\, \frac{d\Pi}{dp}\, dp\,, \qquad (4)$$
then $\Delta$ can be related to $p_{av}$ via
$$p_{av} - p_c \propto \Delta\,. \qquad (5)$$
In this way, one can first obtain the value of $p_c$ by fitting the observed thresholds $p_{av}$ against the observed widths $\Delta$, without prior knowledge of the correlation length exponent $\nu$. With the value of $p_c$ at hand, one can further obtain the value of $\nu$ through
$$|p_{av} - p_c| \propto L^{-1/\nu}\,. \qquad (6)$$
If an observable $X$ is predicted to scale as $|p - p_c|^{-\lambda}$ in an infinite lattice, then we expect it to obey the general scaling law
$$X(L, p) = (p - p_c)^{-\lambda}\, \tilde X\big((p - p_c)L^{1/\nu}\big)\,, \qquad (7)$$
where $\tilde X$ is a scale-independent function.
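The threshold estimators in Eqs. (3)-(4) reduce to simple quadrature once $\Pi(p, L)$ has been measured on a grid of concentrations. The sketch below illustrates this with a synthetic, logistic-shaped $\Pi$ standing in for simulation data (the sigmoid center 0.53 and width 0.01 are arbitrary stand-in values, not the paper's measurements).

```python
import numpy as np

def threshold_stats(p, Pi):
    """Estimate p_av (Eq. 3) and Delta (Eq. 4) from wrapping probabilities Pi
    measured on a uniform grid p, using a finite-difference dPi/dp."""
    dp = p[1] - p[0]
    dPi = np.gradient(Pi, dp)          # numerical dPi/dp
    p_av = float(np.sum(p * dPi) * dp)
    delta = float(np.sqrt(np.sum((p - p_av) ** 2 * dPi) * dp))
    return p_av, delta

# Synthetic stand-in for measured data: a sharp sigmoid centered at p = 0.53.
p = np.linspace(0.0, 1.0, 2001)
Pi = 1.0 / (1.0 + np.exp(-(p - 0.53) / 0.01))
p_av, delta = threshold_stats(p, Pi)
```

Repeating this for several lattice sizes $L$ and fitting $p_{av}$ against $\Delta$, as in Eq. (5), then yields the infinite-lattice $p_c$ without knowing $\nu$ in advance.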
Other two interesting observables are the order parameter, P ∞ , the probability of an occupied site belongs to the infinite cluster, and the average cluster size, S. At p = p c , they behave respectively as P ∞ (L, p c ) ∝ L −β/ν ,(8)S(L, p c ) ∝ L γ/ν .(9) Here, the value of P ∞ (or S) at p = p c is estimated by linear interpolation between two values of P ∞ (or S) right above and below p c . The exponents for two-dimensional lattice are expected to obey the well-known scaling law γ + 2β = 2ν.(10) These exponents will be calculated in our model. II. MODEL Our model can be easily built up by first occupying a number of sites randomly chosen from a two-dimensional square lattice with N = L 2 sites, then one by one, occupy the empty sites being neighbour to the previous occupied sites. After occupying a qualified empty site (being neighbour to at least one of the previous occupied sites), the list of qualified empty sites is refreshed by adding the new empty sites being neighbour to the site that was occupied just now. Groups of neighbour sites form clusters. Thus clusters grow from the multiple seeds, the sites occupied at the very beginning. Since the later occupied sites in our model are surrounding the previously occupied sites, there could be some differences between our model and other models. We use η to denote the concentration of seeds. For the Eden model [26,27], η = 1/N, which approaches zero with increasing L. When η ≥p c , withp c the percolation threshold of the usual two-dimensional site percolation [28][29][30], the critical region of percolation transition will be covered by the occupying process of the sites chosen as seeds, the following cluster growth process is therefore less meaningful. Especially, η = 1 is the usual percolation model. So, it is of interest only in the region with 0 < η <p c . We have calculated the thresholds p av for many sets of η and L. insensitive to the value of η as it should be. 
This peak value is larger than that in the freeboundary lattice with the same L, because touching-boundary clusters could become bigger by including the sites across the periodic boundaries, and a spanning cluster not counted in calculating ξ should be summed if it does not form a wrapping cluster when the freeboundary condition is switched to the periodic-boundary condition [30]. It is worth of noting that, the peak value of ξ appears in different point, one is around 0.54, the other is around 0.58, which implies that percolation thresholds for models with different concentration of seeds are different. Obviously, the p c values decrease with decreasing η values. In other words, if the number of seeds for cluster growing is smaller, the percolation phase transition will happen earlier. This result seems a little bit strange but is understandable. In our model, the following occupied sites are gathered to the clusters centered with these seeds occupied at the very beginning. Given more seeds, which means there are more clusters growing from these seeds to randomly distribute all the occupied sites, thus the largest cluster in this case will be smaller, and the wrapping cluster will be certainly delayed to appear. Data fitting gives of exponents in an ordinary two-dimensional site percolation model. The ratio between exponents (2β + γ)/ν is therefore a constant, and it obeys the scaling law Eq. (10) as it does in an usual site percolation model. p c = a + be IV. CONCLUSION Given a number of seeds (except only one seed) on any lattice, percolation phase transition will inevitably happen while clusters centered with these seeds grow up. Except the universal scaling law and various scaling exponents, the percolation threshold is variable Here, p c = 0.530298 for the model η = 0.1. with non-uniform distribution of sites characterized by the concentration of seeds. 
Percolation properties of a lattice depend on the non-uniform distribution of sites or bonds centered with multiple seeds, as well as its structures. Any details (uniform or non-uniform structures) of geometrical distribution of sites or bonds may contribute to the geometrical phase transition of a lattice. In general, smaller number of seeds for clusters growing implies the earlier occurrence of percolation transition. It is expected that there exists a critical value of the concentration of seeds, only below which, the abnormal finite-size scaling behaviours could happen. The idea of multiple seeds can be extended to other correlated percolation models. This work make it possible to push the application of percolation theory to wider fields, where percolation thresholds are expected to vary with non-uniform population of sites or bonds while scaling exponents keep invariant. Upon finishing this work, we became aware of a similar cluster model with initial seed concentration ρ and an additional parameter called growth probability g reported by Roy and Santra recently [31]. However, our model (corresponding to their g = 1) has not been discussed there. Except the difference in the model itself, the results of their model, including the values of percolation thresholds around 0.593 for the selected seed concentration 0.05, 0.25, and 0.50, and all the exponents are nearly the same for an ordinary percolation model. FIG. 1 :FIG. 2 : 12For moderate value of η, the values of p av decrease with decreasing L as what could be found in a general percolation model. However, an abnormal phenomenon could be observed for small values of η. The values of p av first decrease with decreasing L, after passing some critical lattice dimension L a , the p av values increase abnormally with decreasing L. Obviously, the scaling behaviours of p av above and below L a are different. 
To get any convergent observable of an infinite system, one has to choose lattices with L greater than the critical lattice dimension L a . If L a is too large to meet the requirement of a computer memory, then no critical information of an infinite system can be obtained. The abnormal change of p av happens at L a = 32, 64, 128 for η = 0.1, 0.05, 0.025 respectively. As an example, the circumstance of η = 0.05 is shown in FIG. 1. Simple calculation gives L a ∝ η −1 . Clearly, L a → ∞ when η → 0. The Eden model (η → 0) is purely a cluster growth model, which is inappropriate to be regarded as a percolation model. Along another trend of changing η, there should be a critical value of η, only below which, the abnormal scaling behaviour of p av happens. This value of η for a finite lattice could be figured out by changing the number of seeds one by one. This is clearly not a easy work. Our model, except the case η = 0, is some kind of constrained The average threshold p av for η = 0.05 changes with the linear dimension L. The abnormal change of p av happens at L a = 64 for the selected data points. percolation model. III. RESULTS On a lattice with N sites, there are Nη seeds, from which clusters start to grow until the lattice is fully occupied in each run or configuration. Relative to other observables, the computation of ξ needs rather longer CPU time. Given one value of η on the lattice with L = 128, a common desktop PC with CPU clock speed 2.6 GHz should keep running for about 14 hours to output the values of ξ(p) averaged on 20000 runs, and the corresponding p-dependence of ξ(p) for η = 0.2 and η = 0.5 is shown in FIG. 2. The peak value of ξ is The p-dependence of ξ for the periodic square lattice with L = 128. (a) is for η = 0.2 and (b) for η = 0.5. For each set of parameters L and η, we have calculated p av , ∆, P ∞ , and S. The number of runs is in the range of 1.3 × 10 5 to 2.3 × 10 8 , and the corresponding computation time is 13-16.5 hours. In FIG. 
3, at the concentration of seeds η = 0.1, the average threshold p_av versus the width of the transition region ∆ is given for L = 512, 256, and 128, respectively. Linear fitting gives the percolation threshold of an infinite system, p_c = 0.530298(22), for the given η. The datum for L = 64 is not adopted in the linear fitting since it is close to the critical lattice dimension 32 for η = 0.1. For η = 0.2, we also choose the data with L = 512, 256, and 128 for the linear fitting. For η ≥ 0.2, we choose the data with L = 64, 128, and 256 in the linear fitting, while choosing the data with L = 256, 512, and 640 for η = 0.05. In the same way, the p_c values for the other η values (η = 0.5, 0.4, 0.3, 0.2, 0.05) are obtained and summarized in FIG. 4; the η-dependence is well fitted by p_c = a + b e^{cη}, with a = 0.59951(89), b = −0.11056(67), and c = −4.64(11). The minimum p_c = 0.4890(16) in the limit η → 0 is unreachable, since the model is not a percolation model in this case.

As an example, the exponent ν = 1.344(89) for η = 0.1 is extracted as shown in FIG. 5. According to their respective scaling relations, the values of β and γ for η = 0.1 can be obtained in a similar way. Finally, the values of the exponents ν, β, and γ for our selected concentrations of seeds from 0.05 to 0.5 are summarized in Table I. It can be seen that there are only minor differences in the value of each exponent for different η values; the exponents are in fact invariant, and they are respectively close to the corresponding values of ordinary percolation.

FIG. 3: The average threshold p_av versus the transition width ∆. The concentration of seeds is η = 0.1, and the data points are respectively for L = 128, 256, and 512.

FIG. 4: The dependence of the percolation thresholds p_c on the concentration of seeds η.

FIG. 5: The slope of the linear fit of ln |p_c − p_av| versus ln L^{-1} gives the reciprocal of ν.
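The extraction of ν sketched in FIG. 5 is an ordinary least-squares fit: |p_c − p_av| ∝ L^{−1/ν}, so the slope of ln|p_c − p_av| versus ln L^{−1} equals 1/ν. A minimal self-check on synthetic data (the numbers below are illustrative and only echo the scale of the measured values; they are not the paper's data):

```python
import math

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic check: if p_av(L) = p_c - a * L**(-1/nu), then the slope of
# ln|p_c - p_av| versus ln(1/L) is exactly 1/nu.
p_c, a, nu = 0.5303, 0.08, 4.0 / 3.0   # illustrative numbers only
Ls = [128, 256, 512]
p_av = [p_c - a * L ** (-1.0 / nu) for L in Ls]
xs = [math.log(1.0 / L) for L in Ls]
ys = [math.log(abs(p_c - p)) for p in p_av]
nu_fit = 1.0 / slope(xs, ys)
```

With real data the measured p_av carry statistical errors, so the recovered ν comes with the quoted uncertainties; here the synthetic points lie exactly on the scaling law.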
TABLE I: The values of the exponents ν, β, γ, and the ratio (2β + γ)/ν under different concentrations of seeds η.

η      ν          β          γ         (2β + γ)/ν
0.05   1.35(17)   0.142(18)  2.39(30)  1.98(50)
0.1    1.344(89)  0.138(11)  2.38(16)  1.98(27)
0.2    1.359(40)  0.141(7)   2.41(7)   1.98(12)
0.3    1.353(85)  0.139(10)  2.39(15)  1.97(25)
0.4    1.356(79)  0.140(10)  2.40(14)  1.98(23)
0.5    1.353(83)  0.140(10)  2.39(15)  1.97(24)

References

[1] D. Stauffer and A. Aharony, Introduction to Percolation Theory, 2nd ed. (Taylor & Francis, London, 1992).
[2] W. J. Hehre, A Guide to Molecular Mechanics and Quantum Chemical Calculations (Wavefunction, Inc., Irvine, California, 2003).
[3] M. Sahimi, Applications of Percolation Theory (Taylor & Francis, Bristol, MA, 1994).
[4] A. G. Hunt and R. Ewing, Percolation Theory for Flow in Porous Media, Lecture Notes in Physics 771, 2nd ed. (Springer, Berlin, 2009).
[5] D. Achlioptas, R. M. D'Souza and J. Spencer, Science 323, 1453 (2009).
[6] K. Lai, M. Nakamura, W. Kundhikanjana, M. Kawasaki, Y. Tokura, M. A. Kelly and Z.-X. Shen, Science 329, 190 (2010).
[7] S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley and S. Havlin, Nature 464, 1025 (2010).
[8] R. A. da Costa, S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes, Phys. Rev. Lett. 105, 255701 (2010).
[9] O. Riordan and L. Warnke, Science 333, 322 (2011).
[10] J. Adler, Physica A 171, 453 (1991).
[11] P. D. Gregorio, A. Lawlor, P. Bradley and K. A. Dawson, Phys. Rev. Lett. 93, 025501 (2004).
[12] J. Balogh, B. Bollobás, and R. Morris, Ann. Probab. 37, 1329 (2009).
[13] F. Sausset, C. Toninelli, G. Biroli, and G. Tarjus, J. Stat. Phys. 138, 411 (2010).
[14] C. Toninelli, G. Biroli, and D. S. Fisher, Phys. Rev. Lett. 96, 035702 (2006).
[15] C. Toninelli and G. Biroli, J. Stat. Phys. 130, 83 (2008).
[16] M. Jeng and J. M. Schwarz, J. Stat. Phys. 131, 575 (2008).
[17] A. Ghosh, E. Teomy and Y. Shokef, EPL 106, 16003 (2014).
[18] H. Hinrichsen, Adv. Phys. 49, 815 (2000).
[19] G. Ódor, Rev. Mod. Phys. 76, 663 (2004).
[20] H. Hinrichsen, Physica A 369, 1 (2006).
[21] M. Henkel, H. Hinrichsen, and S. Lübeck, Non-Equilibrium Phase Transitions, Vol. 1: Absorbing Phase Transitions (Springer, 2008).
[22] Z. Zhou, J. Yang, R. M. Ziff, and Y. Deng, Phys. Rev. E 86, 021102 (2012).
[23] A. Lipowski, A. L. Ferreira, and J. Wendykier, Phys. Rev. E 86, 041138 (2012).
[24] F. Landes, A. Rosso, and E. A. Jagla, Phys. Rev. E 86, 041150 (2012).
[25] J. Wang, Z. Zhou, Q. Liu, T. M. Garoni, and Y. Deng, Phys. Rev. E 88, 042102 (2012).
[26] M. Eden, in Proc. 4th Berkeley Symp. on Mathematical Statistics and Probability, Vol. 4, ed. J. Neyman (University of California Press, Berkeley, 1961), p. 223.
[27] R. Jullien and R. Botet, J. Phys. A 18, 2279 (1985).
[28] X. Feng, Y. Deng and H. W. J. Blöte, Phys. Rev. E 78, 031136 (2008).
[29] M. J. Lee, Phys. Rev. E 78, 031131 (2008).
[30] H. Yang, Phys. Rev. E 85, 042106 (2012).
[31] B. Roy and S. B. Santra, Croat. Chem. Acta 86, 495 (2013).
Title: Self-Tuning Deep Reinforcement Learning

Authors: Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh

arXiv: 2002.12928 (https://arxiv.org/pdf/2002.12928v2.pdf)

Abstract: Reinforcement learning (RL) algorithms often require expensive manual or automated hyperparameter searches in order to perform well on a new domain. This need is particularly acute in modern deep RL architectures, which often incorporate many modules and multiple loss functions. In this paper, we take a step towards addressing this issue by using metagradients (Xu et al., 2018) to tune these hyperparameters via differentiable cross-validation, whilst the agent interacts with and learns from the environment. We present the Self-Tuning Actor Critic (STAC), which uses this process to tune the hyperparameters of the usual loss function of the IMPALA actor-critic agent (Espeholt et al., 2018), to learn the hyperparameters that define auxiliary loss functions, and to balance trade-offs in off-policy learning by introducing and adapting the hyperparameters of a novel leaky V-trace operator. The method is simple to use, sample efficient, and does not require a significant increase in compute. Ablative studies show that the overall performance of STAC improves as we adapt more hyperparameters. When applied to 57 games on the Atari 2600 environment over 200 million frames, our algorithm improves the median human-normalized score of the baseline from 243% to 364%.
Self-Tuning Deep Reinforcement Learning

Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh

Introduction

Reinforcement Learning (RL) algorithms often have multiple hyperparameters that require careful tuning; this is especially true for modern deep RL architectures, which often incorporate many modules and many loss functions. Training a deep RL agent is thus typically performed in two nested optimization loops. In the inner training loop, we fix a set of hyperparameters and optimize the agent parameters with respect to these fixed hyperparameters.
In the outer (manual or automated) tuning loop, we search for good hyperparameters, evaluating them either in terms of their inner loop performance, or in terms of their performance on some special validation data. Inner loop training is typically differentiable and can be efficiently performed using backpropagation, while the optimization of the outer loop is typically performed via gradient-free optimization. Since only the hyperparameters (but not the agent policy itself) are transferred between outer loop iterations, we refer to this as a "multiple lifetime" approach. For example, random search for hyperparameters (Bergstra & Bengio, 2012) falls under this category, and population-based training (Jaderberg et al., 2017) also shares many of its properties. The cost of relying on multiple lifetimes to tune hyperparameters is often not accounted for when the performance of algorithms is reported. The impact of this is mild when users can rely on hyperparameters established in the literature, but the cost of hyperparameter tuning across multiple lifetimes manifests itself when algorithms are applied to new domains. This motivates a significant body of work on tuning hyperparameters online, within a single agent lifetime. Previous work in this area has often focused on solutions tailored to specific hyperparameters. For instance, Schaul et al. (2019) proposed a non-stationary bandit algorithm to adapt the exploration-exploitation trade-off. Mann et al. (2016) and White & White (2016) proposed algorithms to adapt λ (the eligibility trace coefficient). Rowland et al. (2019) introduced an α coefficient into V-trace to account for the variance-contraction trade-off in off-policy learning. In another line of work, metagradients were used to adapt the optimiser's parameters (Sutton, 1992; Snoek et al., 2012; Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2017; Young et al., 2018). In this paper we build on the metagradient approach to tune hyperparameters.
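The "multiple lifetime" outer loop can be caricatured in a few lines of Python. Everything here is a toy of ours (a quadratic inner loss, a scalar learning-rate hyperparameter); it only illustrates that nothing but the hyperparameter survives between lifetimes and that the outer search is gradient-free:

```python
import random

def lifetime(lr, steps=50):
    """One agent 'lifetime': train theta from scratch with a fixed
    hyperparameter (the learning rate) on L(theta) = theta**2 and
    return the final loss; nothing but lr is carried out of this run."""
    theta = 5.0
    for _ in range(steps):
        theta -= lr * 2.0 * theta     # gradient of theta**2 is 2*theta
    return theta ** 2

def random_search(rng, trials=10):
    """Outer loop across lifetimes: sample a hyperparameter, run a fresh
    lifetime, keep the best -- gradient-free, as described in the text."""
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = rng.uniform(0.05, 0.95)
        loss = lifetime(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss
```

Each trial restarts θ from scratch, which is exactly the cost that single-lifetime (online) tuning methods try to avoid.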
In previous work, online metagradients have been used to learn the discount factor or the λ coefficient (Xu et al., 2018), to discover intrinsic rewards (Zheng et al., 2018) and auxiliary tasks (Veeriah et al., 2019). This is achieved by representing an inner (training) loss function as a function of both the agent parameters and a set of hyperparameters. In the inner loop, the parameters of the agent are trained to minimize this inner loss function w.r.t. the current values of the hyperparameters; in the outer loop, the hyperparameters are adapted via backpropagation to minimize the outer (validation) loss. It is perhaps surprising that we may choose to optimize a different loss function in the inner loop, instead of the outer loss we ultimately care about. However, this is not a new idea. Regularization, for example, is a technique that changes the objective in the inner loop to balance the bias-variance trade-off and avoid the risk of overfitting. In model-based RL, it was shown that the policy found using a smaller discount factor can actually be better than a policy learned with the true discount factor (Jiang et al., 2015). Auxiliary tasks (Jaderberg et al., 2016) are another example, where gradients are taken w.r.t. unsupervised loss functions in order to improve the agent's representation. Finally, it is well known that, in order to maximize the long-term cumulative reward efficiently (the objective of the outer loop), RL agents must explore, i.e., act according to a different objective in the inner loop (accounting, for instance, for uncertainty). This paper makes the following contributions. First, we show that it is feasible to use metagradients to simultaneously tune many critical hyperparameters (controlling important trade-offs in a reinforcement learning agent), as long as they are differentiable w.r.t. a validation/outer loss. Importantly, we show that this can be done online, within a single lifetime, requiring only 30% additional compute.
We demonstrate this by introducing two novel deep RL architectures that extend IMPALA (Espeholt et al., 2018), a distributed actor-critic, by adding additional components with many more new hyperparameters to be tuned. The first agent, referred to as a Self-Tuning Actor Critic (STAC), introduces a leaky V-trace operator that mixes importance sampling (IS) weights with truncated IS weights. The mixing coefficient in leaky V-trace is differentiable (unlike the original V-trace) but similarly balances the variance-contraction trade-off in off-policy learning. The second architecture is STACX (STAC with auXiliary tasks). Inspired by Fedus et al. (2019), STACX augments STAC with parametric auxiliary loss functions (each with its own hyperparameters). These agents allow us to show empirically, through extensive ablation studies, that performance consistently improves as we expose more hyperparameters to metagradients. In particular, when applied to 57 Atari games (Bellemare et al., 2013), STACX achieved a normalized median score of 364%, a new state-of-the-art for online model-free agents.

Background

In the following, we consider three types of parameters:
1. θ - the agent parameters.
2. ζ - the hyperparameters.
3. η ⊂ ζ - the metaparameters.

θ denotes the parameters of the agent and parameterises, for example, the value function and the policy; these parameters are randomly initialised at the beginning of an agent's lifetime, and updated using backpropagation on a suitable inner loss function. ζ denotes the hyperparameters, including, for example, the parameters of the optimizer (e.g. the learning rate) or the parameters of the loss function (e.g. the discount factor); these may be tuned over the course of many lifetimes (for instance via random search) to optimize an outer (validation) loss function. In a typical deep RL setup, only these first two types of parameters need to be considered.
In metagradient algorithms a third set of parameters must be specified: the metaparameters, denoted η; these are a subset of the differentiable parameters in ζ that start with some initial value (itself a hyperparameter), but that are then adapted during the course of training.

The metagradient approach

Metagradient RL (Xu et al., 2018) is a general framework for adapting, online, within a single lifetime, the differentiable hyperparameters η. Consider an inner loss that is a function of both the parameters θ and the metaparameters η: L_inner(θ; η). On each step of the inner loop, θ can be optimized with a fixed η to minimize the inner loss L_inner(θ; η):

θ̃(η_t) := θ_{t+1} = θ_t − ∇_θ L_inner(θ_t; η_t).   (1)

In an outer loop, η can then be optimized to minimize the outer loss by taking a metagradient step. As θ̃(η) is a function of η, this corresponds to updating the η parameters by differentiating the outer loss w.r.t. η:

η_{t+1} = η_t − ∇_η L_outer(θ̃(η_t)).   (2)

The algorithm is general, as it implements a specific case of online cross-validation, and can be applied, in principle, to any differentiable metaparameter η used by the inner loss.

IMPALA

Specific instantiations of the metagradient RL framework require specification of the inner and outer loss functions. Since our agent builds on the IMPALA actor-critic agent (Espeholt et al., 2018), we now provide a brief introduction. IMPALA maintains a policy π_θ(a_t|x_t) and a value function v_θ(x_t) that are parameterized with parameters θ. The policy and the value function are trained via an actor-critic update with entropy regularization; such an update is often represented (with slight abuse of notation) as the gradient of the following pseudo-loss function:

L(θ) = g_v (v_s − V_θ(x_s))²
     − g_p ρ_s log π_θ(a_s|x_s) (r_s + γ v_{s+1} − V_θ(x_s))
     − g_e Σ_a π_θ(a|x_s) log π_θ(a|x_s),   (3)

where g_v, g_p, g_e are suitable loss coefficients.
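Eqs. (1)-(2) can be made concrete on a toy problem where both gradients have closed forms. The quadratic losses below are our own choice, made only so that the metagradient (the derivative of the outer loss through the inner update) can be written out by hand:

```python
def meta_gradient_step(theta, eta, lr_inner, lr_meta, theta_star):
    """One cycle of Eqs. (1)-(2) on a toy problem:
      inner loss  L_inner(theta; eta) = (theta - eta)**2
      outer loss  L_outer(theta')     = (theta' - theta_star)**2."""
    # Inner step, Eq. (1): theta(eta) = theta - lr_inner * dL_inner/dtheta.
    theta_new = theta - lr_inner * 2.0 * (theta - eta)
    # Outer step, Eq. (2): differentiate L_outer through theta(eta).
    # For this loss, d(theta_new)/d(eta) = 2 * lr_inner, so:
    meta_grad = 2.0 * (theta_new - theta_star) * (2.0 * lr_inner)
    eta_new = eta - lr_meta * meta_grad
    return theta_new, eta_new

theta, eta = 0.0, -1.0
for _ in range(2000):
    theta, eta = meta_gradient_step(theta, eta, 0.1, 0.05, theta_star=3.0)
# eta is pulled toward theta_star, so that inner training on L_inner
# alone ends up driving theta to the value the outer loss cares about.
```

In the actual agent the same structure holds, except that both gradients are computed by backpropagation rather than by hand.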
We refer to the policy that generates the data for these updates as the behaviour policy μ(a_t|x_t). In the on-policy case, where μ(a_t|x_t) = π(a_t|x_t), we have ρ_s = 1, and v_s is the n-step bootstrapped return:

v_s = Σ_{t=s}^{s+n−1} γ^{t−s} r_t + γ^n V(x_{s+n}).

IMPALA uses a distributed actor-critic architecture that assigns copies of the policy parameters to multiple actors on different machines to achieve higher sample throughput. As a result, the target policy π on the learner machine can be several updates ahead of the actor's policy μ that generated the data used in an update. Such off-policy discrepancy can lead to biased updates, requiring the updates to be weighted with IS weights for stable learning. Specifically, IMPALA (Espeholt et al., 2018) uses truncated IS weights to balance the variance-contraction trade-off in these off-policy updates. This corresponds to instantiating Eq. (3) with

v_s = V(x_s) + Σ_{t=s}^{s+n−1} γ^{t−s} (Π_{i=s}^{t−1} c_i) δ_t V,   (4)

where we define δ_t V = ρ_t (r_t + γ V(x_{t+1}) − V(x_t)) and we set ρ_t = min(ρ̄, π(a_t|x_t)/μ(a_t|x_t)) and c_i = λ min(c̄, π(a_i|x_i)/μ(a_i|x_i)) for suitable truncation levels ρ̄ and c̄.

Metagradient IMPALA

The metagradient agent in (Xu et al., 2018) uses the metagradient update rules from the previous section with the actor-critic loss function of Espeholt et al. (2018). More specifically, the inner loss is a parameterised version of the IMPALA loss with metaparameters η, based on Eq. (3):

L(θ; η) = 0.5 (v_s − V_θ(x_s))²
        − ρ_s log π_θ(a_s|x_s) (r_s + γ v_{s+1} − V_θ(x_s))
        − 0.01 Σ_a π_θ(a|x_s) log π_θ(a|x_s),

where η = {γ, λ}. Notice that γ and λ also affect the inner loss through the definition of v_s (Eq. (4)). The outer loss is defined to be the policy gradient loss

L_outer(η) = L_outer(θ̃(η)) = − ρ_s log π_{θ̃(η)}(a_s|x_s) (r_s + γ v_{s+1} − V_{θ̃(η)}(x_s)).

Self-Tuning actor-critic agents

We first consider a slightly extended version of the metagradient IMPALA agent (Xu et al., 2018).
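Before extending this agent, the V-trace target of Eq. (4) can be transcribed directly for a single state s (a list-based sketch with our naming; `values` holds V(x_s), ..., V(x_{s+n}), including the bootstrap value at the end):

```python
def v_trace_target(values, rewards, is_ratios, gamma, lam, rho_bar, c_bar):
    """v_s from Eq. (4) for s = 0; values has length n + 1 and
    is_ratios[t] = pi(a_t|x_t) / mu(a_t|x_t)."""
    n = len(rewards)
    v = values[0]
    trace = 1.0  # accumulates gamma^(t-s) * prod_{i<t} c_i
    for t in range(n):
        rho_t = min(rho_bar, is_ratios[t])                     # truncated IS weight
        delta = rho_t * (rewards[t] + gamma * values[t + 1] - values[t])
        v += trace * delta
        trace *= gamma * lam * min(c_bar, is_ratios[t])        # c_i of Eq. (4)
    return v
```

With on-policy data (IS ratios of 1) and λ = 1 this reduces to the n-step bootstrapped return, which is a useful sanity check; with large IS ratios the ρ̄ and c̄ truncations clamp the weights back to 1.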
Specifically, we allow the metagradient to adapt learning rates for the individual components of the loss:

L(θ; η) = g_v (v_s − V_θ(x_s))²
        − g_p ρ_s log π_θ(a_s|x_s) (r_s + γ v_{s+1} − V_θ(x_s))
        − g_e Σ_a π_θ(a|x_s) log π_θ(a|x_s).   (5)

The outer loss is defined using Eq. (3):

L_outer(η) = L_outer(θ̃(η)) = g_v^outer (v_s − V_{θ̃(η)}(x_s))²
           − g_p^outer ρ_s log π_{θ̃(η)}(a_s|x_s) (r_s + γ v_{s+1} − V_{θ̃(η)}(x_s))
           − g_e^outer Σ_a π_{θ̃(η)}(a|x_s) log π_{θ̃(η)}(a|x_s)
           + g_kl^outer KL(π_{θ̃(η)}, π_θ).   (6)

Notice the new Kullback-Leibler (KL) term in Eq. (6), motivating the η-update not to change the policy too much. Compared to the work by Xu et al. (2018), which only self-tuned the inner-loss γ and λ (hence η = {γ, λ}), also self-tuning the inner-loss coefficients corresponds to setting η = {γ, λ, g_v, g_p, g_e}. These metaparameters allow for loss-specific learning rates and support dynamically balancing exploration with exploitation by adapting the entropy loss weight (footnote 1). The hyperparameters of STAC include the initialisations of the metaparameters, the hyperparameters of the outer loss {γ^outer, λ^outer, g_v^outer, g_p^outer, g_e^outer}, the KL coefficient g_kl^outer (set to 1), and the learning rate of the ADAM meta optimizer (set to 1e-3).

        IMPALA                  Self-Tuning IMPALA
θ       v_θ, π_θ                v_θ, π_θ
ζ       {γ, λ, g_v, g_p, g_e}   {γ^outer, λ^outer, g_v^outer, g_p^outer, g_e^outer},
                                initialisations, ADAM parameters, g_kl^outer
η       —                       {γ, λ, g_v, g_p, g_e}

To set the initial values of the metaparameters of self-tuning IMPALA, we use a simple "rule of thumb" and set them to the values of the corresponding parameters in the outer loss (e.g., the initial value of the inner-loss λ is set equal to the outer-loss λ). For outer-loss hyperparameters that are common to IMPALA, we default to the IMPALA settings.
In the next two sections, we show how embracing self-tuning via metagradients enables us to augment this agent with a parameterised Leaky V-trace operator and with self-tuned auxiliary loss functions. These ideas are examples of how the ability to self-tune metaparameters via metagradients can be used to introduce novel ideas into RL algorithms without requiring extensive tuning of the new hyperparameters.

Footnote 1: There are a few additional subtle differences between this self-tuning IMPALA agent and the metagradient agent from Xu et al. (2018). For example, we do not use the γ embedding used in (Xu et al., 2018). These differences are further discussed in the supplementary material, where we also reproduce the results of Xu et al. (2018) in our code base.

STAC

All the hyperparameters that we have considered for self-tuning so far have the property that they are explicitly defined in the loss function and can be directly differentiated. The truncation levels in the V-trace operator within IMPALA, on the other hand, are equivalent to applying a ReLU activation and are non-differentiable. Motivated by the study of non-linear activations in deep learning (Xu et al., 2015), we now introduce an agent based on a variant of the V-trace operator that we call leaky V-trace. We will refer to this agent as the Self-Tuning Actor Critic (STAC). Leaky V-trace uses a leaky rectifier (Maas et al., 2013) to truncate the importance sampling weights, which allows for a small non-zero gradient when the unit is saturated. We show that the degree of leakiness can control certain trade-offs in off-policy learning, similarly to V-trace, but in a manner that is differentiable.

Before we introduce Leaky V-trace, let us first recall how the off-policy trade-offs are represented in V-trace using the coefficients ρ̄ and c̄. The weight ρ_t = min(ρ̄, π(a_t|x_t)/μ(a_t|x_t)) appears in the definition of the temporal difference δ_t V and defines the fixed point of this update rule.
The fixed point of this update is the value function V^{π_ρ̄} of the policy π_ρ̄, which lies somewhere between the behaviour policy μ and the target policy π, controlled by the hyperparameter ρ̄:

π_ρ̄(a|x) = min(ρ̄ μ(a|x), π(a|x)) / Σ_b min(ρ̄ μ(b|x), π(b|x)).   (7)

The product of the weights c_s, ..., c_{t−1} in Eq. (4) measures how much a temporal difference δ_t V observed at time t impacts the update of the value function. The truncation level c̄ is used to control the speed of convergence by trading off the update variance for a larger contraction rate, similar to Retrace(λ) (Munos et al., 2016). By clipping the importance weights, the variance associated with the update rule is reduced relative to importance-weighted returns. On the other hand, the clipping of the importance weights effectively cuts the traces in the update, resulting in the update placing less weight on later TD errors, and thus worsening the contraction rate of the corresponding operator.

Following this interpretation of the off-policy coefficients, we now propose a variation of V-trace, which we call leaky V-trace, with new parameters α_ρ ≥ α_c:

IS_t = π(a_t|x_t) / μ(a_t|x_t),
ρ_t = α_ρ min(ρ̄, IS_t) + (1 − α_ρ) IS_t,
c_i = λ [α_c min(c̄, IS_i) + (1 − α_c) IS_i],
v_s = V(x_s) + Σ_{t=s}^{s+n−1} γ^{t−s} (Π_{i=s}^{t−1} c_i) δ_t V,
δ_t V = ρ_t (r_t + γ V(x_{t+1}) − V(x_t)).   (8)

We highlight that for α_ρ = 1, α_c = 1, Leaky V-trace is exactly equivalent to V-trace, while for α_ρ = 0, α_c = 0, it is equivalent to canonical importance sampling. For other values we get a mixture of the truncated and non-truncated importance sampling weights. Theorem 1 below suggests that Leaky V-trace is a contraction mapping, and that the value function it converges to is given by V^{π_ρ̄,α_ρ}, where

π_ρ̄,α_ρ(a|x) = [α_ρ min(ρ̄ μ(a|x), π(a|x)) + (1 − α_ρ) π(a|x)] / [α_ρ Σ_b min(ρ̄ μ(b|x), π(b|x)) + 1 − α_ρ]   (9)

is a policy that mixes (and then re-normalizes) the target policy with the V-trace policy of Eq.
(7). Similar to ρ̄, the new parameter α_ρ controls the fixed point of the update rule, and defines a value function that interpolates between the value function of the target policy π and that of the behaviour policy μ. Specifically, the parameter α_c allows the importance weights to "leak back", creating the opposite effect to clipping. Since Theorem 1 requires us to have α_ρ ≥ α_c, our main STAC implementation parametrises the loss with a single parameter α = α_ρ = α_c. In addition, we also experimented with a version of STAC that learns both α_ρ and α_c. Quite interestingly, this variation of STAC learns the rule α_ρ ≥ α_c on its own (see the experiments section for more details). Note that low values of α_c lead to importance sampling, which is high contraction but high variance. On the other hand, high values of α_c lead to V-trace, which is lower contraction and lower variance than importance sampling. Thus, exposing α_c to meta-learning enables STAC to directly control the contraction/variance trade-off. In summary, the metaparameters for STAC are {γ, λ, g_v, g_p, g_e, α}. To keep things simple, when using Leaky V-trace we make two simplifications w.r.t. the hyperparameters. First, we use V-trace to initialise Leaky V-trace, i.e., we initialise α = 1. Second, we fix the outer loss to be V-trace, i.e., we set α^outer = 1.

STAC with auxiliary tasks (STACX)

Next, we introduce a new agent that extends STAC with auxiliary policies, value functions, and respective auxiliary loss functions; this is new because the parameters that define the auxiliary tasks (the discount factors in this case) are self-tuned. As this agent has a new architecture in addition to an extended set of metaparameters, we give it a different acronym and denote it by STACX (STAC with auXiliary tasks). The auxiliary losses have the same parametric form as the main objective and can be used to regularize its objective and improve its representations.
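The Leaky V-trace coefficients of Eq. (8) and the fixed-point policy of Eq. (9) are simple enough to transcribe directly (a single-state sketch; the helper names are ours):

```python
def leaky_weights(is_ratio, alpha_rho, alpha_c, lam, rho_bar, c_bar):
    """Leaky V-trace coefficients of Eq. (8): convex mixtures of the
    truncated and the raw importance-sampling (IS) weight."""
    rho = alpha_rho * min(rho_bar, is_ratio) + (1.0 - alpha_rho) * is_ratio
    c = lam * (alpha_c * min(c_bar, is_ratio) + (1.0 - alpha_c) * is_ratio)
    return rho, c

def leaky_vtrace_policy(pi, mu, rho_bar, alpha_rho):
    """Fixed-point policy of Eq. (9): a renormalised mixture of the target
    policy pi with the V-trace policy of Eq. (7); pi and mu are
    probability vectors over the actions of a single state."""
    num = [alpha_rho * min(rho_bar * m, p) + (1.0 - alpha_rho) * p
           for p, m in zip(pi, mu)]
    # Because pi sums to 1, sum(num) equals the denominator of Eq. (9).
    z = sum(num)
    return [x / z for x in num]
```

As stated in the text, α = 1 recovers the V-trace truncation exactly, α = 0 recovers canonical importance sampling, and intermediate α interpolates between the two; the α knob (unlike the hard min) has a non-zero gradient even when the weight is saturated.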
STACX's architecture has a shared representation layer θ_shared, from which it splits into n different heads (Fig. 1). For the shared representation layer we use the deep residual net from Espeholt et al. (2018). Each head has a policy and a corresponding value function that are represented using a 2-layer MLP with parameters {θ_i}_{i=1}^n. Each of these heads is trained in the inner loop to minimize a loss function L(θ_i; η_i) parametrised by its own set of metaparameters η_i. The policy of the STACX agent is defined to be the policy of a specific head (i = 1). The hyperparameters {η_i}_{i=1}^n are trained in the outer loop to improve the performance of this single head. Thus, the role of the auxiliary heads is to act as auxiliary tasks (Jaderberg et al., 2016) and improve the shared representation θ_shared. Finally, notice that each head has its own policy π_i, but the behaviour policy is fixed to be π_1. Thus, to optimize the auxiliary heads we use (Leaky) V-trace for off-policy corrections (footnote 3). The metaparameters for STACX are {γ_i, λ_i, g_v^i, g_p^i, g_e^i, α_i}_{i=1}^3.

Experiments

In all of our experiments (with the exception of the robustness experiments in Section 4.3) we use the IMPALA hyperparameters both for the IMPALA baseline and for the outer loss of the STAC agent, i.e., λ^outer = 1, α^outer = 1, g_{v,p,e}^outer = 1.

Footnote 3: We also considered two extensions of this approach. (1) Random ensemble: the policy head is chosen at random from [1, .., n], and the hyperparameters are differentiated w.r.t. the performance of each one of the heads in the outer loop. (2) Average ensemble: the actor policy is defined to be the average logits of the heads, and we learn one additional head for the value function of this policy. The metagradient in the outer loop is taken with respect to the actor policy, and/or each one of the heads individually. While these extensions seem interesting, in all of our experiments they
always led to a small decrease in performance when compared to our auxiliary-task agent without these extensions. Similar findings were reported in (Fedus et al., 2019).

We use γ = 0.995, as it was found to improve the performance of IMPALA considerably (Xu et al., 2018).

Atari learning curves

We start by evaluating STAC and STACX in the Arcade Learning Environment (Bellemare et al., 2013, ALE). Fig. 2 presents the normalized median scores (footnote 4) during training. We found STACX to learn faster and achieve higher final performance than STAC. We also compare these agents with versions of them without self-tuning (fixing the metaparameters). The version of STACX with fixed unsupervised auxiliary tasks achieved a normalized median score of 247%, similar to that of UNREAL (Jaderberg et al., 2016) but not much better than IMPALA. In Fig. 3 we report the relative improvement of STACX over IMPALA on the individual levels (an equivalent figure for STAC may be found in the supplementary material).

Footnote 4: Normalized median scores are computed as follows. For each Atari game, we compute the human-normalized score after 200M frames of training and average this over 3 different seeds; we then report the overall median score over the 57 Atari domains.
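The metric of footnote 4 is easy to pin down in code (a sketch with our own naming: `agent` maps each game to a list of per-seed final scores, while `human` and `rand` hold the human and random reference scores):

```python
from statistics import mean, median

def human_normalised_median(agent, human, rand):
    """Footnote 4: per game, average the human-normalised score over
    seeds, then take the median over games; reported as a percentage."""
    per_game = []
    for game, seed_scores in agent.items():
        norm = [(s - rand[game]) / (human[game] - rand[game])
                for s in seed_scores]
        per_game.append(mean(norm))
    return 100.0 * median(per_game)
```

A score of 100% thus means the median game is exactly at human level after accounting for the random-agent baseline.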
These metaparameters control different trade-offs in reinforcement learning: discount factor controls the effective horizon, loss coefficients affect learning rates, the Leaky V-trace coefficient controls the variancecontraction-bias trade-off in off policy RL. We have also experimented with adding more auxiliary losses, i.e, having 5 or 8 auxiliary loss functions. These variations performed better then having a single loss function but slightly worse then having 3. This can be further explained by Fig. 9 (Section 4.4), which shows that the auxiliary heads are self-tuned to similar metaparameters. Nature DQN IMPALA {} { } { , } { , , g v , g e , g p } { , , g v , g e , g p , } Xu et al. {} 3 i = 1 { } 3 i = 1 { i , i } 3 i = 1 { i , i , g i v , g i e , g i p } 3 i = 1 { i , i , g i v , g i e , g i p , i } 3 i = 1 95% 191% 243% 240% 248% 257% 292% 287% 247% 262% 249% 341% 364% Figure 4. Ablative studies of STAC (green) and STACX (blue) compared to some baselines (red). Median normalized scores across 57 Atari games after training for 200M frames. Finally, we experimented with a few more versions of STACX. One variation allows STACX to self-tune both α ρ and α c without enforcing the relation α ρ ≥ α c . This version performed slightly worse than STACX (achieved a median score of 353%) and we further discuss it in Section 4.4 (Fig. 10). In another variation we self-tune α together with a single truncation parameterρ =c. This variation performed much worse achieving a median score of 301%, which may be explained byρ not being differentiable. Number of games Figure 5. The number of games (y-axis) in which ablative variations of STAC and STACX improve the IMPALA baseline by at least x percents (x-axis). { i , i , g i v, g i e, g i p, i } 3 i = 1 { i , i , g i v, g i e, g i p} 3 i = 1 { , , gv, ge, gp} { , } { } In Fig. 
5, we further summarize the relative improvement of ablative variations of STAC (green, yellow, and blue; the bottom three lines) and STACX (light blue and red; the top two lines) over the IMPALA baseline. For each value of x ∈ [0, 100] (the x-axis), we measure the number of games in which an ablative version of the STAC(X) agent is better than the IMPALA agent by at least x percent, and subtract from it the number of games in which the IMPALA agent is better than STAC(X). Clearly, we can see that STACX (light blue) improves the performance of IMPALA by a large margin. Moreover, we observe that allowing the STAC(X) agents to self-tune more metaparameters consistently improves their performance in more games.

Robustness. It is important to note that our algorithm is not hyperparameter-free. For instance, we still need to choose hyperparameter settings for the outer loss. Additionally, each hyperparameter in the inner loss that we expose to metagradients still requires an initialization (itself a hyperparameter). Therefore, in this section, we investigate the robustness of STACX to its hyperparameters.

We begin with the hyperparameters of the outer loss. In these experiments we compare the robustness of STACX with that of IMPALA in the following manner. For each hyperparameter (γ, g_v) we select 5 perturbations. For STACX we perturb the hyperparameter in the outer loss (γ^outer, g_v^outer), and for IMPALA we perturb the corresponding hyperparameter (γ, g_v). We randomly selected 5 Atari levels and present the mean and standard deviation across 3 random seeds after 200M frames of training. Fig. 6 presents the results for the discount factor. We can see that overall (in 72% of the configurations, measured by mean), STACX indeed performs better than IMPALA. Similarly, Fig. 7 shows the robustness of STACX to the critic weight (g_v), where STACX improves over IMPALA in 80% of the configurations.

Figure 6. Robustness to the discount factor.
Mean and confidence intervals (over 3 seeds), after 200M frames of training. Left (blue) bars correspond to STACX and right (red) bars to IMPALA. STACX is better than IMPALA in 72% of the runs measured by mean.

Figure 7. Robustness to the critic weight g_v. Mean and confidence intervals (over 3 seeds), after 200M frames of training. Left (blue) bars correspond to STACX and right (red) bars correspond to IMPALA. STACX is better than IMPALA in 80% of the runs measured by mean.

Next, we investigate the robustness of STACX to the initialisation of the metaparameters in Fig. 8. We selected values that are close to 1, as our design principle is to initialise the metaparameters to be similar to the hyperparameters in the outer loss. We observe that overall, the method is quite robust to different initialisations.

Adaptivity. In Fig. 9 we visualize the metaparameters of STACX during training. As there are many metaparameters, seeds, and levels, we restrict ourselves to a single seed (chosen arbitrarily to be 1) and a single game (Jamesbond). More examples can be found in the supplementary material. For each metaparameter we plot the values associated with the three different heads, where the policy head (head number 1) is presented in blue and the auxiliary heads (2 and 3) are presented in orange and magenta. Inspecting Fig. 9, we can see that the two auxiliary heads self-tuned their metaparameters to relatively similar values, but different from those of the main head. The discount factor of the main head, for example, converges to the value of the discount factor in the outer loss (0.995), while the discount factors of the auxiliary heads change quite a lot during training and learn about horizons that differ from that of the main head.
We also observe non-trivial behaviour in the self-tuning of the loss coefficients (g_e, g_p, g_v), of the λ coefficient, and of the off-policy coefficient α. For instance, we found that at the beginning of training α is self-tuned to a high value (close to 1), making the update quite similar to V-trace; towards the end of training, STACX self-tunes it to lower values, which makes the update closer to importance sampling.

Finally, we also noticed an interesting behaviour in the version of STACX where we expose both the α_ρ and α_c coefficients to self-tuning, without imposing α_ρ ≥ α_c (Theorem 1). This variation of STACX achieved a median score of 353%. Quite interestingly, the metagradient discovered α_ρ ≥ α_c on its own, i.e., it self-tunes α_ρ to be greater than or equal to α_c 91.2% of the time (averaged over time, seeds, and levels), and α_ρ ≥ 0.99 α_c 99.2% of the time. Fig. 10 shows an example of this in Jamesbond.

Summary. In this work we demonstrate that it is feasible to use metagradients to simultaneously tune many critical hyperparameters (controlling important trade-offs in a reinforcement learning agent), as long as they are differentiable; we show that this can be done online, within a single lifetime. We do so by presenting STAC and STACX, actor-critic algorithms that self-tune a large number of hyperparameters of very different nature. We showed that the performance of these agents improves as they self-tune more hyperparameters, and we demonstrated that STAC and STACX are computationally efficient and robust to their own hyperparameters.

References

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.

Figure 11. Mean human-normalized scores after 200M frames, relative improvement in percent of STAC over the IMPALA baseline.
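The mechanism summarized here — differentiating an outer loss through one inner update of a differentiable metaparameter — can be sketched in JAX (the code base the paper reports using). This is a toy illustration with made-up quadratic losses, not the paper's actual training loop:

```python
import jax
import jax.numpy as jnp

def inner_loss(theta, gamma, x):
    # Stand-in for the self-tuned inner (actor-critic) loss; the key point
    # is that the metaparameter gamma enters the loss differentiably.
    return jnp.sum((gamma * x - theta) ** 2)

def outer_loss_after_update(eta, theta, x, lr=0.1):
    gamma = jax.nn.sigmoid(eta)  # squash so the effective gamma stays in (0, 1)
    # One inner gradient-descent update of the agent parameters theta.
    theta_new = theta - lr * jax.grad(inner_loss)(theta, gamma, x)
    # Outer loss with a fixed reference discount (0.995, as in the paper's
    # outer loss), evaluated at the updated parameters.
    return jnp.sum((0.995 * x - theta_new) ** 2)

# Metagradient w.r.t. the unconstrained metaparameter eta.
meta_grad = jax.grad(outer_loss_after_update)(2.0, jnp.ones(3), jnp.ones(3))
```

Here `meta_grad` is what a meta-optimizer would apply to `eta`; in this toy setup it is negative, i.e., it pushes the inner discount toward the outer reference value.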
Analysis of Leaky V-trace

Define the Leaky V-trace operator R̃:

R̃V(x) = V(x) + E_μ[ Σ_{t≥0} γ^t (Π_{i=0}^{t−1} ĉ_i) ρ̂_t (r_t + γV(x_{t+1}) − V(x_t)) | x_0 = x, μ ],    (10)

where the expectation E_μ is with respect to the behaviour policy μ which has generated the trajectory (x_t)_{t≥0}, i.e., x_0 = x, x_{t+1} ∼ P(·|x_t, a_t), a_t ∼ μ(·|x_t). Similar to (Espeholt et al., 2018), we consider the infinite-horizon operator, but very similar results hold for the n-step truncated operator.

Let IS(x_t) = π(a_t|x_t)/μ(a_t|x_t) be the importance sampling weights, let ρ_t = min(ρ̄, IS(x_t)) and c_t = min(c̄, IS(x_t)) be the truncated importance sampling weights with ρ̄ ≥ c̄, and let

ρ̂_t = α_ρ ρ_t + (1 − α_ρ) IS(x_t),    ĉ_t = α_c c_t + (1 − α_c) IS(x_t)

be the Leaky importance sampling weights with leaky coefficients α_ρ ≥ α_c.

Theorem 2 (Restatement of Theorem 1). Assume that there exists β ∈ (0, 1] such that E_μ ρ̂_0 ≥ β. Then the operator R̃ defined by Eq. (10) has a unique fixed point V^{π_{ρ̄,α_ρ}}, which is the value function of the policy π_{ρ̄,α_ρ} defined by

π_{ρ̄,α_ρ}(a|x) = [α_ρ min(ρ̄ μ(a|x), π(a|x)) + (1 − α_ρ) π(a|x)] / Σ_b [α_ρ min(ρ̄ μ(b|x), π(b|x)) + (1 − α_ρ) π(b|x)].

Furthermore, R̃ is an η̂-contraction mapping in sup-norm, with

η̂ = γ⁻¹ − (γ⁻¹ − 1) E_μ[ Σ_{t≥0} γ^t (Π_{i=0}^{t−2} ĉ_i) ρ̂_{t−1} ] ≤ 1 − (1 − γ)(α_ρ β + 1 − α_ρ) < 1,

where ĉ_{−1} = 1, ρ̂_{−1} = 1, and Π_{s=0}^{t−2} ĉ_s = 1 for t = 0, 1.

Proof. The proof follows the proof of V-trace from (Espeholt et al., 2018), with adaptations for the leaky V-trace coefficients. We have that

R̃V₁(x) − R̃V₂(x) = E_μ Σ_{t≥0} γ^t (Π_{s=0}^{t−2} ĉ_s) [ρ̂_{t−1} − ĉ_{t−1} ρ̂_t] (V₁(x_t) − V₂(x_t)).

Denote κ̂_t = ρ̂_{t−1} − ĉ_{t−1} ρ̂_t, and notice that

E_μ ρ̂_t = α_ρ E_μ ρ_t + (1 − α_ρ) E_μ IS(x_t) ≤ 1,

since E_μ IS(x_t) = 1, and therefore E_μ ρ_t ≤ 1. Furthermore, since ρ̄ ≥ c̄ and α_ρ ≥ α_c, we have that ρ̂_t ≥ ĉ_t for all t. Thus, the coefficients κ̂_t are non-negative in expectation, since

E_μ κ̂_t = E_μ[ρ̂_{t−1} − ĉ_{t−1} ρ̂_t] ≥ E_μ[ĉ_{t−1} (1 − ρ̂_t)] ≥ 0.
Thus, R̃V₁(x) − R̃V₂(x) is a linear combination of the values V₁ − V₂ at the other states, weighted by non-negative coefficients whose sum is

Σ_{t≥0} γ^t E_μ (Π_{s=0}^{t−2} ĉ_s) [ρ̂_{t−1} − ĉ_{t−1} ρ̂_t]
  = Σ_{t≥0} γ^t E_μ (Π_{s=0}^{t−2} ĉ_s) ρ̂_{t−1} − Σ_{t≥0} γ^t E_μ (Π_{s=0}^{t−1} ĉ_s) ρ̂_t
  = γ⁻¹ − (γ⁻¹ − 1) Σ_{t≥0} γ^t E_μ (Π_{s=0}^{t−2} ĉ_s) ρ̂_{t−1}
  ≤ γ⁻¹ − (γ⁻¹ − 1)(1 + γ E_μ ρ̂_0)    (11)
  = 1 − (1 − γ) E_μ ρ̂_0
  = 1 − (1 − γ) E_μ[α_ρ ρ_0 + (1 − α_ρ) IS(x_0)]
  ≤ 1 − (1 − γ)(α_ρ β + 1 − α_ρ) < 1,    (12)

where Eq. (11) holds since we expanded only the first two elements in the sum and all the elements in the sum are positive, and Eq. (12) holds by the assumption. We deduce that

‖R̃V₁ − R̃V₂‖_∞ ≤ η̂ ‖V₁ − V₂‖_∞,

with η̂ = 1 − (1 − γ)(α_ρ β + 1 − α_ρ) < 1, so R̃ is a contraction mapping. Furthermore, we can see that the parameter α_ρ controls the contraction rate: for α_ρ = 1 we recover the contraction rate of V-trace, η̂ = 1 − (1 − γ)β, and as α_ρ gets smaller we get better contraction, with α_ρ = 0 giving η̂ = γ. Thus R̃ possesses a unique fixed point.

Let us now prove that this fixed point is V^{π_{ρ̄,α_ρ}}, where

π_{ρ̄,α_ρ}(a|x) = [α_ρ min(ρ̄ μ(a|x), π(a|x)) + (1 − α_ρ) π(a|x)] / Σ_b [α_ρ min(ρ̄ μ(b|x), π(b|x)) + (1 − α_ρ) π(b|x)]    (13)

is a policy that mixes the target policy with the V-trace policy. We have:

E_μ[ρ̂_t (r_t + γ V^{π_{ρ̄,α_ρ}}(x_{t+1}) − V^{π_{ρ̄,α_ρ}}(x_t)) | x_t]
  = E_μ[(α_ρ ρ_t + (1 − α_ρ) IS(x_t)) (r_t + γ V^{π_{ρ̄,α_ρ}}(x_{t+1}) − V^{π_{ρ̄,α_ρ}}(x_t))]
  = Σ_a μ(a|x_t) [α_ρ min(ρ̄, π(a|x_t)/μ(a|x_t)) + (1 − α_ρ) π(a|x_t)/μ(a|x_t)] (r_t + γ V^{π_{ρ̄,α_ρ}}(x_{t+1}) − V^{π_{ρ̄,α_ρ}}(x_t))
  = Σ_a [α_ρ min(ρ̄ μ(a|x_t), π(a|x_t)) + (1 − α_ρ) π(a|x_t)] (r_t + γ V^{π_{ρ̄,α_ρ}}(x_{t+1}) − V^{π_{ρ̄,α_ρ}}(x_t))
  = Σ_a π_{ρ̄,α_ρ}(a|x_t) (r_t + γ V^{π_{ρ̄,α_ρ}}(x_{t+1}) − V^{π_{ρ̄,α_ρ}}(x_t)) · Σ_b [α_ρ min(ρ̄ μ(b|x_t), π(b|x_t)) + (1 − α_ρ) π(b|x_t)],

which equals zero, since the left factor of the last product (up to the normalizing sum over b) is the Bellman equation for V^{π_{ρ̄,α_ρ}}. We deduce that R̃V^{π_{ρ̄,α_ρ}} = V^{π_{ρ̄,α_ρ}}, and thus V^{π_{ρ̄,α_ρ}} is the unique fixed point of R̃. ∎
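The leaky weights, an n-step form of the operator in Eq. (10), and the contraction bound from Theorem 2 can be sketched numerically. This is an illustrative NumPy implementation under assumed array shapes, not the paper's code:

```python
import numpy as np

def leaky_vtrace_targets(v, rewards, log_rhos, gamma,
                         rho_bar=1.0, c_bar=1.0, alpha_rho=1.0, alpha_c=1.0):
    """n-step Leaky V-trace targets in the spirit of Eq. (10).

    v:        value estimates V(x_0), ..., V(x_T), shape (T+1,)
    rewards:  r_0, ..., r_{T-1}, shape (T,)
    log_rhos: log importance ratios log(pi/mu), shape (T,)
    """
    is_w = np.exp(log_rhos)
    # Leaky weights mix truncated and untruncated importance sampling.
    rhos = alpha_rho * np.minimum(rho_bar, is_w) + (1.0 - alpha_rho) * is_w
    cs = alpha_c * np.minimum(c_bar, is_w) + (1.0 - alpha_c) * is_w
    deltas = rhos * (rewards + gamma * v[1:] - v[:-1])
    # Backward accumulation of the discounted, c-weighted TD corrections.
    acc = 0.0
    targets = np.zeros_like(v[:-1])
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        targets[t] = v[t] + acc
    return targets

def eta_bound(gamma, beta, alpha_rho):
    """Upper bound on the contraction rate from Theorem 2:
    alpha_rho = 1 gives the V-trace rate 1 - (1 - gamma) * beta,
    alpha_rho = 0 gives gamma; smaller alpha_rho contracts faster."""
    return 1.0 - (1.0 - gamma) * (alpha_rho * beta + 1.0 - alpha_rho)
```

With `alpha_rho = alpha_c = 1` the targets reduce to standard truncated V-trace, and with `alpha_rho = alpha_c = 0` to untruncated importance sampling, matching the two endpoints of the interpolation discussed in Section 4.4.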
Reproducibility

Inspecting the results in Fig. 4, one may notice small differences between the results of IMPALA and of using metagradients to tune only λ, γ, compared to the results reported in (Xu et al., 2018). We investigated the possible reasons for these differences. First, our method was implemented in a different code base. Our code is written in JAX, compared to the implementation in (Xu et al., 2018), which was written in TensorFlow. This may explain the small difference in final performance between our IMPALA baseline (Fig. 4) and the result of Xu et al., which is slightly higher (257.1). Second, Xu et al. observed that embedding the γ hyperparameter into the θ network improved their results significantly, reaching a final performance (when learning γ, λ) of 287.7 (see Section 1.4 in (Xu et al., 2018) for more details). Our method, on the other hand, only achieved a score of 240 in this ablative study. We further investigated this difference by introducing the γ embedding into our architecture. With γ embedding, our method achieved a score of 280.6, which almost reproduces the results in (Xu et al., 2018). We then introduced the same embedding mechanism into our model with auxiliary loss functions. In this case, for auxiliary loss i we embed γ_i. We experimented with two variants, one that shares the embedding weights across the auxiliary tasks and one that learns a specific embedding for each auxiliary task. Both of these variants performed similarly (306.8 and 307.7, respectively), which is better than our previous result with embedding and without auxiliary losses (280.6). Unfortunately, the auxiliary loss architecture actually performed better without the embedding (353.4), and we therefore ended up not using the γ embedding in our architecture. We leave it to future work to further investigate methods of combining the embedding mechanisms with the auxiliary loss functions.

Figure 13.
Meta parameters and reward in each Atari game (and seed) during learning. Different colors correspond to different heads; blue is the main (policy) head.

Figures 14, 15, 18, 19, 20, 21. Meta parameters and reward in each Atari game (and seed) during learning. Different colors correspond to different heads; blue is the main (policy) head.

Figure 1. Block diagrams of STAC and STACX.

Figure 2. Median human-normalized scores during training.

Figure 8. Robustness to the initialisation of the metaparameters. Mean and confidence intervals (over 6 seeds), after 200M frames of training. Columns correspond to different games. Bottom: perturbing γ^init ∈ {0.99, 0.992, 0.995, 0.997, 0.999}. Top: perturbing all the metaparameter initialisations, i.e., setting all the hyperparameters {γ^init, λ^init, g_v^init, g_e^init, g_p^init, α^init} to a single fixed value in {0.99, 0.992, 0.995, 0.997, 0.999}.

Figure 9. Adaptivity in Jamesbond.

Figure 10. Discovery of α_ρ ≥ α_c.

Table 1. Parameters in IMPALA and self-tuning IMPALA.

Theorem 1. The leaky V-trace operator defined by Eq. (8) is a contraction operator, and it converges to the value function of the policy defined by Eq. (9).

² A more formal statement of Theorem 1 and a detailed proof (which closely follows that of Espeholt et al.
(2018) for the original V-trace operator) can be found in the supplementary material.

As the outer loss is defined only w.r.t. head 1, introducing the auxiliary tasks into STACX does not require new hyperparameters for the outer loss. In addition, we use the same initialisation values for all the auxiliary tasks. Thus, STACX has exactly the same hyperparameters as STAC.

Figure 12. Meta parameters and reward in each Atari game (and seed) during learning. Different colors correspond to different heads; blue is the main (policy) head.

9. Individual level learning curves
Self-Tuning Deep Reinforcement Learning

[Per-game learning curves: plots of the metaparameters (γ, λ, g_p, g_v, g_e) and the reward over 200M frames of training, for 3 seeds per game, covering alien, amidar, assault, asterix, asteroids, atlantis, bank_heist, battle_zone, beam_rider, berzerk, bowling, boxing, and wizard_of_wor.]
1.50 1.75 2.00 1e8 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.965 0.970 0.975 0.980 0.985 0.990 0.995 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 1e8 0 5000 10000 15000 20000 25000 30000 35000 40000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.982 0.984 0.986 0.988 0.990 0.992 0.994 0.996 0.998 wizard_of_wor, seed=3 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.95 0.96 0.97 0.98 0.99 1.00 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.986 0.988 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.96 0.97 0.98 0.99 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 1e8 0 10000 20000 30000 40000 50000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.986 0.988 0.990 0.992 0.994 0.996 0.998 yars_revenge, seed=1 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.980 0.985 0.990 0.995 1.000 0.0 0.5 1.0 1.5 2.0 1e8 0 20000 40000 60000 80000 100000 120000 140000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.986 0.988 0.990 0.992 0.994 0.996 0.998 1.000 yars_revenge, seed=2 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.94 0.95 0.96 0.97 0.98 0.99 1.00 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.75 0.80 0.85 0.90 0.95 1.00 0.0 0.5 1.0 1.5 2.0 1e8 5000 10000 15000 20000 25000 30000 
35000 40000 45000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.986 0.988 0.990 0.992 0.994 0.996 yars_revenge, seed=3 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.94 0.95 0.96 0.97 0.98 0.99 1.00 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.2 0.4 0.6 0.8 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.70 0.75 0.80 0.85 0.90 0.95 1.00 0.0 0.5 1.0 1.5 2.0 1e8 20000 40000 60000 80000 100000 120000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.986 0.988 0.990 0.992 0.994 0.996 0.998 1.000 zaxxon, seed=1 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.988 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.988 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.960 0.965 0.970 0.975 0.980 0.985 0.990 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 1e8 0 5000 10000 15000 20000 25000 30000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.984 0.986 0.988 0.990 0.992 0.994 0.996 0.998 1.000 zaxxon, seed=2 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.94 0.95 0.96 0.97 0.98 0.99 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 1e8 0 5000 10000 15000 20000 25000 30000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.984 0.986 0.988 0.990 0.992 0.994 0.996 0.998 zaxxon, seed=3 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 
1e8 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.990 0.992 0.994 0.996 0.998 1.000 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 0.95 0.96 0.97 0.98 0.99 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e8 Work done at DeepMind, London. 2 University of Michigan. Correspondence to: Tom Zahavy <[email protected]>. Note that α−trace (Rowland et al., 2019), another adaptive algorithm for off policy learning, mixes the V-trace policy with the behaviour policy; Leaky V-trace mixes it with the target policy. We further discuss reproducing the results of Xu et al. (2018) in the supplementary material.
[]
[ "Coalgebraic Tools for Randomness-Conserving Protocols", "Coalgebraic Tools for Randomness-Conserving Protocols" ]
[ "Dexter Kozen \nCornell University\n\n", "Matvey Soloviev \nCornell University\n\n" ]
[ "Cornell University\n", "Cornell University\n" ]
[]
We propose a coalgebraic model for constructing and reasoning about statebased protocols that implement efficient reductions among random processes. We provide basic tools that allow efficient protocols to be constructed in a compositional way and analyzed in terms of the tradeoff between state and loss of entropy. We show how to use these tools to construct various entropyconserving reductions between processes.
10.1016/j.jlamp.2021.100734
[ "https://arxiv.org/pdf/1807.02735v2.pdf" ]
49,652,014
1807.02735
f14690187ff87a42fc308845d7c77e37dbb26311
Coalgebraic Tools for Randomness-Conserving Protocols
22 Oct 2021
Dexter Kozen, Cornell University; Matvey Soloviev, Cornell University
Keywords: randomness, entropy, protocol, reduction, transducer, coalgebra

Introduction

In low-level performance-critical computations-for instance, data-forwarding devices in packet-switched networks-it is often desirable to minimize local state in order to achieve high throughput. But if the situation requires access to a source of randomness, say to implement randomized routing or load-balancing protocols, it may be necessary to convert the output of the source to a form usable by the protocol. As randomness is a scarce resource to be conserved like any other, these conversions should be performed as efficiently as possible and with a minimum of machinery.

In this paper we propose a coalgebraic model for constructing and reasoning about state-based protocols that implement efficient reductions among random processes. By "efficient" we mean with respect to loss of entropy. Entropy is a measure of the amount of randomness available in a random source. For example, a fair coin generates entropy at the rate of one random bit per flip; a fair six-sided die generates entropy at the rate of log 6 ≈ 2.585 random bits per roll. We view randomness as a limited computational resource to be conserved, like time or space.
[Email addresses: [email protected] (Dexter Kozen), [email protected] (Matvey Soloviev).]

Unfortunately, converting from one random source to another generally involves a loss of entropy, as measured by the ratio of the rate of entropy produced to the rate of entropy consumed. This quantity is called the efficiency of the conversion protocol. For example, if we wish to simulate a coin flip by rolling a die and declaring heads if the number on the die is even and tails if it is odd, then the ratio of entropy production to consumption is 1/2.585 ≈ .387, so we lose about .613 bits of entropy per trial. The efficiency cannot exceed the information-theoretic bound of unity, but we would like it to be as close to unity as can be achieved with simple state-based devices. For example, we could instead roll the die and if the result is 1, 2, 3, or 4, output two bits 00, 01, 10, or 11, respectively-the first bit can be used now and the second saved for later-and if the result is 5 or 6, output a single bit 0 or 1, respectively. The efficiency is much better, about .645.

In this paper we introduce a coalgebraic model for the analysis of reductions between discrete processes. A key feature of the model is that it facilitates compositional reasoning. In §3 we prove several results that show how the efficiency and state complexity of a composite protocol depend on the same properties of its constituent parts. This allows efficient protocols to be constructed and analyzed in a compositional way. We are able to cover a full range of input and output processes while preserving asymptotic guarantees about the relationship between memory use and conservation of entropy.
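The two die-to-bits schemes above can be checked with a short calculation. The sketch below (mine, not from the paper; the function names are my own) computes the ratio of expected output entropy to input entropy per die roll for each scheme.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

die_entropy = entropy([1 / 6] * 6)  # log2(6) ≈ 2.585 bits per roll

# Scheme 1: one fair bit per roll (even -> heads, odd -> tails).
eff_parity = 1 * entropy([0.5, 0.5]) / die_entropy

# Scheme 2: faces 1-4 emit two fair bits, faces 5-6 emit one.
expected_bits = (4 / 6) * 2 + (2 / 6) * 1  # = 5/3 bits per roll
eff_split = expected_bits * entropy([0.5, 0.5]) / die_entropy

print(round(eff_parity, 3), round(eff_split, 3))  # 0.387 and 0.645
```

Both values match the figures quoted in the text; note that in either scheme each emitted bit is uniform, so the output really is a stream of fair coin flips.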
In §4 we use the model to construct the following reductions between processes, where k is a tunable parameter roughly proportional to the logarithm of the size of the state space:

• d-uniform to c-uniform with efficiency 1 − Θ(k^{-1});
• d-uniform to arbitrary rational with efficiency 1 − Θ(k^{-1});
• d-uniform to arbitrary with efficiency 1 − Θ(k^{-1});
• arbitrary to c-uniform with efficiency 1 − Θ(log k / k);
• (1/r, (r − 1)/r) to c-uniform with efficiency 1 − Θ(k^{-1}).

Thus choosing a larger value of k (that is, allowing more state) results in greater efficiency, converging to the optimal of 1 in the limit. Here "d-uniform" refers to an independent and identically distributed (i.i.d.) process that produces a sequence of letters from an alphabet of size d, each chosen independently with uniform probability 1/d; "arbitrary rational" refers to an i.i.d. process with arbitrary rational probabilities; and "arbitrary" refers to an i.i.d. process with arbitrary real probabilities. In the last item, the input distribution is a coin flip with bias 1/r. The notation Θ(·) is the usual notation for upper and lower asymptotic bounds. These results quantify the dependence of efficiency on state complexity and give explicit bounds on the asymptotic rates of convergence to the optimal.

Related Work

Since von Neumann's classic paper showing how to simulate a fair coin with a coin of unknown bias [1], many authors have studied variants of this problem. Our work is heavily inspired by the work of Elias [2], who studies entropy-optimal generation of uniform distributions from known sources. The definition of conservation of entropy is given there. Mossel, Peres, and Hillar [3] show that there is a finite-state protocol to simulate a q-biased coin with a p-biased coin when p is unknown if and only if q is a rational function of p. Peres [4] shows how to iterate von Neumann's procedure for producing a fair coin from a biased coin to approximate the entropy bound.
Blum [5] shows how to extract a fair coin from a Markov chain. Another line of work by Pae and Loui [6-8] focuses on emitting samples from a variety of rational distributions given input from an unknown distribution, as in von Neumann's original problem. In [6], the authors introduce a family of von-Neumann-like protocols that approach asymptotic optimality as they consume more input symbols before producing output, and moreover can be shown to be themselves optimal among all such protocols. In [9], Han and Hoshi present a family of protocols for converting between arbitrary known input and output distributions, based on an interval-refinement approach. These protocols exhibit favorable performance characteristics and are comparable to the ones we present according to multiple metrics, but require an infinite state space to implement.

Finally, there is a large body of related work on extracting randomness from weak random sources (e.g. [10-14]). These models typically work with imperfect knowledge of the input source and provide only approximate guarantees on the quality of the output. Here we assume that the statistical properties of the input and output are known completely, and simulations must be exact.

Definitions

A (discrete) random process is a finite or infinite sequence of discrete random variables. We will view the process as producing a stream of letters from some finite alphabet Σ. We will focus mostly on independent and identically distributed (i.i.d.) processes, in which successive letters are generated independently according to a common distribution on Σ.

Informally, a reduction from a random process X with alphabet Σ to another random process Y with alphabet Γ is a deterministic protocol that consumes a stream of letters from Σ and produces a stream of letters from Γ. To be a valid reduction, if the letters of the input stream are distributed as X, then the letters of the output stream must be distributed as Y.
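A classical instance of such a reduction is von Neumann's trick [1] for turning a coin of unknown bias into a fair one: read flips in pairs, output 0 on HT, 1 on TH, and nothing on HH or TT. The sketch below (mine, not from the paper) checks exactly, with rational arithmetic, that the emitted letters are fair regardless of the bias p.

```python
from fractions import Fraction

def von_neumann_output_dist(p):
    """Exact conditional distribution of the emitted bit for coin bias p."""
    pairs = {"HT": "0", "TH": "1", "HH": "", "TT": ""}  # pair -> emitted string
    prob = lambda pair: (p if pair[0] == "H" else 1 - p) * (
        p if pair[1] == "H" else 1 - p
    )
    emit = {b: sum(prob(x) for x, out in pairs.items() if out == b) for b in "01"}
    total = emit["0"] + emit["1"]  # probability that a pair emits at all
    return {b: emit[b] / total for b in "01"}

for p in (Fraction(1, 3), Fraction(9, 10)):
    dist = von_neumann_output_dist(p)
    print(p, dist["0"], dist["1"])  # both 1/2 for every bias
```

The symmetry Pr(HT) = Pr(TH) = p(1 − p) is what makes the conditional output exactly uniform.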
In particular, for i.i.d. processes X and Y in which the letters are generated independently according to distributions µ on Σ and ν on Γ, respectively, we say that the protocol is a reduction from µ to ν. Most (but not all) of the protocols considered in this paper will be finite-state. To say that the protocol is deterministic means that the only source of randomness is the input process. It makes sense to talk about the expected number of input letters read before halting or the probability that the first letter emitted is a, but any such measurements are taken with respect to the distribution on the space of inputs.

There are several ways to formalize the notion of a reduction. One approach, following [4], is to model a reduction as a map f : Σ* → Γ* that is monotone with respect to the prefix relation on strings; that is, if x, y ∈ Σ* and x is a prefix of y, then f(x) is a prefix of f(y). Monotonicity implies that f can be extended uniquely by continuity to domain Σ* ∪ Σ^ω and range Γ* ∪ Γ^ω. The map f would then constitute a reduction from the random process X = X_0 X_1 X_2 . . . to f(X_0 X_1 X_2 . . .) = Y_0 Y_1 Y_2 . . ., where the random variable X_i gives the ith letter of the input stream and Y_i the ith letter of the output stream. To be a reduction from µ to ν, it must hold that if the X_i are independent and identically distributed as µ, then the Y_i are independent and identically distributed as ν.

In this paper we propose an alternative state-based approach in which protocols are modeled as coalgebras δ : S × Σ → S × Γ*, where S is a (possibly infinite) set of states. We can view a protocol as a deterministic stream automaton with output. In each step, depending on its current state, the protocol samples the input process, emits zero or more output letters, and changes state, as determined by its transition function δ.
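As a concrete illustration (mine, not from the paper), a coalgebra δ : S × Σ → S × Γ* can be modelled as an ordinary function from (state, input letter) to (state, emitted string). The single-state die-to-bits protocol from the introduction then looks like:

```python
# A protocol is a function delta(state, letter) -> (state, emitted string).
# The die-to-bits scheme needs only one state, here called "s".

def die_to_bits(state, face):
    """Consume one die face (1..6); emit zero or more output bits."""
    assert state == "s" and 1 <= face <= 6
    if face <= 4:
        return "s", format(face - 1, "02b")  # 1..4 -> 00, 01, 10, 11
    return "s", str(face - 5)                # 5 -> "0", 6 -> "1"

# One step per input letter; emitted strings are concatenated.
out = "".join(die_to_bits("s", f)[1] for f in [3, 5, 1, 6])
print(out)  # "10" + "0" + "00" + "1" = "100001"
```

A protocol with genuine state would return different successor states instead of always "s"; the type of the function is unchanged.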
The state-based approach has the advantage that it is familiar to computer scientists, is easily programmable, and supports common constructions such as composition.

Protocols and Reductions

Let Σ, Γ be finite alphabets. Let Σ* denote the set of finite words and Σ^ω the set of ω-words (streams) over Σ. We use x, y, . . . for elements of Σ* and α, β, . . . for elements of Σ^ω. The symbols ≼ and ≺ denote the prefix and proper prefix relations, respectively. If µ is a probability measure on Σ, we endow Σ^ω with the product measure in which each symbol is distributed as µ. The notation Pr(A) for the probability of an event A refers to this measure. The measurable sets of Σ^ω are the Borel sets of the Cantor space topology whose basic open sets are the intervals {α ∈ Σ^ω | x ≺ α} for x ∈ Σ*, and µ({α ∈ Σ^ω | x ≺ α}) = µ(x), where µ(a_1 a_2 · · · a_n) = µ(a_1)µ(a_2) · · · µ(a_n); see [15].

A protocol is a coalgebra (S, δ) where δ : S × Σ → S × Γ*. Intuitively, δ(s, a) = (t, x) means that in state s, it consumes the letter a from its input source, emits a finite, possibly empty string x, and transitions to state t. We can immediately extend δ to domain S × Σ* by coinduction:

δ(s, ε) = (s, ε)
δ(s, ax) = let (t, y) = δ(s, a) in let (u, z) = δ(t, x) in (u, yz).

Since the two functions agree on S × Σ, we use the same name. It follows that

δ(s, xy) = let (t, z) = δ(s, x) in let (u, w) = δ(t, y) in (u, zw).

By a slight abuse, we define the length of the output as the length of its second component as a string in Γ* and write |δ(s, x)| for |z|, where δ(s, x) = (t, z). A protocol δ also induces a partial map δ^ω : S × Σ^ω ⇀ Γ^ω by coinduction:

δ^ω(s, aα) = let (t, z) = δ(s, a) in z · δ^ω(t, α).

It follows that

δ^ω(s, xα) = let (t, z) = δ(s, x) in z · δ^ω(t, α).

Given α ∈ Σ^ω, this defines a unique infinite string δ^ω(s, α) ∈ Γ^ω except in the degenerate case in which only finitely many output letters are ever produced.
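The coinductive extension of δ to finite input words is just a left fold over the word. The sketch below (mine, not from the paper; `run` and `die_to_bits` are my names) implements it and checks the concatenation law δ(s, xy) = let (t, z) = δ(s, x) in … on the die protocol.

```python
def run(delta, state, word):
    """Extend a protocol delta(state, letter) -> (state, out) to words."""
    out = []
    for letter in word:
        state, emitted = delta(state, letter)
        out.append(emitted)
    return state, "".join(out)

def die_to_bits(state, face):
    # Single-state die protocol: faces 1-4 emit two bits, 5-6 emit one.
    return state, format(face - 1, "02b") if face <= 4 else str(face - 5)

x, y = [1, 5, 4], [6, 2]
s1, zx = run(die_to_bits, "s", x)    # delta(s, x) = (s1, zx)
s2, zy = run(die_to_bits, s1, y)     # delta(s1, y) = (s2, zy)
_, zxy = run(die_to_bits, "s", x + y)
print(zxy == zx + zy)  # the concatenation law delta(s, xy)
```

The same fold, applied letter by letter forever, is the stream map δ^ω.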
A protocol is said to be productive (with respect to a given probability measure on input streams) if, starting in any state, an output symbol is produced within finite expected time. It follows from this assumption that infinitely many output letters are produced with probability 1. The supremum over all states s of the expected time before an output symbol is produced starting from s is called the latency of the protocol. We will restrict attention to protocols with finite latency.

Now let ν be a probability measure on Γ. Endow Γ^ω with the product measure in which each symbol is distributed as ν, and define ν(a_1 a_2 · · · a_n) = ν(a_1)ν(a_2) · · · ν(a_n) for a_i ∈ Γ. We say that a protocol (S, δ, s) with start state s ∈ S is a reduction from µ to ν if for all y ∈ Γ*,

Pr(y ≼ δ^ω(s, α)) = ν(y),   (2.1)

where the probability Pr is taken with respect to the product measure µ on Σ^ω. This implies that the symbols of δ^ω(s, α) are independent and identically distributed as ν.

Restart Protocols

A prefix code is a subset A ⊆ Σ* such that every element of Σ^ω has at most one prefix in A. Thus the elements of a prefix code are pairwise ≼-incomparable. A prefix code is exhaustive (with respect to a given probability measure on input streams) if Pr(α ∈ Σ^ω has a prefix in A) = 1. By König's lemma, if every α ∈ Σ^ω has a prefix in A, then A is finite and exhaustive, but exhaustive codes need not be finite; for example, under the uniform measure on binary streams, the prefix code {0^n 1 | n ≥ 0} is infinite and exhaustive. We often think of prefix codes as representing their infinite extensions. By a slight abuse of notation, if µ is a probability measure on Σ^ω and A ⊆ Σ* is a prefix code, we define

µ(A) = µ({α ∈ Σ^ω | ∃x ∈ A  x ≺ α}).   (2.2)

A restart protocol is a protocol (S, δ, s) of a special form determined by a function f : A → Γ*, where A is an exhaustive prefix code, A ≠ {ε}, and s is a designated start state.
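Before the formal definition, the prefix-code notions can be made concrete. This sketch (mine, not from the paper) checks that the elements of A = {0^n 1 | n ≥ 0} are pairwise prefix-incomparable and that, under the uniform measure on bits, the mass µ(A) of (2.2) tends to 1, witnessing exhaustiveness.

```python
from itertools import combinations

# Truncation of the infinite exhaustive prefix code {0^n 1 : n >= 0}.
A = ["0" * n + "1" for n in range(60)]

# No element is a prefix of another.
is_prefix_code = all(
    not x.startswith(y) and not y.startswith(x) for x, y in combinations(A, 2)
)

# Under the uniform measure, mu(0^n 1) = 2^-(n+1); the total mass tends to 1.
mass = sum(0.5 ** len(x) for x in A)

print(is_prefix_code, round(mass, 12))  # True, and mass = 1 - 2^-60
```

The truncation at n = 60 is only for computability; the full code is infinite.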
Intuitively, starting in s, we read symbols of Σ from the input stream until encountering a string x ∈ A, output f(x), then return to s and repeat. Note that we are not assuming A to be finite. Formally, we can take the state space to be S = {u ∈ Σ* | no x ∈ A is a prefix of u} and define δ : S × Σ → S × Γ* by

δ(u, a) = (ua, ε) if ua ∉ A,
δ(u, a) = (ε, z) if ua ∈ A and f(ua) = z,

with start state ε. Then for all x ∈ A, δ(ε, x) = (ε, f(x)). As with the more general protocols, we can extend to a partial function on streams, but here the definition takes a simpler form:

δ^ω(ε, xα) = f(x) · δ^ω(ε, α),  x ∈ A, α ∈ Σ^ω.

A restart protocol is positive recurrent (with respect to a given probability measure on input streams) if, starting in the start state s, the expected time before the next visit to s is finite. All finite-state restart protocols are positive recurrent, but infinite-state ones need not be. If a restart protocol is positive recurrent, then the probability of eventually restarting is 1, but the converse does not always hold. For example, consider a restart protocol that reads a sequence of coin flips until seeing the first heads. If the number of flips it read up to that point is n, let it read 2^n more flips and output the sequence of all flips it read, then restart. The probability of restarting is 1, but the expected time before restarting is infinite.

Convergence

We will have the occasion to discuss the convergence of random variables. There are several notions of convergence in the literature, but for our purposes the most useful is convergence in probability. Let X and X_n, n ≥ 0 be bounded nonnegative random variables. We say that the sequence X_n converges to X in probability and write X_n →_Pr X if for all fixed δ > 0, Pr(|X_n − X| > δ) = o(1). Let E(X) denote the expected value of X and V(X) its variance.

Lemma 2.1.
(i) If X_n →_Pr X and X_n →_Pr Y, then X = Y with probability 1.
(ii) If X_n →_Pr X and Y_n →_Pr Y, then X_n + Y_n →_Pr X + Y and X_n Y_n →_Pr XY.
(iii) If X_n →_Pr X and X is bounded away from 0, then 1/X_n →_Pr 1/X.
(iv) If V(X_n) = o(1) and E(X_n) = e for all n, then X_n →_Pr e.

Proof. For (iv), by the Chebyshev bound Pr(|X − E(X)| > k√V(X)) < 1/k², for all fixed δ > 0,

Pr(|X_n − e| > δ) < δ^{-2} V(X_n),

which is o(1) by assumption. See [16-22] for a more thorough introduction.

Efficiency

The efficiency of a protocol is the long-term ratio of entropy production to entropy consumption. Formally, for a fixed protocol δ : S × Σ → S × Γ*, s ∈ S, and α ∈ Σ^ω, define the random variables

E_n(α) = (|δ(s, α_n)| · H(ν)) / (n · H(µ)),   (2.3)

where H is the Shannon entropy H(p_1, . . . , p_n) = −∑_{i=1}^n p_i log p_i (logarithms are base 2 if not otherwise annotated), µ and ν are the input and output distributions, respectively, and α_n is the prefix of α of length n.

Intuitively, the Shannon entropy of a distribution measures the amount of randomness in it, where the basic unit of measurement is one fair coin flip. For example, as noted in the introduction, one roll of a fair six-sided die is worth about 2.585 coin flips. The random variable E_n measures the ratio of entropy production to consumption after n steps of δ starting in state s. Here |δ(s, α_n)| · H(ν) (respectively, n · H(µ)) is the contribution along α to the production (respectively, consumption) of entropy in the first n steps. We write E^{δ,s}_n when we need to distinguish the E_n associated with different protocols and start states.

In most cases of interest, E_n converges in probability to a unique constant value independent of start state and history. When this occurs, we call this constant value the efficiency of the protocol δ and denote it by Eff δ. Notationally, E_n →_Pr Eff δ. One must be careful when analyzing infinite-state protocols: the efficiency is well-defined for finite-state protocols, but may not exist in general. For positive recurrent restart protocols, it is enough to measure the ratio for one iteration of the protocol.
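For a positive recurrent restart protocol, one iteration suffices: the efficiency is (expected output letters × H(ν)) / (expected input letters × H(µ)) per restart cycle. This sketch (mine, not from the paper) computes it for von Neumann's fair-coin protocol, whose cycle consumes exactly two flips and emits a bit with probability 2p(1 − p); on a fair input coin the result is the well-known 1/4.

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

def von_neumann_efficiency(p):
    # One restart cycle: read 2 flips; emit a fair bit on HT or TH
    # (probability 2p(1-p)), emit nothing on HH or TT.
    expected_out = 2 * p * (1 - p)
    return (expected_out * H([0.5, 0.5])) / (2 * H([p, 1 - p]))

print(von_neumann_efficiency(0.5))  # 0.25: three quarters of the entropy is lost
```

The loss is what motivates the more elaborate, higher-capacity protocols of §4.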
In §3.3 we will give sufficient conditions for the existence of Eff δ that are satisfied by all protocols considered in §4.

Capacity

After reading some fixed number n of random input symbols, the automaton implementing the protocol δ will have emitted a string of outputs y_n and will also be in some random state s_n, where (s_n, y_n) = δ(s, a_1 a_2 · · · a_n). The state s_n will be distributed according to some distribution σ_n, which is induced by the distribution µ on inputs, and therefore contains information H(σ_n). We regard this quantity as information that is stored in the current state, later to be emitted as output or discarded. Any subsequent output entropy produced by the protocol is bounded by the sum of this stored entropy and additional entropy from further input.

Restart protocols operate by gradually consuming entropy from the input and storing it in the state, then emitting some fraction of the stored entropy as output all at once and returning to the start state. The stored entropy drops to 0 at restart, reflecting the fact that no information is retained; any entropy that was not emitted as output is lost.

For finite-state protocols, the stored entropy is bounded by the base-2 logarithm of the size of the state space, the entropy of the uniform distribution. We call this quantity the capacity of the protocol:

Cap δ = log_2 |S|.   (2.4)

The capacity is a natural measure of the complexity of δ, and we will take it as our complexity measure for finite-state protocols. In §4, we will construct families of protocols for various reductions indexed by a tunable parameter k proportional to the capacity. The efficiency of the protocols is expressed as a function of k; by choosing larger k, greater efficiency can be achieved at the cost of a larger state space. The results of §4 quantify this tradeoff.

Entropy and Conditional Entropy

In this subsection we review a few elementary facts about entropy and conditional entropy that we will need.
These are well known; the reader is referred to [23, 24] for a more thorough treatment. Let p = (p_n : n ∈ N) be any discrete finite or countably infinite subprobability distribution (that is, all p_n ≥ 0 and ∑_{n∈N} p_n ≤ 1) with finite entropy

H(p) = H(p_n : n ∈ N) = −∑_{n∈N} p_n log p_n < ∞.

For E ⊆ N, define p_E = ∑_{n∈E} p_n. The conditional entropy with respect to the event E is defined as

H(p | E) = H(p_n / p_E : n ∈ E) = −∑_{n∈E} (p_n / p_E) log (p_n / p_E).   (2.5)

It follows that

H(p_n : n ∈ E) = p_E H(p | E) − p_E log p_E.   (2.6)

A partition of N is any finite or countable collection of nonempty pairwise disjoint subsets of N whose union is N.

Lemma 2.2 (Conditional entropy rule; see [23, §2.2]). Let p = (p_n : n ∈ N) be a discrete subprobability distribution with finite entropy, and let E be any partition of N. Then

H(p) = H(p_A : A ∈ E) + ∑_{A∈E} p_A H(p | A).

Proof. From (2.6),

∑_{A∈E} p_A H(p | A) = ∑_{A∈E} H(p_n : n ∈ A) + ∑_{A∈E} p_A log p_A = H(p) − H(p_A : A ∈ E).

It is well known that the probability distribution on d letters that maximizes entropy is the uniform distribution with entropy log d (see [23]). A version of this is also true for subprobability distributions:

Lemma 2.3. For any subprobability distribution (p_1, . . . , p_d) with s = ∑_{i=1}^d p_i, H(p_1, . . . , p_d) ≤ s log(d/s).

Proof. For any such distribution, it follows from the definitions that

H(p_1, . . . , p_d) = sH(p_1/s, . . . , p_d/s) − s log s ≤ sH(1/d, . . . , 1/d) − s log s = H(s/d, . . . , s/d) = s log(d/s).

Basic Results

Let δ : S × Σ → S × Γ* be a protocol reducing µ to ν. We can associate with each y ∈ Γ* and state s ∈ S a prefix code in Σ*, namely

pc_δ(s, y) = {≺-minimal strings x ∈ Σ* such that y ≼ δ(s, x)}.   (3.1)

The string y is generated as a prefix of the output if and only if exactly one x ∈ pc_δ(s, y) is consumed as a prefix of the input. These events must occur with the same probability, so

ν(y) = Pr(y ≺ δ^ω(s, α)) = µ(pc_δ(s, y)),   (3.2)

where µ(pc_δ(s, y)) is defined in (2.2). Note that pc_δ(s, y) need not be finite.
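The conditional entropy rule of Lemma 2.2 is easy to sanity-check numerically. This sketch (mine, not from the paper) verifies H(p) = H(p_A : A ∈ E) + ∑_A p_A H(p | A) for a small distribution and partition.

```python
import math

def H(probs):
    """Shannon entropy in bits of a (sub)probability vector."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

p = [0.5, 0.125, 0.125, 0.25]   # a distribution on the index set {0,1,2,3}
partition = [[0, 3], [1, 2]]    # a partition E of the index set

pA = [sum(p[i] for i in A) for A in partition]          # block masses p_A
cond = sum(w * H([p[i] / w for i in A])                 # sum_A p_A H(p | A)
           for A, w in zip(partition, pA))
lhs, rhs = H(p), H(pA) + cond
print(abs(lhs - rhs) < 1e-9)  # True: the two sides agree
```

Intuitively: first reveal which block the outcome lies in, then reveal the outcome within the block; total surprise is unchanged.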
Lemma 3.1. If A ⊆ Γ* is a prefix code, then so is ⋃_{y∈A} pc_δ(s, y) ⊆ Σ*, and ν(A) = µ(⋃_{y∈A} pc_δ(s, y)). If A ⊆ Γ* is exhaustive, then so is ⋃_{y∈A} pc_δ(s, y) ⊆ Σ*.

Proof. We have observed that each pc_δ(s, y) is a prefix code. If y_1 and y_2 are ≼-incomparable, and if y_1 ≼ δ(s, x_1) and y_2 ≼ δ(s, x_2), then x_1 and x_2 are ≼-incomparable, thus ⋃_{y∈A} pc_δ(s, y) is a prefix code. By (3.2), we have

ν(A) = ∑_{y∈A} ν(y) = ∑_{y∈A} µ(pc_δ(s, y)) = µ(⋃_{y∈A} pc_δ(s, y)).

If A ⊆ Γ* is exhaustive, then so is ⋃_{y∈A} pc_δ(s, y), since the events both occur with probability 1 in their respective spaces.

Lemma 3.2.
(i) The partial function δ^ω(s, −) : Σ^ω ⇀ Γ^ω is continuous, thus Borel measurable.
(ii) δ^ω(s, α) is almost surely infinite; that is, µ(dom δ^ω(s, −)) = 1.
(iii) The measure ν on Γ^ω is the push-forward measure ν = µ ∘ δ^ω(s, −)^{-1}.

Proof. (i) Let y ∈ Γ*. The preimage of {β ∈ Γ^ω | y ≺ β}, a basic open set of Γ^ω, is open in Σ^ω:

δ^ω(s, −)^{-1}({β | y ≺ β}) = {α | y ≺ δ^ω(s, α)} = ⋃_{x∈pc_δ(s,y)} {α | x ≺ α}.

(ii) We have assumed finite latency; that is, starting from any state, the expected time before the next output symbol is generated is finite. Thus the probability that infinitely many symbols are generated is 1.

(iii) From (i) and (3.2) we have

(µ ∘ δ^ω(s, −)^{-1})({β | y ≺ β}) = µ(⋃_{x∈pc_δ(s,y)} {α | x ≺ α}) = µ(pc_δ(s, y)) = ν(y) = ν({β | y ≺ β}).

Since µ ∘ δ^ω(s, −)^{-1} and ν agree on the basic open sets {β | y ≺ β}, they are equal.

Lemma 3.3. The random variables E_n of (2.3) are continuous, and there is a constant R, independent of n, such that 0 ≤ E_n ≤ R.

Proof. For x ∈ Σ*, let y be the string of output symbols produced after consuming x. The protocol cannot produce y from x with greater probability than allowed by ν, thus

(min_{a∈Σ} µ(a))^{|x|} ≤ µ(x) ≤ ν(y) ≤ (max_{b∈Γ} ν(b))^{|y|}.

Taking logs, |y| ≤ |x| · log min_{a∈Σ} µ(a) / log max_{b∈Γ} ν(b), thus we can choose

R = (H(ν) log min_{a∈Σ} µ(a)) / (H(µ) log max_{b∈Γ} ν(b)).

To show continuity, for r ∈ R,

E_n^{-1}({x | x < r}) = {α | |δ(s, α_n)| < n r H(µ)/H(ν)} = ⋃ {{α | x ≺ α} | |x| = n, |δ(s, x)| < n r H(µ)/H(ν)},

an open set.
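The prefix codes pc_δ(s, y) of (3.1) can be enumerated by breadth-first search over input words, pruning branches whose output is already incompatible with y. This sketch (mine, not from the paper; the search terminates here but pc_δ(s, y) can be infinite in general) does so for the single-state die-to-bits protocol and confirms ν(y) = µ(pc_δ(s, y)) for y = "0".

```python
from fractions import Fraction

def die_to_bits(state, face):
    # Faces 1-4 emit two bits 00..11; faces 5-6 emit one bit 0/1.
    return state, format(face - 1, "02b") if face <= 4 else str(face - 5)

def pc(delta, state, y, alphabet):
    """Prefix-minimal input words x whose output has y as a prefix."""
    frontier, code = [(state, "", ())], []
    while frontier:
        nxt = []
        for s, out, word in frontier:
            if out.startswith(y):
                code.append(word)        # minimal: stop extending here
            elif y.startswith(out):      # output still compatible with y
                for a in alphabet:
                    t, z = delta(s, a)
                    nxt.append((t, out + z, word + (a,)))
            # otherwise out and y diverge: no extension can produce y
        frontier = nxt
    return code

code = pc(die_to_bits, "s", "0", range(1, 7))
mu = sum(Fraction(1, 6) ** len(x) for x in code)
print(sorted(code), mu)  # pc = {(1,), (2,), (5,)}, mu = 1/2 = nu("0")
```

Faces 1, 2, and 5 are exactly the rolls whose output begins with 0, and their total probability 3/6 equals ν("0") = 1/2, as (3.2) requires.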
Composition

Protocols can be composed sequentially as follows. If

δ_1 : S × Σ → S × Γ*  and  δ_2 : T × Γ → T × ∆*,

then

(δ_1 ; δ_2) : S × T × Σ → S × T × ∆*
(δ_1 ; δ_2)((s, t), a) = let (u, y) = δ_1(s, a) in let (v, z) = δ_2(t, y) in ((u, v), z).

Intuitively, we run δ_1 for one step and then run δ_2 on the output of δ_1. The following theorem shows that the partial map on infinite strings induced by the sequential composition of protocols agrees almost everywhere with the functional composition of the induced maps of the component protocols.

Theorem 3.4. The partial map δ^ω_2(t, δ^ω_1(s, −)) of type Σ^ω ⇀ ∆^ω is defined on all but a µ-nullset and agrees with (δ_1 ; δ_2)^ω((s, t), −) on its domain of definition.

Proof. We restrict inputs to the subset of Σ^ω on which δ^ω_1(s, −) is defined and produces a string in Γ^ω on which δ^ω_2(t, −) is defined. This set is of measure 1: if δ^ω_1 reduces µ to ν and δ^ω_2 reduces ν to ρ, then by Lemma 3.2(iii),

µ(dom δ^ω_2(t, δ^ω_1(s, −))) = µ(δ^ω_1(s, −)^{-1}(δ^ω_2(t, −)^{-1}(∆^ω))) = ν(δ^ω_2(t, −)^{-1}(∆^ω)) = ρ(∆^ω) = 1.

Thus we only need to show that

(δ_1 ; δ_2)^ω((s, t), α) = δ^ω_2(t, δ^ω_1(s, α))   (3.3)

for inputs α in this set. We show (3.3) by coinduction. A bisimulation on infinite streams is a binary relation R such that if R(β, γ), then there exists a finite nonnull string z such that

β = zβ′,  γ = zγ′,  R(β′, γ′).   (3.4)

That is, β and γ agree on a finite nonnull prefix z, and deleting z from the front of β and γ preserves membership in the relation R. The coinduction principle on infinite streams says that if there exists a bisimulation R such that R(β, γ), then β = γ. We will apply this principle with the binary relation

R(β, γ) ⇔ ∃α ∈ Σ^ω ∃s ∈ S ∃t ∈ T. β = (δ_1 ; δ_2)^ω((s, t), α) ∧ γ = δ^ω_2(t, δ^ω_1(s, α))

on ∆^ω. To show that this is a bisimulation, suppose R(β, γ) with

β = (δ_1 ; δ_2)^ω((s, t), aα),  γ = δ^ω_2(t, δ^ω_1(s, aα)),

where a ∈ Σ and α ∈ Σ^ω.
Unwinding the definitions, β = (δ 1 ; δ 2 ) ω ((s, t), aα) = let ((u, v), z) = (δ 1 ; δ 2 )((s, t), a) in z · (δ 1 ; δ 2 ) ω ((u, v), α) = let (u, y) = δ 1 (s, a) in let (v, z) = δ 2 (t, y) in z · (δ 1 ; δ 2 ) ω ((u, v), α) γ = δ ω 2 (t, δ ω 1 (s, aα)) = let (u, y) = δ 1 (s, a) in let ζ = δ ω 1 (u, α) in δ ω 2 (t, yζ) = let (u, y) = δ 1 (s, a) in let ζ = δ ω 1 (u, α) in let (v, z) = δ 2 (t, y) in z · δ ω 2 (v, ζ) = let (u, y) = δ 1 (s, a) in let (v, z) = δ 2 (t, y) in z · δ ω 2 (v, δ ω 1 (u, α)), so if (u, y) = δ 1 (s, a) and (v, z) = δ 2 (t, y), then β = (δ 1 ; δ 2 ) ω ((s, t), aα) = z · (δ 1 ; δ 2 ) ω ((u, v), α) = zβ ′ γ = δ ω 2 (t, δ ω 1 (s, aα)) = z · δ ω 2 (v, δ ω 1 (u, α)) = zγ ′ , where β ′ = (δ 1 ; δ 2 ) ω ((s, t), α) γ ′ = δ ω 2 (t, δ ω 1 (s, α)) R(β ′ , γ ′ ). We almost have (3.4), except that z may be the null string, in which case β = β ′ and γ = γ ′ , and we cannot conclude yet that R is a bisimulation. But in this case we unwind again in the same way, and continue to unwind until we get a nonnull z, which must happen after finitely many steps by Lemma 3.2(ii). Thus R is a bisimulation. By the principle of coinduction, we can conclude (3.3) for all α in the domain of definition of δ ω 2 (t, δ ω 1 (s, −)). Corollary 3.5. If δ 1 (s, −) is a reduction from µ to ν and δ 2 (t, −) is a reduction from ν to ρ, then (δ 1 ; δ 2 )((s, t), −) is a reduction from µ to ρ. Proof. By the assumptions in the statement of the corollary, ν = µ • δ ω 1 (s, −) −1 and ρ = ν • δ ω 2 (t, −) −1 . By Theorem 3.4, ρ = µ • δ ω 1 (s, −) −1 • δ ω 2 (t, −) −1 = µ • (δ ω 2 (t, −) • δ ω 1 (s, −)) −1 = µ • (δ ω 2 (t, δ ω 1 (s, −))) −1 = µ • ((δ 1 ; δ 2 ) ω ((s, t), −)) −1 . Theorem 3.6. If δ 1 (s, −) is a reduction from µ to ν and δ 2 (t, −) is a reduction from ν to ρ, and if Eff δ 1 and Eff δ 2 exist, then Eff δ 1 ; δ 2 exists and Eff δ 1 ; δ 2 = Eff δ 1 · Eff δ 2 . Proof. Let α ∈ dom(δ 1 ; δ 2 ) ω ((s, t), −), say δ ω 1 (s, α) = β ∈ Γ ω with β ∈ dom δ ω 2 (t, −). Let n ∈ N. 
The second component of δ 1 (s, α n ) is β m for some m, and | β m | = m = | δ 1 (s, α n ) |. Then | (δ 1 ; δ 2 )((s, t), α n ) | n · H(ρ) H(µ) = | δ 2 (t, β m ) | n · H(ρ) H(µ) = | δ 2 (t, β m ) | | β m | · | β m | n · H(ρ) H(ν) · H(ν) H(µ) = ( | δ 1 (s, α n ) | n · H(ν) H(µ) )( | δ 2 (t, β m ) | m · H(ρ) H(ν) ) = E δ 1 ,s n (α) · E δ 2 ,t m (β). By Lemma 2.1(ii), this quantity converges in probability to Eff δ 1 · Eff δ 2 , so this becomes Eff δ 1 ; δ 2 . The capacity of the composition is additive: Theorem 3.7. For finite-state protocols, Cap δ 1 ; δ 2 = Cap δ 1 + Cap δ 2 . Proof. Immediate from the definition. Protocol Families In §4, we will present families of reductions between concrete pairs of distributions. The families are indexed by a parameter k, which controls the tradeoff between the capacity of the protocol, proportional to k, and its efficiency, typically expressed in the form 1 − Θ( f (k)). Higher efficiency comes at the cost of higher capacity. Asymptotically optimal reductions were known to exist for all finite distributions ([1-4, 6-8], cf. Theorem 3.12); however, by considering the rate of convergence as a function of k, we obtain a natural measure of quality for a family of protocols that allows a finer-grained comparison. A key consequence of our composition theorems (Theorems 3.6 and 3.7) is that this notion of quality is preserved under composition. To make this notion precise, we first formalize protocol families. Definition 3.8. We say that a sequence P = (P k : k ∈ N) is a capacity-indexed family of reductions (cfr) from µ to ν if • each P k is a reduction from µ to ν; • each P k has capacity Θ(k), that is, there exist constants c 1 , c 2 independent of k such that c 1 k ≤ Cap P k ≤ c 2 k. The notion of efficiency of a single reduction naturally generalizes to capacityindexed families, as we can take the efficiency Eff P of a family P to be the function from the index k to the efficiency of the kth protocol. Theorem 3.9. 
Suppose P = (P k : k ∈ N) is a cfr from µ to ν with efficiency 1 − f (k) and Q = (Q k : k ∈ N) is a cfr from ν to ρ with efficiency 1 − g(k), where f (k) and g(k) are non-negative real-valued functions. Then P ; Q = (P k ; Q k : k ∈ N) is a cfr, and its efficiency is Eff(P ; Q)(k) = (1 − f (k))(1 − g(k)) ≥ 1 − ( f (k) + g(k)). Proof. By Corollary 3.5, each component of P ; Q is a reduction from µ to ρ, and by Theorem 3.7, its capacity is again Θ(k). That the efficiency of the composition exists and satisfies the stated bounds follows immediately from Theorem 3.6. In other words, protocol families can be composed, and the resulting protocol family is asymptotically no worse than the worst of the two input families. Example 3.10. In Section 4.2, we construct a cfr from c-uniform to arbitrary rational distributions with efficiency 1 − Θ(k −1 ), and in Section 4.4, we construct a cfr from an arbitrary distributions to a c-uniform one with efficiency 1 − Θ(log k/k). These two families can be composed to obtain a cfr from an arbitrary distribution to an arbitrary rational distribution. By Theorem 3.9, the resulting cfr has efficiency 1 − Θ(log k/k). Serial Protocols Consider an infinite sequence (S 0 , δ 0 , s 0 ), (S 1 , δ 1 , s 1 ), . . . of positive recurrent restart protocols defined in terms of maps f k : A k → Γ * , where the A k are exhaustive prefix codes, as described in §2.2. These protocols can be combined into a single serial protocol δ. Intuitively, the serial protocol starts in s 0 , makes δ 0steps in S 0 accumulating the consumed letters until the consumed string x is in A 0 , then produces f 0 (x) and transitions to s 1 , where it then repeats these steps for protocol (S 1 , δ 1 , s 1 ), then for (S 2 , δ 2 , s 2 ), and so on. Formally, the states of δ are the disjoint union of the S k , and δ is defined so that δ(s k , x) = (s k+1 , f k (x)) for x ∈ A k , and within S k behaves like δ k . 
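A serial protocol built from restart components admits a very small executable sketch (our own code, necessarily finite, not the paper's notation): each component is represented by a map f_k from an exhaustive prefix code A_k to output strings, playing the role of the maps f_k : A_k → Γ* of §2.2, and the serial protocol threads a single input iterator through the components in order.

```python
def restart_step(f, stream):
    """One restart component (S_k, delta_k, s_k): read symbols from the
    iterator `stream` until the consumed string lies in the exhaustive
    prefix code dom(f); return f of that codeword."""
    x = ''
    while x not in f:
        x += next(stream)
    return f[x]

def serial_run(components, stream):
    """Serial protocol: run component k to completion, emit its output,
    then hand the remaining input to component k + 1."""
    for f in components:
        yield restart_step(f, stream)

# A two-component example over a binary input alphabet.
f0 = {'0': 'x', '1': 'y'}
f1 = {'00': 'u', '01': 'v', '10': 'w', '11': ''}
print(''.join(serial_run([f0, f1], iter('101'))))   # prints yv
```

Because each dom(f) is an exhaustive prefix code, `restart_step` terminates on every sufficiently long input, mirroring the almost-sure termination used throughout this section.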
Let C k be a random variable representing the entropy consumption of the component protocol δ k starting from s k during the execution of the serial protocol; that is, C k is the number of input symbols consumed by δ k scaled by H(µ). This is a random variable whose values depend on the input sequence α ∈ Σ ω . Note that C k may be partial, but is defined with probability one by the assumption of bounded latency. Similarly, let P k be the number of output symbols written during the execution of δ k scaled by H(ν). Let e(n) be the index of the component protocol δ e(n) in which the n-th step of the combined protocol occurs. Like C k , P k and e(n) are random variables whose values depend on the input sequence α ∈ Σ ω . Let c k = E(C k ) and p k = E(P k ). To derive the efficiency of serial protocols, we need a form of the law of large numbers (see [16,17]). Unfortunately, the law of large numbers as usually formulated does not apply verbatim, as the random variables in question are bounded but not independent, or (under a different formulation) independent but not bounded. Our main result, Theorem 3.12 below, can be regarded as a specialized version of this result adapted to our needs. Our version requires that the variances of certain random variables vanish in the limit. We need to impose mild conditions (3.5) on the growth rate of m n , the maximum consumption in the nth component protocol, and the growth rate of production relative to consumption. These conditions hold for all serial protocols considered in this paper. The left-hand condition of (3.5) is satisfied by all serial protocols in which either m n is bounded or m n = O(n) and lim inf n c n = ∞. Lemma 3.11. Let V(X) denote the variance of X. Let m n = max x∈A n | x | · H(µ) and suppose that m n is finite for all n. 
If m n = o( n−1 ∑ i=0 c i ) n ∑ i=0 p i = Ω( n ∑ i=0 c i ), (3.5) then V( ∑ n i=0 C i ∑ n i=0 c i ) = o(1) V( C n ∑ n−1 i=0 c i ) = o(1) (3.6) V( ∑ n i=0 P i ∑ n i=0 p i ) = o(1) V( P n ∑ n−1 i=0 p i ) = o(1). (3.7) Proof. The properties (3.6) require only the left-hand condition of (3.5). Let ε > 0 be arbitrarily small. Choose m such that m i / ∑ j<i c j < ε for all i ≥ m, then choose n > m such that m i / ∑ n j=0 c j < ε for all i < m. As the C i are independent, V( ∑ n i=0 C i ∑ n i=0 c i ) = n ∑ i=0 V(C i ) (∑ n j=0 c j ) 2 ≤ n ∑ i=0 E(C 2 i ) (∑ n j=0 c j ) 2 = m−1 ∑ i=0 E( C i ∑ n j=0 c j · C i ∑ n j=0 c j ) + n ∑ i=m E( C i ∑ n j=0 c j · C i ∑ n j=0 c j ) ≤ m−1 ∑ i=0 E( m i ∑ n j=0 c j · C i ∑ n j=0 c j ) + n ∑ i=m E( m i ∑ i−1 j=0 c j · C i ∑ n j=0 c j ) ≤ m−1 ∑ i=0 E( εC i ∑ n j=0 c j ) + n ∑ i=m E( εC i ∑ n j=0 c j ) = n ∑ i=0 εc i ∑ n j=0 c j = ε V( C n ∑ n−1 i=0 c i ) ≤ E(C 2 n ) (∑ n−1 j=0 c j ) 2 ≤ m 2 n (∑ n−1 j=0 c j ) 2 ≤ ε 2 . As ε was arbitrarily small, (3.6) holds. If in addition the right-hand condition of (3.5) holds, then by Lemma 3.3, m n = o(∑ n−1 i=0 p i ) for all n. Then (3.7) follows by the same proof with P i , p i , and Rm i substituted for C i , c i , and m i , respectively. The following theorem, in conjunction with the constructions of §4, shows that optimal efficiency is achievable in the limit. The result is mainly of theoretical interest, since the protocols involve infinitely many states. ∑ i=0 C i ≤ n · H(µ) ≤ e(n) ∑ i=0 C i e(n)−1 ∑ i=0 P i ≤ | δ(s, α n ) | · H(ν) ≤ e(n) ∑ i=0 P i , therefore ∑ e(n)−1 i=0 P i ∑ e(n) i=0 C i ≤ | δ(s, α n ) | n · H(ν) H(µ) = E n (α) ≤ ∑ e(n) i=0 P i ∑ e(n)−1 i=0 C i . (3.9) By Lemma 3.11, the variance conditions (3.6) and (3.7) hold. Then by Lemma 2.1(iv), ∑ n i=0 C i ∑ n i=0 c i Pr −→ 1 ∑ n i=0 P i ∑ n i=0 p i Pr −→ 1 C n ∑ n−1 i=0 c i Pr −→ 0 P n ∑ n−1 i=0 p i Pr −→ 0. 
Using Lemma 2.1(i)-(iii), we have ∑ n i=0 P i ∑ n−1 i=0 C i = ( P n ∑ n−1 i=0 p i + ∑ n−1 i=0 P i ∑ n−1 i=0 p i ) · ∑ n−1 i=0 p i ∑ n−1 i=0 c i · ∑ n−1 i=0 c i ∑ n−1 i=0 C i Pr −→ ℓ and similarly ∑ n−1 i=0 P i / ∑ n i=0 C i Pr −→ ℓ. The conclusion E n Pr −→ ℓ now follows from (3.9). Reductions In this section we present a series of reductions between distributions of certain forms. Each example defines a capacity-indexed family of reductions ( §3.2) given as positive recurrent restart protocols ( §2.2) with efficiency tending to 1 as the parameter k grows. By Theorem 3.12, each family can be made into a single serial protocol ( §3.3) with asymptotically optimal efficiency, and by Theorem 3.9, any two compatible reduction families with asymptotically optimal efficiency can be composed to form a family of reductions with asymptotically optimal efficiency. Uniform ⇒ Uniform Let c, d ≥ 2, the sizes of the output and input alphabets, respectively. In this section we construct a family of restart protocols with capacity proportional to k mapping d-uniform streams to c-uniform streams with efficiency 1 − Θ(k −1 ). The Shannon entropy of the input and output distributions are log d and log c, respectively. Let m = ⌊k log c d⌋. Then c m ≤ d k < c m+1 . It follows that 1 c < c m d k ≤ 1 1 − log c k log d < m log c k log d ≤ 1. (4.1) Let the c-ary expansion of d k be d k = m ∑ i=0 a i c i ,(4.2) where 0 ≤ a i ≤ c − 1, a m = 0. Intuitively, the protocol P k operates as follows. Do k calls on the d-uniform distribution. For each 0 ≤ i ≤ m, for a i c i of the possible outcomes, emit a c-ary string of length i, every possible such string occurring exactly a i times. For a 0 outcomes, nothing is emitted, and this is lost entropy, but this occurs with probability a 0 d −k . After that, restart the protocol. Formally, this is a restart protocol with prefix code A consisting of all d-ary strings of length k. 
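The intuitive operation of P k just described can be sketched in code (our own illustration; the function name is hypothetical, the particular order in which inputs are matched to outputs is arbitrary since any assignment with the right multiplicities works, and digits are rendered as characters, so it assumes d, c ≤ 10):

```python
from itertools import product

def uniform_to_uniform(d, c, k):
    """Output map of P_k: assign each of the d**k d-ary input words of
    length k a c-ary output word so that, writing d**k = sum_i a_i * c**i
    in base c, every c-ary word of length i is emitted by exactly a_i
    inputs.  The induced output stream is then c-uniform."""
    inputs = [''.join(w) for w in product(map(str, range(d)), repeat=k)]
    n, digits = d ** k, []            # digits[i] = a_i, least significant first
    while n:
        digits.append(n % c)
        n //= c
    f, pos = {}, 0
    for i in range(len(digits) - 1, -1, -1):   # output lengths m, m-1, ..., 0
        for y in map(''.join, product(map(str, range(c)), repeat=i)):
            for _ in range(digits[i]):         # each y used by a_i inputs
                f[inputs[pos]] = y
                pos += 1
    return f
```

For d = 3 and c = k = 2 this reproduces Example 4.1 below: nine inputs, eight mapped bijectively to the binary words of length three and one mapped to the null string, for an expected production of 8/3 binary digits per round.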
For each of the d k strings x ∈ A, we specify an output string f (x) to emit. Partition A into ∑ m i=0 c i disjoint sets A iy , one for each 0 ≤ i ≤ m and c-ary string y of length i, such that | A iy | = a i . The total number of strings in all partition elements is given by (4.2). Set f (x) = y for all x ∈ A iy . By elementary combinatorics, m−1 ∑ i=0 (m − i)a i c i ≤ m−1 ∑ i=0 (m − i)(c − 1)c i = c(c m − 1) c − 1 − m ≤ c(d k − 1) c − 1 − m. (4.3) In each run of P k , the expected number of c-ary digits produced is m ∑ i=0 ia i c i d −k = d −k ( m ∑ i=0 ma i c i − m ∑ i=0 (m − i)a i c i ) ≥ m − d −k ( c(d k − 1) c − 1 − m) by (4.2) and (4.3) = m(1 + d −k ) − c c − 1 (1 − d −k ) ≥ m − c c − 1 , (4.4) thus the entropy production is at least m log c − Θ(1). The number of d-ary digits consumed is k, thus the entropy consumption is k log d, which is also the capacity. By (4.1) and (4.4), the efficiency is at least (m − c c−1 ) log c k log d ≥ 1 − Θ(k −1 ). The output is uniformly distributed, as there are ∑ m i=ℓ a i c i equal-probability outcomes that produce a string of length ℓ or greater, and each output letter a appears as the ℓth output letter in equally many strings of the same length, thus is output with equal probability. Example 4.1. For d = 3 and c = k = 2, the prefix code A would contain the nine ternary strings of length two. The binary expansion of 9 is 1001, which indicates that { f (x) | x ∈ A} should contain the eight binary strings of length three and the null string. The expected number of binary digits produced is 8/9 · 3 + 1/9 · 0 = 8/3 and the expected number of ternary digits consumed is 2, so the production entropy is 8/3 and the consumption entropy is 2 log 3 for an efficiency of about .841. Uniform ⇒ Rational Let c, d ≥ 2. In this section, we will present a family of restart protocols D k mapping d-uniform streams over Σ to streams over a c-symbol alphabet Γ = {1, . . . 
, c} with rational probabilities with a common denominator e, that is, p i = a i /e for i ∈ Γ. By composing with a protocol of §4.1 if necessary, we can assume without loss of generality that e = d, thus we assume that p i = a i /d for i ∈ Γ. Unlike the protocols in the previous section, here we emit a fixed number k of symbols in each round while consuming a variable number of input symbols according to a particular prefix code S ⊆ Σ * . The protocol D k will have capacity at most k log d and efficiency 1 − Θ(k −1 ), exhibiting a similar tradeoff to the family of §4.1. To define D k , we will construct a finite exhaustive prefix code S over the source alphabet. The codewords of this prefix code will be partitioned into pairwise disjoint nonempty sets S y ⊆ Σ * associated with each k-symbol output word y ∈ Γ k . All input strings in the set S y will map to the output string y. Intuitively, the protocol operates as follows. Starting in the start state s, it reads input symbols until it has read an entire codeword, which must happen eventually since the code is exhaustive. If that codeword is in S y , it emits y and restarts. An example is given at the end of this section. Let p y denote the probability of the word y = e 1 · · · e k in the output process, where e i ∈ Γ, 1 ≤ i ≤ k. Since the symbols e i are chosen independently, p y is the product of the probabilities of the individual symbols. It is therefore of the form p y = a y d −k , where a y = a e 1 · · · a e k is an integer. Let m y = ⌊log d a y ⌋ and let a y = m y ∑ j=0 a yj d j be the d-ary expansion of a y . We will choose a set of ∑ y∈Γ k ∑ m y j=0 a yj prefixincomparable codewords and assign them to the S y so that each S y contains a yj codewords of length k − j for each 0 ≤ j ≤ m y . This is possible by the Kraft inequality (see [ Each codeword in S y is of length at most k, therefore the capacity is at most log d k = k log d. 
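The codeword allocation just described (d-ary expansions of the a_y, apportioned under the Kraft equality) can be sketched as follows. This is our own code under the section's assumption e = d; the name and the greedy allocation order, shortest codewords first so that every subtree block stays aligned, are our illustrative choices, and output letters are assumed to be single characters.

```python
from itertools import product

def uniform_to_rational(d, probs, k):
    """Code-to-output map of D_k.  `probs` sends each output letter i to its
    numerator a_i with sum(a_i) == d, i.e. Pr(i) = a_i / d.  Returns a dict
    S sending each codeword (a d-ary string of length <= k) to the
    k-letter output word it emits."""
    assert sum(probs.values()) == d
    # requests[length] = output words wanting a codeword of that length:
    # the d-ary expansion of a_y = prod a_{e_i} gives a_yj codewords of
    # length k - j.
    requests = {}
    for y in product(sorted(probs), repeat=k):
        a_y = 1
        for e in y:
            a_y *= probs[e]
        j = 0
        while a_y:
            for _ in range(a_y % d):
                requests.setdefault(k - j, []).append(''.join(y))
            a_y //= d
            j += 1
    S, t = {}, 0                        # t counts leaves of the depth-k tree
    for length in sorted(requests):     # shortest codewords (biggest blocks) first
        block = d ** (k - length)
        for y in requests[length]:
            q, digits = t // block, []
            for _ in range(length):     # d-ary codeword of the block index
                digits.append(str(q % d))
                q //= d
            S[''.join(reversed(digits))] = y
            t += block
    assert t == d ** k                  # Kraft equality: the code is exhaustive
    return S
```

For d = 3, k = 2 and Pr(u) = 1/3, Pr(v) = 2/3, the resulting code has one codeword of length one and six of length two, and each output word y covers exactly a_y of the nine depth-two leaves.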
Since the d symbols of the input process are distributed uniformly, the probability that the input stream begins with a given string of length n is d −n . So Pr(y ≺ δ ω k (s, α)) = Pr(∃x ∈ S y x ≺ α) = ∑ x∈S y d −|x| = m y ∑ j=0 a yj d −(k−j) = p y as required, and D k is indeed a reduction. Moreover, by (4.6), the probability that a prefix is in some S y is 1, so the code is exhaustive. To analyze the efficiency of the simulation, we will use the following lemma. Lemma 4.2. Let the d-ary expansion of a be ∑ m i=0 a i d i , where m = ⌊log d a⌋. Then log d a − 2d − 1 d − 1 a < m − d d − 1 a < m ∑ i=0 ia i d i ≤ ma. Proof. By elementary combinatorics, m−1 ∑ i=0 (m − i)a i d i ≤ m−1 ∑ i=0 (m − i)(d − 1)d i = d(d m − 1) d − 1 − m < da d − 1 . Then ma = m ∑ i=0 ma i d i ≥ m ∑ i=0 ia i d i = ma − m−1 ∑ i=0 (m − i)a i d i > m − d d − 1 a = ⌊log d a⌋ − d d − 1 a > log d a − 1 − d d − 1 a = log d a − 2d − 1 d − 1 a. The expected number of symbols consumed leading to the output y is ∑ x∈S y d −| x | · | x | = m y ∑ j=0 a yj d j−k (k − j) = kp y − m y ∑ j=0 ja yj d j−k = kp y − d −k m y ∑ j=0 ja yj d j < kp y − d −k log d a y − 2d − 1 d − 1 a y by Lemma 4.2 = 2d − 1 d − 1 a y d −k − p y log d d −k − a y d −k log d a y = 2d − 1 d − 1 p y − p y log d p y . Thus the expected number of input symbols consumed in one iteration is ∑ y∈Γ k ∑ x∈S y d −| x | · | x | < ∑ y∈Γ k 2d − 1 d − 1 p y − p y log d p y = 2d − 1 d − 1 + H(p y : y ∈ Γ k ) log d and as the uniform distribution has entropy log d, the expected consumption of entropy is at most H(p y : y ∈ Γ k ) + log d · 2d − 1 d − 1 = kH(p 1 , . . . , p c ) + Θ(1). (4.7) The number of output symbols is k, so the production of entropy is kH(p 1 , . . . , p c ). Thus the efficiency is at least kH(p 1 , . . . , p c ) kH(p 1 , . . . , p c ) + Θ(1) respectively. 
Summing the base-24 digits in 24's place and in 1's place, we obtain 21 and 72, respectively, which means that we need 21 input strings of length one and 72 input strings of length two. As guaranteed by the Kraft inequality (4.5), we can construct an exhaustive prefix code with these parameters, say by taking all 72 strings of length two extending some three strings of length one, along with the remaining 21 strings of length one. Now we can apportion these to the output strings to achieve the desired probabilities. For example, uv should be emitted with probability a uv /24 2 = 35/576, and 35 is 1 11 in base 24, which means it should be allocated one input string of length one and 11 input strings of length two. This causes uv to be emitted with the desired probability 1 · 24 −1 + 11 · 24 −2 = 35/576. The expected number of input letters consumed in one round is 2 · 3/24 + 1 · 21/24 = 9/8 and the entropy of the input distribution is log 2 = 1 1 + Θ(k −1 ) = 1 − Θ(k −1 ). Uniform ⇒ Arbitrary Now suppose the target distribution is over an alphabet Γ = {1, . . . , c} with arbitrary real probabilities p (0) 1 , . . . , p (0) c . It is of course hopeless in general to construct a finite-state protocol with the correct output distribution, as there are only countably many finite-state protocols but uncountably many distributions on c symbols. However, we are able to construct a family of infinite-state restart protocols D k that map the uniform distribution over a d-symbol alphabet Σ to the distribution (p (0) i : 1 ≤ i ≤ c) with efficiency 1 − Θ(k −1 ). If the probabilities p (0) i are rational, the resulting protocols D k will be finite. Although our formal notion of capacity does not apply to infinite-state protocols, we can still use k as a tunable parameter to characterize efficiency. 
The restart protocols D k constructed in this section consist of a serial concatenation of infinitely many component protocols D (n) k , n ≥ 0. We assume that d > c, which implies that max i p (0) i > 1/d. This will ensure that each component has a nonzero probability of emitting at least one output symbol. If d is too small, we can precompose with a protocol from §4.1 to produce a uniform distribution over a larger alphabet. By Theorem 3.6, this will not result in a significant loss of efficiency. The nth component D (n) k of D k is associated with a real probability distribution (p (n) y : y ∈ Γ k ) on k-symbol output strings. These distributions will be defined inductively. As there are countably many components, in general there will be countably many such distributions, although the sequence will cycle if the original probabilities are rational. The induction starts from the distribution (p (0) y : y ∈ Γ k ) with p (0) e 1 ···e k = p (0) e 1 · · · p (0) e k . Intuitively, D (n) k works the same way as in §4.2 using a best-fit rational subprobability distribution (q (n) y : y ∈ Γ k ) with denominator d k such that q (n) y ≤ p (n) y . If a nonempty string is emitted, the protocol restarts. Otherwise, it passes to D (n+1) k to handle the residual probabilities. We show that the probability of output in every component is bounded away from 0, so in expectation a finite number of components will be visited before restarting. Formally, we define a (n) y = ⌊p (n) y d k ⌋ and q (n) y = a (n) y d −k ≤ p (n) y . The rounding error is p (n) y − q (n) y < d −k . The q (n) y may no longer sum to 1, and the difference is the residual probability r (n) = 1 − ∑ y∈Γ k q (n) y < (c/d) k . Let m (n) = r (n) d k .
As in §4.2, since r (n) + ∑ y∈Γ k q (n) y = m (n) d −k + ∑ y∈Γ k a (n) y d −k = 1, by the Kraft inequality (4.6), we can construct an exhaustive prefix code based on the d-ary expansions of m (n) and the a (n) y for y ∈ Γ k and apportion the codewords to sets S y and S m such that the probability of encountering a codeword in S y is q (n) y and the probability of encountering a codeword in S m is r (n) . If the protocol encounters a codeword in S y , it emits y and restarts at the start state of D : y ∈ Γ k ), where p (n+1) y = p (n) y − q (n) y r (n) , the (normalized) probability lost when rounding down earlier. An example is given at the end of this section. To show that the protocol is correct, we need to argue that the string y is emitted with probability p (0) y . It is emitted in D (n) k with probability q (n) y ∏ n−1 j=0 r (j) , and these are disjoint events, so the probability that y is emitted in any component is ∑ n≥0 q (n) y ∏ n−1 j=0 r (j) . Using the fact that q (n) y = p (n) y − r (n) p (n+1) y , ∑ n≥0 q (n) y n−1 ∏ j=0 r (j) = ∑ n≥0 (p (n) y − r (n) p (n+1) y ) n−1 ∏ j=0 r (j) = ∑ n≥0 p (n) y n−1 ∏ j=0 r (j) − ∑ n≥0 p (n+1) y n ∏ j=0 r (j) = p (0) y . We now analyze the production and consumption in one iteration of D k . As just argued, each iteration produces y ∈ Γ k with probability p (0) y , therefore the entropy produced in one iteration is H(p (0) y : y ∈ Γ k ) = kH(p (0) 1 , . . . , p (0) c ). To analyze the consumption, choose k large enough that (c/d) k ≤ e −1 and (max i p (0) i ) k ≤ e −1 . These assumptions will be used in the following way. If q ≤ p ≤ e −1 , then −q log q ≤ −p log p, which can be seen by observing that the derivative of −p log p is positive below e −1 . Thus for any pair of subprobability distributions (q n : n ∈ N) and (p n : n ∈ N) such that q n ≤ p n ≤ e −1 for all n ∈ N, H(q n : n ∈ N) ≤ H(p n : n ∈ N). (4.8) Let s be the start state of D k . 
Let V (n) ⊆ Σ ω be the event that the protocol visits the nth component D k . Let C k be a random variable representing the total consumption in one iteration of D k , and let C kn be a random variable for the portion of C k that occurs during the execution of D (n) k ; that is, C kn (α) is H(µ) times the number of input symbols read during the execution of D (n) k on input α ∈ Σ ω . Note that C kn (α) = 0 if α ∈ U (m) and n > m, since on input α, the single iteration of D k finishes before stage n; thus for n > m, the conditional expectation E(C kn | U (m) ) = 0. The expected consumption during a single iteration of D k is E(C k ) = ∑ m≥0 Pr(U (m) ) E(C k | U (m) ) = ∑ m≥0 Pr(U (m) ) ∑ n≥0 E(C kn | U (m) ) = ∑ n≥0 ∑ m≥n Pr(U (m) ) E(C kn | U (m) ) = ∑ n≥0 Pr( m≥n U (m) ) E(C kn | m≥n U (m) ) = ∑ n≥0 Pr(V (n) ) E(C kn | V (n) ). Since r (n) ≤ (c/d) k , Pr(V (n) ) = n−1 ∏ j=0 r (j) ≤ ( c d ) kn . The quantity E(C kn | V (n) ) is the expected consumption during the execution of D (n) k , conditioned on the event that D (n) k is visited. By (4.7), this is at most H(q (n) y : y ∈ Γ k ) − r (n) log r (n) + Θ(1) = H(q (n) y : y ∈ Γ k ) + Θ(1), since r (n) ≤ (c/d) k ≤ e −1 , therefore −r (n) log r (n) ≤ −e −1 log e −1 = Θ(1). Using the naive upper bound H(q (n) y : y ∈ Γ k ) ≤ k log c from the uniform distribution for n ≥ 1, we have ·E(C k ) = ∑ n≥0 Pr(V (n) ) E(C kn | V (n) ) ≤ ∑ n≥0 ( c d ) kn (H(q (n) y : y ∈ Γ k ) + Θ(1)) = ∑ n≥1 ( c d ) kn H(q (n) y : y ∈ Γ k ) + H(q (0) y : y ∈ Γ k ) + Θ(1) ≤ (c/d) k (1 − (c/d) k ) k log c + H(q (0) y : y ∈ Γ k ) + Θ(1) ≤ kH(p (0) 1 , . . . , p (0) c ) + Θ(1). The last inference holds because H(q (0) y : y ∈ Γ k ) ≤ H(p (0) y : y ∈ Γ k ) = kH(p (0) 1 , . . . , p (0) c ) as justified by (4.8). The efficiency is the ratio of production to consumption, which is at least kH(p (0) 1 , . . . , p (0) c ) kH(p (0) 1 , . . . , p (0) c ) + Θ(1) = 1 1 + Θ(k −1 ) = 1 − Θ(k −1 ). 
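The component chain D (0) k , D (1) k , . . . can be simulated directly as a sampler (our own sketch, not the paper's code; the function name and the cell ordering are illustrative, and floating-point rounding stands in for exact real arithmetic): each round rounds the current probabilities down to denominator d^k, spends k die rolls on one of d^k equiprobable cells, and either emits a symbol or recurses on the rescaled residuals.

```python
import random
from math import floor

def sample(probs, d, k, roll=None):
    """Draw one symbol from a real-valued distribution `probs`
    (dict: symbol -> probability) using only rolls of a fair d-sided die.
    Assumes max(probs.values()) > d**-k so every round can emit.  Each
    round rounds p_y down to a_y / d**k, uses k rolls to pick one of d**k
    equiprobable cells, emits the symbol owning the chosen cell, and
    otherwise recurses on the rescaled residual probabilities, mirroring
    the chain of components D_k^(0), D_k^(1), ..."""
    roll = roll or (lambda: random.randrange(d))
    while True:
        n = 0
        for _ in range(k):                  # k die rolls = one uniform cell
            n = n * d + roll()
        lo = 0
        for y in sorted(probs):
            a_y = floor(probs[y] * d ** k)  # q_y = a_y / d**k <= p_y
            if lo <= n < lo + a_y:
                return y
            lo += a_y
        r = 1 - lo / d ** k                 # residual probability r^(n)
        probs = {y: (probs[y] - floor(probs[y] * d ** k) / d ** k) / r
                 for y in probs}
```

With d = 6, k = 1 and probs = {'u': 16/215, 'v': 199/215}, the successive residual distributions produced by this loop are exactly those of Example 4.5 below: (96/215, 119/215), then (146/215, 69/215), then (16/215, 199/215) again.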
There is still one issue to resolve if we wish to construct a serial protocol with kth component D k . As D k is not finite-state, its consumption is not uniformly bounded by any m k , as required by Lemma 3.11. However, one iteration of D k visits a series of components, and the consumption in each component is uniformly bounded. In each component, if output is produced, the protocol restarts from the first component, otherwise the computation proceeds to the next component. Each component, when started in its start state, consumes at most k digits and produces exactly k digits with probability at least 1 − (c/d) k and produces no digits with probability at most (c/d) k . The next lemma shows that this is enough to derive the conclusion of Lemma 3.11. Proof. As above, let C k be a random variable for the consumption in one iteration of D k , let C kn be a random variable for the consumption in D (n) k , and let U (n) be the event that D (n) k produces a nonnull string. Again using the fact that C kn restricted to U (m) is 0 for n > m, E(C 2 k | U (m) ) = E(( ∑ n≥0 C kn ) 2 | U (m) ) = E( ∑ n≥0 C 2 kn | U (m) ) + 2E( ∑ 0≤n<ℓ≤m C kn C kℓ | U (m) ) ≤ m k E(C k | U (m) ) + 2m 2 k m + 1 2 . Then Pr(U (m) ) ≤ Pr(V (m) ) ≤ (c/d) km and E(C 2 k ) = ∑ m≥0 Pr(U (m) ) · E(C 2 k | U (m) ) ≤ m k ∑ m≥0 Pr(U (m) )E(C k | U (m) ) + 2m 2 k ∑ m≥0 Pr(U (m) ) m + 1 2 ≤ m k E(C k ) + 2m 2 k ∑ m≥0 ( c d ) km (m + 1)m 2 = m k c k + 2m 2 k (c/d) k (1 − (c/d) k ) −3 = m k c k + o(1), V( ∑ n k=0 C k ∑ n k=0 c k ) = ∑ n k=0 V(C k ) (∑ n k=0 c k ) 2 ≤ ∑ n k=0 E(C 2 k ) (∑ n k=0 c k ) 2 ≤ m n ∑ n k=0 c k · ∑ n k=0 c k ∑ n k=0 c k + o(1) = o(1). Example 4.5. Consider the case d = 6, c = 2, and k = 1 in which the output letters u, v should be emitted with probability p and 1 − p, respectively. The input distribution is a fair six-sided die. We will try to find a best-fit rational distribution with denominator 6. In the first component, we roll the die with result n ∈ {1, . . . 
, 6} and emit u if n/6 ≤ p and v if (n − 1)/6 ≥ p. Thus u is emitted with probability ⌊6p⌋/6 ≤ p and v with probability 1 − ⌈6p⌉/6 ≤ 1 − p. If p ∈ {1/6, 2/6, . . . , 6/6}, then exactly one of those two events occurs. In this case there are no further components, as u and v have been emitted with the desired probabilities p and 1 − p, respectively; the residual probabilities are 0. The protocol restarts in the start state of the first component. Otherwise, if p ∉ {1/6, 2/6, . . . , 6/6}, then u and v are emitted with probability ⌊6p⌋/6 < p and 1 − ⌈6p⌉/6 < 1 − p respectively, and nothing is emitted with probability 1/6, which happens when n = ⌈6p⌉. In the event nothing is emitted, we move on to the second component, which is exactly like the first except with the residual probabilities p ′ = 6(p − ⌊6p⌋/6) = 6p − ⌊6p⌋ and 1 − p ′ = 6(⌈6p⌉/6 − p) = ⌈6p⌉ − 6p. The factor 6 appears because we are conditioning on the event that no symbol was emitted in the first component, which occurs with probability 1/6. We continue in this fashion as long as there is nonzero residual probability. For a concrete instance, suppose p = 16/215 and 1 − p = 199/215. Since p falls in the open interval (0, 1/6), we will emit v if the die roll is 2, 3, 4, 5, or 6 and emit nothing if the die roll is 1. In the latter event, we move on to the second component using the residual probabilities p ′ = 6 · 16/215 − ⌊6 · 16/215⌋ = 96/215 and 1 − p ′ = 119/215. Since p ′ ∈ (1/3, 1/2), we will emit u if the die roll is 1 or 2, v if it is 4, 5, or 6, and nothing if it is 3. In the last event, we move on to the third component using the residual probabilities p ′′ = 6 · 96/215 − ⌊6 · 96/215⌋ = 146/215 and 1 − p ′′ = 69/215. Since p ′′ ∈ (2/3, 5/6), we will emit u if the die roll is 1, 2, 3, or 4, v if it is 6, and nothing if it is 5. In the last event, we move on to the fourth component using the residual probabilities p ′′′ = 6 · 146/215 − ⌊6 · 146/215⌋ = 16/215 and 1 − p ′′′ = 199/215.
Note that after three components, we have p ′′′ = p, so the fourth component is the same as the first. We are back to the beginning and can return to the first component. In general, the process will eventually cycle iff the probabilities are rational. This gives an alternative to the construction of §4.2. Note also that at this point, u has been emitted with probability (1/6)(2/6) + (1/6 2 )(4/6) = 2/27 = 16/216 and v with probability 5/6 + (1/6)(3/6) + (1/6 2 )(1/6) = 199/216. These numbers are proportional to p and 1 − p, respectively, out of 215/216, the probability that some symbol has been emitted. Arbitrary ⇒ c-Uniform with Efficiency 1 − Θ(log k/k) In this section, we describe a family of restart protocols B k for transforming an arbitrary d-ary distribution with real probabilities p 1 , . . . , p d to a c-ary uniform distribution with Θ(log k/k) loss. Unlike the other protocols we have seen so far, these protocols do not depend on knowledge of the input distribution; perhaps as a consequence of this, the convergence is asymptotically slower by a logarithmic factor. Let D = {1, . . . , d} be the input alphabet. Let G k be the set of all sequences σ ∈ N D such that ∑ i∈D σ i = k. Each string y ∈ D k is described by some σ ∈ G k , where σ i is the number of occurrences of i ∈ D in y. Let V σ be the set of strings in D k whose letter counts are described by σ in this way. The protocol B k works as follows. Make k calls on the input distribution to obtain a d-ary string of length k. The probability that the string is in V σ is q σ = | V σ |p σ , where | V σ | = ( k σ 1 . . . σ d ) , the multinomial coefficient, and p σ = ∏ i∈D p σ i i , as there are | V σ | strings in D k whose letter counts are described by σ, each occurring with probability p σ . Thus the strings in V σ are distributed uniformly. For each σ, apply the protocol P k of §4.1 to the elements of V σ to produce c-ary digits, thereby converting the | V σ |-uniform distribution on V σ to a c-uniform distribution.
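The kernel of one B k round — ranking the sampled block within its class V σ and applying the expansion trick of §4.1 to |V σ | — can be sketched as follows. This is our own code, specialized to c = 2, with illustrative function names; the lexicographic ranking is a standard multiset-permutation rank, not prescribed by the paper.

```python
from math import factorial

def class_rank(block):
    """Lexicographic rank of `block` among all rearrangements of its own
    letters, together with the size |V_sigma| of that class (a multinomial
    coefficient)."""
    counts = {}
    for a in block:
        counts[a] = counts.get(a, 0) + 1
    size = factorial(len(block))
    for v in counts.values():
        size //= factorial(v)
    rank, rem = 0, dict(counts)
    for pos, a in enumerate(block):
        for b in sorted(rem):           # letters smaller than block[pos]
            if b == a:
                break
            if rem[b]:
                rem[b] -= 1
                m = factorial(len(block) - pos - 1)
                for v in rem.values():  # rearrangements starting with b
                    m //= factorial(v)
                rank += m
                rem[b] += 1
        rem[a] -= 1
    return rank, size

def extract_bits(block):
    """One B_k round for c = 2: map `block` to a bit string, uniformly over
    its class, via the binary expansion of |V_sigma| as in §4.1."""
    rank, size = class_rank(block)
    digits = []                         # binary expansion, a_i = digits[i]
    while size:
        digits.append(size % 2)
        size //= 2
    base = 0
    for i in range(len(digits) - 1, -1, -1):
        if digits[i] and rank < base + (1 << i):
            return format(rank - base, 'b').zfill(i) if i else ''
        base += digits[i] << i
    raise AssertionError('rank out of range')
```

Applied to the 20 arrangements of 'uuuvw' (a class of Example 4.6 below), 16 of the ranks yield the 16 four-bit strings and the remaining 4 yield the four two-bit strings, for an expected production of 18/5 bits, matching the computation in the example.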
The states then just form the d-ary tree of depth k that stores the input, so the capacity of B_k is approximately k log d.

To analyze the efficiency of B_k, we can reuse an argument from §4.1, with the caveat that the size of the input alphabet was a constant there, whereas |V_σ| is unbounded. Nevertheless, we were careful in §4.1 that the part of the argument that we need here did not depend on that assumption. For each σ, let m = ⌊log_c |V_σ|⌋ and let the c-ary expansion of |V_σ| be

|V_σ| = ∑_{i=0}^{m} a_i c^i,

where 0 ≤ a_i ≤ c − 1 and a_m ≠ 0. It was established in §4.1, equation (4.4), that the expected number of c-ary digits produced by strings in V_σ is at least

⌊log_c |V_σ|⌋ − c/(c − 1) ≥ (log_c |V_σ| − 1) − c/(c − 1) = log_c |V_σ| − (2c − 1)/(c − 1),

thus the expected number of c-ary digits produced in all is at least

∑_σ q_σ log_c |V_σ| − (2c − 1)/(c − 1).

The total entropy production is this quantity times log c, or ∑_σ q_σ log₂ |V_σ| − b, where b = (2c − 1) log c/(c − 1). The total entropy consumption is kH(p_1, …, p_d). This can be viewed as the composition of a random choice that chooses the number of occurrences of each input symbol followed by a random choice that chooses the arrangement of the symbols. Using the conditional entropy rule (Lemma 2.2),

kH(p_1, …, p_d) = H(q_σ | σ ∈ G_k) + ∑_σ q_σ log |V_σ|
               ≤ log C(k + d − 1, k) + ∑_σ q_σ log |V_σ|        (4.9)
               ≤ d log k + ∑_σ q_σ log |V_σ|,                   (4.10)

for sufficiently large k. The inequality (4.9) comes from the fact that the entropy of the uniform distribution exceeds the entropy of any other distribution on the same number of letters. The inequality (4.10) comes from the fact that the binomial coefficient C(k + d − 1, k) is bounded by (k + 1)^{d−1}, which is bounded by k^d for all k such that k ln k ≥ d − 1. Dividing, the production/consumption ratio is

(∑_σ q_σ log |V_σ| − b) / kH(p_1, …, p_d)
  = (d log k + ∑_σ q_σ log |V_σ|) / kH(p_1, …, p_d) − (d log k + b) / kH(p_1, …, p_d)
  ≥ (d log k + ∑_σ q_σ log |V_σ|) / (d log k + ∑_σ q_σ log |V_σ|) − (d log k + b) / kH(p_1, …, p_d)
  = 1 − Θ(log k/k).

Example 4.6. Consider the case d = 3, c = 2, and k = 5 in which the three-letter input alphabet is u, v, w with probabilities p, q, r respectively, and the output distribution is a fair coin. We partition the input strings of length five into disjoint classes V_σ depending on the number of occurrences of each input symbol. There are C(5 + 2, 2) = 21 classes, represented by the patterns in the following table:

row  pattern     instances per class  classes  expected bits  probability (p = q = r = 1/3)
1    4 + 1       5                    6        8/5            30 · 3⁻⁵
2    3 + 2       10                   6        13/5           60 · 3⁻⁵
3    3 + 1 + 1   20                   3        18/5           60 · 3⁻⁵
4    2 + 2 + 1   30                   3        49/15          90 · 3⁻⁵
5    5           1                    3        0              3 · 3⁻⁵

Row 1 represents classes consisting of all strings with four occurrences of one letter and one occurrence of another. Each such class has five instances, depending on the arrangement of the letters. For example, the five instances of strings containing four occurrences of u and one of v are uuuuv, uuuvu, uuvuu, uvuuu, and vuuuu. Each of these five instances occurs with the same probability p⁴q, so this class can be used as a uniformly distributed source over a five-letter alphabet. There are six classes of this form, corresponding to the six choices of two letters. Similarly, row 2 represents classes with three occurrences of one letter and two of another. Each such class has C(5; 3, 2) = 10 instances, all of which occur with the same probability, so each such class can be used as a uniformly distributed source over a ten-letter alphabet. There are six such classes. The classes of rows 3 and 4 can be used as uniformly distributed sources over 20- and 30-letter alphabets, respectively. The classes in row 5 have only one instance and are not usable. In each round of the protocol, we sample the input distribution five times. Depending on the class of the resulting string, we apply one of the protocols of §4.1 to convert to fair coin flips.
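The class-by-class bookkeeping of Example 4.6 can be checked mechanically. The sketch below (our own code; the function names are not from the paper) enumerates all 21 type classes for k = 5, d = 3, computes the Elias-style expected output of each class, and recovers the totals for p = q = r = 1/3.

```python
from math import factorial, log2

def multinomial(parts):
    """Number of strings with the given letter counts."""
    n = factorial(sum(parts))
    for s in parts:
        n //= factorial(s)
    return n

def elias_expected_bits(n):
    """Expected output of the Elias scheme on an n-uniform source (c = 2):
    each set bit 2^i of n contributes a block of 2^i outcomes emitting i bits."""
    return sum((1 << i) * i for i in range(n.bit_length()) if (n >> i) & 1) / n

def compositions(k, d):
    """All sigma in G_k: d nonnegative integers summing to k."""
    if d == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in compositions(k - first, d - 1):
            yield (first,) + rest

k, d = 5, 3
consumption = k * log2(3)                       # p = q = r = 1/3
production = sum(multinomial(s) * 3 ** -k * elias_expected_bits(multinomial(s))
                 for s in compositions(k, d))
print(round(production, 2), round(consumption, 2), round(production / consumption, 2))
# → 2.94 7.92 0.37
```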
For example, if the string is in one of the classes from row 3 above, which is a uniform source on C(5; 3, 1, 1) = 20 letters, writing 20 in binary gives 10100, indicating that 16 of the 20 instances should emit the 16 binary strings of length four, and the remaining four instances should emit 00, 01, 10, and 11, respectively. The expected production is 4 · 16/20 + 2 · 4/20 = 18/5. This will be the production for any class in row 3, which will transpire with probability 20(p³qr + pq³r + pqr³), the probability that the input string falls in a class in row 3. The last column of the table lists these values for the case p = q = r = 1/3. In that case, the total consumption is 5 log 3 ≈ 7.92. The production is

30 · 3⁻⁵ · 8/5 + 60 · 3⁻⁵ · 13/5 + 60 · 3⁻⁵ · 18/5 + 90 · 3⁻⁵ · 49/15 ≈ 2.94,

for an efficiency of 2.94/7.92 ≈ 0.37. There is much wasted entropy with this scheme for small values of k. The sampling can be viewed as a composition of a first stage that selects the class, followed by a stage that selects the string within the class. All of the entropy consumed in the first stage is wasted, as it does not contribute to production.

4.5. (1/r, (r − 1)/r) ⇒ (r − 1)-Uniform with Efficiency 1 − Θ(k⁻¹)

Let r ∈ ℕ, r ≥ 3. In this section we show that a coin with bias 1/r can generate an (r − 1)-ary uniform distribution with Θ(k⁻¹) loss of efficiency. This improves the result of the previous section in this special case. Dirichlet's approximation theorem (see [25-27]) states that for irrational u, there exist infinitely many pairs of integers k, m > 0 such that |ku − m| < 1/k. We need the result in a slightly stronger form.

Lemma 4.7. Let u be irrational. For infinitely many integers k > 0, ku − ⌊ku⌋ < 1/(k + 1).

Proof. The numbers ku − ⌊ku⌋, k ≥ 1, are all distinct since u is irrational. In the following, we use real arithmetic modulo 1, thus we write iu for iu − ⌊iu⌋ and 0u for both 0 and 1. Imagine placing the elements u, 2u, 3u, … in the unit interval one at a time, iu at time i.
At time k, we have placed k elements, which along with 0 and 1 partition the unit interval into k + 1 disjoint subintervals. We make three observations:

(i) At time k, the smallest interval is of length less than 1/(k + 1).
(ii) An interval of minimum length always occurs adjacent to 0 or 1.
(iii) Let k_0, k_1, k_2, … be the times at which the minimum interval length strictly decreases. For all i, the new smallest interval created at time k_i is adjacent to 0 iff the new smallest interval created at time k_{i+1} is adjacent to 1.

For (i), the average interval length is 1/(k + 1), so there must be one of length less than that. It cannot be exactly 1/(k + 1) because u is irrational. For (ii), suppose [iu, ju] is a minimum-length interval. If i < j, then the interval [0, (j − i)u] is the same length and was created earlier. If i > j, then the interval [(i − j)u, 1] is the same length and was created earlier. Thus the first time a new minimum-length interval is created, it is created adjacent to either 0 or 1. For (iii), we proceed by induction. The claim is certainly true after one step. Now consider the first time a new minimum-length interval is created, say at time k. Let [iu, 1] be the interval adjacent to 1 and [0, ju] the interval adjacent to 0 just before time k. Suppose that [iu, 1] is the smaller of the two intervals (the other case is symmetric). By the induction hypothesis, j < i. By (ii), either iu < ku < 1 or 0 < ku < ju. But if the former, then ku − iu = (k − i)u and 0 < (k − i)u < ju, a contradiction, since then [0, (k − i)u] would be a smaller interval adjacent to 0. By (i)-(iii), every other time k that a new minimum-length interval is created, it is adjacent to 0 and its length is less than 1/(k + 1).

Choose k ≥ r − 2 and m = ⌊k log_{r−1} r⌋. Note that log_{r−1} r is irrational: if log_{r−1} r = p/q then r^q = (r − 1)^p, which is impossible because r and r − 1 are relatively prime. Then (r − 1)^m < r^k and m > k for sufficiently large k.
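For the particular irrational used here, u = log_{r−1} r, the conclusion of Lemma 4.7 is easy to observe numerically. A small scan (our own code) for r = 4, i.e., u = log₃ 4:

```python
from math import floor, log

u = log(4) / log(3)   # log_3 4, irrational since r and r - 1 are relatively prime
hits = [k for k in range(1, 1000) if k * u - floor(k * u) < 1 / (k + 1)]
print(hits[:3])       # → [1, 4, 8]
```

In particular k = 4 satisfies the bound, which is the value used in Example 4.8 below.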
We have two representations of (r^k − 1)/(r − 1) as a sum:

(r^k − 1)/(r − 1) = ∑_{i=0}^{k−1} r^i = ∑_{i=0}^{k−1} C(k, i + 1) (r − 1)^i.

Moreover, every integer in the interval [0, (r^k − 1)/(r − 1)] can be represented by a sum of the form ∑_{i=0}^{k−1} a_i (r − 1)^i, where 0 ≤ a_i ≤ C(k, i + 1). (We might call this a binomial-ary representation.) To see this, let t be any number less than (r^k − 1)/(r − 1) with such a representation, say t = ∑_{i=0}^{k−1} a_i (r − 1)^i. We show that t + 1 also has such a representation. Let i be the smallest index such that a_i < C(k, i + 1). Then

t = ∑_{j=0}^{i−1} C(k, j + 1) (r − 1)^j + ∑_{j=i}^{k−1} a_j (r − 1)^j
1 = (r − 1)^i − ∑_{j=0}^{i−1} (r − 2)(r − 1)^j.

Adding these, we have

t + 1 = ∑_{j=0}^{i−1} (C(k, j + 1) − r + 2)(r − 1)^j + (a_i + 1)(r − 1)^i + ∑_{j=i+1}^{k−1} a_j (r − 1)^j,

and this is of the desired form. It follows that every multiple of r − 1 in the interval [0, r^k − 1] can be represented by a sum of the form ∑_{i=0}^{k} a_i (r − 1)^i with 0 ≤ a_i ≤ C(k, i). In particular, (r − 1)^m can be so represented. Thus

(r − 1)^m = ∑_{i=0}^{k} a_i (r − 1)^i,        (4.11)

where 0 ≤ a_i ≤ C(k, i). Pick k > ln(r − 1) − 1, which ensures that 0 < ln(r − 1)/(k + 1) < 1, and also large enough that

k log_{r−1} r − ⌊k log_{r−1} r⌋ < 1/(k + 1),        (4.12)

which is possible by Lemma 4.7. Using the fact that ln x ≤ x − 1 for all x > 0,

k log_{r−1} r − m = k log_{r−1} r − ⌊k log_{r−1} r⌋
  < 1/(k + 1)
  = log_{r−1} e · ln(r − 1)/(k + 1)
  ≤ −log_{r−1} e · ln(1 − ln(r − 1)/(k + 1))
  = −log_{r−1} (1 − ln(r − 1)/(k + 1)).

Rearranging terms and exponentiating, we obtain

(r − 1)^m / r^k ≥ 1 − ln(r − 1)/(k + 1) = 1 − Θ(k⁻¹).        (4.13)

From (4.11), we have 1 = ∑_{i=0}^{k} a_i (r − 1)^{i−m}. Thus we can find an exhaustive prefix code A over the (r − 1)-ary target alphabet with exactly a_i words of length m − i, 0 ≤ i ≤ k. Assign a distinct word over the binary source alphabet of length k and probability (r − 1)^i / r^k to each codeword of length m − i so that the mapping from input words to codewords is injective.
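Coefficients satisfying (4.11) can be computed concretely. The greedy top-down choice below is our own heuristic (the induction above guarantees existence, not that greedy always succeeds), but it recovers the coefficients used in Example 4.8.

```python
from math import comb, floor, log

def binomial_ary(r, k):
    """Coefficients a_0..a_k with 0 <= a_i <= C(k, i) and
    sum_i a_i (r-1)^i == (r-1)^m, m = floor(k log_{r-1} r), as in (4.11).
    Greedy from the top digit down; raises if the greedy choice fails."""
    m = floor(k * log(r) / log(r - 1))
    remaining = (r - 1) ** m
    a = [0] * (k + 1)
    for i in range(k, -1, -1):
        a[i] = min(comb(k, i), remaining // (r - 1) ** i)
        remaining -= a[i] * (r - 1) ** i
    if remaining:
        raise ValueError("greedy choice failed")
    return m, a

print(binomial_ary(4, 4))   # → (5, [0, 0, 6, 4, 1]), the coefficients of Example 4.8
```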
There are enough input words to do this, as we need a_i input words of probability (r − 1)^i / r^k, and there are C(k, i) ≥ a_i such input words in all. The source words with a target word assigned to them carry total probability (r − 1)^m / r^k. If one of these source words comes up in the protocol, output its associated target word. For the remaining source words, do not output anything. This is lost entropy.

To argue that the output distribution is uniform, we first show that for every prefix x of a codeword in A, x appears as a prefix of an emitted codeword with probability (r − 1)^{m−|x|} / r^k. We proceed by reverse induction on |x|. The claim is true for codewords x ∈ A by construction. Since A is exhaustive, for every proper prefix x of a codeword and every letter c, xc is also a prefix of a codeword. Each such xc is emitted as a prefix with probability (r − 1)^{m−|xc|} / r^k by the induction hypothesis, and these events are disjoint, therefore x is emitted as a prefix with probability

∑_c (r − 1)^{m−|xc|} / r^k = (r − 1) · (r − 1)^{m−|x|−1} / r^k = (r − 1)^{m−|x|} / r^k.

It follows that every letter c appears as the n-th letter of an emitted codeword with the same probability |A_{n−1}| · (r − 1)^{m−n} / r^k, where A_{n−1} is the set of length-(n − 1) proper prefixes of target codewords, therefore the distribution is uniform.

To calculate the efficiency, by elementary combinatorics, we have …

… does it. Now we assign to each codeword of length 5 − i a distinct input word of length 4 and probability 3^i/4^k = (3/4)^i (1/4)^{k−i}. We can assign them

v⁴;   uv³, vuv², v²uv, v³u;   u²v², uvuv, uv²u, vu²v, vuvu, v²u²,

respectively. The following diagram shows the output words and the probabilities with which they are emitted: (diagram not reproduced)

Discussion: The Case for Coalgebra

What are the benefits of a coalgebraic view? Many constructions in the information theory literature are expressed in terms of trees; e.g. [28, 29].
Here we have defined protocols as coalgebras (S, δ), where δ : S × Σ → S × Γ*, a form of Mealy automata. These are not trees in general. However, the class admits a final coalgebra

D : (Γ*)^{Σ⁺} × Σ → (Γ*)^{Σ⁺} × Γ*,

where

D(f, a) = (f@a, f(a))        f@a(x) = f(ax),        a ∈ Σ, x ∈ Σ⁺.

Here the extension to streams D^ω : (Γ*)^{Σ⁺} × Σ^ω ⇀ Γ^ω takes the simpler form

D^ω(f, aα) = f(a) · D^ω(f@a, α).

A state f : Σ⁺ → Γ* can be viewed as a labeled tree with nodes Σ* and edge labels Γ*. The nodes xa are the children of x for x ∈ Σ* and a ∈ Σ. The label on the edge (x, xa) is f(xa). The tree f@x is the subtree rooted at x ∈ Σ*, where f@x(y) = f(xy). Here fst and snd denote the projections onto the first and second components, respectively.

The coalgebraic view allows arbitrary protocols to inherit structure from the final coalgebra under h⁻¹, thereby providing a mechanism for transferring results on trees, such as entropy rate, to results on state transition systems. There are other advantages as well. In this paper we have considered only homogeneous measures on Σ^ω and Γ^ω, that is, those induced by i.i.d. processes in which the probabilistic choices are independent and identically distributed, for finite Σ and Γ. However, the coalgebraic definitions of protocol and reduction make sense even if Σ and Γ are countably infinite and even if the measures are non-homogeneous. We have observed that a fixed measure µ on Σ induces a unique homogeneous measure, also called µ, on Σ^ω. But in the final coalgebra, we can go the other direction: for an arbitrary probability measure µ on Σ^ω and state f : Σ⁺ → Γ*, there is a unique assignment of transition probabilities on Σ⁺ compatible with µ, namely the conditional probability

f(xa) = µ({α | xa ≺ α}) / µ({α | x ≺ α}),

or 0 if the denominator is 0. This determines the probabilistic behavior of the final coalgebra as a protocol starting in state f when the input stream is distributed as µ.
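Operationally, D^ω just iterates the Mealy map and concatenates the outputs. A minimal sketch (our own code; the encoding of von Neumann's protocol as a small coalgebra is also ours):

```python
def run_protocol(delta, s0, stream):
    """Iterate a Mealy-style coalgebra delta : (state, symbol) -> (state, word)
    over a finite input prefix, concatenating outputs as D^omega does."""
    s, out = s0, ""
    for a in stream:
        s, z = delta(s, a)
        out += z
    return out

def von_neumann(state, a):
    """von Neumann's protocol as a coalgebra: pair up inputs,
    emit 0 on HT, 1 on TH, nothing on HH or TT."""
    if state is None:                      # waiting for the first of a pair
        return a, ""
    return None, {"HT": "0", "TH": "1"}.get(state + a, "")

print(run_protocol(von_neumann, None, "HTHHTTTH"))   # → 01
```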
This behavior would also be reflected in any protocol (S, δ) starting in any state s ∈ h⁻¹(f) under the same measure on input streams, thus providing a semantics for (S, δ) even under non-homogeneous conditions. In addition, as in Lemma 3.2(iii), any measure µ on Σ^ω induces a pushforward measure µ ∘ (D^ω)⁻¹ on Γ^ω. This gives a notion of reduction even in the non-homogeneous case. Thus we can lift the entire theory to Mealy automata that operate probabilistically relative to an arbitrary measure µ on Σ^ω. These are essentially discrete Markov transition systems with observations in Γ*. Even more generally, one can envision a continuous-space setting in which the state set S and the alphabets Σ and Γ need not be discrete. The appropriate generalization would give reductions between discrete-time and continuous-space Markov transition systems as defined, for example, in [30, 31]. As should be apparent, in this paper we have only scratched the surface of this theory, and there is much left to be done.

… and the right-hand side is o(1) by assumption.

Lemma 2.3. The uniform subprobability distribution (s/d, …, s/d) on d letters with total mass s and entropy s log(d/s) maximizes entropy among all subprobability distributions on d letters with total mass s.

Lemma 3.3. If δ is a reduction from µ to ν, then the random variables E_n defined in (2.3) are continuous and uniformly bounded by an absolute constant R > 0 depending only on µ and ν.

Theorem 3.12. Let δ be a serial protocol with finite-state components δ_0, δ_1, … satisfying (3.5). If the limit ℓ exists, then the efficiency of the serial protocol exists and is equal to ℓ.

Proof. The expected time in each component protocol is finite, thus e(n) is unbounded with probability 1. By definition of e(n), we have e(n) − 1 …

… 24 ≈ 4.59 for a total entropy consumption of 5.16 bits.
The expected number of output letters produced is 2 and the entropy of the output distribution is ≈ 1.49, for a total entropy production of 2.98 bits. The efficiency is the ratio 2.98/5.16 ≈ 0.58.

… D^{(n)}_k, each of capacity k. Moreover, D_k is computable if the p^{(0)}_i are, or under the assumption of unit-time real arithmetic; that is, allowing unit-time addition, multiplication, and comparison of arbitrary real numbers.

… the target distribution extended to k-symbol strings (p^{(0)} … . If the protocol encounters a codeword in S_m, it emits nothing and transitions to the start state of D_k. … and let U^{(n)} = V^{(n)} \ V^{(n+1)}, the event that the protocol emits a string during the execution of D^{(n)}.

Lemma 4.4. Let m_k be a uniform bound on the consumption in each component D^{(n)}_k of one iteration of D_k. If the m_k satisfy (3.5), then the variances (3.6) and (3.7) vanish in the limit.

Lemma 4.7. Let u be irrational. For infinitely many integers k > 0, ku − ⌊ku⌋ < 1/(k + 1).

For any coalgebra (S, δ), there is a unique coalgebra morphism h : (S, δ) → ((Γ*)^{Σ⁺}, D) defined coinductively by

(h(s)@a, h(s)(a)) = let (t, z) = δ(s, a) in (h(t), z),

where s ∈ S and a ∈ Σ; equivalently,

h(s)(a) = snd(δ(s, a))        h(s)(ax) = h(fst(δ(s, a)))(x).

This is a coalgebra with respect to the endofunctor (− × Γ*)^Σ on Set. Normally, as the structure map for such a coalgebra, δ would be typed as δ : S → (S × Γ*)^Σ, but we have recurried it here to align more with the intuition of δ as the transition map of an automaton. The definition is coinductive in the sense that it involves the greatest fixpoint of a monotone map. We must take the greatest fixpoint to get the infinite behaviors as well as the finite behaviors.

Acknowledgments

Thanks to Swee Hong Chan, Bobby Kleinberg, Joel Ouaknine, and Aaron Wagner for valuable discussions. Thanks to the anonymous referees for several suggestions for improving the presentation.
Thanks to the Bellairs Research Institute of McGill University for providing a wonderful research environment. This research was supported by NSF grants CCF-1637532, IIS-1703846, IIS-1718108, and CCF-2008083, ARO grant W911NF-17-1-0592, and a grant from the Open Philanthropy project.

Using (4.13), the expected number of target symbols produced is … as m is Θ(k). The number of source symbols consumed is k. The information-theoretic bound on the production/consumption ratio is the quotient of the source and target entropies. We also have … The production/consumption ratio is thus …

Example 4.8. Consider the case r = k = 4 in which the input alphabet is u, v with probabilities 1/4 and 3/4 respectively, and the output distribution is uniform on the ternary alphabet 0, 1, 2. Then log_{r−1} r = log₃ 4 ≈ 1.26 and m = ⌊4 log₃ 4⌋ = 5. The conditions for applying the protocol are satisfied: m > k, 4 > ln 3 − 1, and as required by (4.12), 4 log₃ 4 − ⌊4 log₃ 4⌋ ≈ .05 < 1/5. As guaranteed by (4.11), we can write

3⁵ = ∑_{i=0}^{4} a_i 3^i,        0 ≤ a_i ≤ C(4, i).

The coefficients a₀ = a₁ = 0, a₂ = 6, a₃ = 4, and a₄ = 1 do the trick. (The representation is not unique; the coefficients 0, 3, 5, 4, 1 will work as well.) We now select an exhaustive prefix code over the ternary output alphabet with exactly a_i codewords of length 5 − i. The expected production is

(1 · 1 · 3⁴/4⁴ + 2 · 4 · 3³/4⁴ + 3 · 6 · 3²/4⁴) log 3 = (459/256) log 3 ≈ 2.84

for an efficiency of 2.84/3.25 ≈ 0.87. The alternative coefficients 0, 3, 5, 4, 1 give slightly better production of (468/256) log 3 ≈ 2.90 for the same consumption, yielding an improved efficiency of 2.90/3.25 ≈ 0.89.

Conclusion

We have introduced a coalgebraic model for constructing and reasoning about state-based protocols that implement entropy-conserving reductions between random processes. We have provided basic tools that allow efficient protocols to be constructed in a compositional way and analyzed in terms of the tradeoff between state and loss of entropy.
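Example 4.8's totals, and the uniformity of the output, can be re-derived mechanically. The code below is our own sketch: the particular exhaustive prefix code is one valid choice, not necessarily the paper's, and the production/consumption figures are our arithmetic.

```python
from fractions import Fraction
from math import log2

def production_consumption(a, r=4, k=4, m=5):
    """Expected production and consumption (bits) when a_i codewords of
    length m - i are each emitted with probability (r-1)^i / r^k."""
    assert sum(ai * (r - 1) ** i for i, ai in enumerate(a)) == (r - 1) ** m
    symbols = sum(ai * (m - i) * (r - 1) ** i for i, ai in enumerate(a)) / r ** k
    h = (1 / r) * log2(r) + ((r - 1) / r) * log2(r / (r - 1))
    return symbols * log2(r - 1), k * h

p, cons = production_consumption([0, 0, 6, 4, 1])
print(round(p, 2), round(cons, 2))        # → 2.84 3.25

# one exhaustive ternary prefix code with a_4 = 1 word of length 1,
# a_3 = 4 of length 2, and a_2 = 6 of length 3 (Kraft sum = 1)
code = ["0", "10", "11", "12", "20", "210", "211", "212", "220", "221", "222"]
prob = {w: Fraction(3 ** (5 - len(w)), 4 ** 4) for w in code}

# each letter occurs in each position with equal probability => uniform output
for n in range(3):
    marginals = {c: sum(p_w for w, p_w in prob.items() if len(w) > n and w[n] == c)
                 for c in "012"}
    assert len(set(marginals.values())) == 1, (n, marginals)
```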
We have illustrated the use of the model in various reductions. An intriguing open problem is to improve the loss of the protocol of §4.4 to Θ(1/k). Partial progress has been made in §4.5, but we were not able to generalize this approach.

References

[1] J. von Neumann, Various techniques used in connection with random digits, notes by G. E. Forsythe, National Bureau of Standards, Applied Math Series 12 (1951) 36-38. Reprinted in: von Neumann's Collected Works, vol. 5, Pergamon Press (1963) 768-770.
[2] P. Elias, The efficient construction of an unbiased random sequence, Ann. Math. Stat. 43 (1972) 865-870.
[3] Y. Peres, E. Mossel, C. Hillar, New coins from old: Computing with unknown bias, Combinatorica 25 (2005) 707-724.
[4] Y. Peres, Iterating von Neumann's procedure for extracting random bits, Ann. Stat. 20 (1992) 590-597.
[5] M. Blum, Independent unbiased coin flips from a correlated biased source: a finite state Markov chain, Combinatorica 6 (1986) 97-108.
[6] S. Pae, M. C. Loui, Randomizing functions: Simulation of discrete probability distribution using a source of unknown distribution, IEEE Trans. Information Theory 52 (2006) 4965-4976.
[7] S. Pae, M. C. Loui, Optimal random number generation from a biased coin, in: Proc. 16th ACM-SIAM Symposium on Discrete Algorithms, Vancouver, Canada, 2005, pp. 1079-1088.
[8] S. Pae, Random number generation using a biased source, Ph.D. thesis, University of Illinois, 2005.
[9] T. S. Han, M. Hoshi, Interval algorithm for random number generation, IEEE Trans. Information Theory 43 (1997) 599-611. doi:10.1109/18.556116.
[10] N. Nisan, D. Zuckerman, Randomness is linear in space, Journal of Computer and System Sciences 52 (1996) 43-52.
[11] N. Nisan, A. Ta-Shma, Extracting randomness: A survey and new constructions, Journal of Computer and System Sciences 58 (1999) 148-173.
[12] A. Ta-Shma, On extracting randomness from weak random sources, in: Proc. 28th ACM Symp. Theory of Computing, 1996, pp. 276-285.
[13] A. Srinivasan, D. Zuckerman, Computing with very weak random sources, SIAM J. Computing 28 (1999) 264-275.
[14] Y. Dodis, A. Elbaz, R. Oliveira, R. Raz, Improved randomness extraction from two independent sources, in: K. J. et al. (Eds.), Approx and Random 2004, volume 3122 of LNCS, Springer, 2004, pp. 334-344.
[15] P. R. Halmos, Measure Theory, Van Nostrand, 1950.
[16] K. L. Chung, A Course in Probability Theory, 2nd ed., Academic Press, 1974.
[17] W. Feller, An Introduction to Probability Theory and Its Applications, volume 1, 2nd ed., Wiley, 1971.
[18] W. Feller, An Introduction to Probability Theory and Its Applications, volume 2, 2nd ed., Wiley, 1971.
[19] J. L. Doob, Stochastic Processes, Wiley, New York; Chapman & Hall, London, 1953.
[20] A. Kolmogorov, Foundations of the Theory of Probability, 1st ed., Chelsea, 1950.
[21] A. Kolmogorov, Foundations of the Theory of Probability, 2nd ed., Chelsea, 1956.
[22] R. Durrett, Probability: Theory and Examples, Cambridge University Press, 2010.
[23] T. M. Cover, J. A. Thomas, Elements of Information Theory, Wiley-Interscience, 1991.
[24] J. Adamek, Foundations of Coding, Wiley, 1991.
[25] J. W. S. Cassels, An Introduction to Diophantine Approximation, volume 45 of Cambridge Tracts in Mathematics and Mathematical Physics, Cambridge University Press, 1957.
[26] S. Lang, Introduction to Diophantine Approximations, Springer, 1995.
[27] W. M. Schmidt, Diophantine Approximation, volume 785 of Lecture Notes in Mathematics, Springer, 1996. doi:10.1007/978-3-540-38645-2.
[28] T. Hirschler, W. Woess, Comparing entropy rates on finite and infinite rooted trees with length functions, IEEE Trans. Information Theory (2017). doi:10.1109/TIT.2017.2787712.
[29] G. Böcherer, R. A. Amjad, Informational divergence and entropy rate on rooted trees with probabilities, in: Proc. IEEE Int. Symp. Information Theory, 2014, pp. 176-180. doi:10.1109/ISIT.2014.6874818.
[30] P. Panangaden, Labelled Markov Processes, Imperial College Press, 2009.
[31] E.-E. Doberkat, Stochastic Relations: Foundations for Markov Transition Systems, Studies in Informatics, Chapman & Hall, 2007.
People Lie, Actions Don't! Modeling Infodemic Proliferation Predictors among Social Media Users

Chahat Raj ([email protected]) and Priyanka Meel ([email protected])
Department of Information Technology, Delhi Technological University, India

doi: 10.1016/j.techsoc.2022.101930 · arXiv: 2111.11955

Keywords: factor identification, fake news, COVID-19, misinformation, infodemic, modeling predictors

Abstract. Social media is interactive, and interaction brings misinformation. With the growing amount of user-generated data, fake news on online platforms has become much more frequent since the arrival of social networks. Every now and then, an event occurs, becomes the topic of discussion, and generates and propagates false information. Existing literature studying fake news primarily elaborates on fake news classification models; approaches exploring the characteristics of fake news and ways to distinguish it from real news are minimal, and few studies have focused on statistical testing and generating new factor discoveries. This study assumes fourteen hypotheses to identify factors exhibiting a relationship with fake news. We perform the experiments on two real-world COVID-19 datasets using qualitative and quantitative testing methods. The study concludes that sentiment polarity and gender can significantly identify fake news; dependence on the presence of visual media is, however, inconclusive. Additionally, Twitter-specific factors like followers count, friends count, and retweet count differ significantly between fake and real news, though the contribution of status count and favorites count is disputed. This study identifies practical factors to be jointly utilized in the development of fake news detection algorithms.

1. Introduction

COVID-19 spread worldwide faster than anyone could have imagined. People had scarcely heard of it before it turned deadly.
Only after facing its catastrophic consequences did many people become aware of the disease and begin to think about it. Talk of COVID-19 was everywhere and on everyone's minds and lips, and conversations about the topic overwhelmed social networking platforms. Social media has established itself as the easiest way to feed information to people, and the internet has been flooded with all kinds of information. But not everything on the internet is reliable: much of the information circulating on social media has not been validated and reflects mere opinion. Gradually these conversations turned into fake news of every sort. With the ease of posting, sharing, and accessing information on the web, users can quickly be confounded by fake news, which encompasses every type of misinformation and disinformation. Some of the earliest attempts at spreading fake news worldwide came from the desks of politicians and public figures, misleading people at large. It was fake news that reduced 5G towers in the UK to ruins. Fake news triggered deadly political, social, religious, technological, and environmental upheavals around the globe and generated a sense of distrust among people. Enmity took hold, with many claiming that China was the most causative element in spreading the disease. Detecting all kinds of talk that could turn into fake news became essential to avoid leading the world into another situation resembling mass destruction. Fake news about the pandemic sprawled across various dimensions of society. One of these was claims about remedies for COVID-19: countless remedial approaches and suggestions played their part in fueling fake news. "A pinch of turmeric or a drop of garlic juice could cure the fatal" was among the most prevalent unauthenticated fake remedies.
Poor perceptions, unproven methods, illogical claims, false figures, and alarming news overwhelmed the global information landscape. Social media platforms are well known for spreading misinformation and denying scientific literature [1]. False social media posts have also tricked users into relying on harmful and poisonous substances such as weed, cannabis, and ethanol [2]. The rapid evolution of the COVID-19 pandemic has not permitted immediate and specific scientific data [3]. COVID-19 is not the only event to generate fake news; in the past, many events have led to colossal misinformation spread on online social networks, such as the 2016 US presidential elections, Pizzagate, and Hurricane Harvey [4]. COVID-19, however, is a single event generating misinformation on a scale larger than any other. This led the World Health Organization to coin the term "infodemic," referring to the mass propagation of false news revolving around the pandemic. Previous research has contributed variously to solving the fake news problem. Researchers in the behavioral sciences have covered the factors involved in sharing and accepting fake news [5,6,7]. Others have investigated factors such as user demographics and background information [8]. Many studies have developed fake news detection algorithms [9,10]; such algorithms widely utilize news content, including linguistic, visual, and network features. However, there is no ideal classifier, and most characteristics of fake news remain unidentified. In this paper, we identify several key factors associated with fake and real news on Twitter. We formulate fourteen hypotheses on these key elements and their direct and mediating relationships with fake news. The hypotheses are evaluated on two real-world datasets containing tweets about the COVID-19 pandemic.
MediaEval 2020 [11] is a benchmark dataset containing tweets pertaining to the coronavirus and 5G conspiracy. CovidHeRA [12] is a collection of tweets associated with the spread of health-related misinformation amidst the pandemic. The contribution of this paper is an analysis of the characteristics that differentiate fake news from real news. We identify the following key factors: sentiment polarity, gender, media usage, follower count, friends count, status count, retweet count, and favorites count. The interdependence of factors such as sentiment polarity, gender, and media usage is studied intensively; the relationship between fake news and these factors has not been examined in past research. We also extend the work of Parikh et al. [13] by demonstrating the relationship between fake news and particular sentiment polarities. This paper arrives at interesting outcomes, suggesting important features on which fake news depends. The research bridges existing gaps in the literature and forms the basis for a new direction in fake news analysis. Our hypotheses should be helpful in developing efficient fake news detection algorithms covering a wide range of fake news components. The organization of this paper is as follows: Section 2 studies the existing literature on fake news and the COVID-19 Infodemic, providing insights into existing hypotheses and conclusions drawn about fake news. Section 3 presents the research methodology, explaining the datasets used and the characteristics assumed for this study. Section 4 covers the results obtained by performing statistical tests on the datasets. Section 5 describes the insights drawn from the results and summarizes the acceptance/rejection of the formulated hypotheses. Section 6 concludes the paper by discussing future directions.

Literature Review

This section discusses the progress in the fake news and hypothesis domains presented by fellow researchers so far.
Fake News: The menace of fake news has been a challenging problem for information consumers and a constant topic of concern in the research community. Various studies have discussed the identification and detection of fake news on online social networks [14]. Past studies have focused on a vast range of fake news dimensions, spanning its origin, propagation, consumption, and impact [15]. In the recent era, various solutions have been proposed to detect fake news by exploiting its textual [16], visual [17], and nodal features [18]. In contrast, studies pertaining to hypothesis formulation and testing are very few, and limited literature is available discussing the latest trends in online social networks that highlight vulnerabilities in fake news propagation and consumption. It is essential to formulate and discover the dimensions on which fake news depends. Some studies have proposed important insights beneficial for fake news detection. For instance, Parikh et al. proposed hypotheses discussing the origin, proliferation, and tone of fake information [13]. They concluded that such misleading information is published more on lesser-known websites than on popular ones. In terms of proliferation, fake news is shared on social media more often by unverified users than by verified accounts. They also demonstrated that fake news has a specific tone or sentiment (positive, negative, or neutral) but did not conclude which particular tone fake news is mostly related to. Their study provides ways to form additional hypotheses, which is also a motivation for our work. Demographics and culture form the basis of the theories proposed by Rampersad and Althiyabi [8]. They identified established relationships between age and the acceptance of fake news, noting that other demographics, such as gender and education, played a lesser role in fake news acceptance. Another notable hypothesis confirmed that educated people are less likely to accept fake news.
It was also observed that culture indirectly but significantly impacts the acceptance of fake news. Other works have highlighted the connection between the Third Person Effect (TPE) and fake news sharing [19,20]. Brewer et al. have drawn several conclusions about readers' reactions to consuming fake news [21]. Horne et al. have distinguished between real and fake news based on stylistic and psychological features of the text [22]. In another work, by Silverman and Singer-Vine, it was identified that 75% of US adults accepted fake news as true [23]. Similarly, Bovet and Makse studied fake news propagation on Twitter during the 2016 US presidential elections and explored its influence [24]. Altay et al. hypothesized a relation between users' reputation and fake news sharing [5]. They found that very few people engaged in sharing fake news and identified the causes of such behavior, concluding that sharing fake news harmed people's reputations and resulted in trust issues, which is a significant reason why so few people engage in it. Osatuyi and Hughes found that the amount of information available on fake news platforms is smaller than on real news platforms [25]. Exploration of the role of comments in identifying and rejecting fake news shows that users are less likely to accept fake news if they come across critical comments about the content [26]. Infodemic: With the outbreak of the COVID-19 pandemic, social media communication and interaction rose to a level greater than ever before. Global concerns about the disease brought the world together to share information on online social networks. Such large-scale propagation gave rise to a phenomenon: the "Infodemic." In an early response, researchers approached this problem by analyzing various concerns and suggesting solutions. Moscadelli et al. [27] have investigated which topics about the pandemic are most polluted with fake news. Calvillo et al.
[28] have analyzed political associations with the discernment of fake news. Formulating hypotheses that link the fake news belief structure to its acceptance, Kim and Kim [29] proposed that factors such as source credibility, quality of information, the receiver's ability, perceived benefit, trust, and knowledge decrease people's belief in fake news, whereas heuristic information, perceived risk, and stigma strengthen it. Greene and Murphy [30] have discussed the likelihood of people sharing true or false stories on social media, establishing its association with their knowledge concerns. Another study linking conscience and ideology with infodemic sharing behavior is provided by Lawson and Kakkar [31]. Montesi [32] sheds light on the nature of the infodemic and suggests that the harm caused by fake news is not health-related but more of a moral sort; politics and society are identified as the dominant infodemic themes. Building constructs on the Third Person Effect (TPE), Lui and Huang [33] report findings regarding the susceptibility to and perception of fake news in the pandemic era. Similarly, Laato et al. [34] discuss factors such as information sharing, information overload, and cyberchondria that aid fake news propagation. Experimenting on a Nigerian sample, Sulaiman [35] found no relationship between information evaluation and fake news sharing. Alvi and Saraswat [36] proposed many hypotheses exploring heuristic and systematic factors behind fake news sharing.

Research Methodology

Data

This study uses two publicly available benchmark datasets, MediaEval 2020 [11] and CovidHeRA [12]. MediaEval 2020 issued a benchmark dataset for its fake news detection task. The dataset consists of 5842 tweets classified into three classes: 5G coronavirus conspiracy, other conspiracy, and non-conspiracy. The tweets contain real and false information revolving around the COVID-19 pandemic. For this study, we classify these tweets into two coarse classes, with non-conspiracy tweets as real and the remaining tweets as fake.
CovidHeRA is another benchmark dataset containing false tweets related to the coronavirus and health. These tweets are a collection of fake remedies, preventive measures, treatments, and other health-related information spread across Twitter amidst the pandemic. The sizes of both datasets for each category are provided in tables 1, 2, and 3.

Research Hypotheses

To identify characteristics that distinguish fake news from real news, and consequently to identify fake news based on these characteristics, we have formed fourteen hypotheses based on the qualitative and quantitative variables present in the datasets. Past research determining factors related to fake news is limited. To identify the dependencies of social media misinformation, we identify and analyze eight key elements: sentiment polarity, gender, media usage, follower count, friends count, status count, retweet count, and favorites count. We assume that fake news characterization, propagation, and acceptance have a relationship with these factors, which can consequently be utilized in fake news detection. For a better understanding, each tweet labeled as fake/real in the datasets has the specific characteristics mentioned above. It is crucial to examine which feasible aspects demonstrate a relationship with false tweets. We also aim to study whether any factors differ significantly between real and fake tweets. By establishing such relationships, we intend to describe features useful for real and fake tweet classification. As evident from the existing literature, very few features have been exploited by fake news detection algorithms; by examining the stated features, we propose additional contributing characteristics. Qualitative hypotheses HA, HB, and HC are tested to scrutinize the direct relationships between sentiment, gender, and media usage with fake news, respectively. Further, it is vital to analyze whether the bias of one independent variable influences the bias of another independent variable.
For example, we test whether a higher proportion of one categorical variable contributes to a higher proportion of another categorical variable. To do so, we construct six more qualitative hypotheses: HD, HE, HF, HG, HH, and HI. These nine hypotheses are tested using the Chi-square test of independence; the relationships are demonstrated in figure 1. To study the quantitative variables, we formulate hypotheses HJ to HN, perform Analysis of Means on each of them, and calculate confidence intervals. Figure 2 demonstrates the quantitative relationships.

Qualitative Hypotheses and Factors

Sentiment: According to Parikh et al., it is widely assumed that most of the news spreading online is negative in its linguistic tone. However, it has not been proven that fake news has a bias towards a particular sentiment polarity.
HA0: There is no bias in the proportion of different sentiments between fake news and real news.
HA1: There is a significant bias in the proportion of different sentiments between fake news and real news.
Gender: Rampersad and Althiyabi, examining a sample from Saudi Arabia, observed that gender has a weakly positive effect on people's acceptance of fake news. That sample is specific to a particular demographic region, whereas our datasets consist of tweets from Twitter users across the globe, which helps to examine the assumption on a universal scale. We use HB to verify whether there is a significant relationship between gender and false information.
HB0: There is no bias of the gender of users involved in fake news with respect to real news.
HB1: There is a significant bias of the gender of users involved in fake news with respect to real news.
Media: Several fake news detection algorithms have been designed to detect whether the visual media in a piece of fake information is credible. We hereby analyze whether it can be stated, solely based on the presence of visual media, that a post/message is false.
We categorize the datasets into two modalities: without and with visual media (pictures/videos). We analyze which data modality of social media posts contributes more to, or demonstrates bias towards, misinformation using HC.
HC0: There is no bias of media usage in fake news with respect to real news.
HC1: There is a significant bias of media usage in fake news with respect to real news.
Based on the above three univariate hypotheses, we determine the mediating relationships among these factors and formulate multivariate hypotheses (HD to HI) to test whether bias in one of the above proportions is due to bias in the proportions of another variable.
HH0: There is no relationship between a particular gender and media usage in fake news.
HH1: There is a significant relationship between a particular gender and media usage in fake news.
HI0: There is no bias in the usage of media in fake news between different genders of users.
HI1: There is a significant bias in the usage of media in fake news between different genders of users.

Quantitative Hypotheses and Factors

Using the data scraped from Twitter, we test our hypotheses on five key factors, which can be categorized into three user/profile-specific features, i.e., the numbers of followers, friends, and statuses, and two post-specific features, i.e., the retweet count and favorites count. In our approach, we assume that these factors can be utilized in identifying the credibility of tweets, or in other words, in labeling tweets. Moreover, we assume that these factors affect fake news sharing, acceptance, and propagation. (As reported in the results: contradictory outcomes make Hypothesis HH inconclusive, and no difference in the proportions of media usage in fake news with respect to real news is observed, as the χ2 value of 2.529 is less than χ2c = 3.84 and its p-value of p = 0.112 is greater than α = 0.05; therefore, for Hypothesis HI, we cannot come to any conclusive decision.)
For the quantitative variables, we plot the data distribution around the mean with a 95% confidence interval. This distinguishes the central values of the variables and helps us determine the strength of the distinction: the smaller the distance of the upper and lower bounds from the mean, the more reliably these values represent the true mean of the population. Users who propagated fake news have a smaller number of followers than users who propagated real news, and the ranges of the 95% CI for the mean do not overlap between fake news and real news; therefore, attributing a label to a piece of information on Twitter by comparing the number of followers of the user who shared it with the mean ranges of these plots can be done more accurately. From the plots of the number of friends in fig 3 and fig 4 for the CovidHeRA and MediaEval datasets, respectively, the previously mentioned inference becomes much more robust: not only do the 95% CI bounds remain distinct for fake news and real news, but the close proximity of the mean values for a particular label in both datasets also shows the repeatability of the trend. The mean value for fake news in the CovidHeRA and MediaEval datasets is 3181.374 and 3012.989, respectively, while for real news it is 2293.652 and 1999.394, respectively. There is a significant bias in the mean number of friends of users who propagated fake news compared to those who propagated real news. This distinction between the labels means the bias can prove helpful for labeling a piece of information based on its proximity to one of the mean values' 95% CI intervals. For the "number of statuses" variable, the 95% CI for the mean and the mean value for fake news and real news alternate between the two datasets. Therefore, we cannot come to any specific conclusion using this variable for a particular information sample, despite there being a bias in the mean values between the labels.
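The interval comparison above can be sketched in a few lines. This is an illustrative sketch using the reported friends-count summaries for the CovidHeRA dataset, with the normal-approximation z of 1.96 instead of the exact t quantile, so the bounds differ marginally from the table's CI values; the function and variable names are ours, not the paper's.

```python
# 95% confidence interval for a mean from summary statistics.
# NOTE: z = 1.96 (normal approximation) is used instead of the exact
# t quantile, so the bounds differ slightly from the table's CI values.
Z95 = 1.96

def ci95(mean, se):
    """Return (lower, upper) 95% confidence bounds for a mean."""
    return (mean - Z95 * se, mean + Z95 * se)

def overlap(a, b):
    """True if two (lower, upper) intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# Reported friends-count summaries for the CovidHeRA dataset.
fake_friends = ci95(3181.374, 149.150)   # users who propagated fake news
real_friends = ci95(2293.652, 35.658)    # users who propagated real news

# Disjoint intervals support labelling a tweet by proximity to either mean.
print(overlap(fake_friends, real_friends))  # False
```

Because the two intervals are disjoint, a new observation far closer to one interval than the other can be assigned the corresponding label, which is exactly the labeling heuristic described above.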
The same conclusion can be drawn for the "number of favorites," as the ranges in both datasets are significantly different between the labels.

Conclusion

Fake information on social platforms has constantly been increasing. In the state of the COVID-19 pandemic, this problem has grown at an exponential rate globally. The pandemic is a major event generating misinformation and promoting its consumption through social networks worldwide. In the absence of a holistic fake news detection model, it is unclear what factors can be used to identify misinformation, and very few past works are dedicated to identifying such factors. In this work, we examined several factors from two Twitter datasets, MediaEval 2020 and CovidHeRA, using fourteen hypotheses, HA to HN. The study uses Chi-square tests for the nine qualitative hypotheses (HA to HI), whereas for the five quantitative hypotheses (HJ to HN) we have calculated confidence intervals using Analysis of Means. Observations from this study unravel specific characteristics that distinguish fake news from real news. These findings pave the way for future research and development of fake news detection algorithms. We motivate fellow researchers to design algorithms that utilize the discovered dependencies through their combined decisions, and we encourage the discovery of more identifiers that can characterize the false information present online ubiquitously. This study provides a new dimension to the existing literature in the fake news domain. Alvi and Saraswat [36] explored connections amongst various heuristic and systematic factors such as Sharing Motivation, Social Media Fatigue, Feel Good Factor, Fear Of Missing Out, News Characteristics, Extraversion, Conscientiousness, Agreeableness, Neuroticism, Trust, and Openness. As observed from the existing literature, past studies revolve around identifying psychological and behavioral factors that demonstrate a relationship with fake news.
There is a research gap in characterizing features that could aid in distinguishing false information from real information and serve as contributing factors in building fake news detection algorithms. Originally, the datasets consisted of tweet IDs. To procure the various characteristics of the tweets, the Python library Tweepy is utilized; this scraping provides various items of tweet and user content information, which form the basis of this study. To obtain the gender of Twitter users, the gender predictor algorithm by Sap et al. [37] is used. Sentiments are extracted using Microsoft's Text Analytics service. Sentiment scores are returned as values in the range 0.0 to 1.0: a score between 0.0 and 0.3 signifies negative, 0.3 to 0.7 neutral, and 0.7 to 1.0 positive sentiment. For media usage, we utilize the 'extended_entities' column from the scraped datasets.

Figure 1: Factors determining fake news (qualitative hypotheses)

HD0: There is no influence of bias in the proportion of a particular gender of the user on the bias in the proportion of sentiments in fake news with respect to real news.
HD1: There is a significant influence of bias in the proportion of a particular gender of the user on the bias in the proportion of sentiments in fake news with respect to real news.
HE0: There is no bias in the proportion of a particular sentiment used in fake news between different genders of users.
HE1: There is a significant bias in the proportion of a particular sentiment used in fake news between different genders of users.
HF0: There is no bias in inducing a particular sentiment with media usage in fake news.
HF1: There is a significant bias in inducing a particular sentiment with media usage in fake news.
HG0: There is no bias in the usage of media amongst different sentiments used in fake news.
HG1: There is a significant bias in the usage of media amongst different sentiments used in fake news.
Follower and friends counts determine the extent of reachability of a particular post or message within the social network of the user who created it. A retweet is the action of sharing a particular tweet on one's timeline; it is mainly done by a follower of the user who created the tweet and is visible to other Twitter users who are followers of the user who retweeted it. The retweet count captures the propagation and acceptance behavior of a fake post by measuring its social reach. It is similar to the "Share" action on other social networks: it spreads a particular post to the user's social network, and the larger the retweet count, the more likely people reading the post are to believe that piece of information and further spread it across the web. The status count corresponds to the total number of posts/retweets a specific user has made since the creation of their account. Favorites are markings a user makes on posts they would like to save for the future. We determine the relationship between these quantitative variables and the label of the post, i.e., the relationship between the numbers of retweets and favorites of the post, the numbers of followers, friends, and statuses of the user who posted it, and the post being real or fake. Since the source of misinformation can range from a random regular user to a credible account such as a commercial news channel, journalist, or celebrity, it is difficult to assume any specific range for the counts of these quantitative variables. Hence, for each characteristic we test whether there is a significantly distinguishable bias in the mean value, with a confidence interval around it, for each of these variables; in other words, we assess the probability with which a post or piece of information under examination can be labeled fake or real based on its values of the above-mentioned quantitative variables.
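Looking back at the data-preparation step, the sentiment bucketing (Text Analytics scores in [0, 1], split at 0.3 and 0.7) can be sketched as below. Treating the exact boundary values as half-open intervals is our assumption, since the inclusivity of the cutoffs is not stated in the text.

```python
def polarity(score: float) -> str:
    """Map a sentiment score in [0, 1] to a coarse polarity label.

    Cutoffs follow the dataset preparation: below 0.3 is negative,
    0.3 up to 0.7 is neutral, 0.7 and above is positive. Treating the
    boundaries as half-open intervals is an assumption.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("sentiment score must lie in [0, 1]")
    if score < 0.3:
        return "negative"
    if score < 0.7:
        return "neutral"
    return "positive"

print([polarity(s) for s in (0.1, 0.5, 0.9)])  # ['negative', 'neutral', 'positive']
```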
Figure 2: Factors determining fake news (quantitative hypotheses)

HJ0: There is no bias of follower count in fake news.
HJ1: There is a significantly distinguishable bias of follower count in fake news.
HK0: There is no bias of friends count in fake news.
HK1: There is a significantly distinguishable bias of friends count in fake news.
HL0: There is no bias of status count in fake news.
HL1: There is a significantly distinguishable bias of status count in fake news.
HM0: There is no bias of retweet count in fake news.
HM1: There is a significantly distinguishable bias of retweet count in fake news.
HN0: There is no bias of favorites count in fake news.
HN1: There is a significantly distinguishable bias of favorites count in fake news.
For the nine hypotheses HA to HI, which are formed upon the categorical variables, we use the Chi-square test of independence alongside computing Cramer's V, Pearson's r, and Spearman's rho. Cramer's V provides the strength of the association between the nominal categorical variables for the conclusion arrived at using the Chi-square test; its values range between 0 and 1. Pearson's r signifies both the strength and the direction of the association between two continuous variables, where direction indicates whether one variable increases or decreases with respect to a change in the other. Its values range from -1 to +1: a value of -1 means that as one variable increases the other decreases, +1 means that as one variable increases the other increases too, and 0 indicates no association. Spearman's rho differs from Pearson's r in that it can describe the correlation even when the variables do not have a linear association; it is also robust to long tails of outlier values, as it uses the ranks of the variables' values.
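As a sketch of this procedure, the 2x2 gender contingency table for the MediaEval dataset (Table 1) reproduces the χ2 and p values reported for hypothesis HB. The implementation below uses only the standard library: expected counts come from the independence formula, the p-value for a 1-degree-of-freedom test has the closed form erfc(sqrt(χ2/2)), and Cramer's V is χ2 scaled by the sample size. The function name is ours; the counts are the paper's.

```python
import math

def chi_square_2x2(table):
    """Chi-square test of independence for a 2x2 contingency table.

    Returns (chi2, p_value, cramers_v). Expected counts are
    E = row_total * col_total / grand_total; with df = 1 the p-value
    has the closed form erfc(sqrt(chi2 / 2)).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(chi2 / 2))   # df = 1 for a 2x2 table
    cramers_v = math.sqrt(chi2 / n)            # min(r-1, c-1) = 1
    return chi2, p_value, cramers_v

# MediaEval gender counts from Table 1: rows = fake/real, cols = male/female.
chi2, p, v = chi_square_2x2([[929, 837], [2011, 2065]])
print(round(chi2, 2), round(p, 3), round(v, 2))  # 5.26 0.022 0.03
```

The recovered values match row two of table 7 for MediaEval (χ2 = 5.261, p = 0.022), and the small Cramer's V of about 0.03 illustrates the weak-association point made in the results.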
The values in table 7 include the degrees of freedom as df, the Chi-square test value as χ2, the probability value as p-value, and the Cramer's V, Pearson's r, and Spearman's rho values. The first column of the table indicates the hypothesis to which the variables and their values belong. From the first row, we observe that the χ2 values for testing hypothesis HA with 2 degrees of freedom (df) for the CovidHeRA and MediaEval datasets are 352.963 and 17.103, respectively; both are greater than the critical value χ2c = 5.991, with p < 0.001 (significance level α = 0.05, the critical p-value). This implies that there is a significant difference in the proportions of sentiments used between fake and real news. But despite the substantial difference in proportions, the low values of Cramer's V (less than 0.2), Pearson's r (between -0.20 and +0.20), and Spearman's rho (between -0.20 and +0.20) indicate a weak association between the label (news being fake or real) and the sentiment (negative, neutral, or positive). These values (Cramer's V, Pearson's r, and Spearman's rho) are low for all the hypotheses tested. Therefore, we rely on comparing the Actual values from tables 1, 2 and 3 with the Expected values in tables 4, 5 and 6, respectively, to determine the association between an independent and a categorical dependent variable, or in other words, the bias of fake news towards a specific categorical variable or group of categorical variables. Testing hypothesis HD, the CovidHeRA dataset shows a significant difference, with p-values less than α = 0.05. But for the same gender in the MediaEval dataset, the χ2 value turns out to be 5.503, which is less than χ2c = 5.991, and its p-value of p = 0.064 > α = 0.05 suggests a contradictory inference from the two datasets.
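The critical values χ2c = 3.841 (df = 1) and χ2c = 5.991 (df = 2) used throughout follow directly from α = 0.05 and can be checked with the standard library alone: a chi-square variable with 2 degrees of freedom is exponential with CDF 1 - exp(-x/2), giving the closed form -2 ln α, while with 1 degree of freedom it is the square of a standard normal, giving the squared 0.975 normal quantile. The variable names below are ours.

```python
import math
from statistics import NormalDist

alpha = 0.05

# df = 1: chi-square(1) is the square of a standard normal, so the
# critical value is the squared two-sided normal quantile.
chi2c_df1 = NormalDist().inv_cdf(1 - alpha / 2) ** 2

# df = 2: chi-square(2) has CDF 1 - exp(-x/2), so the critical value
# has the closed form -2 * ln(alpha).
chi2c_df2 = -2 * math.log(alpha)

print(round(chi2c_df1, 3), round(chi2c_df2, 3))  # 3.841 5.991
```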
But since the MediaEval dataset gave both χ2 and p values close to their respective critical values for the female gender, we reject the Null Hypothesis (H0) for HD and conclude that there is a significant bias in the proportions of sentiments used by users of both genders, and that the bias in the proportion of user gender has no influence on the bias in the proportion of sentiments. Further, to identify towards which sentiment the bias of users of both genders is greater, we use the results from rows six, seven, and eight of table 7 for testing hypothesis HE. For the CovidHeRA dataset, the three rows mentioned above have χ2 values of 78.005, 13.65, and 146.509 for negative, neutral, and positive sentiment, respectively, all greater than χ2c = 3.841, and their respective p values, p < 0.001 for all three, are less than α = 0.05. The results from this dataset do not indicate the specific sentiment towards which the bias is greater; however, we can infer that there is a significant difference in the proportion of each sentiment when compared to real news. Observing the results from these three rows for the MediaEval dataset, we obtain χ2 values of 1.702, 4.82, and 0.411 for negative, neutral, and positive sentiments, respectively, where the χ2 values for the negative and positive sentiments are both less than χ2c = 3.841 while the value for the neutral sentiment is greater. This shows no significant bias of the user gender on negative and positive sentiment, as the p values obtained (0.192 and 0.521) are greater than α = 0.05; but for neutral sentiment, we observe a bias, as the p-value of 0.028 is less than α = 0.05. Therefore, we reject the Null hypothesis (H0) for HE and conclude that fake news is more biased towards neutral sentiment, followed by negative sentiment, and shows no significant difference from the proportions of real news for positive sentiment.
To test Hypothesis HF, the bias of media usage in inducing a particular sentiment in the propagation of COVID-19 fake news, rows nine, ten, and eleven of table 7 for the CovidHeRA dataset indicate χ2 values of 97.382, 59.615, and 124.61 for negative, neutral, and positive sentiment, respectively, all greater than χ2c = 5.991 and each with a p-value of p < 0.001, less than α = 0.05, indicating rejection of the Null Hypothesis (H0) for HF. For the MediaEval dataset, however, the χ2 values of 0.235, 2.399, and 20.077 for negative, neutral, and positive sentiments, respectively, with the former two being less than χ2c = 3.841 and the latter being greater, and their respective p values of p = 0.628, p = 0.121, and p < 0.001, indicate that only for positive sentiment is there a significant difference in the proportion of media usage in fake news with respect to real news. From the contradictory results of the two datasets for negative and neutral sentiments, we understand that a bias is produced by the usage of media only for positive sentiment. Hence, we reject the Null Hypothesis (H0) for HF. From rows twelve and thirteen of table 7, we test Hypothesis HG to observe a bias in the proportion of sentiments when media is used and when it is not used, respectively. For the CovidHeRA dataset, with usage of media (row 12) and without usage of media (row 13), the χ2 values of 267.346 and 61.585, respectively, are both greater than χ2c = 5.991, and their respective p values of p < 0.001 each, both less than α = 0.05, suggest that there is a difference in the proportion of sentiments used in fake news with respect to real news.
A similar inference can be obtained from the MediaEval dataset, in which usage of media (row 12) and no usage of media (row 13) have χ2 values of 7.223 and 25.15, respectively, both greater than χ2c = 5.991, with respective p values of p = 0.027 and p < 0.001, both less than α = 0.05. Hence, a bias is induced in the proportions of sentiments in fake news with respect to real news by both the usage and non-usage of media, and therefore we reject the Null Hypothesis (H0) for HG. Further, from rows fourteen and fifteen of table 7, we test Hypothesis HH to check whether the bias in the proportion of gender in fake news with respect to real news is influenced by a bias in media usage. For CovidHeRA, we obtain χ2 values of 193.333 and 5.472 for "media used" and "media not used," respectively, both greater than χ2c = 3.84, with their respective p values of p < 0.001 and p = 0.019, both less than α = 0.05. For the MediaEval dataset, the same rows give χ2 values of 5.561 and 0.345 and p values of p = 0.018 and p = 0.557 for "media used" and "media not used," respectively. We observe that for "media not used," the test shows the opposite result compared with the CovidHeRA dataset, meaning that there is no difference in the proportion of users' gender when media is not used in fake news propagation with respect to real news propagation. The plots from fig 3 and fig 4 for the number of retweets show similar mean values for fake news and for real news across the two datasets.
For Fake news, the mean retweet values are 154.132 and 260.701 for the two datasets, and for Real news the mean values are 628.718 and 644.781, showing the closeness within each label.

Figure 3: 95% Confidence Interval for quantitative factors on the CovidHeRA dataset

Table 1: Count of fake and real items with gender as a category

        CovidHeRA               MediaEval
Label   Male    Female  Total   Male    Female  Total
Fake    1532    772     2304    929     837     1766
Real    42683   40104   82787   2011    2065    4076
Total   44215   40876   85091   2940    2902    5842

Table 2: Count of fake and real items with sentiment polarity as a category

        CovidHeRA                             MediaEval
Label   Negative  Neutral  Positive  Total    Negative  Neutral  Positive  Total
Fake    1292      391      621       2304     1042      346      378       1766
Real    31004     24638    27145     82787    2320      690      1066      4076
Total   32296     25029    27766     85091    3362      1036     1444      5842

Table 3: Count of fake and real items with media usage as a category

        CovidHeRA                          MediaEval
Label   With Media  W/o Media  Total      With Media  W/o Media  Total
Fake    150         2154       2304       289         1477       1766
Real    17700       65087      82787      791         3285       4076
Total   17850       67241      85091      1080        4762       5842

... higher negative polarity than neutral or positive polarities. Parikh et al. noted that it was inconclusive to say whether fake news has a bias towards a particular polarity. Following their assumption, HA forms the primary hypothesis, testing whether fake news has a tendency towards a specific sentiment polarity.

Table 4: Expected count of fake and real items with gender as a category

        CovidHeRA               MediaEval
Label   Male    Female  Total   Male    Female  Total
Fake    1197    1107    2304    888.7   877.3   1766
Real    43018   39769   82787   2051.3  2024.7  4076
Total   44215   40876   85091   2940    2902    5842

Table 5: Expected count of fake and real items with sentiment polarity as a category

        CovidHeRA                             MediaEval
Label   Negative  Neutral  Positive  Total    Negative  Neutral  Positive  Total
Fake    874.5     677.7    751.8     2304     1016.3    313.2    436.5     1766
Real    31421.5   24351.3  27014.2   82787    2345.7    722.8    1007.5    4076

Table 6: Expected count of fake and real items with media usage as a category

        CovidHeRA                          MediaEval
Label   With Media  W/o Media  Total      With Media  W/o Media  Total
Fake    483         1821       2304       326.5       1439.5     1766
Real    17367       65420      82787      753.5       3322.5     4076
Total   17850       67241      85091      1080        4762       5842

From table 5, we observe that in both the CovidHeRA and MediaEval datasets, Fake news with Negative sentiment has a higher Actual count (1292, 1042) than its Expected count (874.5, 1016.3), and Fake news with Positive sentiment has a lower Actual count (621, 378) than its Expected count (751.8, 436.5). The count for Neutral sentiment varies inversely between the two datasets, with CovidHeRA showing a reduced count and MediaEval showing an increase. Similarly, we observe from the same tables that the Actual count of Real news with Negative sentiment is less than the Expected count in both datasets, while the Actual count of Real news with Positive sentiment is greater than the Expected count in both datasets. Therefore, we reject the Null hypothesis (H0) of HA and observe that Fake news propagation during COVID-19 has had a proportional bias towards Negative sentiment.

From the second row of table 7, we observe that the χ² values for testing hypothesis HB on the CovidHeRA and MediaEval datasets are 200.321 and 5.261, respectively; both are greater than the critical value χ²c = 3.841, with p < 0.001 and p = 0.022, respectively, both less than α = 0.05. This implies a significant difference in the proportions of the gender of users between Fake and Real news. By comparing the Actual values from table 1 with the Expected values from table 4, we observe that in both datasets the Male gender has a greater Actual than Expected proportion in Fake news, and the Female gender has a higher Actual than Expected proportion in Real news. Therefore, we reject the Null hypothesis (H0) for HB and observe a significant bias in the gender of users involved in COVID-19 Fake news propagation.

To test hypothesis HC, from the third row of table 7, we observe that the χ² values for the CovidHeRA and MediaEval datasets are 298.995 and 7.765, respectively; both are greater than the critical value χ²c = 3.841, with p < 0.001 and p = 0.006, respectively, both less than α = 0.05. For both datasets, comparing the Actual and Expected media usage from table 3 and table 6, respectively, shows that the Actual count of Fake news with media used is less than the Expected count, while the opposite holds for Real news. Therefore, there is a significant difference between the observed and expected proportions of media usage in Fake and Real news propagation, which leads us to reject the Null Hypothesis (H0) for HC.

Table 7: Chi-square test on qualitative hypotheses

MediaEval rho: 0.032  0.03  0.035  0.027  0.035  0.023  0.068  0.016  -0.008  0.048  0.117  0.005  0.141  0.034  0.017  0.043  0.03
MediaEval r:   0.037  0.03  0.035  0.035  0.038  0.023  0.068  0.016  -0.008  0.048  0.117  0.01   0.145  0.034  0.017  0.043  0.03

From the values in rows 16 and 17 of table 7, for the CovidHeRA dataset, both genders show a difference in the proportion of media used for Fake news propagation with respect to Real news: the χ² values of 209.649 and 87.831 for the male and female gender, respectively, are both greater than χ²c = 3.84, and their respective p values, both p < 0.001, are smaller than α = 0.05.
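The χ² values reported above can be reproduced directly from the contingency tables. Below is a minimal pure-Python sketch (Pearson's statistic with no continuity correction) applied to the CovidHeRA counts from table 1 (gender) and table 3 (media usage); the helper name `chi2_2x2` is ours, not from the paper.

```python
def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    without Yates continuity correction."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# CovidHeRA, table 1 (gender): rows = Fake/Real, cols = Male/Female
gender = [[1532, 772], [42683, 40104]]
# CovidHeRA, table 3 (media usage): rows = Fake/Real, cols = With/Without media
media = [[150, 2154], [17700, 65087]]

print(round(chi2_2x2(gender), 3))  # close to the reported 200.321
print(round(chi2_2x2(media), 3))   # close to the reported 298.995
```

Note that library implementations of this test (for example, SciPy's `chi2_contingency`) apply a continuity correction to 2x2 tables by default, which must be disabled to match the uncorrected values reported here.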
In the MediaEval dataset, we observe from rows 16 and 17 of table 7 that users of the male gender, with a χ² value of 5.664 and p = 0.017, show a difference in the proportion of media used for Fake news with respect to Real news, but for the female gender the difference is not significant.

Table 8: Descriptive statistics of the CovidHeRA (C) and MediaEval (M) datasets

CovidHeRA (C):
Variable   Label  Mean      SE        Median   Mode   SD        Variance   Count   95% Conf. Level
Followers  Real   63656.21  3847.024  21733    7810   1106894   1.23E+12   82787   7540.138
Followers  Fake   5421.657  445.735   1742.5   706    21395.32  4.58E+08   2304    874.085
Friends    Real   2293.652  35.658    605      209    10259.94  1.05E+08   82787   69.890
Friends    Fake   3181.374  149.150   953      775    7159.233  51254619   2304    292.483
Retweets   Real   628.718   17.616    173      28     5068.56   2.57E+07   82787   34.527
Retweets   Fake   154.132   38.207    52       18     1833.94   3.36E+06   2304    74.886
Status     Real   46189.4   497.010   9238     1760   143003.4  2.04E+10   82787   974.136
Status     Fake   56262.98  2269.618  17180    3145   108941.7  1.19E+10   2304    4450.709
Favorites  Real   7.766     0.490     2        0      141.126   19916.57   82787   0.961
Favorites  Fake   2.238     0.255     1        0      12.261    150.344    2304    0.500

MediaEval (M):
Variable   Label  Mean      SE        Median   Mode   SD        Variance   Count   95% Conf. Level
Followers  Real   99511.34  11302.57  37160.5  12548  721596.4  5.21E+11   4076    22153.04
Followers  Fake   23255.37  9979.619  4711.5   1180   419381.5  1.76E+11   1766    19560.05
Friends    Real   1999.394  159.182   609      138    10162.77  1.03E+08   4076    311.998
Friends    Fake   3012.989  302.646   733      650    12718.35  1.62E+08   1766    593.583
Retweets   Real   644.781   61.498    155      92     3926.28   15415676   4076    120.570
Retweets   Fake   260.701   71.410    60       77     3000.955  9005729    1766    140.058
Status     Real   55846.16  1744.372  18305.5  446    111366.9  1.24E+10   4076    3419.921
Status     Fake   38369.96  1787.307  12955    547    75109.43  5.64E+09   1766    3505.461
Favorites  Real   2244.092  224.780   292      177    14350.77  2.06E+08   4076    440.692
Favorites  Fake   679.669   201.229   48       48     8456.42   71511039   1766    394.672

Table 8 provides descriptive statistics of both datasets. From fig. 3 and fig. 4, for the CovidHeRA and MediaEval datasets, respectively, we observe that users who propagated Fake news have a smaller number of followers than the users who propagated Real news.
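The 95% confidence levels in Table 8 are consistent with the large-sample normal approximation, half-width ≈ 1.96 × standard error. A quick pure-Python check against the tabulated follower statistics (the helper name is ours, and z = 1.96 is our assumption; the paper may have used a t quantile):

```python
def ci95_halfwidth(standard_error):
    """Half-width of a 95% confidence interval for the mean,
    using the large-sample normal quantile z = 1.96."""
    return 1.96 * standard_error

# Followers of Real-news users: SE = 3847.024 (CovidHeRA), 11302.57 (MediaEval)
print(ci95_halfwidth(3847.024))   # ~7540, vs. the tabulated 7540.138
print(ci95_halfwidth(11302.57))   # ~22153, vs. the tabulated 22153.04
```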
The mean follower values for Fake news in these datasets are 5421.65 and 23255.37, in the order mentioned above. These are distinct from the mean numbers of followers for Real news, 63656.21 and 99511.34, for the two datasets. We also observe that there is a significant bias in the number of followers of the users of Fake news and Real news.

Acknowledgments

We thank Mihir P Mehta (Indian Institute of Management Raipur) for his support and feedback in the experimental setup.

Discussion

Fake news on social media is a menace that is hard to identify and characterize. It is unclear which factors are helpful in distinguishing between real and fake news. Past literature has identified several psychological and behavioral features associated with fake news propagation and acceptance, but little research has been done in identifying the key factors characterizing fake news. This study delves deep into factor analysis and the interdependence of the factors, examining how certain factors influence fake news detection and propagation on Twitter. Table 9 summarizes the results of all hypotheses considered.

Table 9: Hypotheses and results

Hypotheses                                                                          Results
HA: Bias of sentiment in fake news with respect to real news.                       Reject Null Hypothesis
HB: Bias of the gender of users involved in fake news with respect to real news.    Reject Null Hypothesis
HC: Bias of media usage in fake news with respect to real news.                     Reject Null Hypothesis
HD: Bias in the proportion of a particular gender of the user on the bias in the proportion of sentiments in fake news with respect to real news.    Reject Null Hypothesis
HH: Relationship between a particular gender and media usage in fake news.          Inconclusive
HI: Bias in and usage of media in fake news between different genders of users.     Inconclusive
HJ: Significantly distinguishable bias of "follower" count in fake news.            Reject Null Hypothesis
HK: Significantly distinguishable bias of "friends" count in fake news.             Reject Null Hypothesis
HL: Significantly distinguishable bias of "status" count in fake news.              Fail to Reject Null Hypothesis
HM: Significantly distinguishable bias of "retweet" count in fake news.             Reject Null Hypothesis
HN: Significantly distinguishable bias of "favorite" count in fake news.            Fail to Reject Null Hypothesis

In our qualitative hypothesis HA, it is assumed that there is a bias in the proportions of sentiment (linguistic tone) in fake news, although the central polarity of the bias was unclear. With our study on two COVID-19-specific datasets, we found a strong bias of fake news towards neutral sentiment, followed by negative sentiment, with respect to real news, as shown by the results of our first hypothesis. In the second hypothesis, HB, we tested the bias in the proportion of gender in fake news. The results indicate a strong bias of the male gender towards fake news propagation with respect to real news. The influence of the overall gender ratio of Twitter users is not taken into account, since the test is performed to distinguish characteristics of real news from fake news, and any such influence is assumed to affect both types of news equally and thus to cancel out. In other words, the speculated gender ratio of 6.85:3.15 should be observed in any random sample collection of tweets; hence, we directly compare the actual ratio from the dataset without considering the deviation from the speculated ratio. In our datasets, the proportion of tweets (both real and fake) with media is smaller than that of tweets without media.
From the chi-square test results on hypothesis HC, we find that the proportion of fake news with media is significantly less than expected, while that of real news with media is significantly more than expected. Further, we explore whether the bias in the proportions of one category among sentiment, gender, and media usage is significantly influenced by the bias in the proportions of the other categories. From the test of hypothesis HD, we find that Fake news shared by both the male and female gender shows a bias in the proportion of sentiment. The result for hypothesis HE indicates that this bias is towards fake news being sentiment-neutral, followed by sentiment-negative, with respect to real news; this supports our hypothesis HA. Further, from the results of testing hypotheses HF and HG: HG concludes that there is a bias of sentiment both with and without media usage, and from HF we conclude that this bias in fake news propagation is proportional to the use of positive sentiment. For the remaining combinations of gender and media usage, from the results of hypotheses HH and HI, it cannot be concluded whether there is a mutual influence of media usage and the gender of the user on the bias observed in hypotheses HB and HC, due to the contradictory results from the two datasets: in hypothesis HH, the contradictory results for "media used", and in HI, the contradictory results for the "Female" gender.

From the quantitative variables, we observe a significantly distinguishable difference in the mean number of followers, friends, and retweets for fake and real news. The smaller mean value for followers can be attributed to the fact that most Real news sources are official media channels and celebrity users who share information on Twitter, whereas fake news comes mostly from regular Twitter users who do not have such a huge following. Similar reasons can be attributed to the smaller mean value for retweets of fake news.
For the larger mean value of the number of friends, we understand that the users who propagate fake news are involved in more mutual social connections. Understandably, celebrities and official media sources, compared to active regular Twitter users, do not have many mutual connections that Twitter classifies as "friends", hence the smaller mean value. The confidence interval for the mean in each of these plots acts as a range for the true mean for fake and real news, and can be used to identify any sample of data by comparing its mean to the 95% CI for the mean in these plots. The non-distinguishable mean values and the reversed trend for the number of statuses posted by users who propagated fake and real news, and the difference between the two datasets in the range for the mean number of users who favorited a tweet, make these variables unsuitable for classifying the label of a tweet.

References

Rosenberg, H., Syed, S., & Rezaie, S. (2020). The Twitter pandemic: The critical role of Twitter in the dissemination of medical information and misinformation during the COVID-19 pandemic. Canadian Journal of Emergency Medicine, 22(4), 418-421.

Soltaninejad, K. (2020). Methanol mass poisoning outbreak, a consequence of COVID-19 pandemic and misleading messages on social media. The International Journal of Occupational and Environmental Medicine, 11(3), 148.

Tagliabue, F., Galassi, L., & Mariani, P. (2020). The "pandemic" of disinformation in COVID-19. SN Comprehensive Clinical Medicine, 2(9), 1287-1289.

Zannettou, S., Bradlyn, B., De Cristofaro, E., Kwak, H., Sirivianos, M., Stringini, G., & Blackburn, J. (2018, April). What is Gab: A bastion of free speech or an alt-right echo chamber. In Companion Proceedings of The Web Conference 2018 (pp. 1007-1014).

Altay, S., Hacquin, A. S., & Mercier, H. (2019). Why do so few people share fake news? It hurts their reputation. New Media & Society, 1461444820969893.

Ardèvol-Abreu, A., Delponti, P., & Rodríguez-Wangüemert, C. (2020). Intentional or inadvertent fake news sharing? Fact-checking warnings and users' interaction with social media content. Profesional de la Información, 29(5).

Talwar, S., Dhir, A., Kaur, P., Zafar, N., & Alrasheedy, M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. Journal of Retailing and Consumer Services, 51, 72-82.

Rampersad, G., & Althiyabi, T. (2020). Fake news: Acceptance by demographics and culture on social media. Journal of Information Technology & Politics, 17(1), 1-11.

Meel, P., & Vishwakarma, D. K. (2020). Fake news, rumor, information pollution in social media and web: A contemporary survey of state-of-the-arts, challenges and opportunities. Expert Systems with Applications, 153, 112986.

Vishwakarma, D. K., & Jain, C. (2020, June). Recent state-of-the-art of fake news detection: A review. In 2020 International Conference for Emerging Technology (INCET) (pp. 1-6). IEEE.

Pogorelov, K., Schroeder, D. T., Burchard, L., Moe, J., Brenner, S., Filkukova, P., & Langguth, J. (2020). FakeNews: Corona virus and 5G conspiracy task at MediaEval 2020. In MediaEval 2020 Workshop.

Dharawat, A., Lourentzou, I., Morales, A., & Zhai, C. (2020). Drink bleach or do what now? Covid-HeRA: A dataset for risk-informed health decision making in the presence of COVID19 misinformation. arXiv preprint arXiv:2010.08743.

Parikh, S. B., Patil, V., & Atrey, P. K. (2019, March). On the origin, proliferation and tone of fake news. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 135-140). IEEE.

Lozano, M. G., Brynielsson, J., Franke, U., Rosell, M., Tjörnhammar, E., Varga, S., & Vlassov, V. (2020). Veracity assessment of online data. Decision Support Systems, 129, 113132.

Narwal, B. (2018, October). Fake news in digital media. In 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (pp. 977-981). IEEE.

Oshikawa, R., Qian, J., & Wang, W. Y. (2018). A survey on natural language processing for fake news detection. arXiv preprint arXiv:1811.00770.

Parikh, S. B., & Atrey, P. K. (2018, April). Media-rich fake news detection: A survey. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 436-441). IEEE.

Zhou, X., & Zafarani, R. (2019). Network-based fake news detection: A pattern-driven approach. ACM SIGKDD Explorations Newsletter, 21(2), 48-60.

Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295-302.

Talwar, S., Dhir, A., Singh, D., Virk, G. S., & Salo, J. (2020). Sharing of fake news on social media: Application of the honeycomb framework and the third-person effect hypothesis. Journal of Retailing and Consumer Services, 57, 102197.

Brewer, P. R., Young, D. G., & Morreale, M. (2013). The impact of real news about "fake news": Intertextual processes and political satire. International Journal of Public Opinion Research, 25(3), 323-343.

Horne, B., & Adali, S. (2017, May). This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 11, No. 1).

Silverman, C., & Singer-Vine, J. (2016). Most Americans who see fake news believe it, new survey says. BuzzFeed News.

Bovet, A., & Makse, H. (2019). Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications, 10(1), 7.

Osatuyi, B., & Hughes, J. (2018, January). A tale of two internet news platforms-real vs. fake: An elaboration likelihood model perspective. In Proceedings of the 51st Hawaii International Conference on System Sciences.

Colliander, J. (2019). "This is fake news": Investigating the role of conformity to other users' views when commenting on and spreading disinformation in social media. Computers in Human Behavior, 97, 202-215.

Moscadelli, A., Albora, G., Biamonte, M. A., Giorgetti, D., Innocenzio, M., Paoli, S., ... & Bonaccorsi, G. (2020). Fake news and COVID-19 in Italy: Results of a quantitative observational study. International Journal of Environmental Research and Public Health, 17(16), 5850.

Calvillo, D. P., Ross, B. J., Garcia, R. J., Smelter, T. J., & Rutchick, A. M. (2020). Political ideology predicts perceptions of the threat of COVID-19 (and susceptibility to fake news about it). Social Psychological and Personality Science, 11(8), 1119-1128.

Kim, S., & Kim, S. (2020). The crisis of public health and infodemic: Analyzing belief structure of fake news about COVID-19 pandemic. Sustainability, 12(23), 9904.

Greene, C. M., & Murphy, G. (2020). Individual differences in susceptibility to false memories for COVID-19 fake news. Cognitive Research: Principles and Implications, 5(1), 1-8.

Lawson, A., & Kakkar, H. (2020). Of pandemics, politics, and personality: The role of conscientiousness and political ideology in sharing of fake news.

Montesi, M. (2020). Understanding fake news during the Covid-19 health crisis from the perspective of information behaviour: The case of Spain. Journal of Librarianship and Information Science, 0961000620949653.

Liu, P. L., & Huang, L. V. (2020). Digital disinformation about COVID-19 and the third-person effect: Examining the channel differences and negative emotional outcomes. Cyberpsychology, Behavior, and Social Networking, 23(11), 789-793.

Laato, S., Islam, A. N., Islam, M. N., & Whelan, E. (2020). What drives unverified information sharing and cyberchondria during the COVID-19 pandemic? European Journal of Information Systems, 29(3), 288-305.

Sulaiman, K. A., Adeyemi, I. O., & Ayegun, I. (2020). Information sharing and evaluation as determinants of spread of fake news on social media among Nigerian youths: Experience from COVID-19 pandemic. International Journal of Knowledge Content Development & Technology, 10(4), 65-82.

Alvi, I., & Saraswat, N. Information processing: Heuristic vs systematic and susceptibility of sharing COVID-19-related fake news on social media.

Sap, M., Park, G., Eichstaedt, J., Kern, M., Stillwell, D., Kosinski, M., ... & Schwartz, H. A. (2014). Developing age and gender predictive lexica over social media. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1146-1151).
A CHARACTERIZATION OF COMPACT LOCALLY CONFORMALLY HYPERKÄHLER MANIFOLDS

Liviu Ornea and Alexandra Otiman

University of Bucharest, Faculty of Mathematics and Informatics, 14 Academiei Str., 010702 Bucharest, Romania
Research Center in Geometry, Topology and Algebra, Calea Grivitei Street, Bucharest, Romania
Max Planck Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany

Abstract. We give an equivalent definition of compact locally conformally hyperkähler manifolds in terms of the existence of a nondegenerate complex two-form with natural properties. This is a conformal analogue of Beauville's theorem stating that a compact Kähler manifold admitting a holomorphic symplectic form is hyperkähler.

2010 Mathematics Subject Classification: 53C55, 53C26.

arXiv:1803.01363. DOI: 10.1007/s10231-019-00829-w.
19 Jun 2018

Keywords: locally conformally Kähler, locally conformally hyperkähler, Weyl connection, holonomy, Weitzenböck formula.

1. Introduction

A complex manifold $(M, J)$ is called locally conformally Kähler (LCK for short) if it admits a Hermitian metric $g$ such that the two-form $\omega(X, Y) := g(JX, Y)$ satisfies the integrability condition $d\omega = \theta \wedge \omega$ with respect to a closed one-form $\theta$, called the Lee form. It is then immediate that locally the metric $g$ is conformal to some local Kähler metrics $g'_U := e^{-f_U} g|_U$, where $\theta|_U = df_U$. An equivalent definition requires that the universal cover $(\tilde{M}, J)$ of $(M, J)$ admits a Kähler metric with respect to which the deck group acts by holomorphic homotheties, see [DO]. This Kähler metric on $\tilde{M}$, which is globally conformal with the pull-back of the LCK metric $g$, is in fact obtained by gluing the pulled-back local Kähler metrics $g'_U$. Note that, admitting homotheties, the universal cover of an LCK manifold is never compact. There are many examples of LCK manifolds: diagonal and non-diagonal Hopf manifolds, Kodaira surfaces, Kato surfaces, some Oeljeklaus-Toma manifolds etc. All complex submanifolds of LCK manifolds are LCK. See e.g.
[DO], [OV2] and the bibliography therein.

The LCK condition is conformally invariant: if $g$ is LCK with Lee form $\theta$, then $e^f g$ is LCK with Lee form $\theta + df$. One can then speak about an LCK structure on $(M, J)$ given by the couple $([g], [\theta])$, where $[\,\cdot\,]$ denotes the conformal class, respectively the de Rham cohomology class. The complex structure $J$ is parallel with respect to the Weyl connection $D$ associated to $\theta$ and $[g]$ (acting by $Dg = \theta \otimes g$). This means that $D$ is, in fact, obtained by gluing the Levi-Civita connections of the local Kähler metrics $g'_U$, and therefore the Levi-Civita connection of the Kähler metric on $\tilde{M}$ is the pull-back of the Weyl connection on $M$.

On a compact LCK manifold, if the local Kähler metrics are Einstein, a well-known result by Gauduchon, [G], says that they are in fact Ricci flat. In this case, the LCK metric itself is called Einstein-Weyl (see [PPS], [OV1]) and has the property that in the conformal class of $g$ there exists a metric with parallel Lee form, unique up to homotheties, known in the literature as Vaisman (see [G]).

A particular example of an Einstein-Weyl metric is the locally conformally hyperkähler (LCHK) one. In this case, $\dim_{\mathbb{R}} M = 2n$ is divisible by 4, and $M$ admits a hyperhermitian structure $(I, J, K, g)$ such that all three Hermitian couples $(g, I)$, $(g, J)$, $(g, K)$ are LCK with respect to the same Lee form $\theta$. The Kähler metric of the universal cover then has holonomy included in $\mathrm{Sp}(\tfrac{n}{2})$, thus being Calabi-Yau. See e.g. [CS], [OP], [PPS], [V]. The quaternionic Hopf manifold is an important example. A complete list of compact, homogeneous LCHK manifolds is given in [OP].

One easily verifies that on compact LCHK manifolds, the 2-form $\Omega := \omega_J + \sqrt{-1}\,\omega_K$ is nondegenerate and produces a volume form by $\Omega^{\frac{n}{2}} \wedge \overline{\Omega}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_g$, for a positive constant $c$. Moreover, $\Omega$ clearly satisfies the equation $d\Omega = \theta \wedge \Omega$.
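The local statement behind these definitions is a one-line computation, written out here for convenience (a standard verification, not from the text): on a chart $U$ with $\theta|_U = df_U$,

```latex
d\big(e^{-f_U}\omega\big)
  = e^{-f_U}\big(d\omega - df_U \wedge \omega\big)
  = e^{-f_U}\big(\theta \wedge \omega - \theta \wedge \omega\big)
  = 0,
```

so each $g'_U = e^{-f_U} g|_U$ is Kähler; on overlaps, $f_U - f_V$ is locally constant (both differentials equal $\theta$), so the local Kähler metrics glue only up to homotheties, matching the description of the universal cover above.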
The aim of this note is to prove that these conditions are also sufficient to define an LCHK structure:

Theorem A: Let $(M, J, g)$ be a compact locally conformally Kähler manifold of real dimension $2n$ and $\theta$ the Lee form of $g$. Then $g$ is locally conformally hyperkähler if and only if there exists a non-degenerate $(2,0)$-form $\Omega$ such that $d\Omega = \theta \wedge \Omega$ and $\Omega^{\frac{n}{2}} \wedge \overline{\Omega}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_g$, where $c \in \mathbb{R}_+$ and $d\mathrm{vol}_g$ is the volume form of $g$.

Remark 1.1: Note that the existence of a nondegenerate $(2,0)$-form implies that the real dimension is a multiple of 4.

Remark 1.2: A complex manifold admitting a nondegenerate $(2,0)$-form $\omega$ such that a closed one-form $\theta \in \Lambda^1(M, \mathbb{C})$ exists with $d\omega = \theta \wedge \omega$ is called complex locally conformally symplectic (CLCS). The Lee form can be real or complex. CLCS manifolds first appeared in [L, Section 5], motivated by the examples of even-dimensional leaves of the natural generalized foliation of complex Jacobi manifolds (recall that real LCS structures also appear as leaves of real Jacobi manifolds).

Theorem A can then be viewed as a conformal version of the celebrated Beauville theorem stating that a compact Kähler manifold admitting a holomorphic symplectic form is hyperkähler, [B]. Our proof follows the ideas of Beauville's, but is different, the main difficulty being the fact that the universal cover of an LCK manifold is Kähler, but never compact, and hence one has to make a long detour in order to use the Weitzenböck formula on the compact LCK manifold $M$. Moreover, there is no analogue of Yau's theorem on LCK manifolds or on noncompact Kähler manifolds, which is an essential ingredient in Beauville's proof for obtaining a Kähler Ricci-flat metric. Nevertheless, the rather strong condition $\Omega^{\frac{n}{2}} \wedge \overline{\Omega}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_g$ is meant to replace Yau's theorem and eventually produce the Kähler Ricci-flat metric on the universal cover of $M$.
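Remark 1.1 is pure linear algebra; the short argument, spelled out here as our own expansion of the remark: at each point $x$, a nondegenerate $(2,0)$-form gives a nondegenerate skew-symmetric matrix $\Omega_x$ on $T^{1,0}_x M \cong \mathbb{C}^n$, and

```latex
\det \Omega_x = \det \Omega_x^{\mathsf{T}} = \det(-\Omega_x) = (-1)^n \det \Omega_x,
```

so if $n$ were odd then $\det \Omega_x = 0$, contradicting nondegeneracy; hence $n$ is even and $\dim_{\mathbb{R}} M = 2n$ is a multiple of 4.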
A generalization of Beauville's theorem, but in a different sense, namely when the manifold is compact Kähler but admits a twisted holomorphic form, is presented in [I].

2. Proof of Theorem A

The following two lemmas will be used in the proof.

Lemma 2.1: Let $(M, J, g)$ be a compact LCK manifold with a non-degenerate $(2,0)$-form $\Omega$ such that $d\Omega = \theta \wedge \Omega$ and $\Omega^{\frac{n}{2}} \wedge \overline{\Omega}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_g$, where $\theta$ is the Lee form of $g$, $c \in \mathbb{R}_+$, and $d\mathrm{vol}_g$ is the volume form of $g$. Then $g$ is Einstein-Weyl.

Proof. Let $\tilde{M}$ be the universal cover of $M$, endowed with the complex structure $\tilde{J} = \pi^* J$, where $\pi : \tilde{M} \to M$. Let $\pi^*\theta = df$ and let $\tilde{g}$ be the Kähler metric given by $e^{-f}\pi^* g$. Denote by $\tilde{\Omega} := e^{-f}\pi^*\Omega$. This is a $(2,0)$-form which is closed, as a consequence of $d\Omega = \theta \wedge \Omega$, hence it is holomorphic. Moreover, $\tilde{\Omega}^{\frac{n}{2}} \wedge \overline{\tilde{\Omega}}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_{\tilde{g}}$, since $d\mathrm{vol}_{\tilde{g}} = e^{-nf}\, d\mathrm{vol}_g$.

Note that if instead of $g$ we consider the metric $g' = e^f g$ with its corresponding Lee form $\theta' = \theta + df$, then taking $\Omega' = e^f \Omega$ we still obtain a non-degenerate form of type $(2,0)$ satisfying $d\Omega' = \theta' \wedge \Omega'$ and $\Omega'^{\frac{n}{2}} \wedge \overline{\Omega'}^{\frac{n}{2}} = c \cdot d\mathrm{vol}_{g'}$; therefore, the statement of the lemma is conformally invariant.

Let $\tilde{K}$ be the canonical bundle of $\tilde{M}$. There is a natural Hermitian structure on $\tilde{K}$ which comes from $\tilde{g}$, given by $\alpha \wedge \ast\bar{\beta} = \tilde{g}(\alpha, \beta)\, d\mathrm{vol}_{\tilde{g}}$. Note that because $n$ is even, $\ast\bar{\beta} = \bar{\beta}$ (see [M, Exercise 18.2.1]); thus, $\tilde{g}(\tilde{\Omega}^{\frac{n}{2}}, \tilde{\Omega}^{\frac{n}{2}}) = c$. The curvature form of the Chern connection associated to $\tilde{g}$ is on one hand $i\,\partial\bar{\partial}\log\det(\tilde{g}_{i\bar{j}})$, where $\tilde{g}_{i\bar{j}} = \tilde{g}\big(\tfrac{\partial}{\partial z_i}, \tfrac{\partial}{\partial \bar{z}_j}\big)$, and on the other hand it is $-i\,\partial\bar{\partial}\log\tilde{g}(\tilde{\Omega}^{\frac{n}{2}}, \tilde{\Omega}^{\frac{n}{2}}) = 0$. Since $i\,\partial\bar{\partial}\log\det(\tilde{g}_{i\bar{j}})$ is the local expression of the Ricci form $\rho(X, Y) = \mathrm{Ric}_{\tilde{g}}(\tilde{J}X, Y)$, we conclude that $\tilde{g}$ is Ricci flat and hence $g$ is Einstein-Weyl. In particular, in the conformal class of $g$ there exists a Vaisman metric, unique up to homotheties.

Lemma 2.2: Let $h$ be the Hermitian structure induced by $g$ on $\Lambda^{2,0}_{\mathbb{C}} M$ (that is, $h(\omega, \eta) = g(\omega, \bar{\eta})$). The Weyl connection $D$ on $M$ satisfies $Dh = -2\theta \otimes h$.

Proof.
This is because the Hermitian structure h̃ induced by g̃ on Λ^{2,0}M̃ is given by e^{2f} π*h. Then π*D = ∇^{g̃} (see [DO]) implies that (π*D)(e^{2f} π*h) = 0, which yields (π*D)(π*h) = −2π*θ ⊗ π*h, and our relation follows.

We proceed with the proof of Theorem A. In terms of holonomy, saying that g is locally conformally hyperkähler is the same as proving that the holonomy of g̃ on M̃ is contained in Sp(n/2), which is equivalent to g̃ being hyperkähler. According to the holonomy principle (see e.g. [B, Page 758]), proving that the holonomy of g̃ is in Sp(n/2) is equivalent to the existence of a complex structure J̃ on M̃, with respect to which g̃ is Kähler, and of a holomorphic two-form Ω̃, parallel with respect to the Levi-Civita connection ∇^{g̃} of g̃. We are going to prove that these are the J̃ and Ω̃ from the proof of Lemma 2.1, and we already saw that Ω̃ is holomorphic. The non-trivial part is then to prove:

Lemma 2.3: Ω̃ is g̃-parallel.

Proof. Since ∇^{g̃} = π*D, the equation ∇^{g̃}Ω̃ = 0 is equivalent to

(2.1)  DΩ = θ ⊗ Ω.

This is a conformally invariant relation on M: for any smooth f on M we have D(e^f Ω) = (θ + df) ⊗ e^f Ω. Using thus the freedom of choosing any metric in the conformal class, with the corresponding change of θ and Ω, we choose the Vaisman metric, unique up to homotheties, and without loss of generality we assume it is g. In this case the Lee form is harmonic and has constant norm; moreover, we can choose the Vaisman metric with the Lee form of norm 1. We shall use these facts in the following computations.

We apply the Weitzenböck formula to the holomorphic form Ω̃. According to [M] (see Theorem 20.2 and the beginning of the proof of Theorem 20.5), as g̃ is Ricci-flat (by Lemma 2.1), the curvature term vanishes identically and the Weitzenböck formula reduces to (∇^{g̃})*∇^{g̃}Ω̃ = 0. However, M̃ is not compact and we cannot deduce by integration that ∇^{g̃}Ω̃ = 0. For simplicity, from now on we write ∇ for ∇^{g̃}. By [M, Lemma 20.1],

(2.2)  ∇*∇Ω̃ = Σ_{i=1}^{2n} ( ∇_{∇_{f_i} f_i} Ω̃ − ∇_{f_i}∇_{f_i} Ω̃ ),

where {f_i} is a local g̃-orthonormal frame.
We can choose f_i = e^{f/2} π*e_i, where {e_i} is a local g-orthonormal frame on M. Then (2.2) implies

∇*∇Ω̃ = e^f Σ_{i=1}^{2n} ( ∇_{∇_{π*e_i} π*e_i} Ω̃ − ∇_{π*e_i}∇_{π*e_i} Ω̃ ),

and hence Σ_{i=1}^{2n} ( ∇_{∇_{π*e_i} π*e_i} Ω̃ − ∇_{π*e_i}∇_{π*e_i} Ω̃ ) = 0. Writing now Ω̃ = e^{−f} π*Ω, the above relation gives:

(2.3)  0 = Σ_{i=1}^{2n} e^{−f} ( ∇_{∇_{π*e_i} π*e_i} π*Ω − ∇_{π*e_i}∇_{π*e_i} π*Ω − π*θ(∇_{π*e_i} π*e_i) π*Ω + 2π*θ(π*e_i) ∇_{π*e_i} π*Ω − (π*θ(π*e_i))² π*Ω + π*e_i(π*θ(π*e_i)) π*Ω ).

As ∇ = π*D, (2.3) descends on M to the following equality:

(2.4)  Σ_{i=1}^{2n} ( D_{D_{e_i} e_i} Ω − D_{e_i}D_{e_i} Ω − θ(D_{e_i} e_i) Ω + 2θ(e_i) D_{e_i} Ω − (θ(e_i))² Ω + e_i(θ(e_i)) Ω ) = 0.

We notice that Σ_{i=1}^{2n} (θ(e_i))² Ω = ‖θ‖²_g Ω = Ω. Recall (see [DO]) that

(2.5)  D = ∇^g − ½ ( θ ⊗ id + id ⊗ θ − g ⊗ θ^♯ ).

Using that θ is harmonic and (2.5), we get:

0 = −δ^g θ = Σ_{i=1}^{2n} ( e_i(θ(e_i)) − θ(∇^g_{e_i} e_i) ) = Σ_{i=1}^{2n} ( e_i(θ(e_i)) − θ(D_{e_i} e_i) ) + n − 1.

Consequently, (2.4) rewrites as:

(2.6)  Σ_{i=1}^{2n} ( D_{D_{e_i} e_i} Ω − D_{e_i}D_{e_i} Ω + 2θ(e_i) D_{e_i} Ω ) = nΩ.

The goal is to prove that ∇Ω̃ = 0, that is, DΩ = θ ⊗ Ω. Hence, if we define 𝒟 := D − θ ⊗ ·, we need to show that 𝒟Ω = 0. If 𝒟* is the adjoint of 𝒟, as M is compact, 𝒟Ω = 0 will follow from

∫_M h(𝒟*𝒟Ω, Ω) dvol_g = 0.

In order to find an explicit expression for 𝒟*, we use the same method as in the proof of [M, Lemma 20.1]. Let η ⊗ σ ∈ Γ(Λ¹_ℂ ⊗ Λ^{2,0}_ℂ) and s ∈ Ω^{2,0}(M). Define the one-form α(X) := h(η(X)σ, s). Then, taking a ∇^g-parallel local frame {e_i} and using Lemma 2.2 and the fact that θ is real, we derive:

−δ^g α = Σ_{i=1}^{2n} e_i(α(e_i)) = Σ_{i=1}^{2n} e_i( h(η(e_i)σ, s) )
= Σ_{i=1}^{2n} [ (D_{e_i} h)(η(e_i)σ, s) + h(D_{e_i}(η(e_i)σ), s) + h(η(e_i)σ, D_{e_i} s) ]
= Σ_{i=1}^{2n} [ −2θ(e_i) h(η(e_i)σ, s) + h(e_i(η(e_i))σ, s) + h(η(e_i) D_{e_i}σ, s) ] + h(η ⊗ σ, Ds)
= h( D_{η^♯}σ − (δ^g η)σ, s ) + h( η ⊗ σ, Ds − 2θ ⊗ s ).

After integration on M, this implies:

∫_M h(η ⊗ σ, Ds − 2θ ⊗ s) dvol_g = ∫_M h( −D_{η^♯}σ + (δ^g η)σ, s ) dvol_g,

and hence the adjoint of D − 2θ⊗ acts as follows:

(D − 2θ⊗)*(η ⊗ σ) = (δ^g η)σ − D_{η^♯}σ.
We define T : Γ(Λ^{2,0}(M)) → Γ(Λ¹_ℂ ⊗ Λ^{2,0}(M)) by T(s) = θ ⊗ s. Then 𝒟* = (D − 2θ⊗)* + T*. It is easy to see that T*(η ⊗ σ) = η(θ^♯)σ. Therefore,

(2.7)  𝒟*(η ⊗ σ) = (δ^g η)σ − D_{η^♯}σ + η(θ^♯)σ.

Note that in the computation above we denoted by h, too, the Hermitian structure on Λ¹_ℂ ⊗ Λ^{2,0}_ℂ M, which is the product of the Hermitian structure induced by g on Λ¹_ℂ M and the h defined in Lemma 2.2.

We are now ready to compute 𝒟*𝒟Ω. First:

𝒟*𝒟Ω = 𝒟*(DΩ − θ ⊗ Ω) = 𝒟*DΩ − 𝒟*(θ ⊗ Ω).

Now (2.7) yields:

(2.8)  𝒟*(θ ⊗ Ω) = (δ^g θ)Ω − D_{θ^♯}Ω + Ω = Ω − D_{θ^♯}Ω.

The first term is equal to:

𝒟*DΩ = Σ_{i=1}^{2n} 𝒟*(e^i ⊗ D_{e_i}Ω) = Σ_{i=1}^{2n} [ (δ^g e^i) D_{e_i}Ω − D_{e_i}D_{e_i}Ω + e^i(θ^♯) D_{e_i}Ω ].

But (2.5) implies: … = Σ_k g(D_{e_k} e_k, e_i) + (1 − n)θ(e_i), and thus: …

Combining (2.9) and (2.8) we arrive at: …, which, together with (2.6), leads to the final result:

(2.10)  𝒟*𝒟Ω = (n − 1)( Ω − D_{θ^♯}Ω ).

Integrating (2.10), … In particular, the above equality proves that ∫_M h(D_{θ^♯}Ω, Ω) dvol_g is a real number. Moreover,

∫_M (D_{θ^♯} h)(Ω, Ω) dvol_g = ∫_M θ^♯( h(Ω, Ω) ) dvol_g − ∫_M h(D_{θ^♯}Ω, Ω) dvol_g − ∫_M h(Ω, D_{θ^♯}Ω) dvol_g.

The first integral on the right-hand side vanishes, since by Stokes' formula and the fact that θ^♯ is Killing (see [DO]) … Using once more Lemma 2.2 we derive: … and thus 𝒟Ω = 0, which completes the proof.

Acknowledgment: We thank Nicolina Istrati, Andrei Moroianu and Victor Vuletescu for carefully reading a first draft of the paper and for very useful comments, and Johannes Schäfer for insightful discussions that improved a previous version.
Locally conformally Kähler manifolds. S Dragomir, L Ornea, Progress in Math. 55Birkhäuser. Cited on pages 1, 3, 4, and 6.S. Dragomir, L. Ornea, Locally conformally Kähler manifolds, Progress in Math. 55, Birkhäuser, 1998. (Cited on pages 1, 3, 4, and 6.) Structures de Weyl-Einstein,éspaces de twisteurs et variétés de type S 1ˆS3. P Gauduchon, J. reine angew. Math. 469P. Gauduchon, Structures de Weyl-Einstein,éspaces de twisteurs et variétés de type S 1ˆS3 , J. reine angew. Math. 469 (1995), 1-50. (Cited on page 2.) Twisted holomorphic symplectic forms. N Istrati, Bull. London Math. Soc. 485N. Istrati, Twisted holomorphic symplectic forms, Bull. London Math. Soc., 48 (5) (2016), 745-756. (Cited on page 2.) Variétés de Jacobi et espaces homogènes de contact complexes. A Lichnerowicz, J. Math. Pures. Appl. 67A. Lichnerowicz, Variétés de Jacobi et espaces homogènes de contact complexes, J. Math. Pures. Appl., 67 (1988), 131-173. (Cited on page 2.) A Moroianu, Lectures on Kähler geometry. CambridgeCambridge University Press69A. Moroianu, Lectures on Kähler geometry, London Mathematical Society Student Texts 69, Cambridge University Press, Cambridge, 2007. (Cited on pages 3, 4, and 5.) Locally conformal Kähler structures in quaternionic geometry. L Ornea, P Piccinni, Trans. Amer. Math. Soc. 3492L. Ornea, P. Piccinni, Locally conformal Kähler structures in quaternionic geometry, Trans. Amer. Math. Soc. 349 (1997), no. 2, 641-655. (Cited on page 2.) Einstein-Weyl structures on complex manifolds and conformal version of Monge-Ampère equation. L Ornea, M Verbitsky, Bull. Math. Soc. Sci. Math. Roumanie. 514L. Ornea, M. Verbitsky, Einstein-Weyl structures on complex manifolds and conformal version of Monge-Ampère equation, Bull. Math. Soc. Sci. Math. Roumanie 51 (99) No. 4, (2008), 339-353. (Cited on page 2.) LCK rank of locally conformally Kähler manifolds with potential. L Ornea, M Verbitsky, J. Geom. Phys. 107L. Ornea, M. 
Verbitsky, LCK rank of locally conformally Kähler manifolds with potential, J. Geom. Phys. 107 (2016), 92-98. (Cited on page 1.) The Einstein-Weyl Equations in Complex and Quaternionic Geometry. H Pedersen, Y S Poon, A Swann, Diff. Geom. Appl. 3H. Pedersen, Y.S. Poon, A. Swann, The Einstein-Weyl Equations in Complex and Quaternionic Geom- etry, Diff. Geom. Appl. 3 (1993), 309-321. (Cited on page 2.) M Verbitsky, Theorems on the vanishing of cohomology for locally conformally hyper-Kähler manifolds. Cited on page 2.M. Verbitsky, Theorems on the vanishing of cohomology for locally conformally hyper-Kähler manifolds, Proc. Steklov Inst. Math. no. 3 (246), 54-78 (2004). (Cited on page 2.) . Simion Stoilow" of the Romanian Academy. 21Liviu Ornea) University of BucharestCalea Grivitei Street(Liviu Ornea) University of Bucharest, Faculty of Mathematics and Informatics, 14 Academiei Str., Bucharest, Romania, AND Institute of Mathematics "Simion Stoilow" of the Romanian Academy, 21, Calea Grivitei Street, 010702, Bucharest, Romania
Holographic Entanglement Entropy for Gravitational Anomaly in Four Dimensions

Tibra Ali (Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON N2L 2Y5, Canada)
S. Shajidul Haque (Laboratory for Quantum Gravity & Strings, Department of Mathematics & Applied Mathematics, University of Cape Town, South Africa)
Jeff Murugan (Laboratory for Quantum Gravity & Strings, Department of Mathematics & Applied Mathematics, University of Cape Town, South Africa, and School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA)

E-mail: [email protected], [email protected], [email protected]

arXiv:1611.03415; doi:10.1007/jhep03(2017)084

Abstract: We compute the holographic entanglement entropy for the anomaly polynomial TrR² in 3+1 dimensions. Using the perturbative method developed for computing entanglement entropy for quantum field theories, we also compute the parity-odd contribution to the entanglement entropy of the dual field theory that comes from a background gravitational Chern-Simons term. We find that, in leading order in the perturbation of the background geometry, the two contributions match except for a logarithmically divergent term on the field theory side. We interpret this extra contribution as encoding our ignorance of the source which creates the perturbation of the geometry.

* The notation in equation (1.1) is explained in more detail in Dong's paper [6]. Later in our paper, we explain and use a non-covariant version of Dong's formula (1.1), which is also taken from [6].
† Among those that are not solutions of the Chern-Simons modification are the Kerr, Kerr-Newman and Kerr-NUT spacetimes.
13 Jan 2017

Introduction

The idea that space and time may be emergent macroscopic properties, obtained by coarse-graining over some microscopic quantum substructure, is by no means a new one [1,2].
It is an idea with a rich history and an equally interesting geography in the landscape of theoretical physics. Perhaps the most concrete laboratory in which this view of nature may be tested is furnished by the gauge/gravity duality and, more specifically, Maldacena's AdS/CFT correspondence [3]. In this context, the geometrical properties of a dual classical spacetime are seen to emerge from the large-N dynamics of a quantum field theory of interacting N × N matrices. But even in this setting, understanding the emergent properties of realistic, cosmological or black hole spacetimes has remained out of reach. This is due, in no small part, to the fact that such spacetimes necessitate the relaxation of the rigid constraints of supersymmetry and integrability. Without the control afforded by non-renormalization theorems, many of the computations cannot be trusted to give sensible answers. Questions abound: Is there a concrete mechanism for emergence? How do we build realistic spacetimes? What, even, are the relevant quantities to study? Fortunately, hope is on the horizon. Literally. The past decade has seen several promising developments in our understanding of black holes and the information loss problem they pose. Much of this understanding is intimately related to the idea of entanglement in the quantum field theory dual to the black hole geometry. In a sense, quantum entanglement has emerged as a glue of sorts, with which to put together quantum states and eventually produce a classical spacetime. Entanglement, as characterized by its various associated entropies, is, of course, a notoriously difficult quantity to compute in quantum field theories in general and gauge theories in particular. This situation was changed in a fairly dramatic way recently by the celebrated work of Ryu and Takayanagi, which relates the entanglement entropy of some boundary field theory to the area of an open minimal surface in the bulk whose boundary is set by the entangling surface [4].
Since its discovery, the Ryu-Takayanagi formula has provided one of the most useful tools in contemporary gauge/gravity problems. And, like any tool, figuring out where and how it fails is just as important as understanding why it works. Recently, building on the work of Lewkowycz and Maldacena [5], Dong has proposed a generalization of the Ryu-Takayanagi prescription for computing the holographic entanglement entropy in a general field theory dual to a higher derivative gravity theory whose Lagrangian is constructed from contractions of the Riemann tensor [6]. Accordingly, in d dimensions Dong's proposal reads*

S_EE = 2π ∫ d^d y √g { −(∂L/∂R_{µνρσ}) ε_{µν} ε_{ρσ}
  + Σ_α ( ∂²L / ∂R_{µ₁ν₁ρ₁σ₁} ∂R_{µ₂ν₂ρ₂σ₂} )_α [ 2 K_{λ₁ν₁σ₁} K_{λ₂ν₂σ₂} / (q_α + 1) ]
  × [ (n_{µ₁µ₂} n_{ρ₁ρ₂} − ε_{µ₁µ₂} ε_{ρ₁ρ₂}) n_{λ₁λ₂} + (n_{µ₁µ₂} n_{ρ₁ρ₂} + ε_{µ₁µ₂} ε_{ρ₁ρ₂}) ε_{λ₁λ₂} ] },   (1.1)

and corrects Wald's gravitational entropy formula (the first term above) by accounting for extrinsic curvature terms crucial to its interpretation as entanglement entropy. These terms, in turn, can be interpreted as 'anomalies' in the variation of the action. Where they coincide these two formulations agree, since the extrinsic curvature vanishes on the Killing horizon. Dong's proposal, however, goes much further and properly reproduces the entropy in a variety of higher derivative theories of gravity, including f(R) gravity, general four-derivative gravity, Lovelock gravity and, in 3 dimensions, topologically massive gravity [7]. Other significant attempts at generalizing the holographic entanglement entropy proposal to higher-derivative gravity include [8][9][10][11][12][13]. In this article, we test the proposal against another novel, higher derivative theory, namely a Chern-Simons modification of 4-dimensional Einstein gravity [14].
Originally proposed by Jackiw and Pi as a phenomenological extension of general relativity obtained by lifting the 3-dimensional gravitational Chern-Simons term

Ω₃(Γ) = Tr( Γ ∧ dΓ + (2/3) Γ ∧ Γ ∧ Γ ),   (1.2)

to four spacetime dimensions, the theory breaks CPT as well as diffeomorphism invariance. While the former is manifest in the action, the latter is hidden. Consequently, 4-dimensional Chern-Simons modified gravity still propagates two physical degrees of freedom and gravitational waves still possess two polarizations. Many, although not all†, of the solutions of general relativity, including the Schwarzschild and pp-wave metrics, persist in the Chern-Simons deformation. However, in this article we assume that the coefficient of the Chern-Simons term in the four-dimensional action is a constant, and in that case this term becomes topological. In the context of Riemannian geometry this term is known as the Chern-Pontryagin density. In the physics context, however, this is an anomaly polynomial which has many applications, from gravitational instantons to anomalies in the gauge or the gravity sector [15]. In the spirit of [15] we call this polynomial a 'gravitational anomaly' in the title of this paper, although there is no pure gravitational anomaly in four dimensions. This article concerns itself with the question: What effect does the addition of a Chern-Pontryagin density to the Einstein-Hilbert action have on the entanglement entropy? There is, of course, excellent reason to suppose that it does in fact have an effect. The Chern-Pontryagin density is, after all, topological in nature and so we would certainly expect it to contribute to the universal terms of the entanglement entropy [16]. How exactly, is the subject of this note.
Entropy Functional for the Anomaly Polynomial

The bulk theory that we are interested in is given by the following action:

I = −(1/16πG) ∫ d⁴x √g ( R + 2/ℓ² − (κ/4) *RR ),   (2.1)

where *RR ≡ *(R^M{}_N)^{PQ} R^N{}_{MPQ} ≡ ½ ε^{PQST} R^M{}_{NST} R^N{}_{MPQ} is the Chern-Pontryagin density. The cosmological constant is Λ = −1/ℓ², where ℓ is the length-scale corresponding to the AdS space. This action (2.1) is a special case of the so-called Chern-Simons modified gravity theory introduced in [14]. In that paper, however, the coefficient of the *RR term is a spacetime-dependent scalar. It makes a contribution to the bulk equations of motion which involves the four-dimensional Cotton tensor. In our case, we take κ to be a constant and the anomaly form, which is topological in nature, makes no contribution to the bulk equations of motion. As a result, four-dimensional anti-de Sitter space continues to be the maximally symmetric solution of this modified theory. The CP-violating effects of this term were explored in [17]. In the absence of the anomaly term, the holographic entanglement entropy is given by the Ryu-Takayanagi formula first proposed in [4]. This proposal was later proved in [5] using the holographic replica trick. Recently, Dong [6] has extended the results of [5] to theories that contain higher derivative corrections as in (2.1). The essence of [5,6] is to use the holographic replica trick in the following way. Let M₁ be some spacetime manifold with some conformal field theory living on it. We shall often refer to M₁ as the boundary manifold, as we shall think of it as being the boundary of some bulk geometry. We assume that the CFT is in its ground state. We then want to compute the entanglement entropy of the ground state in some region A of M₁ with its complement Ā.
To do this, we then extend M₁ (assumed to be static and analytically continued to Euclidean signature) to its n-fold cover M_n by excising the region A along its boundary ∂A and gluing n copies of M₁ along ∂A in a cyclic fashion. The group Z_n permutes the components of M_n. The Rényi entropy of this system is given by

S_n = −(1/(n − 1)) log Tr[ρ^n],   (2.2)

where ρ is the density matrix associated with the region A of the original manifold M₁. The Rényi entropy can be rewritten as

S_n = −(1/(n − 1)) ( log Z_n − n log Z₁ ),   (2.3)

where n > 1, and Z_n and Z₁ are the partition functions of the CFTs on M_n and M₁, respectively. We are interested in the von Neumann entropy of the CFT on M₁. Formally, this is computed by the analytic continuation of (2.3) to n → 1. It is worth noting, however, that M_n doesn't have a geometric interpretation for non-integer values of n, and so it is not clear what the above analytic continuation means geometrically for the boundary manifold. It was pointed out in [5] that for theories that have holographic duals, the bulk geometry B_n (which is the bulk geometry associated to the replica manifold M_n) does indeed have a geometric meaning even for non-integer values of n. B_n is completely regular, but the action of Z_n on it has a fixed-point set C_n which is a codimension-2 surface. The orbifold bulk geometry B̂_n = B_n/Z_n, whose boundary is our original boundary manifold M₁, thus has a singular surface C_n with a conical deficit of ε = 1 − 1/n. One can regularize this cone by introducing a smoothing parameter a, and the metric of the manifold near this surface is given by [18]

ds² = e^{2A} dz dz̄ + e^{2A} T (z̄ dz − z dz̄)² + ( g_{ij} + 2K_{aij} x^a + Q_{abij} x^a x^b ) dy^i dy^j + 2i e^{2A} ( U_i + V_{ai} x^a )( z̄ dz − z dz̄ ) dy^i + … ,   (2.4)

where x^a = {z, z̄} are the complexified coordinates transverse to the codimension-2 surface and y^i, with i = 1, 2, …, d, are the coordinates along this surface.
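The analytic continuation in (2.2)-(2.3) can be made concrete in a toy setting (an illustrative aside, not part of the paper: a single two-level density matrix stands in for the CFT reduced density matrix). Since log Tr ρ^n vanishes at n = 1, the n → 1 limit of S_n is, by l'Hôpital's rule, minus the n-derivative of log Tr ρ^n at n = 1, which is exactly the von Neumann entropy:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
p = sp.Rational(1, 3)  # sample eigenvalue: two-level density matrix rho = diag(p, 1-p)

# Renyi entropy numerator log Tr rho^n, cf. eq. (2.2)
log_tr = sp.log(p**n + (1 - p)**n)

# log Tr rho^n vanishes at n = 1, so the n -> 1 limit of
# S_n = -log Tr rho^n / (n - 1) is, by l'Hopital's rule,
# minus the n-derivative of log Tr rho^n at n = 1:
S_EE = -sp.diff(log_tr, n).subs(n, 1)

# this is exactly the von Neumann entropy -Tr(rho log rho)
S_vN = -(p * sp.log(p) + (1 - p) * sp.log(1 - p))
print(sp.simplify(S_EE - S_vN))  # 0
```

The same computation goes through for any discrete spectrum {p_k}, since log Σ p_k^n always vanishes at n = 1 by normalization of ρ.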
The functions T, g_{ij}, K_{aij}, Q_{abij}, U_i, V_{ai} all depend on y^i. A = −(ε/2) log(|z|² + a²) is the regularization function that smooths out the squashed cone. The regularization parameter a keeps track of the contribution from the singular limit of the cone (when a → 0), which is subtracted before taking the a → 0 limit. The holographic formula (in the large-N limit) for the entanglement entropy then is given by

S_EE = lim_{n→1} [ n/(n − 1) ] ( S[B̂_n] − S[B̂₁] ) = ∂_n S[B̂_n] |_{n=1},   (2.5)

where S[B̂_n] and S[B̂₁] are the classical bulk action evaluated on the orbifolds B̂_n and B̂₁, respectively. As noted above, since the fixed-point set C_n is singular on the orbifolds, we smooth out the geometry near the tip of the cone by excising a small region and replacing the tip of the cone by a smoothed-out tip. Calling this new smoothed-out region near the tip the 'inside' region, it can then be shown that [6]

S_EE = −∂_ε S[B̂_n]_{inside} |_{ε=0}.   (2.6)

Applying this to theories whose bulk gravitational action contains only the Einstein-Hilbert term (in addition to the usual cosmological constant) yields the usual Wald term in the expression for the entanglement entropy. It was shown by Dong [6] that for theories with higher derivative coordinate-invariant terms one gets a correction to the Wald term. The total entanglement entropy then is given by (1.1) when expressed in a coordinate-invariant way. For explicit computations, however, it is convenient to express (1.1) in the coordinate system implicit in (2.4). One then gets the following expression for the holographic entanglement entropy:

S_EE = 2π ∫_Σ d^d y √g [ ∂L/∂R_{zz̄zz̄} + Σ_α ( ∂²L / ∂R_{zizj} ∂R_{z̄kz̄l} )_α ( 8 K_{zij} K_{z̄kl} ) / (q_α + 1) ],   (2.7)

where the integral is taken over a codimension-2 surface Σ that is homologous to the entangling surface in the dual CFT on the boundary. The extra terms derived by Dong arise from would-be logarithmic divergences that come from the squashed cone method.
Consequently, these are naturally interpreted as anomaly terms, and the coefficients q_α can be thought of as 'anomaly coefficients.' In our case, however, the anomaly coefficient is trivial. We refer the reader to [6] for a more detailed discussion of this issue. A new issue that arises with the addition of the new terms is how to determine the entangling surface in the bulk. The rigorous way of deriving the entangling surface is to solve the equations of motion. But this could be too difficult in practice, and so Dong [6] conjectures that the appropriate surface is the one that extremizes (2.7). Since in our case the anomaly polynomial does not add any new term to the equations of motion, the bulk entangling surface will be the same as the Ryu-Takayanagi surface. In order to understand how Dong's prescription gives the expression for the holographic entanglement entropy, let us denote the two relevant parts of the Lagrangian (2.1) by

L₁ = −(1/16πG) R,   L₂ = (κ/64πG) *RR.   (2.8)

According to [6], the contribution that L₁ makes to the entanglement entropy is given by

lim_{ε→0} (1/4G) ∫ d⁴x √G [ δ²(x₁, x₂) / (ρ² + a²)^ε − ε log(ρ² + a²) / (ρ² + a²)^ε δ²(x₁, x₂) ],   (2.9)

where ρ is the polar coordinate defined by ρ = |z|. In the ε → 0 limit the second term drops out, and it is easy to see that the first term is nothing but the Ryu-Takayanagi formula

S^{(1)}_{EE}[L₁] = A/4G,   (2.10)

where A = ∫_Σ d²y √g. Since the first term L₁ is first order in curvature, it doesn't make any contribution to the entanglement entropy coming from the second term in (2.7). L₂, on the other hand, contributes to both. Its contribution to the Wald term is computed to be

S^{(1)}_{EE}[L₂] = −(κ/4G) ∫ d²y √g [ ∂U_j/∂y^i − ∂U_i/∂y^j + 2 g^{kl} K^z_{jk} K^{z̄}_{il} ] ε^{ij}.   (2.11)

The contribution from the L₂ term to the second term in (2.7) can be shown to be

S^{(2)}_{EE}[L₂] = −(κ/2G) ∫ d²y √g K^z_{ij} K^{z̄}_{kl} ε^{jl} g^{ki},   (2.12)

which cancels out the second term in L₂'s contribution to S_EE.
Thus the final result is (continuing back to Lorentzian signature)

S_EE = (1/4G) ∫ d²y √g − (κ/4G) ∫ d²y √g [ ∂_i U_j − ∂_j U_i ] ε^{ij}.   (2.13)

This expression is computed in the particular coordinate system given above. It can be expressed in the following coordinate-invariant way:

S_EE = (1/4G) ∫_Σ d²y √g + (κ/8G) ∫_Σ F_{ij} dy^i ∧ dy^j,   (2.14)

where F_{ij} = ∂_i A_j − ∂_j A_i is the curvature of the normal bundle of Σ. The Abelian connection A_i of the normal bundle is given by

A_i = −½ ε_a{}^b Γ^a_{ib}.   (2.15)

The early Latin indices denote the directions normal to the surface Σ, and in four spacetime dimensions they take on two values. The quantity Γ^a_{ib} is given by

Γ^a_{ib} = ( ∂_i n^µ_b + Γ̃^µ_{σν} e^σ_i n^ν_b ) n^a_µ,   (2.16)

where Γ̃^µ_{σν} are the Christoffel symbols of the bulk spacetime and e^σ_i = ∂x^σ/∂y^i is the pull-back map. n^a_µ, for a = 0, 1, are the unit normal vectors. In our convention n⁰ is time-like, while n¹ is space-like. Happily, this computation reproduces the results given in [18].

Holographic Entanglement Entropy

In the previous section we have seen that the entanglement entropy of a bulk theory that contains an additional Chern-Pontryagin term is given by the usual Ryu-Takayanagi term plus an additional term in (2.14) that computes the flux of the curvature of the normal bundle through the bulk entangling surface. As argued in the previous section, adding new terms to the action can in principle change the criterion for the bulk entangling surface. But since in our case the extra term is topological, we see that the bulk entangling surface coincides with the one prescribed by Ryu and Takayanagi. It then follows that the contribution from the first term in (2.14) is identical to what one would get if the bulk theory had just the Einstein-Hilbert and cosmological constant terms. The new contribution then comes from the second term:

ΔS_EE = (κ/8G) ∫_Σ F_{ij} dy^i ∧ dy^j,   (3.1)

where Σ is the Ryu-Takayanagi surface.
Even though this expression was derived using Dong's prescription [6], it could also have been derived on dimensional grounds, based on the fact that it computes the flux of the normal bundle through a codimension-2 surface in four spacetime dimensions. See [19] for more details. This new term, (3.1), is itself topological in the sense that the integrand is an exact form which, by Stokes' theorem, can be written as

ΔS_EE = (κ/8G) ∮_{∂Σ} A_i dy^i.   (3.2)

One can now express this formula in terms of the bulk coordinates x^µ by introducing a new bulk gauge field a_µ := A_i ∂y^i/∂x^µ. In terms of this gauge field the above term becomes

ΔS_EE = (κ/8G) ∮_{∂Σ} a_µ dx^µ.   (3.3)

From the definition of A_i in (2.15) one can easily show that a_µ is given by

a_µ = −½ ε_a{}^b ( ∇_µ n^ρ_b ) n^a_ρ,   (3.4)

where ∇ is the Levi-Civita connection of the bulk metric. The boundary contour ∂Σ ⊂ ∂B₁ = M₁, and so we can express ΔS_EE in terms of quantities that are intrinsic to M₁. We shall assume that the projections of the normal vectors n^µ_a onto the boundary coincide with the values of those vectors at the boundary. Thus, if m^µ is the space-like normal to the boundary and h_{µν} = G_{µν} − m_µ m_ν is the induced metric on the boundary, then

h^µ{}_ν n^ν_a |_{∂Σ} = n^µ_a |_{∂Σ}.   (3.5)

In other words, n^a_µ m^µ = 0. We define hatted quantities to be the projections of bulk quantities onto the boundary, X̂^µ = h^µ{}_ν X^ν. With the above assumption we can write down the boundary projection of the gauge field as follows:

â_ν = −½ ε_a{}^b ( ∇̂_ν n̂^ρ_b ) n̂^a_ρ.   (3.6)

We want to emphasize that this explicit expression for â only holds if the normal vectors at the boundary are orthogonal to m^µ. Finally, the contribution to the entanglement entropy from the gravitational Chern-Pontryagin term, expressed in terms of the projected gauge field â_µ, is simply

ΔS_EE = (κ/8G) ∮_{∂Σ} â_µ dx̂^µ.   (3.7)

An Example

It turns out to be somewhat difficult to obtain a non-trivial contribution to the entanglement entropy coming from (3.7).
The reason lies in the structure of the gauge field A_i (or, alternatively, a_µ). ΔS_EE measures the holonomy of the normal bundle to the bulk entangling surface Σ. Since the normal bundle involves a time-like direction, looking for non-zero ΔS_EE using static metrics on the boundary will give trivial contributions. Thus, although the expression (2.14) was derived under the assumption that our metric was static, we need to extend this expression to the case where the metric is no longer static. This is analogous to [7], who also extended the entanglement entropy of their topologically massive bulk gravity theory to non-static cases, using [20] as motivation. They did so even though the proposal of [20] (and its recent proof [21]) is limited to the case where the bulk action doesn't have higher derivative gravitational terms. We leave the justification for extending (3.7) to non-static cases as a future project. In extending the formula to go beyond the static case, we make the most conservative choice and take as our metric a stationary, rotating metric. It turns out, however, that in three dimensions such a metric is locally flat. Therefore, it is not a huge stretch for us to use our formula, which was derived for the static case.

The three-dimensional Kerr solution

One of the simplest configurations involving a non-static metric that yields a non-trivial contribution to ΔS_EE is the three-dimensional 'Kerr metric' discovered by Deser et al. in [22]. We take as our entangling surface a circle in this geometry. The metric is given by

ds²₃ = −dt² + 2b dt dϕ + dr² + (r² − b²) dϕ²,   (3.8)

where the constant b is proportional to the angular momentum. This metric solves the vacuum Einstein equations. Since in three dimensions Ricci-flatness coincides with flatness, this metric is flat. We now discuss a curious feature of this metric, as it will be important in interpreting the dual field theory result that we compute in the next section.
As mentioned earlier, this metric is actually locally flat, as can be seen by making the coordinate transformation

T = t - b\varphi . \qquad (3.9)

The metric then becomes

ds_3^2 = -dT^2 + dr^2 + r^2 \, d\varphi^2 . \qquad (3.10)

But as φ → φ + 2π, we have T → T − 2πb, and so we see that there is a discontinuity in the new time direction. All of this means that even though our 'Kerr' metric is flat, it is being sourced by some mass distribution located near r = 0 with non-zero angular momentum. According to [22,23] this metric is sourced by moving point particles (in three dimensions) or moving parallel cosmic strings (in four dimensions). Although here we do not investigate what the sources are, in the next section we shall see that the field theory 'knows' about the presence of the sources.

We now compute the contribution to ΔS_EE when we take as our entangling surface a circle of radius R centred around the origin. Note, however, that the tangent vector to the circle changes from space-like to time-like as the value of R is dialled from greater than |b| to less than |b|. Since we want our entangling surface to remain space-like, we fix R > |b|. We work with the metric expressed in (3.8). Then the normalized vectors which lie normal to the entangling circle are given by

n^0_\mu = (n^0_t, n^0_r, n^0_\theta) = \left( \frac{1}{\sqrt{1 - b^2/r^2}}, \, 0, \, 0 \right) , \qquad (3.11)

n^1_\mu = (n^1_t, n^1_r, n^1_\theta) = (0, 1, 0) . \qquad (3.12)

Note that our time-like normal vector becomes imaginary at r = |b|. It is then straightforward to compute the boundary gauge field â_µ:

\hat{a}_\mu = \left( 0, \, 0, \, -\frac{b}{r\sqrt{1 - b^2/r^2}} \right) . \qquad (3.13)

Thus, we get the following contribution to the entanglement entropy:

\Delta S_{EE} = -\frac{\pi\kappa}{4G} \, \frac{b}{\sqrt{R^2 - b^2}} . \qquad (3.14)

For comparison with our field theory computation in the next section we note the small angular momentum, or large R, limit:

\Delta S_{EE} \approx \frac{\pi\kappa b}{4GR} , \qquad (3.15)

where we have dropped the minus sign by replacing b by −b. As expected, the contribution vanishes in the non-rotating limit.
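As a quick numerical sanity check (ours, not part of the original paper), the closed-form result (3.14) follows from plugging the constant component (3.13) into the boundary line integral (3.7), here with κ = G = 1, and the large-R limit (3.15) can be compared against it:

```python
import math

def a_theta(r, b):
    # boundary gauge-field component (3.13); constant along the circle r = R
    return -b / (r * math.sqrt(1.0 - b * b / (r * r)))

def delta_S_numeric(R, b, steps=10000):
    # line integral (3.7): (kappa/8G) * integral of a_theta d(theta) over
    # [0, 2*pi], with kappa = G = 1, evaluated as a plain Riemann sum
    dtheta = 2.0 * math.pi / steps
    return sum(a_theta(R, b) * dtheta for _ in range(steps)) / 8.0

def delta_S_exact(R, b):
    # closed form (3.14)
    return -math.pi / 4.0 * b / math.sqrt(R * R - b * b)

def delta_S_large_R(R, b):
    # large-R limit (3.15), sign kept for direct comparison with (3.14)
    return -math.pi * b / (4.0 * R)

print(abs(delta_S_numeric(2.0, 0.3) - delta_S_exact(2.0, 0.3)) < 1e-9)  # True
```

For R much larger than |b| the exact and approximate expressions agree up to corrections of order b²/R², as stated around (3.15).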
Entanglement Entropy

Now, let's focus on the field theory side. As on the gravity side, we want to compute the contribution to the entanglement entropy coming from three-dimensional parity-violating terms in the quantum field theory. These are field theories that contain parity-odd terms in the low-energy effective action when coupled to a background gravitational field. The particular term whose effect on the field theory we are considering is the three-dimensional gravitational Chern-Simons term. We get this term on the boundary from the bulk Chern-Pontryagin term,

\frac{\kappa}{16\pi G} \int_{B_1} \mathrm{tr}\left( \Gamma \wedge d\Gamma + \frac{2}{3} \Gamma \wedge \Gamma \wedge \Gamma \right) = \frac{\kappa}{32\pi G} \int_{M_1} \mathrm{Tr}(R \wedge R) , \qquad (4.1)

where B_1 = ∂M_1. The left-hand side of this equality plays an essential role in topologically massive theories of gravity [25,26]. Our choice to examine the effect of this term is also justified by [24]. We shall adopt the perturbative approach developed in [27-29] and use the three-dimensional Kerr metric example (3.8) for illustration. Following [29], we interpret our rotating metric as a perturbation of the 'flat metric' written in polar coordinates‡, as well as analytically continue to Euclidean signature. The latter, of course, also entails making the angular momentum parameter b imaginary. As a starting point we note that ΔS_EE = 0 for a circular entangling surface on a flat spacetime. We then perturb the metric, which leads to the following expression for ΔS_EE:

\Delta S_{EE} = \frac{1}{2} \int_{\mathbb{R}^3} d^3x \, \delta g_{\mu\nu} \, \langle T^{\mu\nu}(x) \, H \rangle_{\mathrm{conn}} , \qquad (4.2)

where ⟨…⟩_conn is the connected two-point function, T^µν is the unperturbed energy-momentum tensor of the QFT, H is the modular Hamiltonian, and δg_µν is the perturbation around the background geometry. For a two-dimensional ball of radius R centred at the origin, the modular Hamiltonian is given by [30]

H = 2\pi \int_0^R dr' \int_0^{2\pi} d\theta' \, r' \, \frac{R^2 - r'^2}{2R} \, T_{00}(\tau', r', \theta') + \mathrm{constant} , \qquad (4.3)

where the constant in the above equation is there to ensure that the density matrix is normalized to unity.
It does not play a role in the connected part of the correlation function and so we drop it below. In [31], the parity-odd contribution to the energy-momentum two-point function coming from the term on the left-hand side of (4.1) was computed to be

\langle T_{\mu\nu}(x) \, T_{\lambda\rho}(x') \rangle = -\frac{i\kappa'}{16\pi} \, \epsilon_{(\mu(\lambda}{}^{\sigma} \left( \nabla_{\nu)} \nabla_{\rho)} - g_{\nu)\rho)} \nabla^2 \right) \nabla_\sigma \frac{\delta^3(x - x')}{\sqrt{g}} , \qquad (4.4)

where κ' is a dimensionless constant, and it is related to the bulk quantity by

\kappa' = \frac{\kappa}{G} . \qquad (4.5)

‡ In doing the computation we momentarily 'forget' that the full rotating metric is locally flat.

In (4.4) we have 'covariantized' the expression, since we are working in polar coordinates in which the modulus of the determinant of the metric tensor is not unity. We note that

\delta g_{\tau\theta} = \frac{b}{r^2} \quad \mathrm{and} \quad \delta g_{\theta\theta} = \frac{b^2}{r^4} . \qquad (4.6)

We compare our computation with the leading-order result in the holographic computation (3.15), and so we only consider the δg_τθ perturbation. Plugging in the values and doing the τ and θ' integrals, we get the following formal expression for ΔS_EE:

\Delta S_{EE} = \frac{\kappa' i b}{8} \int_\varepsilon^\infty dr \int_0^{2\pi} d\theta \int_0^R dr' \, r' \, \frac{R^2 - r'^2}{2R} \left( \frac{1}{r} \partial_r^2 + \partial_r^3 + \frac{1}{r^2} \partial_\theta^2 \partial_r \right) \frac{\delta(r - r')}{r} . \qquad (4.7)

In the above expression we have introduced a lower cut-off ε on the r integral, since the integral has a singularity coming from the origin. After performing the delta-function integrals carefully we get:

\Delta S_{EE} = \frac{\pi i \kappa'}{4} \left( \frac{b}{R} - \frac{b}{2R} \ln(R/\varepsilon) \right) . \qquad (4.8)

In the above expression we get a factor of i due to the fact that the computation was done in Euclidean space, in which the angular momentum b has to be taken imaginary for the metric to remain real. We see that (4.8) agrees with the holographic computation (3.15) in the first term, but in the field theory we get an extra divergence from the origin. In the previous section, we saw that the background metric had a curious discontinuity in the time direction when written in 'flat' coordinates. This accounts for the fact that there was a rotating matter source near the origin of the spatial sections.
In our computations we did not specify this source. Since on the field theory side we see a short-distance cutoff from the same region of spacetime, we interpret this divergence as encoding our ignorance of the matter source that creates this rotating spacetime. In other words, the field theory 'knows' about the rotating source and assigns some cut-off-dependent entropy to the source. In a scenario in which one specifies the sources on both sides, we believe that one should be able to match the entanglement entropies.

Conclusion

In the spirit of understanding the intimate relationship between quantum entanglement and gravity, we have devoted this note to computing the entanglement entropy of Jackiw and Pi's modification of general relativity. As described in the introduction, the modification is effected through the addition of a Chern-Pontryagin density. Unless its coefficient is promoted to a dynamical spacetime scalar, this term is topological and does not contribute to the gravitational dynamics. However, as we have demonstrated, it does contribute to the entanglement entropy through a term proportional to the curvature flux through the normal bundle to the Ryu-Takayanagi entangling surface. In this sense, our computation may be considered further evidence in support of Dong's proposal for gravitational entropy in higher derivative gravity. We have exemplified the formal argument in the case of a stationary, rotating background and, following the perturbative approach of [27-29], matched this with the entanglement entropy of a (generic) three-dimensional field theory with parity-violating terms that contribute to the universal topological entropy. The match is not perfect. In addition to the anticipated part of ΔS_EE, we find a second term that diverges like the log of the cut-off scale. We speculate that this term codes the matter source at the origin of the bulk spacetime in some way. This would be an interesting future avenue to explore.
Acknowledgements

The authors would like to thank Álvaro Véliz-Osorio, Paweł Caputa, David Kubizňák, Michal Heller and Aitor Lewkowycz for useful discussions and comments on the manuscript. We would also like to thank Arpan Bhattacharyya for collaboration at the earlier phase of this work and Onkar Parrikar for correspondences. TA and SSH would also like to thank the organizers of the Simons Summer Workshop 2015, where this project was initially conceived, for their hospitality. JM gratefully acknowledges support by NSF grant PHY-1606531 at the Institute for Advanced Study and NRF grant GUN 87667 at the University of Cape Town. SSH is supported by the Claude Leon Foundation. TA's research is funded by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.

References

[1] N. Seiberg, "Emergent spacetime," in The Quantum Structure of Space and Time: Proceedings of the 23rd Solvay Conference on Physics, Brussels, Belgium, 1-3 December 2005, pp. 163-178, 2006, arXiv:hep-th/0601234.
[2] R. de Mello Koch and J. Murugan, "Emergent Spacetime," in Proceedings, Foundations of Space and Time: Reflections on Quantum Gravity, Cape Town, South Africa, pp. 164-184, 2009, arXiv:0911.4817.
[3] J. M. Maldacena, "The Large N limit of superconformal field theories and supergravity," Int. J. Theor. Phys. 38 (1999) 1113-1133, arXiv:hep-th/9711200 [Adv. Theor. Math. Phys. 2, 231 (1998)].
[4] S. Ryu and T. Takayanagi, "Holographic derivation of entanglement entropy from AdS/CFT," Phys. Rev. Lett. 96 (2006) 181602, arXiv:hep-th/0603001.
[5] A. Lewkowycz and J. Maldacena, "Generalized gravitational entropy," JHEP 08 (2013) 090, arXiv:1304.4926.
[6] X. Dong, "Holographic Entanglement Entropy for General Higher Derivative Gravity," JHEP 01 (2014) 044, arXiv:1310.5713.
[7] A. Castro, S. Detournay, N. Iqbal and E. Perlmutter, "Holographic entanglement entropy and gravitational anomalies," JHEP 07 (2014) 114, arXiv:1405.2792.
[8] J. de Boer, M. Kulaxizi and A. Parnachev, "Holographic Entanglement Entropy in Lovelock Gravities," JHEP 07 (2011) 109, arXiv:1101.5781.
[9] L.-Y. Hung, R. C. Myers and M. Smolkin, "On Holographic Entanglement Entropy and Higher Curvature Gravity," JHEP 04 (2011) 025, arXiv:1101.5813.
[10] J. Camps, "Generalized entropy and higher derivative Gravity," JHEP 03 (2014) 070, arXiv:1310.6659.
[11] R. C. Myers, R. Pourhasan and M. Smolkin, "On Spacetime Entanglement," JHEP 06 (2013) 013, arXiv:1304.2030.
[12] A. Bhattacharyya, A. Kaviraj and A. Sinha, "Entanglement entropy in higher derivative holography," JHEP 08 (2013) 012, arXiv:1305.6694.
[13] M. R. Mohammadi Mozaffar, A. Mollabashi, M. M. Sheikh-Jabbari and M. H. Vahidinia, "Holographic Entanglement Entropy, Field Redefinition Invariance and Higher Derivative Gravity Theories," Phys. Rev. D94 (2016), no. 4, 046002, arXiv:1603.05713.
[14] R. Jackiw and S. Y. Pi, "Chern-Simons modification of general relativity," Phys. Rev. D68 (2003) 104012, arXiv:gr-qc/0308071.
[15] L. Alvarez-Gaume and E. Witten, "Gravitational Anomalies," Nucl. Phys. B234 (1984) 269.
[16] A. Kitaev and J. Preskill, "Topological entanglement entropy," Phys. Rev. Lett. 96 (2006) 110404, arXiv:hep-th/0510092.
[17] S. Deser, M. J. Duff and C. J. Isham, "Gravitationally Induced CP Effects," Phys. Lett. B93 (1980) 419-423.
[18] T. Azeyanagi, R. Loganayagam and G. S. Ng, "Holographic Entanglement for Chern-Simons Terms," arXiv:1507.02298.
[19] P. Fonda, V. Jejjala and A. Veliz-Osorio, "On the Shape of Things: From holography to elastica," arXiv:1611.03462.
[20] V. E. Hubeny, M. Rangamani and T. Takayanagi, "A Covariant holographic entanglement entropy proposal," JHEP 07 (2007) 062, arXiv:0705.0016.
[21] X. Dong, A. Lewkowycz and M. Rangamani, "Deriving covariant holographic entanglement," arXiv:1607.07506.
[22] S. Deser, R. Jackiw and G. 't Hooft, "Three-dimensional Einstein gravity: Dynamics of flat space," Annals of Physics 152 (1984), no. 1, 220-235.
[23] S. Deser, R. Jackiw and G. 't Hooft, "Physical cosmic strings do not generate closed timelike curves," Phys. Rev. Lett. 68 (1992) 267-269.
[24] W. Fischler and S. Kundu, "Membrane paradigm, gravitational Θ-term and gauge/gravity duality," JHEP 04 (2016) 112, arXiv:1512.01238.
[25] S. Deser, R. Jackiw and S. Templeton, "Topologically Massive Gauge Theories," Annals Phys. 140 (1982) 372-411 [Annals Phys. 281, 409 (2000)].
[26] S. Deser, R. Jackiw and S. Templeton, "Three-Dimensional Massive Gauge Theories," Phys. Rev. Lett. 48 (1982) 975-978.
[27] V. Rosenhaus and M. Smolkin, "Entanglement Entropy: A Perturbative Calculation," JHEP 12 (2014) 179, arXiv:1403.3733.
[28] V. Rosenhaus and M. Smolkin, "Entanglement Entropy for Relevant and Geometric Perturbations," JHEP 02 (2015) 015, arXiv:1410.6530.
[29] T. L. Hughes, R. G. Leigh, O. Parrikar and S. T. Ramamurthy, "Entanglement entropy and anomaly inflow," Phys. Rev. D93 (2016), no. 6, 065059, arXiv:1509.04969.
[30] H. Casini, M. Huerta and R. C. Myers, "Towards a derivation of holographic entanglement entropy," JHEP 05 (2011) 036, arXiv:1102.0440.
[31] C. Closset, T. T. Dumitrescu, G. Festuccia, Z. Komargodski and N. Seiberg, "Comments on Chern-Simons Contact Terms in Three Dimensions," JHEP 09 (2012) 091, arXiv:1206.5218.
[]
[ "Comparison between the Amount of Environmental Change and the Amount of Transcriptome Change", "Comparison between the Amount of Environmental Change and the Amount of Transcriptome Change" ]
[ "Norichika Ogata \nNihon BioData Corporation\n3-2-1 Sakado, Takatsu-ku213-0012KawasakiKanagawaJapan\n", "Toshinori Kozaki \nHuman Resource Development Program in Agricultural Genome Sciences\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan\n", "Takeshi Yokoyama \nLaboratory of Sericultural Science\nFaculty of Agriculture\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan\n", "Tamako Hata \nNational Institute of Agrobiological Sciences\nOwashi 1-2305-8634TsukubaIbarakiJapan\n", "Kikuo Iwabuchi \nFaculty of Agriculture\nLaboratory of Applied Entomology\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan\n" ]
[ "Nihon BioData Corporation\n3-2-1 Sakado, Takatsu-ku213-0012KawasakiKanagawaJapan", "Human Resource Development Program in Agricultural Genome Sciences\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan", "Laboratory of Sericultural Science\nFaculty of Agriculture\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan", "National Institute of Agrobiological Sciences\nOwashi 1-2305-8634TsukubaIbarakiJapan", "Faculty of Agriculture\nLaboratory of Applied Entomology\nTokyo University of Agriculture and Technology\n3-5-8, Saiwai-cho183-8501Fuchu, TokyoJapan" ]
[]
Cells must coordinate adjustments in genome expression to accommodate changes in their environment. We hypothesized that the amount of transcriptome change is proportional to the amount of environmental change. To capture the effects of environmental changes on the transcriptome, we compared transcriptome diversities (defined as the Shannon entropy of frequency distribution) of silkworm fat-body tissues cultured with several concentrations of phenobarbital. Although there was no proportional relationship, we did identify a drug concentration "tipping point" between 0.25 and 1.0 mM. Cells cultured in media containing lower drug concentrations than the tipping point showed uniformly high transcriptome diversities, while those cultured at higher drug concentrations than the tipping point showed uniformly low transcriptome diversities. The plasticity of transcriptome diversity was corroborated by cultivations of fat bodies in MGM-450 insect medium without phenobarbital and in 0.25 mM phenobarbital-supplemented MGM-450 insect medium after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). Interestingly, the transcriptome diversities of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium) were different from cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital). This hysteretic phenomenon of transcriptome diversities indicates multi-stability of the genome expression system. Cellular memories were recorded in genome expression networks as in DNA/histone modifications.
10.1371/journal.pone.0144822
null
9,665,213
1501.01677
67f63c71dd8484fd075612ce525b231fd560b081
Comparison between the Amount of Environmental Change and the Amount of Transcriptome Change

Norichika Ogata (Nihon BioData Corporation, 3-2-1 Sakado, Takatsu-ku, Kawasaki, Kanagawa 213-0012, Japan), Toshinori Kozaki (Human Resource Development Program in Agricultural Genome Sciences, Tokyo University of Agriculture and Technology, 3-5-8 Saiwai-cho, Fuchu, Tokyo 183-8501, Japan), Takeshi Yokoyama (Laboratory of Sericultural Science, Faculty of Agriculture, Tokyo University of Agriculture and Technology, 3-5-8 Saiwai-cho, Fuchu, Tokyo 183-8501, Japan), Tamako Hata (National Institute of Agrobiological Sciences, Owashi 1-2, Tsukuba, Ibaraki 305-8634, Japan), Kikuo Iwabuchi (Laboratory of Applied Entomology, Faculty of Agriculture, Tokyo University of Agriculture and Technology, 3-5-8 Saiwai-cho, Fuchu, Tokyo 183-8501, Japan)

RESEARCH ARTICLE, OPEN ACCESS. Received: August 18, 2015; Accepted: November 24, 2015; Published: December 14, 2015.

Citation: Ogata N, Kozaki T, Yokoyama T, Hata T, Iwabuchi K (2015) Comparison between the Amount of Environmental Change and the Amount of Transcriptome Change. PLoS ONE 10(12): e0144822.

Editor: Erjun Ling, Institute of Plant Physiology and Ecology, CHINA

Data Availability Statement: Short-read data have been deposited in the DNA Data Bank of Japan (DDBJ)'s Short Read Archive, under project ID DRA002853.

Funding: Ogata is a member of a commercial company: Nihon BioData Corporation (NBD). NBD is Ogata's company, and Ogata bought some materials using the company's money. The funder, Nihon BioData Corporation, provided support in the form of materials for author NO, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Abstract

Cells must coordinate adjustments in genome expression to accommodate changes in their environment.
We hypothesized that the amount of transcriptome change is proportional to the amount of environmental change. To capture the effects of environmental changes on the transcriptome, we compared transcriptome diversities (defined as the Shannon entropy of frequency distribution) of silkworm fat-body tissues cultured with several concentrations of phenobarbital. Although there was no proportional relationship, we did identify a drug concentration "tipping point" between 0.25 and 1.0 mM. Cells cultured in media containing lower drug concentrations than the tipping point showed uniformly high transcriptome diversities, while those cultured at higher drug concentrations than the tipping point showed uniformly low transcriptome diversities. The plasticity of transcriptome diversity was corroborated by cultivations of fat bodies in MGM-450 insect medium without phenobarbital and in 0.25 mM phenobarbital-supplemented MGM-450 insect medium after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). Interestingly, the transcriptome diversities of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium) were different from cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital). This hysteretic phenomenon of transcriptome diversities indicates multi-stability of the genome expression system. Cellular memories were recorded in genome expression networks as in DNA/histone modifications. 
Introduction

When environmental conditions change abruptly, living cells must coordinate adjustments in their genome expression to accommodate the changing environment [1]. It is possible that the degree of change in the environment affects the degree of change in the gene expression pattern. However, it is difficult to completely understand the amount of change that occurs in the transcriptome, given that this would involve thousands of gene expression measurements. A recent study defined transcriptome diversity as the Shannon entropy of the transcriptome's frequency distribution, making it possible to express the transcriptome as a single value [1]. Dimensionality-reduction methods, e.g., principal component analysis and t-SNE, have also been applied to transcriptome analyses [2,3], but the biological meaning of the values produced by these methods is unclear. Transcriptome diversity also reduces the dimensionality of transcriptome data, and its biological meaning is clear. The first research on transcriptome diversity was performed with human tissues [1], and later research compared cancer cells with normal cells [4]. In plant research, transcriptome diversity has been used to compare several wounded leaves [5]. Our previous study indicated that silkworm fat-body tissues cultured for 90 hours had higher transcriptome diversity than intact tissues, and that transcriptome diversity measured the degree of cell development [6]. However, these studies considered only qualitative environmental changes. Here, we present comparisons of transcriptome diversity between cells cultured in vitro in media supplemented with several concentrations of phenobarbital, to investigate how the amount of environmental change (in terms of drug concentration) affects the amount of transcriptome change.
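To make the definition above concrete, here is a minimal sketch (ours, for illustration only) of transcriptome diversity as the Shannon entropy of a gene-frequency distribution; base-2 logarithms are assumed, which is consistent with diversity values near 10 for transcriptomes with thousands of expressed genes. The count vectors are synthetic, not data from this study, and they preview the key intuition: a transcriptome dominated by a few highly expressed genes has lower diversity than an even one.

```python
import math

def transcriptome_diversity(counts):
    # Shannon entropy H = -sum(p_i * log2(p_i)) over the relative
    # frequencies of reads mapped to each gene
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

# uniform expression over 1024 genes: H = log2(1024) = 10 bits
uniform = [5] * 1024
# same genes, but one "storage-protein-like" gene takes half of all reads
skewed = [5] * 1023 + [5 * 1023]

print(transcriptome_diversity(uniform))  # 10.0
print(transcriptome_diversity(skewed))   # about 6.0: far lower
```

The second distribution mimics the intact fat body described later, where storage-protein genes occupy more than half of the transcriptome and diversity drops accordingly.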
Results

Effect of phenobarbital on transcriptome diversity

To investigate the effects of phenobarbital concentration on transcriptome diversity, we sequenced 15 transcriptomes from larval fat-body tissues exposed to phenobarbital. Freshly isolated tissues were cultured for 80 hours in MGM-450 insect medium, and then cultured for 10 hours in medium supplemented with 0, 0.25, 1.0, 2.5, and 12.5 mM phenobarbital. We measured the diversity of those transcriptomes. Transcriptome diversities were 9.96, 10.39, and 10.55 with 0 mM phenobarbital; 10.18, 10.36, and 10.43 with 0.25 mM phenobarbital; 8.17, 7.57, and 8.00 with 1.0 mM phenobarbital; 7.68, 7.33, and 8.06 with 2.5 mM phenobarbital; and 8.00, 7.52, and 7.93 with 12.5 mM phenobarbital. The correlation between transcriptome diversity and phenobarbital concentration was -0.53 (t = -2.28, df = 13, p-value = 0.04, 95 percent confidence interval -0.94 to -0.54; Pearson's product-moment correlation). Transcriptome diversity changed only between phenobarbital concentrations of 0.25 and 1.0 mM (Fig 1). Our hypothesis was that the amount of transcriptome change is proportional to the amount of environmental change (i.e., drug concentration). These results showed that there is no proportional relationship. In this research, we studied the effects of the entire range of phenobarbital concentrations that would be tolerated by fat-body cells. However, only two values of transcriptome diversity were recorded.

Transcriptome diversity responds to cis-permethrin

There is a possibility that the change in transcriptome diversity is a phenobarbital-specific phenomenon. To examine this, we cultured fat bodies in media supplemented with two concentrations of cis-permethrin, an insecticide that is hydrophobically opposite to phenobarbital. We sequenced two transcriptomes of fat bodies cultured in media supplemented with 0.25 and 2.5 mM cis-permethrin. Transcriptome diversities were calculated as 10.3 (0.25 mM cis-permethrin) and 8 (2.5 mM cis-permethrin) (S3 Fig).
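Returning to the phenobarbital dose series: the reported correlation statistics can be reproduced from the fifteen diversity values above with a plain Pearson product-moment calculation (a pure-Python sketch; samples are paired with doses in the order listed):

```python
import math

conc = [0, 0, 0, 0.25, 0.25, 0.25, 1.0, 1.0, 1.0,
        2.5, 2.5, 2.5, 12.5, 12.5, 12.5]
div = [9.96, 10.39, 10.55, 10.18, 10.36, 10.43, 8.17, 7.57, 8.00,
       7.68, 7.33, 8.06, 8.00, 7.52, 7.93]

n = len(conc)
mx, my = sum(conc) / n, sum(div) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, div))
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in div)

r = sxy / math.sqrt(sxx * syy)            # Pearson correlation coefficient
t = r * math.sqrt((n - 2) / (1 - r ** 2))  # t statistic with df = n - 2

print(round(r, 2), round(t, 2))  # -0.53 -2.28
```

This matches the values quoted above (r = -0.53, t = -2.28 with df = 13).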
This cis-permethrin result matches perfectly with those from the phenobarbital experiments. Therefore, the increase in storage-protein gene expression cannot be explained as a phenobarbital-treatment-specific phenomenon.

Hysteretic phenomena of transcriptome diversity

In this study, there was no proportional relationship between drug concentration and transcriptome diversity. The relationship looks non-linear, but this was not confirmed. To address this, we examined hysteresis between drug concentration and transcriptome diversity. Observation of a hysteretic phenomenon would indicate the existence of multi-stability. We cultured fat bodies in media supplemented with lower concentrations of phenobarbital after previous cultivation in media supplemented with a higher concentration of phenobarbital. It is becoming increasingly clear that many biological systems are governed by highly non-linear, bi- or multi-stable processes, which may switch between discrete states, induce oscillatory behavior, or define their dynamics based on a functional relationship with the memory of input stimuli [7]. Hysteresis in molecular biology is known in a synthetic mammalian gene network [8]. There is a possibility that hysteresis is hidden in the non-proportional relationship between external stimuli and the transcriptome as a huge complex of gene networks. Topological changes are known in gene regulatory networks [9]. In response to diverse stimuli, transcription factors alter their interactions to varying degrees, thereby rewiring the network. It is possible that our results indicate hysteresis and multi-stability of the transcriptome.
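As a conceptual illustration only (a generic toy model, not fitted to the fat-body data), a minimal positive-feedback gene circuit shows how bistability produces hysteresis: the state reached at a given stimulus level depends on the stimulus history. All parameter values here are arbitrary.

```python
def rate(x, s):
    # toy circuit: dx/dt = stimulus + Hill-type self-activation - decay
    return s + 3.0 * x * x / (1.0 + x * x) - x

def settle(x, s, T=50.0, dt=0.01):
    # forward-Euler integration until the state settles at stimulus level s
    for _ in range(int(T / dt)):
        x += rate(x, s) * dt
    return x

never_stimulated = settle(0.0, s=0.0)       # no stimulus: stays in the low state
pulsed = settle(settle(0.0, s=0.6), s=0.0)  # stimulus applied, then removed

print(never_stimulated < 0.1, pulsed > 2.0)  # True True
```

With the stimulus back at zero, the pulsed trajectory settles in a high state while the never-stimulated one stays low, mirroring the history dependence of transcriptome diversity described below.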
To investigate hysteretic phenomena in transcriptome diversities, we sequenced six transcriptomes of fat bodies cultured in media supplemented with 0 and 0.25 mM phenobarbital for 10 hours, after 90 hours' previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). We measured the diversity of those transcriptomes. Transcriptome diversities were 10.13, 9.75, and 10.26 in the 0 mM phenobarbital after 1.0 mM phenobarbital experimental group, and 9.350, 10.22, and 9.11 in the 0.25 mM phenobarbital after 1.0 mM phenobarbital experimental group (Fig 1). Transcriptome diversity was determined not only by the drug concentration at that time, but also by previous drug concentrations. This is a hysteretic phenomenon, and a hysteretic phenomenon provides evidence of a bi-stable system. These results indicate multi-stability of the genome expression system.

Transcriptome diversity and differentially expressed genes

Determination of the drug concentration is critically important for in-vitro drug-exposure testing in preclinical toxicology. High drug concentrations can induce a radical transcriptome response, while mild concentrations can make it difficult to determine cell responses. Ideally, a drug concentration should be high enough to induce the desired main effect, while not eliciting too many side effects. If we performed differentially-expressed-gene analysis using several drug concentrations, we would obtain more than 100 differentially expressed genes from each comparison, and several thousands of differentially expressed genes in total. We hypothesized that it would be possible to determine an optimal drug concentration by referencing transcriptome diversity as an index of the extent of transcriptome change.
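One simple way to operationalize this idea (our sketch; the paper does not prescribe a procedure) is to take the mean diversities of the dose series reported in the Results and select the highest dose whose mean stays within a chosen tolerance of the untreated control. The 0.5-bit tolerance below is an arbitrary illustrative threshold.

```python
# mean transcriptome diversity per phenobarbital dose (mM), from the
# triplicate values reported in the Results section
mean_diversity = {
    0.0: (9.96 + 10.39 + 10.55) / 3,
    0.25: (10.18 + 10.36 + 10.43) / 3,
    1.0: (8.17 + 7.57 + 8.00) / 3,
    2.5: (7.68 + 7.33 + 8.06) / 3,
    12.5: (8.00 + 7.52 + 7.93) / 3,
}

def highest_mild_dose(diversities, tolerance=0.5):
    # highest dose whose mean diversity stays within `tolerance` bits
    # of the untreated (0 mM) control
    control = diversities[0.0]
    mild = [d for d, h in diversities.items() if abs(h - control) <= tolerance]
    return max(mild)

print(highest_mild_dose(mean_diversity))  # 0.25
```

With this heuristic the selected dose is 0.25 mM, the concentration that, as described next, yielded a short and interpretable list of differentially expressed genes.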
We compared transcriptomes with close transcriptome diversities and those with distant transcriptome diversities, focusing on differentially expressed genes. In this study, a comparison between the transcriptome of a control group (0 mM phenobarbital) and that of an experimental group treated with the lowest phenobarbital concentration (0.25 mM) represented the comparison of close transcriptome diversities. Indeed, only 29 differentially expressed genes were detected in the comparison between the control and the 0.25 mM phenobarbital-treated group. On the other hand, more than 1000 genes were detected in comparisons between the control group and the 1.0, 2.5, and 12.5 mM phenobarbital groups (1534, 2198, and 1302 differentially expressed genes, respectively). We further examined these differentially expressed genes. It is known that insects have evolved mechanisms to detoxify, reduce their sensitivity to, or excrete insecticides [10]. Increased detoxification can occur by gene duplication of carboxylesterase, which cleaves the insecticide, or by transposon insertion, causing increased transcription of cytochrome P450, which hydroxylates the insecticide [10]. Point mutations in the target gene can reduce insecticide binding [10], while increased transporter activity leads to faster excretion from the cell [10]. Gene duplication of carboxylesterase and point mutations in the target gene have been detected using comparative genomics; however, increased transcription of cytochrome P450 and increased transporter activities can be detected using comparative transcriptomics. Phenobarbital is well known to induce additional cytochrome P450s and other drug-metabolizing enzymes [11]. The effects of hepatocyte growth factor on phenobarbital-mediated induction of ABCB1 [a member of the ATP-binding cassette (ABC) transporter superfamily] mRNA expression have been investigated [12]. 
Phenobarbital is a substrate for P-glycoprotein, which is an energy-dependent transmembrane efflux pump encoded by the ABCB1 gene [13]. Therefore, we focused on phenobarbital-related induction of cytochrome P450 and ABC transporter coding genes. In the current study, cytochrome P450 and ABC transporter genes were identified as differentially expressed genes in comparisons between the control and treatment groups. The various concentrations of phenobarbital induced two (0.25 mM phenobarbital), 12 (1.0 mM), 12 (2.5 mM), and 17 (12.5 mM) cytochrome P450 coding genes (Table 1). Marked elevations in cytochrome P450 expression levels were observed for only two cytochrome P450 genes (BGIBMGA001004 and BGIBMGA001005), which were detected in the comparisons involving the 0.25 and 12.5 mM phenobarbital groups. The ABC transporter coding gene BGIBMGA007738 increased its expression only in the 0.25 mM phenobarbital experiment. In contrast, many ABC transporter genes in the ≥1.0 mM phenobarbital experimental groups decreased their expression levels. These results show that the comparison between transcriptomes with close diversities was useful in identifying genes induced by drugs.

Macroscopic similarity between transcriptomes

To discuss the biological origin of the decreasing transcriptome diversity, we compared the transcriptome diversities of cultured cells, cells cultured with phenobarbital, and intact cells. In a previous study, we compared the transcriptomes of intact and cultured silkworm fat bodies, and showed that fewer genes occupied more of the transcriptome in intact than in cultured fat bodies [6]. In intact fat bodies, storage-protein coding genes occupied more than half of the transcriptome, and the diversity of that transcriptome was low (6.49). The transcriptomes of fat bodies exposed to higher concentrations of phenobarbital, with their low transcriptome diversity, may therefore resemble the transcriptome of the intact fat body.
To test this, we sequenced two transcriptomes from intact larval fat-body tissue, and compared 18 transcriptomes using bar charts (Fig 2). The transcriptomes of intact larval fat bodies and those cultured in high concentrations of phenobarbital (≥1.0 mM) were macroscopically similar. In these transcriptomes, storage-protein coding genes showed the highest expression levels and occupied more than one-third of the transcriptome. Cultured tissue is thought to initiate the process of abrogating its tissue-specific function by becoming independent from the donor and from its identity as part of an individual, living organism [6]. In this study, fat bodies exposed to high concentrations of phenobarbital recovered their tissue-specific character as part of an individual organism. It is conceivable that culture in phenobarbital concentrations of ≥1.0 mM mimics the in vivo environment.

Discussion

The relationship between the degree of environmental change caused by external stimuli and the degree of transcriptome change was previously unclear. We compared the transcriptome diversities of silkworm fat-body tissues cultured with several concentrations of phenobarbital. We found no proportional relationship; instead, cells cultured with phenobarbital concentrations of ≥1.0 mM had uniformly reduced transcriptome diversity. We also determined that the 500 genes with the highest relative frequency characterize transcriptome diversity by reducing the relative frequency of other genes (see Materials and Methods). Transcriptome diversity also demonstrated hysteresis. Multi-stability and hysteresis, phenomena widely known in nature [7], were thus also discovered in transcriptome responses. The concepts of gene expression state space and attractors have been introduced, providing a mathematical and molecular basis for an "epigenetic landscape" [14].
Our discovery of multi-stability in genome expression supports the concepts of gene expression state space and of the "epigenetic landscape". Cellular memories are recorded in genome expression networks, as they are in DNA/histone modifications. We used two drugs (phenobarbital and cis-permethrin) in this study, and the decrease in transcriptome diversity in the high drug-concentration groups was observed for both agents. Storage-protein genes occupied more than one-third of all transcriptomes with low transcriptome diversity (<9). It could be argued that storage-protein coding genes increased their expression following exposure to a highly concentrated drug. Phenobarbital and cis-permethrin differ considerably in hydrophilicity (phenobarbital is a hydrophilic molecule and cis-permethrin is hydrophobic), and it is interesting to consider that storage proteins might cope with the threat from these drugs in the same manner. Comparative transcriptomics for in-vitro preclinical testing is widely performed as an alternative to animal tests. However, there is no quantitative method to determine the drug concentrations that should be used for in-vitro preclinical testing; we are therefore forced to determine appropriate drug concentrations by considering cell morphology and through various other examinations. Using transcriptome diversity, we can instead select an appropriate drug concentration quantitatively. It would be interesting to perform a follow-up study on changes in transcriptome diversity using human cells, since a culture-induced increase in transcriptome diversity has also been observed in a comparison between intact human liver tissue (Hij: 10.9) and HepG2 cells (Hij: 13.3), a cell line established from a human liver carcinoma [15] that is widely used as an in-vitro model for liver cells [16].
For our drug-exposure experiments, we used cultured tissue with high transcriptome diversity. The transcriptome diversity of these tissues was decreased by exposure to higher drug concentrations, and this decrease was not a phenobarbital-specific response of cultured tissue. We hypothesize that cells have control mechanisms with respect to transcriptome diversity, and that these mechanisms responded to the stimuli of higher drug concentrations. Such mechanisms might enable cells to work cooperatively in multicellular organisms.

Materials and Methods

Establishment of primary culture

The p50 strain of the silkworm, Bombyx mori, was grown on fresh leaves of the mulberry, Morus bombycis. Female larvae of the fifth instar were aseptically dissected 3 days after the fourth ecdysis, and the fat body was isolated. More than 100 chunks of tissue (approximately 2 mm³) were excised from the fat bodies of 108 larvae. These tissue pieces were incubated in cell culture dishes (diameter, 35 mm; BD Biosciences, Franklin Lakes, NJ, USA) with MGM-450 insect medium [17] supplemented with 10% fetal bovine serum (BioWest, Nuaillé, France), with no gas change. The tissue was cultured for 80 hours at 25°C. The tissues were checked for microbial infection under a microscope, and infection-free tissues were used in the following induction assays. No antibiotics were used in the assays to maintain the primary culture.

Chemicals

All of the chemicals used in this study were of analytical grade. Phenobarbital sodium (Wako Pure Chemical, Osaka, Japan) was dissolved in distilled water to make a stock solution, which was added to the medium to give final concentrations of 0.25, 1.0, 2.5, and 12.5 mM phenobarbital. cis-Permethrin (Wako Pure Chemical) was dissolved in acetone; this solution was diluted with three times its volume of ethanol just prior to mixing with the medium.
The final concentrations of cis-permethrin were 0.25 and 2.5 mM.

Induction assay

The original medium was replaced with phenobarbital- or cis-permethrin-containing medium in the induction assays. The primary culture tissues were incubated with 0.25, 1.0, 2.5, or 12.5 mM phenobarbital or with 0.25 or 2.5 mM cis-permethrin for 10 hours. The primary culture tissues for hysteresis analysis were incubated with 0 or 0.25 mM phenobarbital for 10 hours after 90 hours' previous cultivation (80 hours in phenobarbital-non-supplemented MGM-450 insect medium, followed by 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). Induction assays were terminated by soaking the tissues in TRIzol LS reagent (Invitrogen, Carlsbad, CA, USA), and the tissues were kept at -80°C until analysis.

RNA isolation

Total RNA was extracted using TRIzol LS (Invitrogen) and the RNeasy Lipid Tissue Mini Kit (Qiagen, Hilden, Germany) following the manufacturers' instructions. Silkworm fat bodies (30 mg) soaked in 300 μL TRIzol LS were homogenized. The homogenates were incubated for 5 minutes at 25°C, and the samples were subjected to centrifugation at 12,000 × g for 10 minutes at 5°C. The supernatant was then transferred to a new tube and incubated at 25°C for 5 minutes. Chloroform (60 μL) was added to each sample, and the homogenates were shaken vigorously for 15 seconds and then incubated at 25°C for 2 minutes. Samples were subjected to centrifugation at 12,000 × g for 15 minutes at 4°C; subsequently, the aqueous phase of each sample, which contained the RNA, was placed into a new tube. An equal volume (120 μL) of 70% ethanol was added and mixed well. The samples were transferred to an RNeasy Mini spin column placed in a 2 mL collection tube. The lid was closed gently, and the samples were centrifuged for 15 seconds at 8,000 × g at 25°C. The flow-through was discarded. A total of 700 μL RW1 buffer was added to the RNeasy spin column.
The lid was again closed gently, and the samples were centrifuged for 15 seconds at 8,000 × g. The flow-through was discarded. A total of 500 μL RPE buffer was added to the RNeasy spin column, and the samples were centrifuged for 15 seconds at 8,000 × g. The flow-through was discarded. A total of 500 μL RPE buffer was added to the RNeasy spin column, and the samples were centrifuged for 2 minutes at 8,000 × g. The flow-through was discarded. The RNeasy spin column was placed in a new 2 mL collection tube, and the old collection tube with the flow-through was discarded. The samples were centrifuged at 13,000 × g for 1 min. The RNeasy spin column was placed in a new 1.5 mL collection tube. A total of 30 μL RNase-free water was added directly to the spin column membrane. To elute the RNA, the samples were centrifuged for 1 min at 8,000 × g. The eluate from the previous step was then added directly to the spin column membrane and, to re-elute the RNA, the samples were centrifuged again for 1 min at 8,000 × g. The integrity of rRNA in each sample was checked using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA).

Library preparation and sequencing: RNA-seq

Sequencing was performed according to the TruSeq single-end RNA-sequencing protocols from Illumina for Solexa sequencing on a Genome Analyzer IIx with paired-end module (Illumina, San Diego, CA, USA). A total of 1 μg total RNA was used as the starting material for library construction, using the TruSeq RNA Sample Preparation Kit v2. This involved poly-A mRNA isolation, fragmentation, and cDNA synthesis before adapters were ligated to the products, which were then amplified to produce a final cDNA library. Approximately 400 million clusters were generated with the TruSeq SR Cluster Kit v2 on the Illumina cBot Cluster Generation System, and 36-65 base pairs were sequenced using reagents from the TruSeq SBS Kit v5 (all kits from Illumina).
Short-read data have been deposited in the DNA Data Bank of Japan (DDBJ) Sequence Read Archive under project ID DRA002853.

Data analysis and programs

Sequence read quality was controlled using the FastQC program (http://www.bioinformatics.bbsrc.ac.uk/projects/fastqc). Short-read sequences were mapped to annotated silkworm transcript sequences obtained from KAIKObase (http://sgp.dna.affrc.go.jp) using the Bowtie program; a maximum of two mapping errors was allowed for each alignment. Genome-wide transcript profiles were compared between samples. All statistical analyses were performed using R software version 2.13.0 and the DESeq package. Homology searches and local alignments were performed using Blast2GO [18]. Sequence data from Homo sapiens intact liver (SRA000299) [19] and the HepG2 cell line (SRA050501) were obtained from the DDBJ Sequence Read Archive (http://trace.ddbj.nig.ac.jp/dra/index.html), and these short-read sequences were mapped to human cDNA obtained from RefSeq (http://www.ncbi.nlm.nih.gov/refseq). We calculated transcriptome diversities using a script.

Genes characterizing transcriptome diversity

Changes in transcriptome diversity are based on changes in gene expression; however, it is not clear which genes contribute to such changes. To address this, we analyzed the changes in transcriptome diversity. A previous study described the transcriptome of each tissue as a set of relative frequencies, Pij, for the ith gene (i = 1, 2, ..., g) in the jth tissue (j = 1, 2, ..., t), and then quantified transcriptome diversity using an adaptation of Shannon's entropy formula:

$$H_{ij} = -\sum_{i=1}^{g} P_{ij}\,\log_2(P_{ij})$$

Each term [-Pij log2(Pij)] increases monotonically if Pij is larger than 0 and smaller than e^-1. The gene with the largest relative frequency therefore makes the biggest contribution to transcriptome diversity; in the current study, the largest relative frequency of genes did not exceed e^-1.
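The entropy formula above can be checked directly on raw counts. The Python sketch below reimplements the calculation (the study's own script was written in R); the worked counts correspond to the three illustrative cases given in the paper's supporting text, and the function name is ours.

```python
import math

def transcriptome_diversity(counts):
    """Shannon entropy H = -sum_i P_i * log2(P_i) over relative gene frequencies."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

print(round(transcriptome_diversity([1, 10, 100]), 2))   # Case 1 -> 0.51
print(round(transcriptome_diversity([10, 100, 1]), 2))   # Case 2 -> 0.51 (order-invariant)
print(round(transcriptome_diversity([1, 10, 1000]), 2))  # Case 3 -> 0.09
```

Note that the entropy ignores which gene carries which frequency, so permuting expression across genes (Case 1 vs Case 2) changes many genes but leaves the diversity unchanged, while boosting a single dominant gene (Case 3) lowers it sharply.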
To compare Pij between samples, we sorted log2(Pij) in order of Pij (S2 Fig). Transcriptomes with high diversity (the 0 and 0.25 mM phenobarbital experimental groups) and low diversity (the 1.0, 2.5, and 12.5 mM phenobarbital experimental groups) separated clearly. This result suggests that genes with extremely high relative frequency determine transcriptome diversity by reducing the relative frequency of other genes. To verify this, we estimated the diversity of transcriptomes lacking the top 10, 20, 30, 50, 100, 200, 300, and 500 most-expressed genes (S3 Fig). When we removed the top 500 genes, there was no difference in transcriptome diversity. These results show that the 500 genes with the highest relative frequency characterize transcriptome diversity by reducing the relative frequency of other genes.

Fig 1. Scatter plot of drug concentration vs transcriptome diversity. Transcriptomes of fat-body cells that were cultured for 80 hours in phenobarbital-non-supplemented MGM-450 insect medium followed by 10 hours in MGM-450 insect medium supplemented with 0, 0.25, 1.0, 2.5, and 12.5 mM phenobarbital are plotted as circles. Transcriptomes of fat-body cells that were cultured for 10 hours in MGM-450 insect medium supplemented with 0 and 0.25 mM phenobarbital after 90 hours' previous cultivation (80 hours in phenobarbital-non-supplemented MGM-450 insect medium followed by 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium) are plotted as plus signs ("+"). doi:10.1371/journal.pone.0144822.g001

Fig 2. Bar charts of 18 silkworm fat-body transcriptomes. The occupation rate of genes in a transcriptome was plotted in a bar chart; the heights of the boxes indicate the occupation rate of genes in a transcriptome. Although more than 14,000 genes are included in these bar charts, most are invisible and are contained in the black regions. (A-C) Transcriptomes of intact silkworm fat-body cells.
Transcriptomes of fat-body cells cultured for 10 hours in MGM-450 insect medium.

S1 Fig. Bar charts of silkworm fat-body transcriptomes. Although more than 14,000 genes are included in these bar charts, most are invisible and are contained in the black regions. Transcriptomes of fat-body cells cultured for 10 hours in MGM-450 insect medium supplemented with (A) 0.25 mM and (B) 2.5 mM cis-permethrin, after cultivation for 80 hours in cis-permethrin-non-supplemented MGM-450 insect medium. (TIF)

S2 Fig. Plots of log2(Pij) of cultured silkworm fat bodies. Genes were sorted in order of log2(Pij). Transcriptomes with high diversity (0 and 0.25 mM phenobarbital experimental groups) are plotted as blue dots. Those with low diversity (1.0, 2.5, and 12.5 mM phenobarbital experimental groups) are plotted as red dots. (TIF)

S3 Fig. Scatter plot of the number of removed genes vs transcriptome diversity (Hij). The diversity of transcriptomes lacking the top 10, 20, 30, 50, 100, 200, 300, and 500 most-expressed genes was estimated. Transcriptomes with high diversity (0 and 0.25 mM phenobarbital experimental groups) are plotted as blue dots. Those with low diversity (1.0, 2.5, and 12.5 mM phenobarbital experimental groups) are plotted as red dots. (TIF)

Table 1.
Cytochrome P450 coding genes that were detected as differentially expressed genes in cultured versus phenobarbital-induced silkworm fat bodies.

Gene ID | Gene name | Log ratio | FDR | FDR rank

Culture vs 0.25 mM phenobarbital
BGIBMGA001004 | CYP4G23b | 5.57070 | 4.08 × 10^-18 | 2
BGIBMGA001005 | CYP4G23a | 6.25565 | 2.22 × 10^-07 | 7

Culture vs 1.0 mM phenobarbital
BGIBMGA001162 | CYP4G22 | 1.30734 | 0.00675 | 1444
BGIBMGA001276 | CYP333B1 | 1.13686 | 0.00046 | 1041
BGIBMGA001277 | CYP333B2 | 1.74507 | 0.00172 | 1211
BGIBMGA002307 | CYP340A3 | 1.65552 | 1.21 × 10^-07 | 487
BGIBMGA003683 | CYP4M5 | 0.89796 | 0.00638 | 1430
BGIBMGA003926 | CYP9G3 | 2.17810 | 1.64 × 10^-14 | 169
BGIBMGA003943 | CYP9A21 | 1.09148 | 0.00373 | 1327
BGIBMGA003944 | CYP9A19 | 1.59676 | 1.76 × 10^-05 | 717
BGIBMGA006691 | CYP6AV1 | 0.92672 | 0.00024 | 943
BGIBMGA010239 | CYP314A1 | 2.77180 | 3.10 × 10^-06 | 624
BGIBMGA013236 | CYP6AE2 | 2.96227 | 8.61 × 10^-15 | 159
BGIBMGA013237 | CYP6AE3 | 1.77000 | 0.00012 | 880

Culture vs 2.5 mM phenobarbital
BGIBMGA001276 | CYP333B1 | 1.51446 | 3.45 × 10^-09 | 470
BGIBMGA001277 | CYP333B2 | 1.74751 | 9.36 × 10^-06 | 929
BGIBMGA002307 | CYP340A3 | 1.46025 | 1.07 × 10^-07 | 618
BGIBMGA003926 | CYP9G3 | 1.96768 | 1.90 × 10^-13 | 254
BGIBMGA003943 | CYP9A21 | 1.30292 | 1.47 × 10^-06 | 782
BGIBMGA003944 | CYP9A19 | 1.78278 | 6.71 × 10^-10 | 415
BGIBMGA006916 | CYP18A1 | 0.75489 | 0.00298 | 1810
BGIBMGA010239 | CYP314A1 | 2.15167 | 0.00014 | 1214
BGIBMGA010854 | CYP6AN2 | 1.54833 | 0.00071 | 1485
BGIBMGA013236 | CYP6AE2 | 2.98777 | 1.72 × 10^-20 | 91
BGIBMGA013237 | CYP6AE3 | 1.99385 | 1.19 × 10^-08 | 511
BGIBMGA013238 | CYP6AE4 | 0.90125 | 0.00061 | 1454

Culture vs 12.5 mM phenobarbital
BGIBMGA001004 | CYP4G23b | 6.45406 | 8.91 × 10^-27 | 28
BGIBMGA001005 | CYP4G23a | 8.26109 | 5.02 × 10^-22 | 50
BGIBMGA001162 | CYP4G22 | 1.40631 | 0.00231 | 1034
BGIBMGA001276 | CYP333B1 | 1.73687 | 4.07 × 10^-10 | 236
BGIBMGA001277 | CYP333B2 | 2.13857 | 7.77 × 10^-07 | 412
BGIBMGA001419 | CYP6B29 | 1.40233 | 4.49 × 10^-06 | 483
BGIBMGA002307 | CYP340A3 | 1.74468 | 6.95 × 10^-10 | 244
BGIBMGA003683 | CYP4M5 | 0.82408 | 0.00592 | 1183
BGIBMGA003926 | CYP9G3 | 2.65943 | 1.35 × 10^-22 | 44
BGIBMGA003943 | CYP9A21 | 1.70222 | 1.52 × 10^-08 | 309
BGIBMGA003944 | CYP9A19 | 1.79717 | 3.31 × 10^-09 | 271
BGIBMGA003945 | CYP9A20 | 0.70181 | 0.00393 | 1122
BGIBMGA010239 | CYP314A1 | 2.76665 | 3.92 × 10^-06 | 477
BGIBMGA010854 | CYP6AN2 | 1.94215 | 9.65 × 10^-05 | 692
BGIBMGA013236 | CYP6AE2 | 3.04869 | 2.62 × 10^-17 | 76
BGIBMGA013237 | CYP6AE3 | 1.64751 | 0.00013 | 714
BGIBMGA013238 | CYP6AE4 | 1.36505 | 7.76 × 10^-08 | 340

CYP, cytochrome P450. doi:10.1371/journal.pone.0144822.t001

Transcriptome diversities were calculated with the following R function:

    entroshannon <- function(x) {
      x2 <- x / sum(x)
      x3 <- x2 * log2(x2)
      x4 <- x3[!is.na(x3)]
      -sum(x4)
    }

Number of DEGs and difference of transcriptome diversity

To estimate the diversities of transcriptomes, the order of genes was not considered; the correlation between the number of differentially expressed genes and the difference in transcriptome diversity is therefore uncertain, as the following cases illustrate.

Case 1: (gene A, gene B, gene C) = (1, 10, 100)
Case 2: (gene A, gene B, gene C) = (10, 100, 1)
Case 3: (gene A, gene B, gene C) = (1, 10, 1000)

Information entropy (transcriptome diversity) of Case 1:
H = -[(1/111) log2(1/111) + (10/111) log2(10/111) + (100/111) log2(100/111)] = 0.51
Information entropy (transcriptome diversity) of Case 2:
H = -[(10/111) log2(10/111) + (100/111) log2(100/111) + (1/111) log2(1/111)] = 0.51
Information entropy (transcriptome diversity) of Case 3:
H = -[(1/1011) log2(1/1011) + (10/1011) log2(10/1011) + (1000/1011) log2(1000/1011)] = 0.09

Comparing Case 1 (1, 10, 100) with Case 2 (10, 100, 1), genes A, B, and C all changed, yet the diversity is unchanged. Comparing Case 1 (1, 10, 100) with Case 3 (1, 10, 1000), only gene C changed, yet the diversity changed substantially.

PLOS ONE | DOI:10.1371/journal.pone.0144822 December 14, 2015

Acknowledgments

We thank Kazuhiro Miki for predicting the observation of the hysteretic phenomenon. We also thank Ramen-Jiro, with whom K. M. and N. O. discussed the hysteretic phenomenon. We thank Park Tonogayato at Kokubunji, where N. O. wrote this paper.

Author Contributions

Conceived and designed the experiments: NO.
Performed the experiments: NO. Analyzed the data: NO. Contributed reagents/materials/analysis tools: TK TY TH KI. Wrote the paper: NO.

References

1. Martinez O, Reyes-Valdes MH (2008) Defining diversity, specialization, and gene specificity in transcriptomes through information theory. Proc Natl Acad Sci U S A 105: 9709-9714. doi: 10.1073/pnas.0803479105 PMID: 18606989
2. Buettner F, Natarajan KN, Casale FP, Proserpio V, Scialdone A, Theis FJ, et al. (2015) Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat Biotechnol 33: 155-160. doi: 10.1038/nbt.3102 PMID: 25599176
3. Saadatpour A, Guo G, Orkin SH, Yuan GC (2014) Characterizing heterogeneity in leukemic cells using single-cell gene expression analysis. Genome Biol 15: 525. doi: 10.1186/s13059-014-0525-9 PMID: 25517911
4. Martinez O, Reyes-Valdes MH, Herrera-Estrella L (2010) Cancer reduces transcriptome specialization. PLoS ONE 5: e10398. doi: 10.1371/journal.pone.0010398 PMID: 20454660
5. Heil M, Ibarra-Laclette E, Adame-Alvarez RM, Martinez O, Ramirez-Chavez E, Molina-Torres J, et al. (2012) How plants sense wounds: damaged-self recognition is based on plant-derived elicitors and induces octadecanoid signaling. PLoS ONE 7: e30537. doi: 10.1371/journal.pone.0030537 PMID: 22347382
6. Ogata N, Yokoyama T, Iwabuchi K (2012) Transcriptome responses of insect fat body cells to tissue culture environment. PLoS ONE 7: e34940. doi: 10.1371/journal.pone.0034940 PMID: 22493724
7. Noori HR (2014) Hysteresis Phenomena in Biology. Springer. 45 p.
8. Kramer BP, Fussenegger M (2005) Hysteresis in a synthetic mammalian gene network. Proc Natl Acad Sci U S A 102: 9517-9522. PMID: 15972812
9. Luscombe NM, Babu MM, Yu H, Snyder M, Teichmann SA, Gerstein M (2004) Genomic analysis of regulatory network dynamics reveals large topological changes. Nature 431: 308-312. PMID: 15372033
10. Heckel DG (2012) Ecology. Insecticide resistance after Silent Spring. Science 337: 1612-1614. PMID: 23019637
11. Wei P, Zhang J, Dowhan DH, Han Y, Moore DD (2002) Specific and overlapping functions of the nuclear hormone receptors CAR and PXR in xenobiotic response. Pharmacogenomics J 2: 117-126. PMID: 12049174
12. Le Vee M, Lecureur V, Moreau A, Stieger B, Fardel O (2009) Differential regulation of drug transporter expression by hepatocyte growth factor in primary human hepatocytes. Drug Metab Dispos 37: 2228-2235.
13. Loscher W, Klotz U, Zimprich F, Schmidt D (2009) The clinical impact of pharmacogenetics on the treatment of epilepsy. Epilepsia 50: 1-23.
14. Huang S, Ingber DE (2006-2007) A non-genetic basis for cancer progression and metastasis: self-organizing attractors in cell regulatory networks. Breast Dis 26: 27-54.
15. Aden DP, Fogel A, Plotkin S, Damjanov I, Knowles BB (1979) Controlled synthesis of HBsAg in a differentiated human liver carcinoma-derived cell line. Nature 282: 615-616. PMID: 233137
16. Ohno M, Motojima K, Okano T, Taniguchi A (2009) Induction of drug-metabolizing enzymes by phenobarbital in layered co-culture of a human liver cell line and endothelial cells. Biol Pharm Bull 32: 813-817.
17. Mitsuhashi J, Inoue H (1988) Obtainment of a continuous cell line from the fat bodies of the mulberry tiger moth, Spilosoma imparilis (Lepidoptera: Arctiidae). Appl Entomol Zool 23: 488-490.
18. Conesa A, Gotz S, Garcia-Gomez JM, Terol J, Talon M, Robles M (2005) Blast2GO: a universal tool for annotation, visualization and analysis in functional genomics research. Bioinformatics 21: 3674-3676. PMID: 16081474
19. Marioni JC, Mason CE, Mane SM, Stephens M, Gilad Y (2008) RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Res 18: 1509-1517. doi: 10.1101/gr.079558.108 PMID: 18550803
[]
[ "Statistical distribution of current helicity in solar active regions over the magnetic cycle", "Statistical distribution of current helicity in solar active regions over the magnetic cycle" ]
[ "Y Gao 1⋆ \nKey Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina\n\nINTRODUCTION\n\n", "T Sakurai \nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo\n", "† ", "H Zhang \nKey Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina\n\nINTRODUCTION\n\n", "K M Kuzanyan \nKey Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina\n\nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo\n\nIZMIRAN\n142190TroitskMoscow RegionRussia\n\nINTRODUCTION\n\n", "D Sokoloff \nKey Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina\n\nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo\n\nDepartment of Physics\nMoscow State University\n119992, 20120926MoscowRussia\n\nINTRODUCTION\n\n", "Y Gao ", "T Sakurai ", "H Zhang ", "K Kuzanyan ", "D Sokoloff " ]
[ "Key Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina", "INTRODUCTION\n", "National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo", "Key Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina", "INTRODUCTION\n", "Key Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina", "National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo", "IZMIRAN\n142190TroitskMoscow RegionRussia", "INTRODUCTION\n", "Key Laboratory of Solar Activity\nNational Astronomical Observatories\nChinese Academy of Sciences\n100012BeijingChina", "National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyo", "Department of Physics\nMoscow State University\n119992, 20120926MoscowRussia", "INTRODUCTION\n" ]
[ "Mon. Not. R. Astron. Soc" ]
The current helicity in solar active regions derived from vector magnetograph observations for more than 20 years indicates the so-called hemispheric sign rule; the helicity is predominantly negative in the northern hemisphere and positive in the southern hemisphere. In this paper we revisit this property and compare the statistical distribution of current helicity with a Gaussian distribution using the method of normal probability paper. The data sample comprises 6630 independent magnetograms obtained at Huairou Solar Observing Station, China, over 1988-2005, which correspond to 983 solar active regions. We found the following. (1) For most of the cases in time-hemisphere domains, the distribution of helicity is close to Gaussian. (2) In some domains (some years and hemispheres) we can clearly observe a significant departure of the distribution from a single Gaussian, in the form of a two- or multi-component distribution. (3) For the most non-single-Gaussian parts of the dataset we see the co-existence of two or more components, one of which (often predominant) has a mean value very close to zero, and so does not contribute much to the hemispheric sign rule. The other component has a relatively large value of helicity that often determines agreement or disagreement with the hemispheric sign rule, in accord with the global structure of helicity reported by Zhang et al. (2010).
10.1093/mnras/stt838
[ "https://arxiv.org/pdf/1305.4528v1.pdf" ]
119,100,686
1305.4528
37defbd7acb64887c4f9ecea334f669082e0716c
Statistical distribution of current helicity in solar active regions over the magnetic cycle

Y. Gao (1), T. Sakurai (2), H. Zhang (1), K. M. Kuzanyan (1,2,3), D. Sokoloff (1,2,4)

(1) Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China
(2) National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
(3) IZMIRAN, Troitsk, Moscow Region 142190, Russia
(4) Department of Physics, Moscow State University, Moscow 119992, Russia

Mon. Not. R. Astron. Soc. 000, 000-000 (printed 20 May 2013; MN LaTeX style file v2.2)

Key words: Sun: magnetic fields - Sun: activity - Sun: interior

ABSTRACT

The current helicity in solar active regions derived from vector magnetograph observations for more than 20 years indicates the so-called hemispheric sign rule; the helicity is predominantly negative in the northern hemisphere and positive in the southern hemisphere. In this paper we revisit this property and compare the statistical distribution of current helicity with a Gaussian distribution using the method of normal probability paper. The data sample comprises 6630 independent magnetograms obtained at Huairou Solar Observing Station, China, over 1988-2005, which correspond to 983 solar active regions. We found the following.
(1) For most of the cases in time-hemisphere domains the distribution of helicity is close to Gaussian. (2) In some domains (some years and hemispheres) we can clearly observe a significant departure of the distribution from a single Gaussian, in the form of a two- or multi-component distribution. (3) For the most non-single-Gaussian parts of the dataset we see co-existence of two or more components, one of which (often predominant) has a mean value very close to zero, which does not contribute much to the hemispheric sign rule. The other component has a relatively large value of helicity that often determines agreement or disagreement with the hemispheric sign rule, in accord with the global structure of helicity reported by Zhang et al. (2010).

INTRODUCTION

Recently there has been significant progress in the collection and interpretation of observational data on vector magnetic fields in solar active regions, which enables us to compute the values of current helicity (a measure of the departure of magnetic fields from mirror symmetry) averaged over active regions (Seehafer 1990; Pevtsov, Canfield and Metcalf 1995; Bao and Zhang 1998). The data have been averaged over latitude and time in the solar cycle as well (Zhang et al. 2010). The research so far has demonstrated important properties of this quantity and its regular variation in the course of the solar cycle. The helicity is generally negative in the northern hemisphere and positive in the southern hemisphere; the so-called hemispheric sign rule (HSR) for helicity. This rule, however, may occasionally be violated in the activity minimum periods (Hagino and Sakurai 2005; Zhang et al. 2010; Hao and Zhang 2012). Helicity in the solar atmosphere has been noted as an important agent which constrains magnetic field dissipation in the solar corona.
Current helicity plays an important role in solar dynamo theory as an observational proxy of the α-effect, as it controls a dynamical back-reaction of the magnetic field on the motion of the medium which suppresses and stabilizes the generation of the magnetic field (e.g. Kleeorin et al. 2003; Zhang et al. 2006). From a theoretical point of view the average helicity has the meaning of a quantity averaged over an ensemble of turbulent fluctuations in a small physical volume. This physical volume may contain a limited number of convective cells, and so we may expect it to vary significantly in space and time. The data on current helicity are indeed highly fluctuating within a given active region as well as during its evolution (Zhang et al. 2002). Similar fluctuations can be observed in the current helicity averaged over latitude and time in the solar cycle. However, the observation of current helicity is a complex process which deals with initially imperfect data and involves highly non-trivial reduction processes. These difficulties must be overcome by collecting reliable datasets, because noise in the data will degrade the reliability of the analysis (e.g. Abramenko et al. 1996; Bao and Zhang 1998; Bao et al. 2001; Hagino and Sakurai 2004, 2005). For a better understanding of the reliability of the analyses of current helicity previously undertaken, we hereby aim to study its statistical distribution. On the other hand, the current helicity, as a quantity responsible for the mirror asymmetry of the solar magnetic field, is expected to be fluctuating also from a theoretical viewpoint: a usual expectation is that the degree of mirror asymmetry in dynamo mechanisms will be about 10%. This practical estimate, originating from Parker (1955), means that we have to isolate stable features of current helicity distributions on a background of physical fluctuations which may be ten times greater than the average value.
This fluctuation is not due to observational uncertainties and cannot be reduced by improvement of observational techniques. The expected substantial fluctuation in the current helicity data has to be carefully taken into account in estimating the averaged values of current helicity. Intrinsic (physical and true) dispersion in the current helicity data around its mean value is, however, interesting by itself. The probability distribution of current helicity may be substantially non-Gaussian. The point is that the physical processes responsible for the fluctuating nature of solar plasma can be considered as the action of a product of independent evolutionary operators rather than a sum of them, which usually results in a Gaussian distribution. On the other hand, averaging over an active region smoothes fluctuations and supports the Gaussian nature of the distribution. Because of this, the statistical properties of current helicity deserve to be addressed in full detail. Previously, the statistical distribution of current helicity has been addressed only briefly as a part of more general studies (e.g. Sokoloff et al. 2008). In this paper we consider the distribution of the available data over the hemispheres of the Sun and their changes over solar cycles. We compare this distribution with a Gaussian and separate the part of the data which significantly deviates from a single Gaussian distribution. Then, we consider the spatial and temporal properties of this part in comparison with the other (Gaussian) part of the data. We shall see that both parts exhibit similar properties and behavior over the solar cycle.

OBSERVATIONAL DATA SET

This study is based on the data of photospheric vector magnetograms of solar active regions obtained at Huairou Solar Observing Station, China. The same database, systematically covering 18 consecutive years (1988-2005), was used in Zhang et al. (2010).
The parameters adopted there are α_av (the average value of α; ∇ × B = αB) and H_c (the integrated current helicity; H_c = Σ J_z B_z). In the present analysis we will use the same parameters. First, we analyze the entire database that comprises 6630 magnetograms of 983 different active regions (in terms of NOAA region numbers). We are going to build our statistical analysis on the whole bulk of available data. The majority of active regions are represented by only one or very few magnetograms. However, there are a few active regions that were recorded in 20 or more magnetograms. In the next step we select one magnetogram per active region. Sometimes, for active regions that were observed within a short time interval, the helicity parameters may be close to each other. For these magnetograms, we select the one located nearest to the center of the solar disk. Furthermore, we try to avoid choosing the magnetograms of rapidly emerging active regions, except for those regions which were recorded in only one magnetogram. Nonetheless, this occurs rather rarely because such active regions are usually large and well observed over several days.

METHOD OF STATISTICAL ANALYSIS

We address the probability distribution of current helicity, and its deviation from a Gaussian as it follows from observational data, by two statistical tools. First of all, we divide the data into two hemispheres and produce histograms of the current helicity distribution for certain time intervals which contain 50 measurements. We then approximate them by multiple Gaussians. In practice it appears that a mixture of two Gaussians is sufficient to reproduce the histograms with reasonable accuracy. This method, being very practical, may in principle miss substantial deviations from Gaussian statistics which occur with low probability, i.e. intermittent features in the probability distribution.
A tool to address intermittent features is the so-called Normal Probability Paper (NPP) test, which is organized as follows (see, e.g., Chernoff and Lieberman 1954). Let our set contain N active regions, and let n active regions have current helicity density χ_c lower than x. Then the probability for χ_c to be lower than x is estimated as P = n/N. Let ξ be a Gaussian variable with the same mean value µ and standard deviation σ as χ_c, and let y be the value for which the probability for (ξ − µ)/σ to be lower than y is P. The results for various x values are plotted in the (x, y)-plane and can be compared with the cumulative distribution function (CDF) for a Gaussian distribution, which gives a straight line in this coordinate space. If the observational data deviate substantially from this straight line, it is an indication of a non-Gaussian nature of the observational data (see e.g. Sokoloff et al. 2008 for details). For illustration let us produce a set of random Gaussian variables xx_1 (N = 1500, µ_1 = 0.098, and σ_1 = 1.05) by the IDL function "randomn". Then we produce another set of random Gaussian variables xx_2 (µ_2 = −0.28 and σ_2 = 0.42). The GAUSS_CVF function computes the cutoff value V in a standard Gaussian distribution with a mean of 0.0 and a variance of 1.0 such that the probability that a random variable X is greater than V is equal to a user-supplied probability P (see Figure 1). The upper panel shows the probability distribution functions (PDFs) of the two Gaussian components and their sum (with equal weights). In the bottom panel we show the relationship between the variable and the Gaussian CDF with the same mean value and standard deviation for three kinds of variables. The correlation for Gaussian variables is well fit by a straight line, but not for the sum of the components, as shown in Figure 1 by the red curve.
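The NPP construction just described, together with the two-component illustration of Figure 1, can be sketched in a few lines. The following is a rough Python analogue of the IDL illustration, not the authors' code; the helper name npp_points and the straight-line check are our own choices:

```python
import numpy as np
from scipy.stats import norm

def npp_points(data):
    """Normal Probability Paper coordinates for a 1-D sample.

    Each sorted value is paired with the standard-normal quantile
    y = Phi^{-1}(P) of its empirical probability P; a Gaussian
    sample then falls on a straight line in the (x, y)-plane.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    # (k - 0.5)/n keeps P away from 0 and 1, where the quantile diverges
    p = (np.arange(1, n + 1) - 0.5) / n
    return x, norm.ppf(p)

rng = np.random.default_rng(0)
# one Gaussian component with the parameters quoted in the text
gauss = rng.normal(0.098, 1.05, 1500)
# equal-weight mixture with the second component added
mix = np.concatenate([gauss, rng.normal(-0.28, 0.42, 1500)])

x1, y1 = npp_points(gauss)
x2, y2 = npp_points(mix)

# the single Gaussian is almost perfectly linear on the NPP
r_gauss = np.corrcoef(x1, y1)[0, 1]
r_mix = np.corrcoef(x2, y2)[0, 1]
```

Plotting (x1, y1) and (x2, y2) reproduces the qualitative behaviour of Figure 1: the pure Gaussian hugs a straight line, while the mixture bends away from it.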
The slope of the straight line is determined by the mean value, and the y-intercept is determined by the standard deviation of the corresponding dataset.

RESULTS

The entire sample includes 983 active regions, of which 464 (519) are in the northern (southern) hemisphere, respectively. Tables 1-4 present the detailed information about the Gaussian fitting to the distribution function of the observed parameters. The columns indicate: "#": the sequence number of subgroups; "Start" ("End"): the earliest (last) measurement in the subgroup; δT: the length of the time epoch between "Start" and "End"; µ_0 and σ_0: the mean value and standard deviation of the subgroup; and µ_1, σ_1, A_1, µ_2, σ_2 and A_2 are the parameters defining the components of the sum of Gaussian functions of the form:

f(x) = (A_1 / (√(2π) σ_1)) exp[−(x − µ_1)² / (2σ_1²)] + (A_2 / (√(2π) σ_2)) exp[−(x − µ_2)² / (2σ_2²)],   (1)

where the component with subscript "1" represents the component with the greater amplitude, A_1 ≥ A_2. The fitting is not made on the PDF but on the CDF; the function we use for the fitting is actually the integral of Equation (1), namely the sum of two error functions. The "Error" column denotes the deviation of the observed values from the fit with the sum of two error functions. The "No" column records the number of data points in each subgroup.

Table 1. Results of fitting to the data of α_av for ten data subgroups in the northern hemisphere. The fit error is the standard deviation of the fitting curve from the observed values. The values of the amplitudes A_1 and A_2 for cases when the second component is significant (A_2 > A_1/2) are underlined. [Columns: #, Start, End, δT, µ_0, σ_0, µ_1, σ_1, A_1, µ_2, σ_2, A_2, Error, No; numerical entries not recovered here.]

Table 2. Results of fitting to the data of α_av for eleven data subgroups in the southern hemisphere. The fit error is the standard deviation of the fitting curve from the observed values. The values of the amplitudes A_1 and A_2 for cases when the second component is significant (A_2 > A_1/2) are underlined. [Columns as in Table 1; numerical entries not recovered here.]

Table 1 shows the result of the Gaussian fitting/decomposition for α_av in the northern hemisphere.
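Since the fit is made to the CDF, i.e. to the integral of Equation (1) written as a sum of two error functions, the procedure can be sketched on synthetic data. This is a minimal illustration and not the authors' code: the sample, the initial guess and the normalisation A_2 = 1 - A_1 are our own assumptions (the tables quote amplitudes A_1 and A_2 that sum to unity).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def mixture_cdf(x, a1, mu1, s1, mu2, s2):
    """Integral of Eq. (1): a weighted sum of two error functions,
    with the second amplitude fixed by normalisation, a2 = 1 - a1."""
    c1 = 0.5 * (1.0 + erf((x - mu1) / (np.sqrt(2.0) * s1)))
    c2 = 0.5 * (1.0 + erf((x - mu2) / (np.sqrt(2.0) * s2)))
    return a1 * c1 + (1.0 - a1) * c2

# synthetic two-component sample standing in for a helicity subgroup
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0.0, 1.0, 800),
                         rng.normal(3.0, 0.5, 200)])
xs = np.sort(sample)
ecdf = np.arange(1, xs.size + 1) / xs.size

# fit the CDF rather than the histogram
p0 = [0.7, 0.0, 1.0, 2.5, 0.6]        # rough initial guess
popt, pcov = curve_fit(mixture_cdf, xs, ecdf, p0=p0)
a1, mu1, s1, mu2, s2 = popt
```

Fitting the CDF avoids the binning choices that a histogram fit would require, which is presumably why the paper adopts it.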
We focus on the relation between the signs of the mean values of the two Gaussian components and the HSR. It is found that there is merely one µ_1 (Row 8) which violates the HSR, from Dec-25, 2000 to Sep-02, 2001, while the corresponding µ_2 follows the HSR. In contrast, there are in total three µ_2's (Rows 3, 9 and 10) which violate the HSR. These epochs are "Jan-25, 1992" to "Dec-18, 1993", "Sep-22, 2001" to "Jun-28, 2003" and "Jul-5, 2003" to "Dec-23, 2005", and their amplitudes A_2 are 0.46, 0.02 and 0.43, respectively. We would like to stress here that the number of data points in the latter subgroup is only 14, which is much less than in the other subgroups. Therefore the fitting for this subgroup is not that reliable. Nevertheless, the fitting in the third subgroup, i.e., from "Jan-25, 1992" to "Dec-18, 1993", is rather convincing. As A_2 is 0.46 for the former group, both components are clearly seen in the second row of Figure 2. In Figure 2 we also give the fitting results for the other three epochs. Their distributions are nicely represented by two Gaussians, and both components obey the HSR. Table 2 shows the results of the Gaussian fitting/decomposition for α_av in the southern hemisphere. It is found that four µ_1's and three µ_2's violate the HSR. Among them, two cases have both µ_1 and µ_2 violating the HSR. They occur in the epochs "Apr-26, 1988" to "Feb-25, 1990", and "May-19, 1999" to "Apr-12, 2000", respectively. The amplitudes A_1 are 0.82 and 0.96, and A_2 are 0.18 and 0.04, respectively. The other two A_2's that violate the HSR have amplitudes of 0.04 and 0.02, respectively. Some examples are given in Figure 3. The first row shows an apparent violation of the HSR by both components. The other three cases also show two components clearly.
However, all of these components obey the HSR. We also perform a comparative analysis for H_c in a similar way. A further µ_2 violating the HSR occurs in the epoch "Mar-08, 1990" to "Aug-11, 1991" (Row 2 of Table 4); its amplitude is 0.11. Some examples are given in Figure 5. To compare the epochs where the HSR is violated in the present statistical analysis with the evolution and distribution of α_av and H_c obtained in our earlier papers (Zhang et al. 2010), we mark these epochs with inclined lines of 45° (the first component violates the HSR) and −45° (the second component violates the HSR); see Figure 6. Therefore, the crossed lines represent the cases when both the first and the second components violate the HSR. Here we estimate the uncertainty in the determination of the two Gaussian mean values of the bi-modal distribution of the two parameters α_av and H_c under discussion. For that we use the 95 per cent Student's confidence intervals, taking the standard deviations for each Gaussian component σ_i, where i = 1, 2, and computing the number of degrees of freedom as the overall number of available data points in each interval, n, minus the number of fitting parameters, namely five. Then the expected errors in the mean values of the components would be µ_i ± σ_i t_{n−5}/√(n−5), where t_{n−5} is Student's quantile for 95 per cent probability.

Table 5. Statistical significance of HSR violation for α in the northern hemisphere. Numbers in bold indicate violation of the HSR, and they are underlined if regarded statistically significant. [Columns: #, Start, End, µ_1, σ_1 t_{n−5}/√(n−5), µ_2, σ_2 t_{n−5}/√(n−5); numerical entries not recovered here.]

We consider the violation of the rule significant if the error bars on the mean values that violate the HSR do not contain the zero value of the quantity under consideration. The results are shown in Tables 5-8. The mean values which violate the HSR are shown in bold, and the cases of significant violation are underlined.
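The significance criterion just stated, an interval µ_i ± σ_i t_{n−5}/√(n−5) that must exclude zero, can be sketched as follows. This is an illustrative Python sketch, not the authors' code, and the numbers in the example are invented rather than taken from Tables 5-8.

```python
import numpy as np
from scipy.stats import t as student_t

def hsr_interval(mu, sigma, n, n_params=5, conf=0.95):
    """95 per cent Student's interval mu +/- sigma * t_{n-5} / sqrt(n-5),
    with dof = n - 5 as in the paper (data points minus fit parameters)."""
    dof = n - n_params
    tq = student_t.ppf(0.5 * (1.0 + conf), dof)   # two-sided quantile
    half = sigma * tq / np.sqrt(dof)
    return mu - half, mu + half

def violation_significant(mu, sigma, n):
    """A wrong-sign mean is a significant HSR violation when its
    confidence interval does not contain zero."""
    lo, hi = hsr_interval(mu, sigma, n)
    return lo > 0.0 or hi < 0.0

# invented example: a 50-point subgroup whose component mean has the
# sign opposite to the HSR
significant = violation_significant(mu=0.06, sigma=0.10, n=50)
```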
The cases for which violation of the HSR is statistically significant are shown in Fig. 7. The notable features common to the two parameters α_av and H_c are as follows. (i) In the 22nd solar cycle there is an epoch of Apr-26, 1988 to Feb-25, 1990 in the southern hemisphere, in which either the first or the second component of both parameters violated the HSR. There is another epoch of Jan-25, 1992 to Dec-18, 1993 in the northern hemisphere, in which the second component of both parameters violated the HSR. (ii) In the southern hemisphere of the 23rd solar cycle, the first component violated the HSR in the epochs of May-19, 1999 to Apr-12, 2000 and Oct-28, 2003 to Dec-16, 2005. (iii) Looking at the cases of significant violation of the HSR in terms of Student's error bars, we note that there are very few of them compared with the overall cases of violation of the HSR by one or two Gaussian components of the bi-modal distribution. (iv) We may also note that the cases of significant violation of the HSR occur not at the maximum of the solar cycle but in the phases of rise and fall.

Table 7. Statistical significance of HSR violation for H_c in the northern hemisphere. Numbers in bold indicate violation of the HSR, and they are underlined if regarded statistically significant. [Columns: #, Start, End, µ_1, σ_1 t_{n−5}/√(n−5), µ_2, σ_2 t_{n−5}/√(n−5).]
3 Jan-25,1992 Dec-18,1993 0.0157 0.0288 0.5670 0.2500

Table 8. Statistical significance of HSR violation for H_c in the southern hemisphere. Numbers in bold indicate violation of the HSR, and they are underlined if regarded statistically significant. [Columns as in Table 7.]
1 Apr-26,1988 Feb-25,1990 -0.0004 0.0252 -0.8515 0.0259
7 May-19,1999 Apr-12,2000 -0.0045 0.0097 0.0317 0.0322
11 Oct-28,2003 Dec-16,2005 [entries not recovered]

The two parameters α_av and H_c showed disparate results as follows.
(i) For α_av in the epochs of Dec-25, 2000 to Sep-02, 2001 (ii) In the southern hemisphere, the first component of α_av violates the HSR in the epoch of Jul-29, 1993 to Aug-27, 1996, and the second component of α_av violates the HSR in the epochs of May-19, 1999 to Apr-12, 2000 and Oct-22, 2001 to Oct-27, 2003, but these are not seen for H_c. In contrast, the second component of α_av does not violate the HSR in the epochs of Mar-08, 1990 to Aug-11, 1991 and Oct-28, 2003 to Dec-16, 2005, but the second component of H_c does violate the HSR during those periods. (iii) The cases of significant violation of the HSR for both parameters α_av and H_c coincide for cycle 22 but not for cycle 23. In cycle 23 there is only one case of significant violation of the HSR for H_c but three cases for α_av. See Fig. 7 for details.

DISCUSSION

We have investigated to what extent the current helicity and twist data for solar active regions follow Gaussian statistics and what kind of message comes from the deviations from the Gaussian distribution. In our studies we have adopted the method of Normal Probability Paper, which has been developed for Gaussian distributions. Of course, there was no reason to believe that the data must be ideally distributed as a Gaussian. However, such analysis has shown the relative contributions of the sources of fluctuations as well as the turbulent nature of the measured quantities. Quite naturally, the statistics of current helicity and twist are not exactly Gaussian. Here we confirm the previous results of Sokoloff et al. (2008). On the other hand, deviations from Gaussian statistics are rather moderate, and it often looks reasonable to discuss the observed statistical distribution for given space-latitude bins as a superposition of two Gaussian distributions with specific means, standard deviations and amplitudes. We have not encountered cases for which such fitting is impossible or insufficient (but see the underlined values of A_2 in Tables 1-4).
In other words, we have not detected traces of significant intermittency in solar magnetic fields as imprints in the statistics of the current helicity or twist. We appreciate that contemporary dynamo theory (see a review by Brandenburg, Sokoloff, and Subramanian 2012) and the analysis of surface solar magnetic tracers (Stenflo 2012) imply that strong intermittency is expected. Presumably, diffusive processes working during the rise of magnetic flux tubes from the solar interior up to the surface might have strongly smoothed the non-Gaussian features of the distributions. Formally speaking, a Gaussian distribution of the current helicity was implicitly assumed when estimating the error bars on the current helicity averaged over a time-latitude bin (e.g., Zhang et al. 2010). In practical respects, however, the deviations from a Gaussian distribution obtained here are small. Due to the Central Limit Theorem of probability theory, one may expect only minor modifications to these estimates, and the non-Gaussian nature of the distribution can be ignored for this point. We have isolated several epochs within the time interval covered by the observations where the deviations from Gaussian statistics look interesting and meaningful. We isolate the time bins with substantial deviations from a Gaussian distribution in two ways: when the main Gaussian or the sub-component violates the HSR (Fig. 6), or when the sub-component is comparable with the main one. Concerning the violation of the HSR (Fig. 6), we note a substantial north-south asymmetry in the results: the main part of the bins with HSR violation belongs to the southern hemisphere. Remarkably, the helicity data for the northern hemisphere for the 23rd cycle during 1997-2006 (with hemispheric averages) do not provide cases with HSR violation at all. We can summarize our main findings as follows. 1.
We have established that for most of the cases in time-hemisphere domains the distribution of averaged helicity is close to Gaussian. 2. At the same time, in some domains (some years and hemispheres) we can clearly observe significant departures of the distribution from a single Gaussian, in the form of two- or multi-component distributions. We are inclined to identify this fact as a real physical property. 3. For the most non-single-Gaussian parts of the dataset we have established the co-existence of two or more components, one of which (often predominant) has a mean value very close to zero, which does not contribute much to the HSR. The other component has a relatively large value, whose sign is sometimes in agreement (for the data at the maximum and shortly after the maximum of the solar cycle), or disagreement ( We may note here that the agreement or disagreement with the HSR at some latitudes and times may be understood within the framework of solar dynamo models (see, e.g., Zhang et al. 2012 and references therein). Discussion of the applicability of particular dynamo models is beyond the framework of this paper and will be addressed in our forthcoming studies. 6. Another possible interpretation is that the active regions which belong to the multi-component distribution are intrinsically different and formed at different depths or by different mechanisms. 7. We may suggest that the formation of current helicity in solar active regions may in general occur due to various physical mechanisms at various scales. However, detailed investigations of these mechanisms are yet to be done. This challenges both the dynamo theory and the theory of flux tube/active region formation.
ACKNOWLEDGMENTS

This work is partially supported by the National Natural Science Foundation of China under the grants 11028307, 10921303, 11103037, 11173033, 41174153, 11178005, 11221063, by the National Basic Research Program of China under the grant 2011CB811401, and by the Chinese Academy of Sciences under grants KJCX2-EW-T07 and XDA04060804-02. D.S. and K.K. would like to acknowledge support from the Visiting Professorship Programme of the Chinese Academy of Sciences 2009J2-12 and thank NAOC of CAS for hospitality, as well as acknowledge support from the NNSF-RFBR collaborative grant 13-02-91158 and RFBR under grants 12-02-00170 and 13-02-01183. K.K. would also like to acknowledge the Visiting Professorship programme of the National Astronomical Observatory of Japan. We thank the anonymous referee for his/her comments and suggestions that helped to improve the quality of this paper.

Figure 1. Probability Distribution Function (PDF) of a set of random Gaussian variables (upper panel) and the correlation between the variable and the Gaussian Cumulative Distribution Function (CDF) with the same mean value and standard deviation (bottom panel).

Figure 2. Four cases of the distributions of α_av in the northern hemisphere. The left panels show the NPP and are annotated with the start and ending dates of the group. The right panels show the data histogram, the two decomposed Gaussians (dotted curves) and their sum (solid curve).

Figure 3. Four cases of the distributions of α_av in the southern hemisphere. The left panels show the NPP and are annotated with the start and ending dates of the group. The right panels show the data histogram, the two decomposed Gaussians (dotted curves) and their sum (solid curve).

Table 3 shows the fitting results in the northern hemisphere. For example, in Row 3, i.e., from Jan-25, 1992 to Dec-18, 1993, the means of both components µ_1 and µ_2 show violation of the HSR. Their amplitudes are 0.91 and 0.09. This is also consistent with the results obtained for α_av. Some examples are given in Figure 4.
Figure 4. Four cases of the distributions of H_c in the northern hemisphere. The left panels show the NPP and are annotated with the start and ending dates of the group. The right panels show the data histogram, the two decomposed Gaussians (dotted curves) and their sum (solid curves).

Figure 5. Four cases of the distributions of H_c in the southern hemisphere. The left panels show the NPP and are annotated with the start and ending dates of the group. The right panels show the data histogram, the two decomposed Gaussians (dotted curves) and their sum (solid curve).

Figure 6. Time-latitude distribution of Gaussian components for the selected 983 magnetograms of active regions (one magnetogram per active region) over the period 1988-2005. The background is the butterfly diagram of current helicity plotted in time-latitude bins as circles over the color plot of sunspot number density. Each bin contains data coming from 7° in latitude and a two-year running average in time. The 45° and −45° lines mark the epochs in which the main component and the sub-component violate the HSR, respectively. The thick lines denote cases when the second component is significant, i.e. the amplitudes satisfy the relation A_1 < 2A_2. The upper and lower panels show the results for α_av and H_c, respectively.

Figure 7. Butterfly diagram with only those cases marked for which violation of the HSR is statistically significant. Notations are the same as in Fig. 6.

in the northern hemisphere and Jul-29, 1993 to Aug-27, 1996 in the southern hemisphere, the first component violates the HSR. Also, in the epochs of Sep-22, 2001 to Dec-23, 2005 in the northern hemisphere, the second component violates the HSR. These patches are not found for H_c. for example of 1989, just at the end of the rising phase of cycle 22) with the HSR. 4.
Studies of the locations of the most non-single-Gaussian parts over the time-latitude butterfly diagram show that these agreements and disagreements are in accord with the global structure of helicity reported by Zhang et al. (2010, cf. their Fig. 2). 5. We can interpret the result of the multi-component distribution of helicity in terms of the dynamo model which addresses the origin of helicity in solar active regions. For example, there may be spatial and time domains where the dynamo mechanism does not work, or works differently.

Author emails: ⋆ [email protected]; † [email protected]; ‡ [email protected]; § [email protected]; ¶ [email protected].

Table 3. Results of fitting to the data of H_c for ten subgroups in the northern hemisphere.
# Start End δT µ_0 σ_0 µ_1 σ_1 A_1 µ_2 σ_2 A_2 Error No
1 Apr-16,1988 May-11,1990 755 -0.0421 0.0758 -0.0268 0.0741 0.8927 -0.1492 0.0198 0.1073 0.0860 50
2 May-20,1990 Jan-22,1992 612 -0.0774 0.2541 -0.0347 0.0953 0.8894 -0.8641 0.5144 0.1106 0.0688 50
3 Jan-25,1992 Dec-18,1993 693 0.0544 0.2714 0.0157 0.1149 0.9140 0.5670 0.9986 0.0860 0.1649 50
4 Dec-26,1993 May-24,1998 1610 -0.0319 0.0814 -0.0268 0.1084 0.6583 -0.0355 0.0132 0.3417 0.2020 50
5 May-31,1998 Jul-22,1999 417 -0.0187 0.0860 -0.0094 0.0668 0.9321 -0.3537 0.1570 0.0679 0.0822 50
6 Jul-23,1999 Jun-11,2000 324 -0.0078 0.0651 -0.0050 0.0748 0.8629 -0.0139 0.0223 0.1371 0.1012 50
7 Jun-12,2000 Dec-21,2000 192 -0.0267 0.0753 -0.0064 0.0476 0.7282 -0.0816 0.1299 0.2718 0.1168 50
8 Dec-25,2000 Sep-02,2001 251 -0.0154 0.0677 -0.0043 0.0480 0.9682 -0.3474 0.0013 0.0318 0.1337 50
9 Sep-22,2001 Jun-28,2003 644 -0.0228 0.1047 -0.0075 0.1297 0.7312 -0.0536 0.0287 0.2688 0.0991 50
10 Jul-5,2003 Dec-23,2005 902 -0.0817 0.1614 -0.0030 0.0675 0.7956 -0.4294 0.1380 0.2044 0.0679 14

Table 4. Results of fitting to the data of H_c for eleven data subgroups in the southern hemisphere.
# Start End δT µ_0 σ_0 µ_1 σ_1 A_1 µ_2 σ_2 A_2 Error No
1 Apr-26,1988 Feb-25,1990 670 -0.0273
0.1756 -0.0004 0.1008 0.9426 -0.8515 0.1035 0.0574 0.1190 50
2 Mar-08,1990 Aug-11,1991 521 0.0061 0.2285 0.0622 0.0956 0.8886 -0.7620 0.3935 0.1114 0.1403 50
3 Aug-12,1991 May-08,1992 270 0.0492 0.1122 0.0514 0.1234 0.9488 0.0628 0.0078 0.0512 0.0931 50
4 Jun-16,1992 Jul-21,1993 400 0.1195 0.2721 0.0810 0.1102 0.9185 0.6108 1.0473 0.0815 0.2526 50
5 Jul-29,1993 Aug-27,1996 1125 0.0176 0.0735 0.0115 0.0409 0.6867 0.0369 0.1377 0.3133 0.1160 50
6 Nov-29,1996 Apr-23,1999 875 0.0331 0.0810 0.0130 0.0644 0.8737 0.1903 0.0312 0.1263 0.0989 50
7 May-19,1999 Apr-12,2000 329 0.0053 0.0708 -0.0045 0.0386 0.6750 0.0317 0.1284 0.3250 0.0963 50
8 Apr-15,2000 Nov-20,2000 219 0.0385 0.1084 0.0313 0.0547 0.6922 0.0617 0.2106 0.3078 0.1359 50
9 Nov-27,2000 Oct-11,2001 318 0.0254 0.0807 0.0141 0.0438 0.8746 0.1172 0.2507 0.1254 0.1711 50
10 Oct-22,2001 Oct-27,2003 735 0.0198 0.1427 0.0213 0.0527 0.5970 0.0260 0.2490 0.4030 0.1354 50
11 Oct-28,2003 Dec-16,2005 780 -0.0133 0.1155 -0.0173 0.0949 0.8855 -0.4921 0.4071 0.1145 0.0873 19

Table 4 shows the results of the Gaussian fitting/decomposition for H_c in the southern hemisphere. There are three µ_1's that violate the HSR, in Rows 1, 7 and 11, respectively. Their amplitudes are 0.94, 0.68 and 0.89, respectively. In Rows 1 and 11, the second components also violate the HSR, though with smaller component amplitudes of 0.06 and 0.11.

REFERENCES

Abramenko, V. I., Wang, T. J., Yurchishin, V. B.: 1996, Solar Phys., 168, 75.
Bao, S. D., Zhang, H. Q.: 1998, ApJ, 496, L43.
Bao, S. D., Ai, G. X., Zhang, H. Q.: 2001, in Recent Insights into the Physics of the Sun and Heliosphere - Highlights from SOHO and Other Space Missions, eds. P. Brekke, B. Fleck, and J. B.
Gurman, IAU Symp., No. 203, 247.
Brandenburg, A., Sokoloff, D., Subramanian, K.: 2012, Space Science Reviews, 169, 123.
Chernoff, H., Lieberman, G. J.: 1954, Use of Normal Probability Paper, J. Am. Stat. Assoc., 49, No. 268, 778-785.
Hagino, M., Sakurai, T.: 2004, PASJ, 56, 831.
Hagino, M., Sakurai, T.: 2005, PASJ, 57, 481.
Hao, J., Zhang, M.: 2012, ApJ, 733, L27.
Kleeorin, N., Kuzanyan, K., Moss, D., Rogachevskii, I., Sokoloff, D., Zhang, H.: 2003, A&A, 409, 1097.
Parker, E.: 1955, ApJ, 122, 293.
Pevtsov, A. A., Canfield, R. C., Metcalf, T. R.: 1995, ApJ, 440, L109.
Seehafer, N.: 1990, Solar Phys., 125, 219.
Sokoloff, D., Zhang, H., Kuzanyan, K. M., Obridko, V. N., Tomin, D. N., Tutubalin, V. N.: 2008, Solar Phys., 248, 17.
Stenflo, J. O.: 2012, A&A, 541, 17.
Zhang, H. Q., Bao, S., Kuzanyan, K.: 2002, Astron. Rep., 46, 424.
Zhang, H., Sokoloff, D., Rogachevskii, I., Moss, D., Lamburt, V., Kuzanyan, K., Kleeorin, N.: 2006, MNRAS, 365, 276.
H Q Zhang, T Sakurai, A Pevtsov, Y Gao, H Q Xu, D D Sokoloff, K Kuzanyan, MNRAS. 30Zhang, H.Q., Sakurai, T., Pevtsov, A., Gao, Y., Xu, H.Q., Sokoloff, D.D. and Kuzanyan, K.: 2010, MNRAS, 402, L30. . H Zhang, D Moss, N Kleeorin, K Kuzanyan, I Rogachevskii, D Sokoloff, Y Gao, H Xu, ApJ. 47Zhang, H., Moss, D., Kleeorin, N., Kuzanyan, K., Rogachevskii, I., Sokoloff, D., Gao, Y., Xu, H., 2012, ApJ, 751, 47.
[]
[ "Self-duality of Born-Infeld action and Dirichlet 3-brane of type IIB superstring theory", "Self-duality of Born-Infeld action and Dirichlet 3-brane of type IIB superstring theory" ]
[ "A A Tseytlin \nOn leave from Lebedev Physics Institute\nMoscow\n", "⋆ ", "\nTheoretical Physics Group, Blackett Laboratory\nImperial College\nSW7 2BZLondonU.K\n" ]
[ "On leave from Lebedev Physics Institute\nMoscow", "Theoretical Physics Group, Blackett Laboratory\nImperial College\nSW7 2BZLondonU.K" ]
[]
D-brane actions depend on a world-volume abelian vector field and are described by Born-Infeld-type actions. We consider the vector field duality transformations of these actions. Like the usual 2d scalar duality rotations of isometric string coordinates imply target space T-duality, this vector duality is intimately connected with the SL(2, Z)-symmetry of type IIB superstring theory. We find that, in parallel with the generalised 4-dimensional Born-Infeld action, the action of the 3-brane of type IIB theory is SL(2, Z) self-dual. This indicates that the 3-brane should play a special role in type IIB theory and also suggests a possibility of its 12-dimensional reformulation.
10.1016/0550-3213(96)00173-3
[ "https://arxiv.org/pdf/hep-th/9602064v3.pdf" ]
12,644,866
hep-th/9602064
bfd649a016c805291505b7f06bd9914df8365313
Self-duality of Born-Infeld action and Dirichlet 3-brane of type IIB superstring theory

A. A. Tseytlin (on leave from Lebedev Physics Institute, Moscow)
Theoretical Physics Group, Blackett Laboratory, Imperial College, London SW7 2BZ, U.K.
Imperial/TP/95-96/26, hep-th/9602064 (arXiv:hep-th/9602064v3, 28 Mar 1996; February 1996)

D-brane actions depend on a world-volume abelian vector field and are described by Born-Infeld-type actions. We consider the vector field duality transformations of these actions. Like the usual 2d scalar duality rotations of isometric string coordinates imply target space T-duality, this vector duality is intimately connected with the SL(2, Z)-symmetry of type IIB superstring theory. We find that, in parallel with the generalised 4-dimensional Born-Infeld action, the action of the 3-brane of type IIB theory is SL(2, Z) self-dual. This indicates that the 3-brane should play a special role in type IIB theory and also suggests a possibility of its 12-dimensional reformulation.

Introduction

Type II superstring theories have supersymmetric p-brane solitonic solutions supported by Ramond-Ramond (R-R) sources [1,2,3]. These solitons have an alternative description in terms of open strings with Dirichlet boundary conditions. D-branes [4,5,6] provide an important tool [7] for studying non-perturbative properties [8,9] of superstring theories (see, e.g., [10,11,12,13,14,15,16,17]). They are described by an effective Dirac-Born-Infeld (DBI) action [5] (supplemented with extra couplings to R-R fields [11,15,18,19,20]) which is closely connected to the Born-Infeld (BI) type effective action of open string theory [21,22,23,24,25,26]. The bosonic part of the DBI action of a D-p-brane is given by a $d = p+1$ dimensional world-volume integral and depends on the embedding coordinates $X^\mu$ and the world-volume vector field $A_m$.
Our aim below will be to consider in detail the case of the D-3-brane of type IIB theory, which (like the 2-brane in the type IIA case) should play a special role in this theory. The corresponding action can be interpreted as a generalisation of the $d = 4$ BI action coupled to a special background metric, dilaton, axion, etc. Using the remarkable fact that, like the Maxwell action, the $d = 4$ BI action is invariant [27,28,29] under the semiclassical vector duality transformation ($A_m \to \tilde A_m$), we shall demonstrate that the D-3-brane action (combined with the type IIB effective action) is invariant under the $SL(2,Z)$ symmetry of type IIB theory [30,8,9].¹ This is consistent with expectations [33,10] that while there are $SL(2,Z)$ multiplets of type IIB string solutions, the self-dual 3-brane solution [1,2] should be unique.

As in the case of the relation between the scalar-scalar world-sheet duality and T-duality of string theory, we suggest that the world-volume vector duality of D-brane actions is a key to understanding the $SL(2,Z)$ symmetry of type IIB theory. The presence in the D-3-brane action of a propagating world-volume vector field with 2 physical degrees of freedom (and the analogy with the D-2-brane, related [18,19] to the 11-dimensional supermembrane [34]) points towards the possible existence of a (non-Lorentz-covariant) 12-dimensional D-3-brane formulation of type IIB theory.

As a preparation for the discussion of D-brane world-volume vector field duality transformations, we shall first study (in Section 2) the semiclassical duality transformations of Born-Infeld actions in different dimensions. The D-brane actions in type II theories may be found by computing the superstring partition function in the presence of a D-brane hypersurface, which can be probed by virtual closed strings only through virtual open strings with ends attached to the surface [4]. In Section 3 we shall present a direct path integral derivation of the DBI action [5] which explains its relation to the $D = 10$ BI action.
We shall then include the couplings to the background fields of the R-R sector [15,19] and obtain the explicit form (of the leading terms in the derivative expansion) of the D-p-brane actions for p = 1, 2, 3. The duality transformations of these actions will be considered in Section 4. The cases of p = 1, 2 were previously discussed in [19,20], but our approach and interpretation are somewhat different. We shall find that while the action of the D-string transforms covariantly under the $SL(2,R)$ duality of type IIB theory, the D-3-brane action is 'self-dual', i.e. is mapped into itself provided one also duality-transforms the $d = 4$ world-volume vector field. Some conclusions which follow from the analysis of D-brane actions will be presented in Section 5. In particular, we shall suggest that, like the D-2-brane in type IIA theory, the D-3-brane plays a special role in type IIB theory. We shall also argue that the analogy with the 2-brane case indicates that the dimension 'associated' with D-3-branes is 12 (different remarks on a possible 12-dimensional connection of type IIB theory appeared in [35,36,37]).

Duality transformations of Born-Infeld actions

Before turning to D-brane actions, let us discuss semiclassical duals of similar Born-Infeld actions in different dimensions. Dualities between various gaussian fields (scalar-scalar in $d = 2$, scalar-vector in $d = 3$, vector-vector in $d = 4$, etc.) are, in general, true at the quantum (path integral) level. Restricting consideration to the semiclassical level, duality transformations can be applied to more general non-gaussian actions depending only on field strengths. Such a 'semiclassical duality' is a relation between two actions which lead to dual sets of classical equations and have the same 'on-shell' value. Suppose $L(F)$ is a Lagrangian that depends on an $n$-form field in $d$ dimensions only through its field strength. Then $L'(F,\tilde A) = L(F) + iF\,d\tilde A$, where $\tilde A$ is an $\tilde n$-form field ($\tilde n = d - n - 2$), is equivalent to $L(dA)$.
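As a warm-up, this Legendre-type step can be run explicitly on a one-variable toy reduction of the $d = 3$ case treated below (where $\det(\delta_{mn} + F_{mn}) = 1 + \tfrac12 F_{mn}F^{mn}$): take $L(v) = \sqrt{1+v^2}$, add the multiplier term $i w v$, and extremise over $v$. This is an illustrative sketch, not from the paper; the sample value of $w$ is arbitrary.

```python
import cmath
import math

# semiclassical duality for L(v) = sqrt(1 + v^2): add i*w*v, extremise over v,
# and read off the dual Lagrangian as the on-shell value
w = 0.8
L = lambda v: cmath.sqrt(1 + v**2) + 1j*w*v

# the saddle point is imaginary: v* = -i w / sqrt(1 + w^2)
v_star = -1j*w/math.sqrt(1 + w**2)

# stationarity: numerical derivative of L vanishes at v*
eps = 1e-6
assert abs((L(v_star + eps) - L(v_star - eps))/(2*eps)) < 1e-7

# on-shell value is the dual Born-Infeld-type Lagrangian sqrt(1 + w^2)
assert abs(L(v_star) - math.sqrt(1 + w**2)) < 1e-12
```

The same pattern (solve the multiplier-extended Lagrangian for the field strength, substitute back) is what is carried out componentwise for the full Born-Infeld actions below.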
If $L$ is an algebraic function of $F$ (i.e. it does not depend on derivatives of $F$), then solving $\partial L'/\partial F = 0$ for $F$ and substituting the solution into $L'$ gives a local 'dual' Lagrangian $\tilde L(d\tilde A)$.² In the special case $n = \tilde n = \frac{1}{2}(d-2)$ ($n = 0$ in $d = 2$, $n = 1$ in $d = 4$, $n = 2$ in $d = 6$, etc.) it may happen that for some $L$ its 'semiclassical dual' $\tilde L$ is the same function of $d\tilde A$ as $L$ is of $dA$. Such an $L$ can be called 'self-dual' in the sense that duality maps $L(dA)$ into $L(d\tilde A)$ (in general, up to an additional transformation of 'spectator' background fields). The usual gaussian choice $L = F^2$ is of course of that type. A remarkable non-trivial example of a semiclassically self-dual Lagrangian is the Born-Infeld Lagrangian in four dimensions [27,28,29].³

³ More general self-dual $d = 4$ vector Lagrangians $L(F)$ should satisfy a functional constraint which is a first-order Hamilton-Jacobi-type partial differential equation. It has a family of solutions parametrised by one function of one variable [29]. The construction of non-gaussian self-dual actions for higher $n = \tilde n = \frac{1}{2}(d-2)$ forms (e.g. for $n = 2$ in $d = 6$) is, in principle, straightforward; in particular, there exists a direct generalisation of the BI action to $n > 1$, $d = 2n + 2$ [29].

Let us start with the Born-Infeld action [40] in its simplest flat-space form (generalisations to the presence of other fields and couplings are straightforward)

$$S_d = \int d^d x\, \sqrt{\det(\delta_{mn} + F_{mn})}\ , \qquad F_{mn} = \partial_m A_n - \partial_n A_m\ , \eqno(2.1)$$

where we assume the $d$-dimensional space to have euclidean signature. To perform the semiclassical duality transformation we add the Lagrange multiplier term ($\Lambda_{mn} = -\Lambda_{nm}$)

$$S_d = \int d^d x\, \big[\sqrt{\det(\delta_{mn} + F_{mn})} + \tfrac{1}{2} i\Lambda^{mn}(F_{mn} - 2\partial_m A_n)\big]\ , \eqno(2.2)$$

and first solve for $A_m$, finding that

$$d=2:\ \Lambda_{mn} = \epsilon_{mn}\Lambda_0\,,\quad \Lambda_0 = {\rm const}\ ;\qquad d=3:\ \Lambda_{mn} = \epsilon_{mnk}\partial_k \tilde A\ ; \eqno(2.3)$$

$$d=4:\ \Lambda_{mn} = \epsilon_{mnkl}\partial_k \tilde A_l\ ;\qquad d\geq 5:\ \Lambda_{nk} = \tfrac{1}{(d-3)!}\,\epsilon_{nkm_1\ldots m_{d-2}}\partial_{m_1}\tilde A_{m_2\ldots m_{d-2}}\ .$$

Then we solve for $F_{mn}$ and get the dual action $\tilde S_d(\tilde F)$, $\tilde F = d\tilde A$. The BI action is semiclassically equivalent to [38,39]

$$S_d(F,V) = \int d^d x\, \big[\tfrac{1}{2} V \det(\delta_{mn} + F_{mn}) + \tfrac{1}{2} V^{-1}\big]\ , \eqno(2.4)$$

where $V(x)$ is an auxiliary field. Using (2.4) leads to a slight simplification in two and three dimensions, since for $d = 2,3$ one has $\det(\delta_{mn} + F_{mn}) = 1 + \tfrac{1}{2}F_{mn}F^{mn}$, so that (2.4) is quadratic in $F$ and solving for $F$ in the analogue of (2.2) is straightforward. One ends up with

$$\tilde S_{2,3} = \int d^d x\, \big[\tfrac{1}{2} V + \tfrac{1}{2} V^{-1}(1 + \tfrac{1}{2}\Lambda_{mn}\Lambda^{mn})\big]\ , \eqno(2.5)$$

or, after elimination of $V$, with

$$\tilde S_{2,3} = \int d^d x\, \sqrt{1 + \tfrac{1}{2}\Lambda_{mn}\Lambda^{mn}} = \int d^d x\, \sqrt{\det(\delta_{mn} + \Lambda_{mn})}\ , \eqno(2.6)$$

$$\tilde S_2 = \int d^2 x\, \sqrt{1 + \Lambda_0^2}\ ,\qquad \tilde S_3 = \int d^3 x\, \sqrt{1 + \partial_m\tilde A\,\partial^m\tilde A} = \int d^3 x\, \sqrt{\det(\delta_{mn} + \partial_m\tilde A\,\partial_n\tilde A)}\ .$$

Thus the dual of the BI action in $d = 2$ is a constant (or a 'cosmological term') [41], while the dual in $d = 3$ is a membrane-type action for a scalar field.

In higher dimensions the antisymmetric matrix $F_{mn}$ can be skew-diagonalised in terms of $[d/2]$ invariant eigenvalues $f_1, \ldots, f_{[d/2]}$, so that

$$L = \sqrt{(1+f_1^2)\cdots(1+f_{[d/2]}^2)} + i\lambda_1 f_1 + \ldots + i\lambda_{[d/2]} f_{[d/2]}\ , \eqno(2.7)$$

where $\lambda_1 = \Lambda_{12}$, etc. In the case of $d = 4,5$, $\det(\delta_{mn} + F_{mn}) = (1+f_1^2)(1+f_2^2)$, so that solving for $f_1, f_2$ and substituting back into the action gives

$$\tilde L = \sqrt{(1+\lambda_1^2)(1+\lambda_2^2)} = \sqrt{\det(\delta_{mn} + \Lambda_{mn})}\ . \eqno(2.8)$$

Thus the BI action in $d = 4$ is 'self-dual', i.e.

$$\tilde S_4 = \int d^4 x\, \sqrt{\det(\delta_{mn} + \tilde F_{mn})}\ ,\qquad \tilde F_{mn} = \partial_m\tilde A_n - \partial_n\tilde A_m\ .$$

Consider now the generalisation

$$S_4 = \int d^4 x\, \big[\sqrt{\det(\delta_{mn} + e^{-\frac{1}{2}\phi} F_{mn})} + \tfrac{1}{8} i\epsilon^{mnkl} C F_{mn}F_{kl}\big]\ , \eqno(2.11)$$

where $C$ is a background axion field and we have also introduced a background dilaton field $\phi$, with $e^{\frac{1}{2}\phi}$ playing the role of an effective gauge coupling constant. The analogue of (2.7) then is

$$L_4 = \sqrt{(1 + e^{-\phi} f_1^2)(1 + e^{-\phi} f_2^2)} + i\lambda_1 f_1 + i\lambda_2 f_2 + iC f_1 f_2\ . \eqno(2.12)$$

Solving for $f_1$ and $f_2$ one finds

$$f_1 = -i\Delta^{-1}\Big(\sqrt{\tfrac{\Delta + \lambda_2^2}{\Delta + \lambda_1^2}}\;\lambda_1 - i e^{\phi} C \lambda_2\Big)\ , \eqno(2.13)$$

$$f_2 = -i\Delta^{-1}\Big(\sqrt{\tfrac{\Delta + \lambda_1^2}{\Delta + \lambda_2^2}}\;\lambda_2 - i e^{\phi} C \lambda_1\Big)\ ,\qquad \Delta \equiv e^{-\phi} + e^{\phi} C^2\ .$$
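Two of the $d = 4$ statements above are easy to spot-check numerically: the determinant identity $\det(\delta_{mn} + F_{mn}) = 1 + \tfrac12 F_{mn}F^{mn} + ({\rm Pf}\,F)^2$ underlying (2.7)-(2.8), and the claim that (2.13) extremises the Lagrangian with a self-dual on-shell value. The script below is an illustrative check, not part of the paper; the numerical values are arbitrary, and it tests the pure-BI point $\phi = C = 0$ of (2.13), where $\Delta = 1$.

```python
from itertools import permutations
import cmath
import math

def det4(M):
    # 4x4 determinant via the Leibniz sum over permutations
    total = 0.0
    for p in permutations(range(4)):
        sign, q = 1, list(p)
        for i in range(4):          # parity of the permutation by cycle sorting
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                sign = -sign
        prod = 1.0
        for i in range(4):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

# --- identity det(delta + F) = 1 + (1/2) F_mn F^mn + (Pf F)^2 in d = 4 ---
a, b, c, d_, e, f = 0.3, -0.7, 0.2, 0.5, -0.4, 0.9   # F01..F23, arbitrary
F = [[0, a, b, c], [-a, 0, d_, e], [-b, -d_, 0, f], [-c, -e, -f, 0]]
M = [[(1.0 if i == j else 0.0) + F[i][j] for j in range(4)] for i in range(4)]
FF = sum(F[i][j]**2 for i in range(4) for j in range(4))   # F_mn F^mn
Pf = a*f - b*e + c*d_                                      # Pfaffian of F
assert abs(det4(M) - (1 + 0.5*FF + Pf**2)) < 1e-12

# --- eq.(2.13) at phi = C = 0 extremises (2.7) with on-shell value (2.8) ---
l1, l2 = 0.3, 0.7
def L(f1, f2):          # eq.(2.7) for d = 4
    return cmath.sqrt((1 + f1**2)*(1 + f2**2)) + 1j*l1*f1 + 1j*l2*f2

f1 = -1j*l1*math.sqrt((1 + l2**2)/(1 + l1**2))   # (2.13) with Delta = 1
f2 = -1j*l2*math.sqrt((1 + l1**2)/(1 + l2**2))

eps = 1e-6
assert abs((L(f1 + eps, f2) - L(f1 - eps, f2))/(2*eps)) < 1e-7   # stationarity
assert abs((L(f1, f2 + eps) - L(f1, f2 - eps))/(2*eps)) < 1e-7

L_dual = math.sqrt((1 + l1**2)*(1 + l2**2))      # dual value, eq.(2.8)
assert abs(L(f1, f2) - L_dual) < 1e-10
```

The same check can be repeated for other multiplier values with $|\lambda_1\lambda_2| < 1$, where the square-root branch is unambiguous.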
The final expression for the dual action is then

$$\tilde S_4 = \int d^4 x\, \big[\sqrt{(1 + e^{-\tilde\phi}\lambda_1^2)(1 + e^{-\tilde\phi}\lambda_2^2)} + i\tilde C\lambda_1\lambda_2\big] = \int d^4 x\, \big[\sqrt{\det(\delta_{mn} + e^{-\frac{1}{2}\tilde\phi}\tilde F_{mn})} + \tfrac{1}{8} i\epsilon^{mnkl}\tilde C\tilde F_{mn}\tilde F_{kl}\big]\ , \eqno(2.14)$$

where

$$e^{-\tilde\phi} = \frac{1}{e^{-\phi} + e^{\phi} C^2}\ ,\qquad \tilde C = -\frac{C e^{\phi}}{e^{-\phi} + e^{\phi} C^2}\ , \eqno(2.15)$$

i.e. the standard inversion $\tilde\lambda = -1/\lambda$, $\lambda \equiv C + i e^{-\phi}$. Since the action (2.11) is also invariant under the shift $C \to C + q$, $q = {\rm const}$, the two transformations generate the full $SL(2,R)$ invariance group

$$A_m \to \tilde A_m\ ,\qquad \lambda \to \tilde\lambda = \frac{p\lambda + q}{r\lambda + s}\ ,\qquad \lambda \equiv C + i e^{-\phi}\ . \eqno(2.16)$$

One can also replace $\delta_{mn}$ in (2.11) by a general metric $g_{mn}$, which should be invariant under $SL(2,R)$. We conclude that the generalised BI action (2.11) is $SL(2,R)$-self-dual, i.e. its form is invariant under the vector duality accompanied by the $SL(2,R)$ transformation of the background fields $\phi, C$. An equivalent observation about the $SL(2,R)$ invariance of the equations of motion that follow from this generalised axion-dilaton-BI action was made in [32]. As we shall see in Section 4.3, a similar action describes the dynamics of the D-3-brane of type IIB superstring theory.

D-brane actions

Below we shall first describe a direct path integral derivation of the D-brane actions, explaining, in particular, the relation between the BI action of [21] and the DBI action of [5]. While an understanding of a fundamental string action as a source added to the effective string field theory action needs a non-perturbative ('wormhole') resummation of the string loop expansion [42], the open string description [4,7] of D-branes provides a simple perturbative recipe for deriving their actions. Since D-branes are described by BI-type actions [5], we shall then (in Section 4) use the results of Section 2 to find how these actions change under vector duality transformations.

Path integral derivation of D-brane action

D-branes represent soliton configurations in superstring theory which carry R-R charges [7].
Within string perturbation theory they are 'composite' objects which have an effective 'thickness' of order $\sqrt{\alpha'}$ [12,13]. Like the effective field theory action $S_{\rm eff}$ for the massless string modes, the effective action $S_{\rm D-brane}$ for a D-brane moving in a massless string background is given by a power series in $\alpha'$.⁴ The tree-level closed string effective action $S_{\rm eff}$ can be represented in terms of the ('renormalised') string partition function on the sphere [43,44]. In the case of the open string theory the effective action is given by the partition function on the disc [21,24,25] (loop corrections can be found by adding partition functions on higher genus surfaces [45]). In the superstring case the leading term in the disc partition function is finite (there is no quadratic $SL(2,R)$ Möbius volume infinity [25]) and is equal to the BI action. Our aim is to find the D-brane action in a similar way, by evaluating the string path integral in the presence of a D-brane. In view of the open string connection [4], the D-brane effective action (reconstructed in [5] from the equations of motion obtained from the conformal invariance conditions [46,22]) is determined by the boundary background couplings [4,5]. The combined action of the string massless modes and the D-brane source can be represented as ($t$ is a logarithm of the 2d cutoff)

$$S_{\rm eff}(G,B,\phi;C) + S_{\rm D-brane}(X^i, A_m; G,B,\phi;C) = \Big(\frac{\partial}{\partial t} Z_{\rm sphere}\Big)_{t=1} + Z_{\rm disc}\ , \eqno(3.1)$$

$$Z_{\rm disc} = \int [dx\, d\psi]\, e^{-I}\ ,\qquad I = I_M + I_{\partial M}\ , \eqno(3.2)$$

$$I_M = \frac{1}{4\pi\alpha'}\int d^2\xi\, \big[(\sqrt{g}\, g^{ab} G_{\mu\nu} + i\epsilon^{ab} B_{\mu\nu})(x)\,\partial_a x^\mu \partial_b x^\nu + \alpha' R^{(2)}\phi(x)\big] + \ldots\ , \eqno(3.3)$$

$$I_{\partial M} = \frac{1}{2\pi\alpha'}\int d\tau\, \big[i A_m(x)\partial_\tau x^m + X_i(x)\partial_\perp x^i + \alpha' e K\phi(x)\big] + \ldots\ . \eqno(3.4)$$

Here $C$ denotes the background $r$-form fields from the R-R sector ($C_1, C_3$ in type IIA and $C, C_2, C_4$ in type IIB theories) and dots stand for fermionic terms.⁵

⁵ An alternative possibility is to use the boundary condition [5] $x^i|_{\partial M} = X^i(x^m)|_{\partial M}$. This has the advantage of making obvious the interpretation of $X^i$ as the collective coordinates of the D-brane, but is less natural from the point of view of computing the vacuum partition function and also breaks the symmetry between $A_m$ and $X_i$ in $I_{\partial M}$. The two approaches are simply related at leading order of the semiclassical expansion [5]: imposing $x^i|_{\partial M} = X^i(x^m)|_{\partial M}$ while setting the $X_i$-coupling in (3.4) to zero, and making the shift $x^i \to x^i + X^i|_{\partial M}$ so that the new $x^i$ is subject to $x^i|_{\partial M} = 0$, one finds that $I_{\partial M}$ acquires the coupling with $X_i = G_{ij}X^j + \ldots$ . In general the transformation between them is complicated and should involve redefinitions of background fields.

Choosing the flat 2d metric (and ignoring the dilaton couplings) we can formally re-write the combined action $I$ as

$$I = \frac{1}{4\pi\alpha'}\int d^2\xi\, \Big[(\delta^{ab} G_{\mu\nu} + i\epsilon^{ab} B_{\mu\nu})(x)\,\partial_a x^\mu \partial_b x^\nu + \partial_a\big(i\epsilon^{ab} A_m(x)\partial_b x^m + X_i(x)\partial^a x^i\big)\Big] + \ldots\ . \eqno(3.5)$$

Let us first assume that the closed string couplings do not depend on $x^i$ (and are slowly varying in $x^m$). Then to compute the partition function (3.2) we may first do explicitly the gaussian integration over $x^i$. Because of the absence of the zero mode of $x^i$ this will not produce the standard $\det G_{ij}$ factor (usually present in the covariant $\sigma$-model measure). The only change will be in the couplings in the action (3.5), which will now depend on the $d = p+1$ coordinates $x^m$ with standard Neumann boundary conditions. Expanding $x^m(\xi) = x_0^m + y^m(\xi)$ one can shift the $B(x_0)\partial y\partial y$ term to the boundary. Integrating first over the values of $y^m$ at the internal points of the disc and then over their boundary values, one finds as in [21,45] that to the leading order in the expansion in derivatives of fields ($d = p+1$)

$$Z_{\rm disc} = c_1 \int d^d x_0\, e^{-\phi}\sqrt{\det(\hat G_{mn} + \hat B_{mn} + F_{mn})} + \ldots\ , \eqno(3.6)$$

$$\hat G_{mn} \equiv G_{\mu\nu}(x_0)\,\partial_m X^\mu \partial_n X^\nu\ ,\qquad \hat B_{mn} \equiv B_{\mu\nu}(x_0)\,\partial_m X^\mu \partial_n X^\nu\ ,$$
$$X^\mu \equiv (x_0^m,\ G^{ij}X_j(x_0))\ ,\qquad \partial_m = \frac{\partial}{\partial x_0^m}\ ,\qquad F_{mn} = \partial_m A_n - \partial_n A_m\ .$$

Equivalently, we may compute $Z_{\rm disc}$ in a more '10d-symmetric' way by first performing the duality transformation [4] $x^i \to \tilde x^i$ in (3.5). Then the boundary term takes the standard open string form with $A_\mu = (A_m, X_i)$, so that the result is just the usual $D = 10$ BI action for the $O(9-p,9-p)$ dual background, up to a factor (which as usual can be absorbed into the dilaton) accounting for the fact that $\tilde x^i$ does not have a zero mode part:

$$Z_{\rm disc} = c_1 \int d^d x_0\, e^{-\phi}\,\frac{1}{\sqrt{\det(\tilde G^{ij} + \tilde B^{ij})}}\,\sqrt{\det(\tilde G_{\mu\nu} + \tilde B_{\mu\nu} + F_{\mu\nu})} + \ldots\ , \eqno(3.7)$$

$$\tilde G^{ij} + \tilde B^{ij} = (G_{ij} + B_{ij})^{-1}\ ,\qquad \tilde G^i{}_m + \tilde B^i{}_m = (G_{ij} + B_{ij})^{-1}(G_{jm} + B_{jm})\ ,\ \ldots\ ,$$
$$F_{mn} = \partial_m A_n - \partial_n A_m\ ,\qquad F_{mi} = \partial_m X_i\ ,\qquad F_{ij} = 0\ . \eqno(3.8)$$

The two expressions (3.7),(3.6) agree for constant $G, B$. The background fields $G, B, \phi$ in (3.6) depend only on $x_0^m$. When their dependence on $x^i$ is taken into account in (3.5), one expects to find (3.6) with the fields depending on $X^\mu(x_0) = (x_0^m, X^i(x_0))$. This indeed is what happens, since one can re-introduce the zero mode for $x^i$ by the shift $x^i \to x^i + X^i(x_0)$, which eliminates the linear boundary coupling term $X_i(x_0)\partial_\perp x^i$. The resulting D-p-brane action $S_d = Z_{\rm disc}$ is the same as the DBI action of [5] (here $d = p+1$; in what follows we omit the index 0 on $x^m$)

$$S_d = \int d^d x\, e^{-\phi}\sqrt{\det(\hat G_{mn} + \hat F_{mn})} + \ldots\ , \eqno(3.9)$$

$$\hat F_{mn} = F_{mn} + \hat B_{mn}\ ,\qquad \hat G_{mn} + \hat B_{mn} = (G_{\mu\nu} + B_{\mu\nu})(X)\,\partial_m X^\mu \partial_n X^\nu\ .$$

It is natural to assume that the above action can be generalised to make it manifestly $D = 10$ Lorentz and world-volume diffeomorphism invariant by relaxing the static gauge condition, i.e. treating all $X^\mu = (X^m, X^i)$ as 10 independent fields in $d$ dimensions. This will be assumed in what follows.⁶

⁶ It depends only on transverse

Couplings to R-R background fields

In addition to the above couplings (3.9) to the NS-NS background fields ($G, B, \phi$), D-brane actions contain couplings to the background fields $C_r$ of the R-R sector of type II theories.
The leading-order couplings are effectively given by the fermionic zero mode ($\psi_0^m$) factor and can be computed systematically [11,15,19,20] using the techniques of [26]. Since the $\sigma$-model action contains the boundary term $F_{mn}\psi^m\psi^n$ plus couplings to the R-R fields $C_r$, the resulting leading-order term [15] has the following symbolic structure:⁷

$$\int d^d x_0\, d^d\psi_0\, \sum_r C_r\,(\psi_0)^r\, e^{F\psi_0\psi_0}\ .$$

In addition, for 'magnetic' D-branes ($p \geq 3$) the conformal invariance conditions imply that there should be a source term in the Bianchi identities for the field strengths $dC_r$ [7,15,19]. The explicit form of these coupling terms thus depends on the particular $p$. For $p = d - 1 = 1, 2, 3$ one finds (see also Section 5)

$$S_2 = \int d^2 x\, \big[e^{-\phi}\sqrt{\det(\hat G_{mn} + \hat F_{mn})} + \tfrac{1}{2} i\epsilon^{mn}(\hat C_{mn} + C\hat F_{mn}) + \ldots\big]\ , \eqno(3.10)$$

$$S_3 = \int d^3 x\, \big[e^{-\phi}\sqrt{\det(\hat G_{mn} + \hat F_{mn})} + \tfrac{1}{2} i\epsilon^{kmn}\big(\tfrac{1}{3}\hat C_{kmn} + \hat C_k\hat F_{mn}\big) + \ldots\big]\ , \eqno(3.11)$$

$$S_4 = \int d^4 x\, \big[e^{-\phi}\sqrt{\det(\hat G_{mn} + \hat F_{mn})} + \tfrac{1}{8} i\epsilon^{mnkl}\big(\tfrac{1}{3}\hat C_{mnkl} + 2\hat C_{mn}\hat F_{kl} + C\hat F_{mn}\hat F_{kl}\big) + \ldots\big]\ , \eqno(3.12)$$

where $\hat C_m, \hat C_{mnk}$ and $C, \hat C_{mn}, \hat C_{mnkl}$ are the projections of the R-R fields of type IIA and type IIB theories ($\hat C_m = C_\mu(X)\partial_m X^\mu$, etc.) and dots stand for fermionic and higher-order terms.⁸

⁷ One way to understand this expression is to note that the zero mode count on the disc implies that one of the R-R vertex operators should depend explicitly on the potential $C_r$ and not on its strength [16].

These D-brane actions may be combined according to (3.1) with the effective action $S_{\rm eff}$ for the massless string modes. For example, in the case of type IIB theory [30] $S_{\rm eff}$ is given by [47,48]

$$S_{\rm eff}^{IIB} = c_0 \int d^{10}x\, \sqrt{G}\, \Big\{ e^{-2\phi}\big[R + 4(\partial\phi)^2 - \tfrac{3}{4}(\partial B_2)^2\big] - \tfrac{1}{2}(\partial C)^2 - \tfrac{3}{4}(\partial C_2 - C\partial B_2)^2 - \tfrac{5}{6} F^2(C_4) - \tfrac{1}{48}\,\frac{\epsilon_{10}}{\sqrt{G}}\, C_4\,\partial C_2\,\partial B_2 \Big\} + \ldots\ , \eqno(3.13)$$

where $\partial B_2 = \partial_{[\mu} B_{\nu\lambda]}$, etc., $F(C_4) = \partial C_4 + \tfrac{3}{4}(B_2\partial C_2 - C_2\partial B_2)$, and, following [48], it is assumed that the (conformally-invariant) self-duality constraint on $F(C_4)$ ($F = F^*$) is to be added at the level of the equations of motion. This action is invariant under the $SL(2,R)$ symmetry [30,47]

$$g_{\mu\nu} \to g_{\mu\nu}\ ,\qquad \lambda \to \frac{p\lambda + q}{r\lambda + s}\ , \eqno(3.14)$$

$$B_{\mu\nu} \to sB_{\mu\nu} - rC_{\mu\nu}\ ,\quad C_{\mu\nu} \to pC_{\mu\nu} - qB_{\mu\nu}\ ,\qquad g_{\mu\nu} \equiv e^{-\frac{1}{2}\phi} G_{\mu\nu}\ ,\quad \lambda \equiv C + ie^{-\phi}\ , \eqno(3.15)$$

which is expected to be an exact duality symmetry of type IIB superstring theory [8,9,33]. A natural question is whether this symmetry is present in the combined action of the type IIB low-energy field theory and a D-brane source. As we shall see below, the combined equations of motion are indeed covariant under $SL(2,R)$ in the case of the D-string and the D-3-brane. To demonstrate this it is necessary to perform a world-volume vector duality transformation similar to the one discussed above for the BI action.

Duality transformations of D-brane actions

Since the D-brane actions (3.9)-(3.12) depend on $A_m$ only through $F_{mn}$, one can perform the semiclassical duality transformation as in the case of the BI actions in Section 2, i.e. by adding the Lagrange multiplier term $\tfrac{1}{2} i\Lambda^{mn}(F_{mn} - 2\partial_m A_n - \hat B_{mn})$ and eliminating $F_{mn}$ and $A_m$ from the action using their equations of motion.⁹ The resulting dual action has an equivalent set of equations of motion (with the roles of 'dynamical equations' and 'Bianchi identities' interchanged).

D-string

Starting with (3.10) we find that, as in (2.2),(2.3), the $A_m$-equation still implies $\Lambda = \Lambda_0 = {\rm const}$, so that [19]

$$\tilde S_2 = \int d^2 x\, \Big[\sqrt{e^{-2\phi} + (\Lambda_0 + C)^2}\,\sqrt{\det\hat G_{mn}} + \tfrac{1}{2} i\epsilon^{mn}(\hat C_{mn} - \Lambda_0\hat B_{mn})\Big]\ . \eqno(4.1)$$

The dual to the D-string action can thus be interpreted as the standard fundamental string action in an $SL(2,R)$ transformed metric and antisymmetric tensor background (cf. (3.14) with $p = 0$, $q = -1$, $r = 1$, $s = \Lambda_0$, $\lambda \to -1/(\lambda + \Lambda_0)$)

$$\tilde S_2 = \int d^2 x\, \Big(\sqrt{\det\hat G'_{mn}} + \tfrac{1}{2} i\epsilon^{mn}\hat B'_{mn}\Big)\ , \eqno(4.2)$$

$$G'_{\mu\nu} \equiv \sqrt{e^{-2\phi} + (\Lambda_0 + C)^2}\; G_{\mu\nu}\ ,\qquad B'_{\mu\nu} \equiv C_{\mu\nu} - \Lambda_0 B_{\mu\nu}\ , \eqno(4.3)$$

where $\hat G'_{mn} = G'_{\mu\nu}\partial_m X^\mu\partial_n X^\nu$, etc. This action (and its sum with (3.13)) is thus covariant under the type IIB $SL(2,R)$ transformations (3.15) of the background fields.
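A small consistency check (illustrative, with arbitrary sample values): the tension factor in (4.1) is precisely $|\lambda + \Lambda_0|$, the modulus of the shifted complex coupling $\lambda = C + ie^{-\phi}$ — the hallmark of an $SL(2,Z)$-covariant family of strings — and likewise $\Delta(\Lambda_0) \equiv e^{-\phi} + e^{\phi}(\Lambda_0 + C)^2 = e^{\phi}|\lambda + \Lambda_0|^2$.

```python
import math

phi, C, L0 = 0.3, -0.6, 2.0        # sample dilaton, axion and multiplier value
lam = C + 1j*math.exp(-phi)        # lambda = C + i e^{-phi}

# tension factor multiplying sqrt(det \hat G) in the dual D-string action (4.1)
T = math.sqrt(math.exp(-2*phi) + (L0 + C)**2)
assert abs(T - abs(lam + L0)) < 1e-12

# Delta(L0) = e^{-phi} + e^{phi}(L0 + C)^2 = e^{phi} |lam + L0|^2
Delta = math.exp(-phi) + math.exp(phi)*(L0 + C)**2
assert abs(Delta - math.exp(phi)*abs(lam + L0)**2) < 1e-12
```

Both identities hold for any real $\phi$, $C$, $\Lambda_0$, since $\lambda + \Lambda_0 = (C + \Lambda_0) + ie^{-\phi}$.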
Indeed, $G'_{\mu\nu}$ can be written as $G'_{\mu\nu} = \Delta^{1/2}(\Lambda_0)\, g_{\mu\nu}$, $\Delta(\Lambda_0) = e^{-\phi} + e^{\phi}(\Lambda_0 + C)^2$, where $g_{\mu\nu}$ is the $SL(2,R)$-invariant Einstein-frame metric. A generic $SL(2,R)$ transformation shifts the parameter $\Lambda_0$ ($\Lambda'_0 = s\Lambda_0 + q$) and also rescales the coefficient in front of the action (by the factor $p + r\Lambda_0$).¹⁰ This duality covariance was not apparent before the duality transformation of $A_m$ (though it is, of course, present also in the equivalent set of equations that follow from (4.1)). Combined with the type IIB effective action (3.13), the D-string action plays the role of a source ($X^m = x^m$, $X^i = 0$) for the set of fundamental string solutions of the type IIB effective equations constructed in [33]. These conclusions are consistent with the previous results about the $SL(2,Z)$ covariant family of type IIB strings [33,10], supporting the $SL(2,Z)$ duality symmetry of type IIB theory [8,9].

D-2-brane

In the case of $S_3$ (3.11) the Lagrange multiplier is $\Lambda_{mn} = \epsilon_{mnk}\partial_k\tilde A$ (cf. (2.3)), and thus the elimination of $F_{mn}$ (using, e.g., the representation (2.4)) gives the dual action, which is a generalisation of the expression in (2.6) (an equivalent but less straightforward derivation of this action was given in [19]):

$$\tilde S_3 = \int d^3 x\, \Big[ e^{-\phi}\sqrt{\det\{\hat G_{mn} + e^{2\phi}(\partial_m\tilde A + \hat C_m)(\partial_n\tilde A + \hat C_n)\}} + \tfrac{1}{2} i\epsilon^{mnk}\big(\tfrac{1}{3}\hat C_{mnk} - \hat B_{mn}\partial_k\tilde A\big)\Big]\ . \eqno(4.4)$$

This action can be re-written in the standard membrane action form with one extra scalar coordinate $\tilde A$:

$$\tilde S_3 = \int d^3 x\, \Big(\sqrt{\det\hat G'_{mn}} + \tfrac{1}{6} i\epsilon^{mnl}\hat B'_{mnl}\Big)\ , \eqno(4.5)$$

$$\hat G'_{mn} = e^{-\frac{2}{3}\phi}\hat G_{mn} + e^{\frac{4}{3}\phi}(\partial_m\tilde A + \hat C_m)(\partial_n\tilde A + \hat C_n)\ ,\qquad \hat B'_{mnl} = \hat C_{mnl} - 3\hat B_{[mn}\partial_{l]}\tilde A\ .$$

Dualising $A_m$ one thus finds an action which (for trivial background fields) has a hidden global Lorentz symmetry $SO(1,10)$ of the 11-dimensional theory, as previously suggested in [49,31]. Because of the same underlying supersymmetry and field content (as implied by the dimensional reduction relation between $D = 11$ supergravity and the type IIA low-energy effective action), it is not actually surprising to find that the D-2-brane action is dual to the direct dimensional reduction (fields do not depend on $X^{11} \equiv \tilde A$) of the $D = 11$ supermembrane action coupled to a $D = 11$ supergravity background [18,19]. The degrees of freedom count and the requirement of supersymmetry of a D-2-brane action in a background of $N = 2a$, $D = 10$ supergravity uniquely fix its form to be equivalent to that of the dimensionally reduced $D = 11$ supermembrane action [34]. This observation can be used to determine [18] the fermionic terms in the D-membrane action (4.4) by starting with the known $D = 11$ supermembrane action.

SL(2,R) self-duality of D-3-brane action

Let us now turn to the case of our main interest -- the D-3-brane action (3.12) -- and perform the world-volume vector duality transformation $A_m \to \tilde A_m$ as in the case of the generalised $d = 4$ BI action (2.11),(2.14). Adding the Lagrange multiplier term ($\Lambda_{mn} = \epsilon_{mnkp}\partial_k\tilde A_p$, cf. (2.3)) we find from (3.12)

$$S_4 = S'_4 + S''_4\ ,\qquad S''_4 = \int d^4 x\, \tfrac{1}{8} i\epsilon^{mnkl}\big(\tfrac{1}{3}\hat C_{mnkl} - 2\hat B_{mn}\tilde F_{kl}\big) + \ldots\ , \eqno(4.6)$$

$$S'_4 = \int d^4 x\, \big[e^{-\phi}\sqrt{\det(\hat G_{mn} + F_{mn})} + \tfrac{1}{2} i\Lambda^{mn}F_{mn} + \tfrac{1}{8} i\epsilon^{mnkl}C F_{mn}F_{kl}\big]\ , \eqno(4.7)$$

$$\Lambda_{mn} \equiv \epsilon_{mnkl}(\tilde F_{kl} + \hat C_{kl})\ ,\qquad \tilde F_{mn} = \partial_m\tilde A_n - \partial_n\tilde A_m\ .$$

The remaining problem of eliminating $F$ from $S'_4$ is solved as in the case of the BI action (2.11). Indeed, the action (3.12),(4.7) is closely related to (2.11), as can be seen by expressing it in terms of the $SL(2,R)$ invariant $D = 10$ Einstein-frame metric $g_{\mu\nu}$: $\hat G_{mn} = e^{\frac{1}{2}\phi}\hat g_{mn}$. Then the dilaton dependence becomes the same as in (2.11), with $\hat g_{mn}$ replacing $\delta_{mn}$ there. As a result, the action dual to (3.12) is found to be¹¹

$$\tilde S_4 = \int d^4 x\, \big[e^{-\tilde\phi}\sqrt{\det(\hat{\tilde G}_{mn} + \hat{\tilde F}_{mn})} + \tfrac{1}{8} i\epsilon^{mnkl}\big(\tfrac{1}{3}\hat{\tilde C}_{mnkl} + 2\hat{\tilde C}_{mn}\tilde F_{kl} + \tilde C\tilde F_{mn}\tilde F_{kl}\big) + \ldots\big]\ , \eqno(4.8)$$
Here $\hat{\tilde F}_{mn} = \tilde F_{mn} + \hat{\tilde B}_{mn}$, where the transformed fields $\tilde\phi, \tilde C$ are as in (2.15) and

$$\tilde G_{\mu\nu} = e^{\frac{1}{2}(\tilde\phi - \phi)} G_{\mu\nu}\ ,\qquad \tilde B_{\mu\nu} = C_{\mu\nu}\ ,\qquad \tilde C_{\mu\nu} = -B_{\mu\nu}\ .$$

As in the case of (2.14), these redefinitions of the background fields correspond to the basic $SL(2,R)$ inversion $\lambda \to -1/\lambda$. The duality invariance of the D-3-brane action is thus closely related to the self-duality of the $d = 4$ BI action and its $SL(2,R)$ generalisation existing in the case of the specific dilaton-axion-BI system first noted in [32].¹²

¹¹ We assume that the action (3.12),(4.6) contains a specific 'higher-order' term $\sim i\epsilon^{mnkl}\hat B_{mn}\hat C_{kl}$, as demanded by antisymmetric tensor gauge invariance.

¹² It was found in [32] that there exists a unique generalisation of the $d = 4$ BI action to the case of coupling to the dilaton $\phi$ and axion $C$ whose equations of motion are invariant under the $SL(2,R)$ duality transformations generalising the vector duality transformations:
$$S_{GR} = \int d^4 x\, \big[e^{-\phi}\sqrt{\det(e^{\frac{1}{2}\phi} g_{mn} + F_{mn})} + \tfrac{1}{8} iC\epsilon^{mnkl}F_{mn}F_{kl}\big]\ .$$
This action (which is the same as (2.11) with $\delta_{mn} \to g_{mn}$) was combined in [32] with the standard $d = 4$ axion-dilaton action
$$S_{\rm eff} = -\int d^4 x\, \sqrt{g}\,\big[R - \tfrac{1}{2}(\partial\phi)^2 - \tfrac{1}{2} e^{2\phi}(\partial C)^2\big]\ .$$
The result $S = S_{\rm eff} + S_{GR}$ is not, however, the effective action that appears (upon dimensional reduction to $D = 4$) in type I superstring theory (or that may appear [50] in $SO(32)$ heterotic string theory, as suggested by its duality to type I theory in $D = 10$): for $g_{mn}$ to be the Einstein-frame metric, its factor should be $e^{-2\phi}$, not $e^{\frac{1}{2}\phi}$. That remained a puzzle in [29,32]. As we have seen above, an action closely related to $S_{GR}$ does appear in string theory -- as the action of the D-3-brane of type IIB superstring theory. In ten (but not in four) dimensions $\hat G_{mn} = e^{\frac{1}{2}\phi}\hat g_{mn}$ is indeed the relation between the string-frame and Einstein-frame metrics.

Thus we find that, in contrast to the case of the $SL(2,R)$ covariant D-string action, the D-3-brane action is invariant under the $SL(2,R)$ transformations of the background fields of type IIB theory combined with the world-volume vector duality. This conclusion is consistent with the expectation [33] that, while there are $SL(2)$ multiplets of string (and 5-brane) solutions in type IIB theory, the self-dual 3-brane solution [1,2] should be unique; a related observation about the absence of bound states of D-3-branes was made in [10].

The supersymmetric self-dual 3-brane solution of the type IIB effective field equations [2] (the extreme limit of the black 3-brane of [1]) has a non-zero value of the $C_4$ field ($C_{mnkl} \sim \epsilon_{mnkl} f(x^i)$, $\partial_{[i_1} C_{i_2 i_3 i_4 i_5]} \sim \epsilon_{i_1 i_2 i_3 i_4 i_5 i_6}\partial_{i_6} f$) and a curved metric. It should be possible to obtain this background as a solution of the combined action of the type IIB low-energy field theory (3.13) and the D-3-brane (3.12), i.e. it should be supported by the D-3-brane source. Assuming that all the background fields except the metric $G_{mn}$ and $C_4$ are trivial, and that a consistent choice for the 3-brane fields is $X^m = x^m$, $X^i = 0$, $A_m = 0$, one finds that differentiating (3.12) over $C_{\mu\nu\lambda\kappa}$ produces a $\delta^{(6)}$-function source in the equation for $C_4$, $d{*}dC_4 \sim \delta^{(6)}(x^i)$. Since the action (3.13) is used under the prescription that the resulting equations of motion should be consistent with the self-duality of $F(C_4)$, one should also add the same source to the Bianchi identity, $d\,dC_4 \sim \delta^{(6)}(x^i)$. The self-duality equation for $dC_4$ then holds even in the presence of the 3-brane source. This prescription is in agreement with what follows from demanding conformal invariance in the presence of the 3-brane [15], and is also consistent with the interpretation of the solution [1,2] as having both 'electric' and 'magnetic' charges.¹³
Role of world-volume vector field and higher dimensional interpretations of D-branes of type II theories

Let us draw some lessons from the above discussion of D-brane actions. One of the conclusions is that, in the case of type IIB D-branes, the duality transformation of the world-volume vector $A_m$ is closely related to (and, in fact, may be the origin of) the $SL(2,Z)$ symmetry of type IIB theory. The combined type IIB + D-brane effective action is covariant under $SL(2,Z)$ provided one also performs the duality rotation of $A_m$. This is strongly analogous to the relation between the world-sheet $d = 2$ scalar duality and the target space duality symmetry of the string-theory effective action. In fact, the world-sheet duality symmetry is the reason why the string effective action is invariant under T-duality. The sum of the string effective action and the fundamental string source action is T-duality covariant provided the isometric string source coordinates are transformed simultaneously with the background fields. This analogy suggests that if type IIB superstring theory could be interpreted also as a theory of fundamental supersymmetric 3-branes, then its $SL(2,Z)$ symmetry would be a consequence of the $d = 4$ 'electro-magnetic' world-volume vector duality. Indeed, just as the string partition function in an isometric background is T-duality invariant (since the string coordinates are integrated out, their transformation is irrelevant), the fundamental 3-brane partition function in a type IIB theory background will be invariant under the $SL(2,Z)$ transformations (3.14) (since the vector field $A_m$ is an integration variable, the vector duality transformation $A_m \to \tilde A_m$ which accompanies (3.14) is irrelevant).

¹³ There should exist a more systematic approach to the derivation of the full set of equations of motion from an action principle based on doubling of the R-R fields (cf. [39]). Then the original 'electric' $C_4$ and its 'magnetic' double $\tilde C_4$ will appear in both the effective type II action and the D-brane action and should automatically lead to a consistent set of field equations with 'dyonic' sources. The dyonic nature of the 3-brane is reflected also in the fact that the 3-brane source effectively drops out from the Einstein equations: for the 3-brane metric $ds^2 = f^{-1/2}dx^m dx^m + f^{1/2}dx^i dx^i$ the $(mn)$ and $(ij)$ components of the Einstein equations ($R_{\mu\nu} \sim (F_5)^2_{\mu\nu}$) reduce to $f^{-2}\partial_i\partial_i f = 0$ and $f^{-1}\partial_i\partial_i f = 0$, which are satisfied everywhere for $f = 1 + Q/r^4$ (equivalently, that means that the 3-brane source $\delta$-function will be multiplied by a power of $r$ near $r = 0$). In contrast to the case of other 'elementary' p-brane solutions, here the same conclusion applies also to the Einstein equations with raised indices (as they follow from the effective action). This may be related to the non-singularity of the black 3-brane solution [51].

¹⁴ It was found in [2] that the supersymmetric 3-brane solution preserves half of $N = 2b$, $D = 10$ supersymmetry and thus should be described by the $8 + 8$ on-shell degrees of freedom of the $d = 4$, $N = 4$ Maxwell supermultiplet ($A_m, X^{IJ}, \lambda^I$). Since the 4 other supersymmetries are spontaneously broken, they should be realised in a non-linear way, so it was conjectured that the resulting action should be of an unusual Born-Infeld type [2]. This conjecture is indeed confirmed by the above D-brane construction of the 3-brane action.
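The claim in footnote 13 that $f = 1 + Q/r^4$ satisfies $\partial_i\partial_i f = 0$ away from $r = 0$ is just the harmonicity of $r^{-4}$ in the six transverse dimensions ($\nabla^2 r^{a} = a(a + D - 2)\,r^{a-2}$, which vanishes for $a = -4$, $D = 6$). A crude finite-difference check at an arbitrary sample point (illustrative, not from the paper):

```python
import math

Q = 2.5
def f(x):
    # f = 1 + Q / r^4 with r^2 = x_i x_i in the six transverse dimensions
    r2 = sum(xi*xi for xi in x)
    return 1 + Q/r2**2

x0 = [0.9, -0.4, 0.7, 0.2, -1.1, 0.5]   # generic point away from the origin
h = 1e-3
lap = 0.0
for i in range(6):
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    # central second difference for d^2 f / dx_i^2
    lap += (f(xp) - 2*f(x0) + f(xm))/h**2

assert abs(lap) < 1e-4   # Laplacian vanishes up to discretisation error
```

The residual is purely the $O(h^2)$ discretisation error of the stencil; the exact Laplacian is zero everywhere except at $r = 0$, where the $\delta^{(6)}$-function source sits.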
The vector duality plays a different role in the case of the type IIA D-2-brane: it reveals a hidden O(1, 10) Lorentz symmetry, making it possible to interpret the resulting action as a dimensional reduction of the D = 11 supermembrane action [18,19]. This implies a special role played by the D-2-brane in type IIA theory. The above remarks suggest that the D-3-brane plays an analogous special role in type IIB theory. Similarity with the 2-brane case suggests that the dimension 'associated' with D-3-branes is 12. Indeed, recall that the D-p-branes of type II theories are described by reparametrisation invariant actions depending on the fields X^µ, A_m (µ = 0, ..., 9; m = 0, ..., d − 1; d = p + 1). As was mentioned already, the number of physical degrees of freedom is the same for all values of d = p + 1: 10 − d + d − 2 = 8, where 10 − d is the number of transverse coordinates X^i remaining, e.g., in the static gauge X^m(x) = x^m, and d − 2 is the number of transverse modes of a vector in d dimensions with the standard U(1) Maxwell kinetic term (the leading-order term in the expansion of (3.9)). 15 Suppose we fix the U(1) gauge but keep the reparametrisation invariance, with the idea of reinterpreting the D-p-brane action as another reparametrisation invariant p-brane action. Then the number of 'partially off-shell' bosonic degrees of freedom becomes 10 + d − 2 = 9 + p. This gives ten for a string, 16 eleven for the 2-brane, twelve for the 3-brane, etc. At the same time,

15 Equivalently, one may fix the reparametrisation invariance by choosing A_m = f_m(x) to be some given functions, but then one gets −2 as the U(1) ghost contribution.

16 The critical dimension of the D-string should not, of course, change from its standard (10 or 26) value: one finds T ∫ d²x √(det g_mn) [V(1 + (1/2) F_mn F^mn) + V^{-1}], and integrating out A_m we do not get any non-trivial determinants (the U(1) ghost contribution cancels out for the same reason why a propagating 2d vector field has zero degrees of freedom).
We are then left with an action which is (semiclassically) equivalent to the Nambu action and thus has the same critical dimension 26.

the amount of on-shell supersymmetry should still remain the same for all p-branes and their 'higher-dimensional' versions (there are 8 fermionic physical degrees of freedom, as demanded by unbroken supersymmetry [7]). This indeed is what happens in the case of the 2-brane – D = 11 membrane connection [18]. Since the vector is dual to a scalar in d = 3 but to a vector in d = 4, in the case of the 3-brane the two extra degrees are not easily combined with the 10 scalar fields X^µ in a D = 12 Lorentz-invariant way. Thus if type IIB superstring theory admits a 3-brane reformulation, its 12-dimensional interpretation should be more subtle than the 11-dimensional one of type IIA theory.

Suggestions about a possible relation of type IIB theory to a 12-dimensional theory were previously made also in [35,36,37]. It was conjectured in [35] that there may exist a 3-brane with world-sheet signature (2, 2) 17 moving in a D = 12 space with signature (2, 10) which reduces upon double-dimensional reduction to the usual type IIB superstring, just like the D = 11 supermembrane reduces to the type IIA string [52] (this was motivated by the fact that in the case of (2, 10) signature there exist both Majorana-Weyl spinors and self-dual tensors in the supersymmetry algebra). A picture of (2, 2) world-volumes embedded in (2, 10) space independently emerged in the context of N = 2 string theory [53] (where the 'transverse' 8-dimensional space was treated as an internal one, compactified on a special torus). The existence of a D-string in type IIB theory with an extra vector field on the world sheet was used in [37] to conjecture (by analogy with N = 2 strings, where there is a local U(1) gauge symmetry on the world-sheet) that there may exist an 'off-shell' extension of type IIB theory where two extra compact (1, 1) dimensions are added both to the world-sheet and the target space.
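The mode counting quoted above is elementary arithmetic; the following illustrative Python sketch (mine, not from the paper) makes it explicit:

```python
# Mode counting for D-p-branes in D = 10, as quoted in the text:
# transverse scalars (10 - d) plus transverse vector modes (d - 2), d = p + 1.
def physical_dof(p, D=10):
    d = p + 1
    return (D - d) + (d - 2)        # = D - 2 = 8 for every p

def partially_off_shell(p, D=10):
    # U(1) gauge fixed, reparametrisations kept: 10 + d - 2 = 9 + p
    return D + (p + 1) - 2

assert all(physical_dof(p) == 8 for p in range(9))
print([partially_off_shell(p) for p in (1, 2, 3)])   # [10, 11, 12]
```

The output [10, 11, 12] matches the text: ten for the string, eleven for the 2-brane, twelve for the 3-brane.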
Type IIB strings would then be recovered by wrapping the (1, 1) part of the world-volume around the compact (1, 1) part of the target space as in [35]. A possible 12-dimensional origin of the SL(2, Z) symmetry of type IIB theory was conjectured in [36] (where the signature of the D = 12 space-time was assumed to be the standard one). These conjectures may be related to the systematic approach [54] to a unified description of different world-sheet string (and membrane) theories as effective target space theories [55] of the N = (2, 1) heterotic string. Here the central role is played by (2, 2) self-dual geometries embedded in (2, 10) space (with non-compact transverse string coordinates appearing as zero modes of the N = (2, 1) string). Suggestions of a formulation based on (2, 2) world-volumes in (2, 10) space-time are apparently different from what was found above for the D-3-brane. Since all 8 transverse degrees of freedom X^i and A^⊥_m have the same signs of kinetic terms, it seems natural to assign the standard spatial signature to the extra two dimensions. Also, the D-3-brane has the standard (1, 3) world-volume signature (in fact, all D-branes have (1, p) signature). Finally, one should not expect to find the full 12-dimensional Lorentz (super)symmetry. It would be interesting to see if the (1, 11)+(1, 3) picture directly implied by the D-3-brane description can still be related to the (2, 10)+(2, 2) proposals. 18

To conclude, it is likely that the self-dual Dirichlet 3-brane plays a special role in type IIB superstring theory, explaining its SL(2, Z) symmetry in terms of the world-volume 'electro-magnetic' duality and also pointing towards an unusual non-Lorentz-invariant 12-dimensional reformulation of this theory. Absence of manifest Lorentz symmetry is characteristic of models which describe chiral d-forms [56,57] and also of models with doubled numbers of field variables (but the same number of degrees of freedom) which are manifestly duality invariant [38,39].
It may be that the 12-dimensional theory is related to such a 'doubled' formulation of a self-dual theory.

Acknowledgements

I would like to thank G. Gibbons, C. Schmidhuber and P.K. Townsend for stimulating discussions. I acknowledge also the support of PPARC, ECC grant SC1*-CT92-0789 and NATO grant CRG 940870.

18 To relate the (1, 3) and (2, 2) theories one presumably needs to perform a transformation analogous to interchanging the left and right string modes for a time-like circle as in [37] (I am grateful to C. Vafa for this suggestion). In addition, to relate the (1, 11) and (2, 10) descriptions one needs to 'twist' the vector field degrees of freedom. Twisting of a 4d vector field may be analogous to twisting of 2d chiral scalars in the 'doubling' approach to the construction of manifestly dual actions [38,39].

The D-brane actions (3.6), (3.9) can be derived directly by computing the corresponding string partition function on a disc. This is the partition function of virtual open strings with mixed Dirichlet-Neumann boundary conditions (i.e. with ends attached to a hyperplane) propagating in a condensate of massless string modes. The collective coordinates X^i and the internal vector A_m degrees of freedom of the D-brane are represented …

4 In the case of a string (D = 2 effective theory) α′-corrections should be trivial up to a field redefinition, i.e. should affect only 'propagator' terms. At the same time, the D-string is expected to become fundamental ('structureless') only after a resummation of string loops.
In addition there are 8 fermionic modes as demanded by unbroken supersymmetry [7] (half of N = 2, D = 10 supersymmetry should be realised linearly and half -in a non-linear way). These on-shell degrees of freedom are the same as of N = 1, D = 10 Maxwell supermultiplet dimensionally reduced to d dimensions [10]. SL( 2 , 2R) duality transformation (3.15) with p = 0, q = 1, r = −1, s = 0, λ → −1/λ and thus together with trivial shifts of C imply the full SL(2, R) invariance of the action under A m →à m combined with the transformation (3.15) of background fields. 1 2 1φ as in S GR . mn is indeed the transformation from the string frame to the SL(2, R) invariant Einstein-frame metric. The analogue of the invariant action S = S ef f + S GR is the sum of D = 10 effective action S ef f IIB (3.13) and 3-brane action (3.12) and the SL(2, R) symmetry in question is not the usual S-duality of d = 4 effective string action but SL(2, R) duality of D = 10 type IIB superstring theory. type IIA theory implied by the D-2-brane -supermembrane connection: there should be no simple D = 12 Lorentz group relating all of the corresponding (10+2) bosonic degrees of freedom. This suggests that the two extra dimensions (effectively represented by the 'transverse' part of A m ) may play only an auxiliary role in some novel realisation of D = 10 supersymmetry. the d = 2 scalar and d = 4 vector Lagrangians have similar structure L d (A) = − 1 2 ∂ 0 AL∂A + 1 2 ∂AM∂A, where A = (A,Ã), A andà are k-forms (spatial parts of gauge potentials), k = 1 2 (d − 2), and L = 0 I (−I) k 0 , M = I 0 0 I . 4) All such dualities are symmetries of equations of motion combined with Bianchi identities, or maps between one action and a dual one (with equivalent sets of equations of motion). 
To have duality as a manifest symmetry of a (non-Lorentz-invariant) action one is to double the number of field variables by introducing the 'dual fields' on the same footing as the original fields[6]2 11) It is not clear, however, how this can be done in a manifestly supersymmetric way for d ≥ 46 10 or 26) value, in spite of the presence of an extra U (1) vector field in its action. Indeed, let us consider for simplicity the purely bosonic case. The fundamental D-string is described by aformal path integral Z = [dX][dA] exp[−T d 2 x det(g mn + F mn )], where g mn = ∂ m X µ ∂ n X µis also used in the definition of the measure [dx][dA]. As in the Nambu string case, to make this path integral tractable one needs to add some auxiliary fields (like independent 2d metric in Polyakov approach). Since we are interested here in the effect of integrating out A m , it is sufficient to introduce just one auxiliary field V as in(2.4), defining Z as (cf.(2.4),(2.5)) Z = [dX][dV ][dA] exp[− 1 2 This confirms the conjecture made in[31]. The SL(2, R) duality invariance of equations of motion following from a similar BI-dilaton-axion action was first proved in[32] but a connection of their action to string theory was unclear. The D-3-brane action invariant under SL(2, R) symmetry of type IIB theory provides the proper context for the application of the result of[32]. This description applies only to 'electric' (p ≤ 3) D-branes which we shall consider in what follows. To generalise it to 'magnetic' ones one presumably is to double the number of the R-R fields in the action (cf.[38,39]). The constant c 1 in front of the actions (3.6),(3.9) (proportional to the dilaton tadpole on the disc) is assumed to be absorbed into e φ and normalisations of the fields C r . 
As was mentioned already, these actions are low-energy effective actions (analogous, e.g., to a Nambu-type action for a cosmic string or domain wall) and may be treated semiclassically. The parameter Λ_0 takes discrete values at the quantum level [10]. In what follows we shall often not differentiate between SL(2, Z) and SL(2, R) transformations. The 4-parameter diffeomorphism invariance is, in principle, sufficient to gauge-fix more than one time-like direction. Ref. [35] discussed various p-branes with several time-like coordinates.

References

[1] G. Horowitz and A. Strominger, Nucl. Phys. B360 (1991) 197.
[2] M.J. Duff and X. Lu, Phys. Lett. B273 (1991) 409.
[3] M.J. Duff, R. Khuri and X. Lu, Phys. Repts. 259 (1995) 213.
[4] J. Dai, R.G. Leigh and J. Polchinski, Mod. Phys. Lett. A4 (1989) 2073.
[5] R.G. Leigh, Mod. Phys. Lett. A4 (1989) 2767.
[6] M.B. Green, Phys. Lett. B329 (1994) 435.
[7] J. Polchinski, Phys. Rev. Lett. 75 (1995) 4724, hep-th/9510017.
[8] C.M. Hull and P.K. Townsend, Nucl. Phys. B438 (1995) 109.
[9] E. Witten, Nucl. Phys. B443 (1995) 85.
[10] E. Witten, IASSNS-HEP-95-83, hep-th/9510135.
[11] M. Li, BROWN-HET-1020, hep-th/9510161.
[12] I.R. Klebanov and L. Thorlacius, PUPT-1574, hep-th/9510200; S.S. Gubser, A. Hashimoto, I.R. Klebanov and J.M. Maldacena, PUPT-1586, hep-th/9601057.
[13] C. Bachas, NSF-ITP/95-144, hep-th/9511043.
[14] C.G. Callan and I.R. Klebanov, PUPT-1578, hep-th/9511173.
[15] M. Douglas, RU-95-92, hep-th/9512077.
[16] J. Polchinski, S. Chaudhuri and C.V. Johnson, NSF-ITP-96-003, hep-th/9602052.
[17] M.B. Green, DAMTP/96, hep-th/9602061.
[18] P.K. Townsend, DAMTP-R/95/59, hep-th/9512062.
[19] C. Schmidhuber, PUPT-1585, hep-th/9601003.
[20] S.P. de Alwis and K. Sato, COLO-HEP-368, hep-th/9601167.
[21] E.S. Fradkin and A.A. Tseytlin, Phys. Lett. B163 (1985) 123.
[22] A. Abouelsaood, C. Callan, C. Nappi and S. Yost, Nucl. Phys. B280 (1987) 599.
[23] E. Bergshoeff, E. Sezgin, C.N. Pope and P.K. Townsend, Phys. Lett. B188 (1987) 70.
[24] R.R. Metsaev, M.A. Rahmanov and A.A. Tseytlin, Phys. Lett. B193 (1987) 207; A.A. Tseytlin, Phys. Lett. B202 (1988) 81.
[25] O.D. Andreev and A.A. Tseytlin, Phys. Lett. B207 (1988) 157; Nucl. Phys. B311 (1988) 205.
[26] C.G. Callan, C. Lovelace, C.R. Nappi and S.A. Yost, Nucl. Phys. B308 (1988) 221.
[27] E. Schrödinger, Proc. Roy. Soc. A150 (1935) 465.
[28] H.C. Tze, Nuov. Cim. 22A (1974) 507.
[29] G.W. Gibbons and D.A. Rasheed, Nucl. Phys. B454 (1995) 185, hep-th/9506035.
[30] J.H. Schwarz, Nucl. Phys. B226 (1983) 269; P.S. Howe and P.C. West, Nucl. Phys. B238 (1984) 181.
[31] M.J. Duff, J.T. Liu and R. Minasian, Nucl. Phys. B452 (1995) 261, hep-th/9506126.
[32] G.W. Gibbons and D.A. Rasheed, Phys. Lett. B365 (1996) 46, hep-th/9509141.
[33] J.H. Schwarz, Phys. Lett. B360 (1995) 13; B364 (1995) 252 (E), hep-th/9508143, hep-th/9509148; CALT-68-2025, hep-th/9510086.
[34] E. Bergshoeff, E. Sezgin and P.K. Townsend, Phys. Lett. B189 (1987) 75; Ann. of Phys. 185 (1988) 330.
[35] M. Blencowe and M.J. Duff, Nucl. Phys. B310 (1988) 387.
[36] C.M. Hull, QMW-95-50, hep-th/9512181.
[37] C. Vafa, HUTP-96/A004, hep-th/9602022.
[38] A.A. Tseytlin, Phys. Lett. B242 (1990) 163; Nucl. Phys. B350 (1991) 395.
[39] J.H. Schwarz and A. Sen, Nucl. Phys. B411 (1994) 35; Phys. Lett. B312 (1993) 105.
[40] M. Born and L. Infeld, Proc. Roy. Soc. A144 (1934) 425.
[41] E. Bergshoeff, L.A.J. London and P.K. Townsend, Class. Quantum Grav. 9 (1992) 2545.
[42] A.A. Tseytlin, Phys. Lett. B251 (1990) 530.
[43] E.S. Fradkin and A.A. Tseytlin, Phys. Lett. B158 (1985) 316; Phys. Lett. B160 (1985) 69.
[44] A.A. Tseytlin, Phys. Lett. B208 (1988) 221; Int. J. Mod. Phys. A4 (1989) 1257.
[45] R.R. Metsaev and A.A. Tseytlin, Nucl. Phys. B298 (1988) 109; A.A. Tseytlin, Int. J. Mod. Phys. A5 (1990) 589.
[46] C. Callan, D. Friedan, E. Martinec and M. Perry, Nucl. Phys. B262 (1985) 593.
[47] E. Bergshoeff, C.M. Hull and T. Ortín, Nucl. Phys. B451 (1995) 547, hep-th/9504081.
[48] E. Bergshoeff, H.J. Boonstra and T. Ortín, UG-7/95, hep-th/9508091.
[49] M.J. Duff and J.X. Lu, Nucl. Phys. B390 (1993) 276, hep-th/9207060.
[50] A.A. Tseytlin, Phys. Lett. B367 (1996) 84, hep-th/9510173.
[51] G.W. Gibbons, G.T. Horowitz and P.K. Townsend, Class. Quant. Grav. 12 (1995) 297, hep-th/9410073.
[52] M.J. Duff, P.S. Howe, T. Inami and K.S. Stelle, Phys. Lett. B191 (1987) 70.
[53] H. Ooguri and C. Vafa, Nucl. Phys. B361 (1991) 469; Nucl. Phys. B367 (1991) 83.
[54] D. Kutasov and E. Martinec, EFI-96-04, hep-th/9602049.
[55] M.B. Green, Nucl. Phys. B293 (1987) 593.
[56] N. Marcus and J.H. Schwarz, Phys. Lett. B115 (1982) 111.
[57] R. Floreanini and R. Jackiw, Phys. Rev. Lett. 59 (1987) 1873; M. Henneaux and C. Teitelboim, Phys. Lett. B206 (1988) 650.
Saiful R. Mondal, Mohammed Al Dhuain
Sufficient conditions on A, B, p, b and c are determined that will ensure the generalized Bessel function u_{p,b,c} satisfies the subordination u_{p,b,c}(z) ≺ (1 + Az)/(1 + Bz). In particular, this gives conditions for (−4κ/c)(u_{p,b,c}(z) − 1), c ≠ 0, to be close-to-convex. Also, conditions are obtained for u_{p,b,c}(z) to be Janowski convex, and for z u_{p,b,c}(z) to be Janowski starlike, in the unit disk D = {z ∈ C : |z| < 1}.
10.1155/2016/4740819
https://arxiv.org/pdf/1506.03138v1.pdf
arXiv:1506.03138
9 Jun 2015

Inclusion of generalized Bessel functions in the Janowski class

1 Introduction

Let A denote the class of analytic functions f defined in the open unit disk D = {z : |z| < 1}, normalized by the conditions f(0) = 0 = f'(0) − 1. If f and g are analytic in D, then f is subordinate to g, written f(z) ≺ g(z), if there is an analytic self-map w of D satisfying w(0) = 0 and f = g ∘ w. For −1 ≤ B < A ≤ 1, let P[A, B] be the class consisting of normalized analytic functions p(z) = 1 + c₁z + ⋯ in D satisfying

    p(z) ≺ (1 + Az)/(1 + Bz).

For instance, if 0 ≤ β < 1, then P[1 − 2β, −1] is the class of functions p(z) = 1 + c₁z + ⋯ satisfying Re p(z) > β in D. The class S*[A, B] of Janowski starlike functions [8] consists of f ∈ A satisfying zf'(z)/f(z) ∈ P[A, B]. These classes have been studied, for example, in [1, 2]. A function f ∈ A is said to be close-to-convex of order β [7, 12] if Re(zf'(z)/g(z)) > β for some g ∈ S* := S*(0).

This article studies the generalized Bessel function u_p(z) = u_{p,b,c}(z) given by the power series

    u_p(z) = ₀F₁(κ; −(c/4)z) = Σ_{k=0}^∞ [(−1)^k c^k / (4^k (κ)_k)] (z^k / k!),    (1)

where κ = p + (b + 1)/2 ≠ 0, −1, −2, −3, ⋯. The function u_p(z) is analytic in D and is a solution of the differential equation

    4z² u''(z) + 4κz u'(z) + cz u(z) = 0,    (2)

for b, p, c ∈ C such that κ = p + (b + 1)/2 ≠ 0, −1, −2, −3, ⋯, and z ∈ D.
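As an independent sanity check (mine, not part of the paper), the truncated series (1) can be verified symbolically to satisfy the differential equation (2) up to the truncation order, along with the standard ₀F₁ contiguous relation 4κ u_p'(z) = −c u_{p+1}(z) used below (shifting p by 1 shifts κ by 1). The sketch uses sympy, with `N` an arbitrary truncation order:

```python
import sympy as sp

z, c, kappa = sp.symbols('z c kappa')
N = 10  # arbitrary truncation order

def u(kap):
    # truncated series (1): sum_k (-c/4)^k z^k / ((kap)_k k!)
    return sum((-c/4)**k * z**k / (sp.rf(kap, k) * sp.factorial(k))
               for k in range(N))

uk = u(kappa)
ode_res = sp.simplify(sp.series(
    4*z**2*sp.diff(uk, z, 2) + 4*kappa*z*sp.diff(uk, z) + c*z*uk,
    z, 0, N).removeO())
print(ode_res)   # 0: the series solves the ODE up to the truncation order

rec_res = sp.simplify(sp.series(
    4*kappa*sp.diff(uk, z) + c*u(kappa + 1), z, 0, N - 1).removeO())
print(rec_res)   # 0: the contiguous relation 4*kappa*u' = -c*u_{p+1}
```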
This normalized and generalized Bessel function of the first kind of order p also satisfies the recurrence relation

    4κ u_p'(z) = −c u_{p+1}(z),    (3)

which is a useful tool in the study of several geometric properties of u_p. There have been several works [3, 4, 15, 16, 5, 6] studying geometric properties of the function u_p(z), such as its close-to-convexity, starlikeness and convexity, and its radii of starlikeness and convexity.

In Section 2 of this paper, sufficient conditions on A, B, c, κ are determined that will ensure u_p satisfies the subordination u_p(z) ≺ (1 + Az)/(1 + Bz). It is to be understood that a computationally intensive methodology with shrewd manipulations is required to obtain the results in this general framework. The benefit of such general results is that judicious choices of the parameters A and B give rise to several interesting applications, which include extending the results of previous works. Using this subordination result, sufficient conditions are obtained for (−4κ/c)u_p'(z) ∈ P[A, B], which in turn readily gives conditions for (−4κ/c)(u_p(z) − 1) to be close-to-convex. Section 3 gives emphasis to the investigation of u_p(z) being Janowski convex, as well as of z u_p(z) being Janowski starlike. The following lemma is needed in the sequel.

Lemma 1.1. [11, 12] Let Ω ⊂ C, and let Ψ : C² × D → C satisfy Ψ(iρ, σ; z) ∉ Ω whenever z ∈ D, ρ real, and σ ≤ −(1 + ρ²)/2. If p is analytic in D with p(0) = 1, and Ψ(p(z), zp'(z); z) ∈ Ω for z ∈ D, then Re p(z) > 0 in D.

In the case Ψ : C³ × D → C, the condition in Lemma 1.1 generalizes to Ψ(iρ, σ, µ + iν; z) ∉ Ω for ρ real, σ + µ ≤ 0 and σ ≤ −(1 + ρ²)/2.

2 Close-to-convexity of the Bessel function

In this section, one main result on the close-to-convexity of the generalized Bessel function, with several consequences, is discussed in detail.

Theorem 2.1. Let −1 ≤ B ≤ 3 − 2√2 ≈ 0.171573. Suppose B < A ≤ 1, and c, κ ∈ R satisfy

    κ − 1 ≥ max{0, (1+B)(1+A)|c| / (4(A−B))}.
(4)

Further let A, B, κ and c satisfy either the inequality

    (κ−1)² + (κ−1)(1+B)/(1−B) − [ (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ] ≥ (1−A²)(1−B²)c²/(16(A−B)²)    (5)

whenever

    2(κ−1)(1−B)(A+B)c + (1+B)²(1+A)c ≥ (1/2)(A−B)(1−B)c²,    (6)

or the inequality

    [ (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ]² ≤ (c²/4)[ (κ−1)² + (κ−1)(1+B)/(1−B) − (1−AB)²c²/(16(A−B)²) ]    (7)

whenever

    2(κ−1)(1−B)(A+B)c + (1+B)²(1+A)c < (1/2)(A−B)(1−B)c².    (8)

If (1+B)u_p(z) ≠ (1+A), then u_p(z) ∈ P[A, B].

Proof. Define the analytic function p : D → C by

    p(z) = − [(1−A) − (1−B)u_p(z)] / [(1+A) − (1+B)u_p(z)].

Then a computation yields

    u_p(z) = [(1−A) + (1+A)p(z)] / [(1−B) + (1+B)p(z)],    (9)

    u_p'(z) = 2(A−B)p'(z) / [(1−B) + (1+B)p(z)]²,    (10)

and

    u_p''(z) = { 2(A−B)[(1−B) + (1+B)p(z)] p''(z) − 4(1+B)(A−B) p'(z)² } / [(1−B) + (1+B)p(z)]³.    (11)

Thus, using the identities (9)-(11), the Bessel differential equation (2) can be rewritten as

    z²p''(z) − [2(1+B) / ((1−B) + (1+B)p(z))] (zp'(z))² + κzp'(z) + [((1−B) + (1+B)p(z))((1−A) + (1+A)p(z)) / (8(A−B))] cz = 0.    (12)

Assume Ω = {0}, and define Ψ(r, s, t; z) by

    Ψ(r, s, t; z) := t − [2(1+B) / ((1−B) + (1+B)r)] s² + κs + [((1−B) + (1+B)r)((1−A) + (1+A)r) / (8(A−B))] cz.    (13)

It follows from (12) that Ψ(p(z), zp'(z), z²p''(z); z) ∈ Ω. To ensure Re p(z) > 0 for z ∈ D, by Lemma 1.1 it is enough to establish Re Ψ(iρ, σ, µ + iν; z) < 0 in D for any real ρ, σ ≤ −(1 + ρ²)/2, and σ + µ ≤ 0. With z = x + iy ∈ D in (13), a computation yields

    Re Ψ(iρ, σ, µ + iν; z) = µ − [2(1−B²) / ((1−B)² + (1+B)²ρ²)] σ² + κσ − [ρ(1−AB)/(4(A−B))] cy + [((1−B)(1−A) − (1+B)(1+A)ρ²)/(8(A−B))] cx.    (14)

Since σ ≤ −(1 + ρ²)/2 and B ∈ [−1, 3 − 2√2],

    [2(1−B²) / ((1−B)² + (1+B)²ρ²)] σ² ≥ [2(1−B²) / ((1−B)² + (1+B)²ρ²)] (1+ρ²)²/4 ≥ (1+B)/(2(1−B)).
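As an aside, the Möbius substitution at the start of this proof and its claimed inverse (9) can be checked mechanically; the following sympy snippet (my own illustration, not from the paper) confirms that the two formulas invert each other:

```python
import sympy as sp

A, B, u = sp.symbols('A B u')

# definition of p in terms of u_p ...
p = -((1 - A) - (1 - B)*u) / ((1 + A) - (1 + B)*u)
# ... and the claimed inverse relation (9)
u_back = ((1 - A) + (1 + A)*p) / ((1 - B) + (1 + B)*p)

check = sp.simplify(u_back - u)
print(check)   # 0
```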
Thus

    Re Ψ(iρ, σ, µ + iν; z) ≤ (κ−1)σ − (1+B)/(2(1−B)) − [ρ(1−AB)/(4(A−B))] cy + [((1−B)(1−A) − (1+B)(1+A)ρ²)/(8(A−B))] cx
        ≤ −(1/2)(κ−1)(1+ρ²) − (1+B)/(2(1−B)) − [ρ(1−AB)/(4(A−B))] cy + [((1−B)(1−A) − (1+B)(1+A)ρ²)/(8(A−B))] cx
        = p₁ρ² + q₁ρ + r₁ := Q(ρ),

where

    p₁ = −(1/2)(κ−1) − (1+B)(1+A)cx/(8(A−B)),
    q₁ = −(1−AB)cy/(4(A−B)),
    r₁ = −(1/2)(κ−1) + (1−B)(1−A)cx/(8(A−B)) − (1+B)/(2(1−B)).

Condition (4) shows that

    p₁ ≤ −(1/2)(κ−1) + (1+B)(1+A)|c|/(8(A−B)) = −(1/2)[ (κ−1) − (1+B)(1+A)|c|/(4(A−B)) ] ≤ 0.

Since max_{ρ∈R} {p₁ρ² + q₁ρ + r₁} = (4p₁r₁ − q₁²)/(4p₁) for p₁ < 0, it is clear that Q(ρ) < 0 when

    [(1−AB)²/(16(A−B)²)] c²y² < 4 [ −(1/2)(κ−1) − (1+B)(1+A)cx/(8(A−B)) ] × [ −(1/2)(κ−1) + (1−B)(1−A)cx/(8(A−B)) − (1+B)/(2(1−B)) ],

with |x|, |y| < 1. As y² < 1 − x², the above condition holds whenever

    [(1−AB)²c²/(16(A−B)²)] (1 − x²) ≤ [ (κ−1) + (1+B)(1+A)cx/(4(A−B)) ] [ (κ−1) − (1−B)(1−A)cx/(4(A−B)) + (1+B)/(1−B) ],

that is, when

    (c²/16)x² + [ (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ] x + (κ−1)² + (κ−1)(1+B)/(1−B) − (1−AB)²c²/(16(A−B)²) ≥ 0.    (15)

To establish inequality (15), consider the polynomial R given by R(x) := mx² + nx + r, |x| < 1, where

    m := c²/16,
    n := (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)),
    r := (κ−1)² + (κ−1)(1+B)/(1−B) − (1−AB)²c²/(16(A−B)²).

The constraint (6) yields |n| ≥ 2|m|, and thus R(x) ≥ m + r − |n|. Now inequality (5) readily implies that

    R(x) ≥ m + r − |n|
         = c²/16 + (κ−1)² + (κ−1)(1+B)/(1−B) − (1−AB)²c²/(16(A−B)²) − [ (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ]
         = (κ−1)² + (κ−1)(1+B)/(1−B) − [ (κ−1)(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ] − (1−A²)(1−B²)c²/(16(A−B)²)
         ≥ 0.

Now consider the case of the constraint (8), which is equivalent to |n| < 2m. Then the minimum of R occurs at x = −n/(2m), and (7) yields R(x) ≥ (4mr − n²)/(4m) ≥ 0. Evidently Ψ satisfies the hypothesis of Lemma 1.1, and thus Re p(z) > 0, that is, u_p(z) ∈ P[A, B].

Proof (of Corollary 2.1). Choose A = −(c + 1)/(c − 1) and B = −1 in Theorem 2.1.
Then both the conditions (4) and (6) are equivalent to κ ≥ 1, which clearly holds for κ ≥ 1 + c²/2. The proof will be complete if the hypothesis (5) holds, i.e.,

    (κ − 1)² ≥ (1/2)(κ − 1)c².    (16)

Since κ ≥ 1 + c²/2, it follows that (κ − 1)² − (1/2)(κ − 1)c² = (κ − 1)(κ − 1 − c²/2) ≥ 0, which establishes (16).

Corollary 2.2. Let c, κ be real such that κ ≥ 1 for c ≤ 0, and κ ≥ 1 + c/2 for c ≥ 0. Then Re u_p(z) > 1/2.

Proof. Put A = 0 and B = −1 in Theorem 2.1. The condition (4) reduces to κ ≥ 1, which holds in all cases. It is sufficient to establish conditions (6) and (5), or equivalently,

    4(κ − 1) − c ≥ 0,    (17)

and

    (κ − 1)² − (1/2)(κ − 1)c ≥ 0.    (18)

For the case c ≤ 0, both inequalities (17) and (18) hold since κ ≥ 1. Finally, for c ≥ 0 and κ − 1 ≥ c/2 it is readily established that 4(κ − 1) − c ≥ c ≥ 0 and (κ − 1)² − (1/2)(κ − 1)c ≥ (κ − 1)(κ − 1 − c/2) ≥ 0.

It is known that for b = 2 and c = ±1 the generalized Bessel functions u_{p,2,1}(z) = j_p(z) and u_{p,2,−1}(z) = i_p(z) give, respectively, the spherical Bessel and the modified spherical Bessel functions. For this specific choice of b and c, Corollary 2.2 yields Re i_p(z) > 1/2 for p ≥ −1/2, and Re j_p(z) > 1/2 for p ≥ 0. Since i_p'(0) = 1/(4p + 6) for p ≥ −1/2, the following inequalities can be obtained with the aid of the results in [9].

Corollary 2.3. For p ≥ −1/2, the modified spherical Bessel functions i_p satisfy the following inequalities:

    |i_p(z)| ≤ (4p + 6 + |z|) / (2(2p + 3)(1 − |z|²)),    (19)

    Re i_p(z) ≥ (p + 6 + |z|) / (4p + 6 + 2|z| + 2(2p + 3)|z|²),

    |i_p'(z)| ≤ [2 Re i_p(z) − 1] / (2(1 − |z|²)) × (|z|² + 4(2p + 3)|z| + 1) / ((2p + 3)|z|² + |z| + (2p + 3)).    (20)

The next theorem gives a sufficient condition for close-to-convexity when B ≥ 3 − 2√2.

Theorem 2.2. Let 3 − 2√2 ≤ B < A ≤ 1 and c, κ ∈ R satisfy

    κ − 1 ≥ max{0, (1+B)(1+A)|c| / (4(A−B))}.    (22)
Suppose A, B, κ and c satisfy either the inequality

    (κ−1)² + 16(κ−1)B(1−B)/(1+B)³ − [ (κ−1)(A+B)c/(2(A−B)) + 4B(1−B²)(1+A)c/((1+B)³(A−B)) ] ≥ (1−A²)(1−B²)c²/(16(A−B)²)    (23)

whenever

    (κ−1)(1+B)³(A+B)c + 8B(1−B²)(1+A)c ≥ (c²/4)(A−B)(1+B)³,    (24)

or the inequality

    [ (κ−1)(A+B)c/(2(A−B)) + 4B(1−B²)(1+A)c/((1+B)³(A−B)) ]² ≤ (c²/4)[ (κ−1)² + 16(κ−1)B(1−B)/(1+B)³ − (1−AB)²c²/(16(A−B)²) ]    (25)

whenever

    (κ−1)(A+B)(1+B)³c + 8B(1−B²)(1+A)c < (c²/4)(A−B)(1+B)³.    (26)

If (1+B)u_p(z) ≠ (1+A), then u_p(z) ∈ P[A, B].

Proof. First proceed as in the proof of Theorem 2.1 and derive the expression for Re Ψ(iρ, σ, µ + iν; z) given in (14). Now for σ ≤ −(1 + ρ²)/2, ρ ∈ R, and B ≥ 3 − 2√2,

    [2(1−B²) / ((1−B)² + (1+B)²ρ²)] σ² ≥ [2(1−B²) / ((1−B)² + (1+B)²ρ²)] (1+ρ²)²/4 ≥ 8B(1−B)/(1+B)³,

and then, with z = x + iy ∈ D and µ + σ ≤ 0, it follows that

    Re Ψ(iρ, σ, µ + iν; z) ≤ −(1/2)(κ−1)(1+ρ²) − [(1+B)(1+A)ρ²/(8(A−B))] cx − [ρ(1−AB)/(4(A−B))] cy + [(1−B)(1−A)/(8(A−B))] cx − 8B(1−B)/(1+B)³
        = p₂ρ² + q₂ρ + r₂ := Q₁(ρ),

where

    p₂ = −(1/2)(κ−1) − (1+B)(1+A)cx/(8(A−B)),
    q₂ = −(1−AB)cy/(4(A−B)),
    r₂ = −(1/2)(κ−1) + (1−B)(1−A)cx/(8(A−B)) − 8B(1−B)/(1+B)³.

Observe that the inequality (22) implies that p₂ < 0. Thus Q₁(ρ) < 0 for all ρ ∈ R provided q₂² ≤ 4p₂r₂, that is, for |x|, |y| < 1,

    [(1−AB)²/(16(A−B)²)] c²y² ≤ [ (κ−1) + (1+B)(1+A)cx/(4(A−B)) ] [ (κ−1) − (1−B)(1−A)cx/(4(A−B)) + 16B(1−B)/(1+B)³ ].

With y² < 1 − x², it is enough to show, for |x| < 1,

    [(1−AB)²c²/(16(A−B)²)] (1 − x²) ≤ [ (κ−1) + (1+B)(1+A)cx/(4(A−B)) ] [ (κ−1) − (1−B)(1−A)cx/(4(A−B)) + 16B(1−B)/(1+B)³ ],

which is equivalent to

    R₁(x) := m₁x² + n₁x + r₁ ≥ 0,    (27)

where

    m₁ := c²/16,
    n₁ := (κ−1)(A+B)c/(2(A−B)) + 4B(1−B²)(1+A)c/((A−B)(1+B)³),
    r₁ := (κ−1)² + 16(κ−1)B(1−B)/(1+B)³ − (1−AB)²c²/(16(A−B)²).

If (24) holds, then |n₁| ≥ 2|m₁|, and hence R₁(x) ≥ m₁ + r₁ − |n₁|, which is nonnegative by (23).
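As an aside, the lower bound 8B(1−B)/(1+B)³ invoked in the proof above can be confirmed symbolically; the following sympy sketch (my own check, with t standing for ρ²) verifies that the interior critical point, which is nonnegative exactly when B ≥ 3 − 2√2, gives this minimum value:

```python
import sympy as sp

B, t = sp.symbols('B t', positive=True)   # t stands for rho^2
h = (1 - B**2)*(1 + t)**2 / (2*((1 - B)**2 + (1 + B)**2*t))

# interior critical point; tstar >= 0 precisely when B >= 3 - 2*sqrt(2)
tstar = 1 - 2*(1 - B)**2/(1 + B)**2

d1 = sp.simplify(sp.diff(h, t).subs(t, tstar))
d2 = sp.simplify(h.subs(t, tstar) - 8*B*(1 - B)/(1 + B)**3)
print(d1, d2)   # 0 0
```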
On the other hand, if (26) holds, then |n₁| < 2|m₁|, the minimum of R₁ satisfies R₁(x) ≥ (4m₁r₁ − n₁²)/(4m₁), and (25) implies R₁(x) ≥ 0. Either case establishes (27).

Theorem 2.3. Let −1 ≤ B ≤ 3 − 2√2 ≈ 0.171573. Suppose B < A ≤ 1, c, κ ∈ R with c ≠ 0, and

    κ ≥ max{0, (1+B)(1+A)|c| / (4(A−B))}.

Further let A, B, κ and c satisfy either

    κ² + κ(1+B)/(1−B) − [ κ(A+B)c/(2(A−B)) + (1+B)²(1+A)c/(4(1−B)(A−B)) ] ≥ (1−A²)(1−B²)c²/(16(A−B)²)

whenever

    2κ(1−B)(A+B)c + (1+B)²(1+A)c ≥ (1/2)(A−B)(1−B)c²,

or the corresponding inequality whenever

    2κ(1−B)(A+B)c + (1+B)²(1+A)c < (1/2)(A−B)(1−B)c².

If (1+B)u_p(z) ≠ (1+A), then (−4κ/c)u_p'(z) ∈ P[A, B].

Theorem 2.4. Let 3 − 2√2 < B < A ≤ 1. Suppose c, κ ∈ R with c ≠ 0, such that κ ≥ max{0, (1+B)(1+A)|c|/(4(A−B))}. Suppose A, B, κ and c satisfy either

    κ² + 16κB(1−B)/(1+B)³ − [ κ(A+B)c/(2(A−B)) + 4B(1−B²)(1+A)c/((1+B)³(A−B)) ] ≥ (1−A²)(1−B²)c²/(16(A−B)²)

whenever

    κ(1+B)³(A+B)c + 8B(1−B²)(1+A)c ≥ (c²/4)(A−B)(1+B)³,

or the inequality

    [ κ(A+B)c/(2(A−B)) + 4B(1−B²)(1+A)c/((1+B)³(A−B)) ]² ≤ (c²/4)[ κ² + 16κB(1−B)/(1+B)³ − (1−AB)²c²/(16(A−B)²) ]

whenever

    2κ(1+B)³(A+B)c + 8B(1−B²)(1+A)c < (c²/4)(A−B)(1+B)³.

For 0 ≤ β < 1, S*[1 − 2β, −1] := S*(β) is the usual class of starlike functions of order β; S*[1 − β, 0] := S*_β = {f ∈ A : |zf'(z)/f(z) − 1| < 1 − β}; and S*[β, −β] := S*[β] = {f ∈ A : |zf'(z)/f(z) − 1| < β |zf'(z)/f(z) + 1|}.

Hence there exists an analytic self-map w of D with w(0) = 0 such that u_p(z) ≺ (1 + Az)/(1 + Bz).

Theorem 2.1 gives rise to simple conditions on c and κ ensuring that u_p(z) maps D into a half-plane.

Corollary 2.1. Let c ≤ 0 and 2κ ≥ 2 + c². Then Re u_p(z) > c/(c − 1).

Let c and κ be real numbers such that (A − B) … . Further let A, B, κ and c satisfy

    (A − 2B + κ(1+B))(2B − A + κ(1−B)) ≥ (1−B²)²c²/(16(A−B)) + B³ − (A−B)(1+B²) … .

Then z u_p(z) ∈ S*[A, B].
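The half-plane statements above can be illustrated numerically. The sketch below (my own check, not from the paper) looks at the image of the disk under (1 + Az)/(1 + Bz) for A = 0, B = −1, and samples Re u_p on a circle using mpmath's `hyp0f1` with the parameters of Corollary 2.2 (c = 1, κ = 1 + c/2 = 1.5):

```python
import mpmath as mp

def w(A, B, z):
    # target function of the subordination p(z) ≺ (1 + A z)/(1 + B z)
    return (1 + A*z) / (1 + B*z)

A, B = 0.0, -1.0
edge = min(w(A, B, 0.999 * mp.exp(1j * t)).real
           for t in mp.linspace(0.05, 2 * mp.pi - 0.05, 400))
print(edge)   # just above the half-plane boundary (1 - A)/(1 - B) = 1/2

# Re u_p > 1/2 for u_p(z) = 0F1(kappa; -c z/4), sampled on |z| = 0.99:
vals = [mp.hyp0f1(1.5, -0.25 * 0.99 * mp.exp(2j * mp.pi * k / 32)).real
        for k in range(32)]
print(min(vals) > 0.5)   # True
```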
− (A−B)(p(z)−1)/((p(z)+1) + B(p(z)−1)) + (A−B)²(p(z)−1)²/((p(z)+1) + B(p(z)−1))².   (33)

Then (−4κ/c)(u_p(z) − 1) is close-to-convex of order (c + 1)/c with respect to the identity function.

Corollary 2.5. Let c be a nonzero real number, and κ ≥ |c|/2. Then Re (−4κ/c)u′_p(z) > 1/2.

Janowski starlikeness of generalized Bessel functions

This section finds conditions ensuring that the normalized generalized Bessel function zu_p(z) lies in the class of Janowski starlike functions. For this purpose, sufficient conditions for u_p(z) to be Janowski convex are determined first, and then an application of relation (3) yields conditions for starlikeness. Further let A, B, κ and c satisfy …

Proof. Define an analytic function p : D → ℂ by … and a rearrangement of (32) yields … Now a differentiation of (2) leads to … Using (31) and (33), (34) yields … Define … Thus, (35) yields Ψ(p(z), zp′(z), z) ∈ Ω = {0}. Now with z = x + iy ∈ D, let … For σ ≤ −(1 + ρ²)/2, ρ ∈ ℝ, … Note that condition (28) implies (1 + 2G₁)/2 > 0. In this case, Q has a maximum at ρ = G₂/(1 + 2G₁). Thus Q(ρ) < 0 for all real ρ provided …, |x| < 1. The above inequality is equivalent to … where … Since |x| < 1, the left-hand side of the inequality (36) satisfies … Now it is evident from (29) that H(x) ≥ 0, which establishes the inequality (36). Thus Ψ satisfies the hypothesis of Lemma 1.1, and hence Re p(z) > 0, or equivalently … By definition of subordination, there exists an analytic self-map w of D with w(0) = 0 and … A simple computation shows that … The relation (3) also shows that z(zu_p(z))′/(zu_p(z)) = … Together with Theorem 3.1, it immediately yields the following result for zu_p(z) ∈ S*[A, B].

References

R. M. Ali, V. Ravichandran and N. Seenivasagan, Sufficient conditions for Janowski starlikeness, Int. J. Math. Math. Sci. 2007, Art. ID 62925, 7 pp.
R. M. Ali, R. Chandrashekar and V. Ravichandran, Janowski starlikeness for a class of analytic functions, Appl. Math. Lett. 24 (2011), no. 4, 501-505.
Á. Baricz, Geometric properties of generalized Bessel functions, Publ. Math. Debrecen 73 (2008), no. 1-2, 155-178.
Á. Baricz, Geometric properties of generalized Bessel functions of complex order, Mathematica 48(71) (2006), no. 1, 13-18.
Á. Baricz and S. Ponnusamy, Starlikeness and convexity of generalized Bessel functions, Integral Transforms Spec. Funct. 21 (2010), no. 9-10, 641-653.
Á. Baricz and R. Szász, The radius of convexity of normalized Bessel functions of the first kind, Anal. Appl. (Singap.) 12 (2014), no. 5, 485-509.
A. W. Goodman, Univalent functions, Vol. I & II, Mariner, Tampa, FL, 1983.
W. Janowski, Some extremal problems for certain families of analytic functions. I, Ann. Polon. Math. 28 (1973), 297-326.
C. P. McCarty, Functions with real part greater than α, Proc. Amer. Math. Soc. 35 (1972), 211-216.
S. S. Miller and P. T. Mocanu, Univalence of Gaussian and confluent hypergeometric functions, Proc. Amer. Math. Soc. 110 (1990), no. 2, 333-342.
S. S. Miller and P. T. Mocanu, Differential subordinations and inequalities in the complex plane, J. Differential Equations 67 (1987), no. 2, 199-211.
S. S. Miller and P. T. Mocanu, Differential subordinations, Monographs and Textbooks in Pure and Applied Mathematics, 225, Dekker, New York, 2000.
S. Ponnusamy and M. Vuorinen, Univalence and convexity properties for confluent hypergeometric functions, Complex Variables Theory Appl. 36 (1998), no. 1, 73-97.
St. Ruscheweyh and V. Singh, On the order of starlikeness of hypergeometric functions, J. Math. Anal. Appl. 113 (1986), no. 1, 1-11.
V. Selinger, Geometric properties of normalized Bessel functions, Pure Math. Appl. 6 (1995), no. 2-3, 273-277.
R. Szász and P. A. Kupán, About the univalence of the Bessel functions, Stud. Univ. Babeş-Bolyai Math. 54 (2009), no. 1, 127-132.
N. M. Temme, Special functions, A Wiley-Interscience Publication, Wiley, New York, 1996.
[ "Global properties of Stochastic Loewner evolution driven by Lévy processes", "Global properties of Stochastic Loewner evolution driven by Lévy processes" ]
[ "P Oikonomou \nThe James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA\n", "I Rushkin \nThe James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA\n", "I A Gruzberg \nThe James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA\n", "L P Kadanoff \nThe James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA\n" ]
[ "The James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA", "The James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA", "The James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA", "The James Franck Institute\nThe University of Chicago\n5640 S. Ellis Avenue60637ChicagoIlUSA" ]
Standard Schramm-Loewner evolution (SLE) is driven by a continuous Brownian motion which then produces a trace, a continuous fractal curve connecting the singular points of the motion. If jumps are added to the driving function, the trace branches. In a recent publication [1] we introduced a generalized SLE driven by a superposition of a Brownian motion and a fractal set of jumps (technically a stable Lévy process). We then discussed the small-scale properties of the resulting Lévy-SLE growth process. Here we discuss the same model, but focus on the global scaling behavior which ensues as time goes to infinity. This limiting behavior is independent of the Brownian forcing and depends upon only a single parameter, α, which defines the shape of the stable Lévy distribution. We learn about this behavior by studying a Fokker-Planck equation which gives the probability distribution for endpoints of the trace as a function of time. As in the short-time case previously studied, we observe that the properties of this growth process change qualitatively and singularly at α = 1. We show both analytically and numerically that the growth continues indefinitely in the vertical direction for α > 1, goes as log t for α = 1, and saturates for α < 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. In the former case, the characteristic scale is X(t) ∼ t^{1/α}. In the latter case the scale is Y(t) ∼ A + B t^{1−1/α} for α ≠ 1, and Y(t) ∼ ln t for α = 1. Scaling functions for the probability density are given for various limiting cases.
10.1088/1742-5468/2008/01/p01019
[ "https://arxiv.org/pdf/0710.2680v2.pdf" ]
3,177,663
0710.2680
054fcb9bdf7d02922065330f482a2189e2a12971
Global properties of Stochastic Loewner evolution driven by Lévy processes

24 Jan 2008 (dated January 14, 2008)

P Oikonomou, I Rushkin, I A Gruzberg, L P Kadanoff
The James Franck Institute, The University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637, USA

Introduction

The study of random conformally-invariant clusters that appear at critical points in two-dimensional statistical mechanics models has been made rigorous with the invention of the so-called Schramm-Loewner evolution (SLE) [2]. SLE refers to a continuous family of evolving conformal maps that specify the shape of a part of a critical cluster boundary. By now SLE has been justly recognized as a major breakthrough, and there are several review papers and one monograph devoted to this beautiful subject; see Refs. [3, 4, 5, 6, 7, 8, 9, 10].

SLE describes a curve, called the trace, growing with time from a boundary in a two-dimensional domain, which is usually chosen to be the upper half plane. SLE is based on the Loewner equation, in which the shape of the growing curve is determined by a function of time ξ(t), which in SLE is taken to be a scaled Brownian motion. Such a choice of the driving function produces continuous stochastic, fractal and conformally invariant curves: the kind that appears as the scaling limit of various interfaces in many two-dimensional critical lattice models and growth processes of statistical physics. Well-known examples include boundaries of the Fortuin-Kasteleyn clusters in the critical q-state Potts model, loops in the O(n) model, self-avoiding and loop-erased random walks.

In Ref. [1] we generalized SLE to a broader class for which ξ(t) is a Markov process with discontinuities. More specifically, we have studied the Loewner evolution driven by a linear combination of a scaled Brownian motion and a symmetric stable Lévy process. The growing curve then exhibits branching.
This generalized process might be useful to describe many treelike growth processes, such as branching polymers and various branching growth processes which evolve in time. Such generalized SLEs driven by Lévy processes (Lévy-SLE for short) have also been of interest to the mathematics community. Our results [1] on various phase transitions in Lévy-SLE have been put on a rigorous basis in Ref. [11], and further properties have been studied in Refs. [12, 13]. The interest of mathematicians in these Lévy-SLE processes is partially motivated by the suggestion [13, 14] that they may produce fractal objects with large values of multifractal exponents for harmonic measure. Harmonic measure can be thought of as the charge distribution on the boundary of a conducting cluster. On fractal boundaries such a distribution is a multifractal, and in the case of critical clusters (whose boundaries are SLE curves) the full spectrum of multifractal exponents has been obtained analytically; see Refs. [14, 15, 16, 17, 18] for various derivations and discussion.

While our previous paper [1] focused on local properties of Lévy-SLE, here we study the global behavior of the growth in the upper half plane. The present paper is structured as follows. In Section 2 we define our model and briefly state our previous results on phase transitions in the local behavior of the model. We also present our new results on the global behavior of Lévy-SLE. In Section 3 we derive the Fokker-Planck equation governing the evolution of the probability distribution for the tip of the Lévy-SLE. The equation is our main tool for the analysis of the long-time global behavior of the growth. We give a qualitative description of the growth and explain the approximations that go into the solution of the Fokker-Planck equation in Section 4. The actual solution of the Fokker-Planck equation and comparison with results from numerically calculated trajectories is given in Section 5. We conclude in Section 6. Some technical details are presented in Appendices.

The model and the results, old and new

Loewner evolution is a family of conformal maps that appears as the solution of the Loewner differential equation (see, for example, Ref. [3] for details)

∂_t g_t(z) = 2/(g_t(z) − ξ(t)),  g_0(z) = z,   (1)

valid at any point z in the upper half plane until (and if) this point becomes singular at some (possibly infinite) time τ_z: g_{τ_z}(z) = ξ(τ_z). The set of all singularities is called the hull, and the point at which the hull grows is called the tip. The tip γ(t) is defined via its image ξ(t) = g_t(γ(t)). More formally,

γ(t) = lim_{w→ξ(t)} g_t^{−1}(w),   (2)

where the limit is taken in the upper half plane. The trace is the path left behind by the tip (the existence of the trace in the setting of this paper has been shown in Ref. [12]). The shape of the growing trace (and the hull) is completely determined by the driving function ξ(t). At any time the function g_t(z) conformally maps the exterior of the growing hull to the upper half plane, see Fig. 1. We refer to the z plane where the growth occurs as "the physical plane", and to the w plane as "the mathematical plane". Naturally, if ξ(t) is a stochastic process, the shape of the growing trace is also stochastic. The growth process is then a stochastic (Schramm-)Loewner evolution (SLE).

Figure 1: The Loewner evolution shown for the case when the growing hull is a smooth curve. The complement of the segment of the curve (up to its tip γ(t)) in the "physical" z plane is mapped to the entire upper half of the "mathematical" w plane by the function g_t(z).

The standard SLE has a driving function ξ(t) = √κ B(t), where B(t) is a normalized Brownian motion and κ > 0 is the diffusion constant. Many important properties of this process have been established in Ref. [19]. In Ref.
[1] we have generalized SLE to

ξ(t) = √κ B(t) + c^{1/α} L_α(t),   (3)

where L_α(t) is a normalized symmetric α-stable Lévy process [20, 21, 22, 23], and c > 0 is the "diffusion constant" associated with it. The process L_α(t) is composed of a succession of jumps of all sizes. Unlike a Brownian motion, L_α(t) is discontinuous on all time scales. Therefore, the addition of a Lévy process to the driving force of SLE introduces branching to the trace. The probability distribution function of c^{1/α} L_α(t) is given by the Fourier transform

P(x, t) = ∫_{−∞}^{∞} (dk/2π) e^{−ikx} e^{−ct|k|^α}.   (4)

As is known in the theory of stable distributions [23], only for 0 < α ≤ 2 does this Fourier transform give a non-negative probability density. For 0 < α < 2 the function P(x, t) decays at large distances as a power law:

x → ∞:  P(x, t) ∼ ct/|x|^{1+α},   (5)

so that the process scales as

⟨|L_α(t)|^δ⟩ ∝ t^{δ/α}   (6)

for any δ < α. For δ ≥ α this average is infinite. For α = 2 the process L_2(t) is the standard Brownian motion B(t) and P(x, t) is Gaussian.

We studied the short-distance properties of the Lévy-SLE process in Ref. [1]. At short times and distances the process is dominated by the Brownian motion and the deterministic drift term (see Eq. (13)), whereas at long times it is dominated by Lévy flights. The crossover between short and long time behavior happens at the time

t_0 ∼ (1/c²)^{1/(2−α)}.   (7)

This also defines a spatial crossover at length scales l_0 ∝ √t_0. For scales smaller than l_0 the trace behaves like standard SLE, while for scales much larger than l_0 it spreads in the x direction, forming tree-like structures. In our previous paper [1], using both analytic and numerical considerations, we determined the probability that a point on the x axis is swallowed by the trace. The trace shows a qualitative change in its small-distance, small-time behavior as κ and α each pass through critical values, respectively at four and one.
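The stable forcing in Eq. (3) is straightforward to sample. The sketch below is our own illustration (the function name and parameterization are not from the paper): it draws increments of c^{1/α} L_α(t) using the Chambers-Mallows-Stuck method, which generates a symmetric α-stable variable with characteristic function exp(−|k|^α), and then rescales by (c·dt)^{1/α} so the increment matches Eq. (4) with t = dt.

```python
import numpy as np

def stable_increment(alpha, c, dt, size, rng):
    """Sample increments of c^(1/alpha) * L_alpha(t) over a time step dt.

    Uses the Chambers-Mallows-Stuck construction for a symmetric
    alpha-stable variable with characteristic function exp(-|k|^alpha),
    then rescales by (c*dt)^(1/alpha), matching Eq. (4) with t = dt.
    """
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    if alpha == 1.0:
        x = np.tan(u)                              # reduces to standard Cauchy
    else:
        x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    return (c * dt) ** (1.0 / alpha) * x

rng = np.random.default_rng(0)
samples = stable_increment(1.0, 1.0, 1.0, 200_000, rng)  # standard Cauchy
```

Two limits give quick sanity checks: for α = 1 (with c = dt = 1) the sampler reduces to a Cauchy draw with quartiles at ±1, and for α = 2 it reduces to a Gaussian of variance 2, i.e. the characteristic function e^{−k²}.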
The transition at κ = 4 is quite analogous to the known transition of standard SLE [19]. For the new transition at α = 1, the trace forms isolated trees when α < 1 or a dense forest when α > 1. The latter phase transition at α = 1 was recently studied rigorously in Ref. [11], which expanded the implications of the phase transition to the whole plane in the limit t → ∞. For κ > 4 a point in the upper half plane is swallowed almost surely for α > 1, while it is swallowed with probability smaller than one for α < 1. For κ < 4 and 0 < α < 2 the swallowed points on the plane form a set of measure zero.

The large-scale implications of the α = 1 transition can be seen in Figure 2, which shows the shape of the trace at long times. For α < 1 the stochastic evolution produces isolated tree-like structures which are limited in height. For α > 1 the evolution produces an "underbrush" in which structures pile on one another and thereby continue to increase their height.

Figure 2: Examples of traces produced by Lévy-SLE at long times, up to t = 9000. For the first, second and last thirds of the time interval, the traces are correspondingly colored red, blue, and yellow. Top panel: α = 0.7. The trace looks like many isolated trees whose height saturates at long times. Bottom panel: α = 1.5. Now the tip of the growing trace keeps landing on the previously grown "bushes", so that the trace extends indefinitely in the vertical direction as time increases. Notice the difference in scales for the y axis between the two panels, as well as the much larger spread in the x direction in the top panel. The trace was produced using stable Lévy forcing and c = 10, time step τ = 10^{−3}. The trace was calculated only at times when the forcing makes a large jump dξ > √(200τ).

In the rest of the paper we establish the following. The growth at long times is characterized by two very different length scales X(t) and Y(t) (with X(t) ≫ Y(t)), which can be thought of as the typical size of the growing hull in the x and y directions. More specifically, we find that

X(t) ∼ t^{1/α},  0 < α < 2,   (8)

Y(t) ∼ A + B t^{1−1/α} for α ≠ 1,  Y(t) ∼ ln t for α = 1.   (9)

(The constants A and B depend upon α.) These scales enter the scaling form of the joint probability distribution ρ(x, y, t) for the real and imaginary parts of the tip γ(t) of the Lévy-SLE, for which we give explicit results in various limiting cases in Section 5, where we also compare analytical results with extensive numerical simulations.

Derivation of the Fokker-Planck equation

We are interested in characterizing the probability distribution for the point γ(t) at the tip of the trace in the ensemble provided by different realizations of the SLE stochastic process. Eq. (2) then implies that we should study the inverse map g_t^{−1}. However, this is rather difficult, since the map g_t^{−1} satisfies a partial differential equation instead of an ODE. There is a way out which is rather well known and has been successfully used before [13, 14, 19]. It happens that one needs to consider the backward time evolution:

∂_t f_t(w) = −2/(f_t(w) − ξ(t)),  f_0(w) = w.   (10)

The relation of the original Loewner evolution (1) and the backward one (10) in the stochastic setting is as follows. If ξ(t) is a symmetric (in time) process with independent identically distributed increments, which is the case for a Lévy process, then it is easy to show that for any fixed time t the solution f_t(w) of the backward equation (10) has the same distribution as g_t^{−1}(w − ξ(t)) + ξ(t), see Refs. [13, 19]. Using the symbol =^d for equality of distributions of random variables, we can write

f_t(w) =^d g_t^{−1}(w − ξ(t)) + ξ(t).   (11)

It is useful to introduce a shifted conformal map

h_t(z) = g_t(z) − ξ(t),   (12)

for which the Loewner equation acquires the Langevin-like form:

∂_t h_t(z) = 2/h_t(z) − ∂_t ξ(t),  h_0(z) = z,   (13)

assuming that ξ vanishes at t = 0.
The first term is a deterministic drift and the second a random noise. The tip γ(t) is now mapped to zero, and this can be taken as the definition of the tip. More formally,

γ(t) = lim_{w→0} h_t^{−1}(w),   (14)

where the limit is taken in the upper half plane. In terms of the shifted map the equality of distributions (11) can be written as

f_t(w) − ξ(t) =^d h_t^{−1}(w).   (15)

The left hand side z_t ≡ f_t(w) − ξ(t) of this equation satisfies the Langevin-like equation

∂_t z_t = −2/z_t − ∂_t ξ(t),  z_0 = w,   (16)

and in particular, if we set w = 0 in this equation, the resulting stochastic dynamics should be the same as that of the tip of the trace γ(t).

Before we convert the Langevin-like equation (16) to our main analytical tool, the corresponding Fokker-Planck equation, let us review again the correspondence between the forward and backward flows and illustrate it with figures. Equation (13) describes a flow in which w_t = h_t(z) follows a trajectory of a particle in the w plane, z being its initial position. Separating the real and imaginary parts of w_t = u_t + iv_t, we get a system of coupled equations

∂_t u_t = 2u_t/(u_t² + v_t²) − ∂_t ξ(t),  u_0 = x,
∂_t v_t = −2v_t/(u_t² + v_t²),  v_0 = y,   (17)

describing such a trajectory. As in many modern versions of dynamics, all initial conditions and hence all trajectories are considered at the same time, forming an ensemble. Two such trajectories are presented on the top panel in Fig. 3. For a generic initial point z the trajectory w_t goes to infinity in the horizontal u direction, while the vertical coordinate v_t monotonously decreases. However, if the initial point happens to be a point γ(T) on the SLE trace, the forward trajectory hits the origin in the mathematical plane exactly at time T.

Conversely, we can fix a point w in the mathematical plane and follow the motion of its image z_t under the map f_t(w) − ξ(t) in the physical plane, with the initial condition z_0 = w.
In components z_t = x_t + iy_t, the trajectories of this backward flow satisfy the system of equations

∂_t x_t = −2x_t/(x_t² + y_t²) − ∂_t ξ(t),  x_0 = u,
∂_t y_t = 2y_t/(x_t² + y_t²),  y_0 = v.   (18)

The two trajectories shown on the bottom panel in Fig. 3 precisely retrace the trajectories of the forward flow shown on the top panel. This has been achieved by driving the backward evolution (18) by the time reversed noise ξ(T − t) − ξ(T) =^d ξ(t) as compared to the forward evolution. In this case the final point z_T of the trajectory that started at the origin coincides with the tip of the trace γ(T) at that time, but the rest of the trajectory does not follow the SLE trace. If we drive the backward flow by an independent copy of ξ(t), then even the final point z_T will be different from γ(T), but in the statistical ensemble z_T and γ(T) will have the same distribution.

Now we can introduce the probability distribution function of the process z_t = x_t + iy_t in the physical plane, defined by:

ρ(x, y, t) = ⟨δ(x_t − x) δ(y_t − y)⟩.   (19)

From Eqs. (3, 18) it follows immediately that ρ(x, y, t) satisfies the following (generalized) Fokker-Planck equation:

∂_t ρ(x, y, t) = [ (κ/2) ∂_x² − c|∂_x|^α + ∂_x (2x/(x² + y²)) − ∂_y (2y/(x² + y²)) ] ρ(x, y, t).   (20)

Here |∂_x|^α (sometimes also written as (−∆)^{α/2}) is the Riesz fractional derivative, which is a singular integral operator whose action is easiest to describe in Fourier space: if f̂(k) is the Fourier transform of a function f(x), then the Fourier transform of |∂_x|^α f(x) is |k|^α f̂(k).

As we have discussed, at long times the growth is dominated by the stable process in the driving function (3), and we can set κ = 0. So our main analytical tool is the following Fokker-Planck equation:

∂_t ρ(x, y, t) = [ −c|∂_x|^α + ∂_x (2x/(x² + y²)) − ∂_y (2y/(x² + y²)) ] ρ(x, y, t).   (21)

Let us discuss the boundary and initial conditions for this equation.
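As a concrete illustration of Eq. (18), a forward Euler step with one α-stable increment per step is sketched below. This is our own schematic, not the scheme of Appendix A.1. Switching the noise off isolates the deterministic drift, for which the flow starting on the imaginary axis is exactly solvable: x_t = 0 and y_t = √(y_0² + 4t).

```python
import numpy as np

def backward_flow(T, tau, alpha, c, y0, rng=None):
    """Euler scheme for Eq. (18): the backward Langevin flow whose endpoint
    has the same distribution as the tip gamma(T).  Passing rng=None turns
    the Levy forcing off, which isolates the deterministic drift."""
    x, y = 0.0, y0
    for _ in range(int(round(T / tau))):
        dxi = 0.0
        if rng is not None:
            # one symmetric alpha-stable increment, scale (c*tau)^(1/alpha)
            u = rng.uniform(-np.pi / 2, np.pi / 2)
            w = rng.exponential(1.0)
            s = np.tan(u) if alpha == 1.0 else (
                np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
                * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
            dxi = (c * tau) ** (1.0 / alpha) * s
        r2 = x * x + y * y                 # both updates use step-start values
        x += -2.0 * x / r2 * tau - dxi     # drift towards x = 0, minus noise
        y += 2.0 * y / r2 * tau           # y grows monotonically
    return x, y
```

With the forcing on, the statistics of the x endpoint approach those of the stable law itself at long times, in line with Eq. (25), since the drift becomes negligible at |x| ∼ X(t).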
The initial condition for the Fokker-Planck equation (21) depends on the initial conditions x_0 = u, y_0 = v in the stochastic equations (18). For the distribution of the SLE tip γ(t) the appropriate initial conditions are x_0 = 0, y_0 = ε, where ε is an infinitesimal positive number. For the exact Fokker-Planck equation (20) this translates into the initial condition

ρ(x, y, 0) = δ(x) δ(y − ε).   (22)

However, for the approximate equation (21) the situation is more subtle. The crossover time t_0 = O(1). For t < t_0 the drift in the x direction (towards x = 0) dominates over the Lévy term. For t > t_0 the opposite is true. A simple picture is then that before t_0 the initial δ function is advected by the drift velocity 2/y in the y direction. By the time t_0 it becomes

ρ_0(x, y) ≡ ρ(x, y, t_0) = δ(x) δ(y − y_0),  y_0 = 2t_0^{1/2} ∼ c^{−1/(2−α)}.   (23)

This is the initial value that we shall assume for our problem. In the following sections we will mostly use the notation ρ_0(x, y), using the explicit expression when necessary.

Let us comment that if we tried to be more careful and included the effects of the Brownian forcing before the crossover time t_0, then the distribution at time t_0 would not only be advected to y_0 but would also broaden to a Gaussian with variance κt_0. This refinement would not change any arguments in the later sections, since all we need there is that the Fourier transform in x of the initial distribution is broader than e^{−ct|k|^α} for long times, see the discussion preceding Eq. (36). This is a good approximation for both the initial distribution (23) and its Gaussian variant for sufficiently long times, and becomes better and better as time increases.
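The crossover estimates in Eqs. (7) and (23) are easy to reproduce numerically. The snippet below is our own illustration (with κ set to 1): it finds t_0 by solving t^{1/2} = (ct)^{1/α}, the time at which the Brownian and Lévy spreads match, and checks the result against the closed form t_0 = (1/c²)^{1/(2−α)}; the corresponding advected height is y_0 = 2 t_0^{1/2}.

```python
import math

def crossover_time(alpha, c):
    """Closed form of Eq. (7) (taking kappa = 1): the time at which the
    Brownian spread t^(1/2) matches the Levy spread (c t)^(1/alpha)."""
    return (1.0 / c**2) ** (1.0 / (2.0 - alpha))

def crossover_time_bisect(alpha, c, lo=1e-12, hi=1e12, iters=200):
    """Solve t^(1/2) = (c t)^(1/alpha) for t by bisection on log t."""
    f = lambda lt: 0.5 * lt - (math.log(c) + lt) / alpha  # log of the ratio
    a, b = math.log(lo), math.log(hi)   # f(a) > 0 > f(b) for 0 < alpha < 2
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return math.exp(0.5 * (a + b))
```

For example, c = 10 and α = 0.7 give t_0 = (1/100)^{1/1.3} ≈ 0.029, well below one, so in that run the Lévy jumps take over almost immediately.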
As for the boundary conditions at y = 0, we have no need to be very explicit about them, since ρ(x, y, 0) vanishes for y < y_0, and our equations of motion (18) represent a situation in which y_t continually increases as t increases, so that ρ(x, y, t) will also vanish for y < y_0 at all times t > 0.

Qualitative description, distance scales

In this section we analyze in qualitative terms the long-time limit of the evolution of the tip γ(t), by looking at the consequences of equations (18). According to the discussion in the previous section, Re γ(t) and Im γ(t) have the same joint distribution as x_t and y_t. For small times, up to the crossover time t_0, the drift term in the Langevin equation dominates over the Lévy noise. Therefore, both x_t and y_t, and ξ(t) as well, grow as √t. For larger times, t ≫ t_0, there are two different characteristic length scales, X(t) and Y(t). In this regime the forcing ξ(t) is dominated by the Lévy process L_α(t). The probability for a total motion X(t) over a time t for this process is described by Eq. (4). Typically the motion is dominated by a single long jump, and the jump has an order of magnitude

X(t) ∼ (ct)^{1/α}.   (24)

(This can be understood as the rescaled fractional moment ⟨|L_α(t)|^δ⟩^{1/δ}, see Eq. (6).) Since the typical jumps of ξ(t) become arbitrarily large at long times, x_t also becomes large, and therefore the drift term in the first equation of (18) becomes negligible. In this limit, x_t behaves like the driving force, and we find

t → ∞:  |x_t| ∼ X(t) ∼ (ct)^{1/α}.   (25)

The Loewner evolution with Lévy flights produces, in general, a forest of (sparse or dense) branching trees, growing from the real axis. The above relation then tells us how the forest spreads along the real axis with time. This distance is marked out on the plots of trees shown in Figure 2. Numerical implementation of the Langevin equation (16), details of which are presented in Appendix A.1, confirms these qualitative arguments.

Figure 4: Results from the Langevin equation (16) for Lévy distributed forcing; c = 1, time step τ = 10^{−4}; 10000 runs for α = 0.5, 1.0, 1.3, 3000 runs for α = 0.7, 0.9, 1.1, 1.5. The black line is a guide to the eye with the desired slope. The irregular points observed in some of the curves are due to large jumps in the forcing of individual runs. Such behavior is expected due to the power law distribution of the jumps in Lévy processes.

Next we turn to a typical distance Y(t) in the y coordinate. Figure 2 clearly shows that this characteristic distance is much smaller than X(t). We understand this as follows. If x_t were zero, the second equation in (18) would give y_t ∼ t^{1/2}. Clearly, any non-zero x_t only slows down the growth of y_t. We then conclude that y_t, and therefore the height of the trees produced by the SLE process, cannot grow with time faster than t^{1/2}. Since α < 2, it means that Im γ(t) always grows slower than Re γ(t), and they become widely separated at long times. Our major result is that the growing trees spread faster horizontally than they grow vertically. Hence, we have

Y(t) ≪ X(t).   (26)

An estimate of the scaling of Y(t) can be obtained from the second equation in (18), where we replace x_t by the Lévy process and average over it using the probability distribution (4). This gives a typical behavior of y_t:

∂_t y_t ≈ ∫_{−∞}^{∞} dx [2y_t/(y_t² + x²)] P(x, t) = ∫_{−∞}^{∞} dk e^{−y_t|k| − c|k|^α t}.   (27)
In particular, it takes care of any effects from the early-time region, where we surely do not have the calculation under control. The phase transition at α = 1 is manifested by a qualitative difference between α > 1 and α < 1. From (29) we can see that while for α 1 the average height of the trees grows to infinity as t 1−1/α , while for α < 1 it saturates at a finite value y ∞ . Figure 5 provides an illustration of the phase transition at α = 1 separating different behaviors. More detailed comparison between our analytical predictions and numerical simulations is provided in the next Section. Solving the FPE In order to quantify these predictions we need to return our attention to the Fokker-Plank equation (21). If we perform the Fourier transform in x and integrate in time we can find a compact form of this equation, which reads ρ(k, y, t) = e −c|k| α t ρ 0 (k, y) − ∂ y dk ′ t 0 dt ′ e −y|k ′ |−c|k| α (t−t ′ ) ρ(k − k ′ , y, t ′ ) + k dk ′ t 0 dt ′ sgn(k ′ )e −y|k ′ |−c|k| α (t−t ′ ) ρ(k − k ′ , y, t ′ ),(30) where ρ 0 (k, y) is the Fourier transform of the initial distribution (23). At long times ρ(x, y, t) is spread over the scale X(t) as a function of x. Its Fourier transform ρ(k, y, t), as a function of k, is significantly non-zero on the scale X(t) −1 . At the same time, due to the exponential factors e −y|k ′ | , the relevant values of k ′ in the integrals in Eq. (30) are of the order y −1 Y (t) −1 . The scale Y (t) −1 is much larger than the range X(t) −1 where ρ is non-zero, hence, when integrating over k ′ we can use the approximation ρ(k − k ′ , y, t) ≈ δ(k − k ′ ) dk ′′ ρ(k ′′ , y, t) = 2πρ(0, y, t)δ(k − k ′ ).(31) The Fokker-Planck equation then reads: ρ(k, y, t) = e −c|k| α t ρ 0 (k, y) − 2π t 0 dt ′ e −y|k|−c|k| α (t−t ′ ) ∂ y ρ(0, y, t ′ ) + 4π|k| t 0 dt ′ e −y|k|−c|k| α (t−t ′ ) ρ(0, y, t ′ ).(32) This is the main approximation that we will use in order to study the behavior of the Lévy-SLE process at large times. 
Notice here that the distribution function ρ, for every x and t, depends only on the initial condition ρ_0 and on the history of the distribution at x = 0 at earlier times t′ < t. Therefore, in order to study the probability density function described by the Fokker-Planck equation, we first need to calculate the behavior of this distribution for small x, that is ρ(0,y,t). Then, by substituting in Eq. (32), we can in principle estimate the full distribution. However, in this paper we are mostly interested in the way this process grows in the y direction. Hence, we will first find ρ(0,y,t), which characterizes the growth near x = 0, and then obtain the distribution

$$p(y,t) \equiv \int_{-\infty}^{\infty} dx\, \rho(x,y,t) = \rho(k=0,y,t) \qquad (33)$$

of y's integrated over all x by setting k = 0 in (32):

$$p(y,t) = p_0(y) - 2\pi \int_0^t dt'\, \partial_y \rho(0,y,t'). \qquad (34)$$

This equation immediately leads to the average ⟨y⟩, which is understood as the average over all x:

$$\langle y \rangle = y_0 + 2\pi \int_0^t dt' \int_0^{\infty} dy\, \rho(0,y,t'). \qquad (35)$$

Therefore, the distribution and its mean in Eqs. (34) and (35) depend only on the behavior at x = 0 at times t′ < t. This is a direct implication of Eq. (32) and our main approximation (31).

Let us emphasize again that our approximation works in the long time limit. We will assume that we can use the approximate expressions in time integrals for all t > t_0. Thus, we will treat all time integrals over [t_0, t] as integrals over [0, t] plus a correction. The corrections come from short times, and we cannot extract them from our analysis. They will all be hidden in the terms that depend on the lower limit t_0 of the time integrals. In several cases the lower cut-off at t_0 is necessary to avoid spurious divergencies.

Let us now consider ρ(0,y,t). A closed equation for this quantity results from integrating Eq. (32) over k. To do this we observe that in the first term (the initial value at t = 0), for the relevant values of k, the function ρ_0(k,y) is much broader in k than e^{−c|k|^α t} at long times.
Hence, in the integral over k we can replace ρ_0(k,y) by its value at k = 0. It then follows that

$$\rho(0,y,t) = \frac{\rho_0(0,y)}{2\pi X(t)} - \int_0^t dt'\, \frac{\partial_y \rho(0,y,t')}{X(t-t',y)} - 2\int_0^t dt'\, \rho(0,y,t')\, \partial_y \frac{1}{X(t-t',y)}, \qquad (36)$$

where the scale X(t,y) is defined by

$$\frac{1}{X(t,y)} = \int_{-\infty}^{\infty} dk\, e^{-c|k|^{\alpha} t - y|k|}, \qquad (37)$$

and

$$X(t) = X(t,y=0) = \frac{c^{1/\alpha}}{2\,\Gamma(1+1/\alpha)}\, t^{1/\alpha}. \qquad (38)$$

Equation (36) is easily solved after performing the Laplace transformation in time t. We obtain an ordinary differential equation

$$\partial_y \rho(0,y,\lambda) + \frac{1 + 2\,\partial_y K(\lambda,y)}{K(\lambda,y)}\, \rho(0,y,\lambda) = \frac{K(\lambda)}{2\pi K(\lambda,y)}\, \rho_0(0,y), \qquad (40)$$

where

$$K(\lambda,y) = \int_0^{\infty} dt\, \frac{e^{-\lambda t}}{X(t,y)} = \int_{-\infty}^{\infty} dk\, \frac{e^{-y|k|}}{\lambda + c|k|^{\alpha}}, \qquad (41)$$

and K(λ) = K(λ, 0). Using the initial condition ρ_0(0,y) = δ(y − y_0), the straightforward solution of Eq. (40) is

$$\rho(0,y,\lambda) = \frac{K(\lambda)}{2\pi}\, \frac{K(\lambda,y_0)}{K^2(\lambda,y)}\, \exp\left(-\int_{y_0}^{y} \frac{dy'}{K(\lambda,y')}\right). \qquad (42)$$

The inverse Laplace transform of this solution gives ρ(0,y,t). Notice that (42) is valid only for y > y_0. Since our approximations only work at long times, we expect our solution to give good results for y ≫ y_0. The approximations will usually result in the necessity to introduce a fitting parameter (called a "correction" in the discussion after Eq. (35)) into the time evolution of averages for the process. Moreover, there is an upper cut-off that stems from the Langevin equation and the fact that y cannot grow faster than t^{1/2} (see the previous section). Since we used this fact while making the approximations that lead to Eq. (32), the range of validity of our solution is y_0 ≪ y ≪ t^{1/2}.

In the following we will analyze the properties of the distributions ρ(0,y,t) and p(y,t) in three separate cases, α > 1, α = 1 and α < 1. For each case we will repeat the following steps: first we calculate ρ(0,y,t) from Eq. (42); then, by substituting this solution into Eq. (34), we calculate the average height ⟨y⟩ and the distribution p(y,t). In these calculations we need approximate expressions for the function K(λ,y). These expressions are derived in Appendix A.2.
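The definition (41) is easy to probe numerically. The sketch below (our own quadrature, with grid choices and tolerances that are ours) evaluates K(λ, y) on a log-spaced grid and compares it with the small-λ, two-term form K ≈ Aλ^{−1+1/α} + Cy^{α−1} derived later in Appendix A.2 (Eqs. (85)-(87)), with A as in Eq. (43).

```python
import math
import numpy as np

alpha, c, lam, y = 1.5, 1.0, 1e-6, 1.0

# Eq. (41): K(lambda, y) = int_{-inf}^{inf} dk e^{-y|k|} / (lambda + c|k|^alpha).
# The integrand is flat (~1/lambda) below k ~ (lambda/c)^{1/alpha}, so a
# log-spaced grid is used to resolve that region.
k = np.logspace(-12.0, math.log10(50.0), 400_001)
f = 2.0 * np.exp(-y * k) / (lam + c * k ** alpha)
K_num = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(k)))   # trapezoid rule

# Two-term small-lambda approximation, Eq. (85):
# A from Eq. (43), C = (2/c) Gamma(1 - alpha) (negative for 1 < alpha < 2).
A = 2.0 * math.pi / (alpha * c ** (1.0 / alpha) * math.sin(math.pi / alpha))
C = (2.0 / c) * math.gamma(1.0 - alpha)
K_approx = A * lam ** (-1.0 + 1.0 / alpha) + C * y ** (alpha - 1.0)
```

For these parameters the quadrature and the two-term form agree to a fraction of a percent, as expected since λy^α/c ≪ 1.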
Results for α > 1

In this case we can use the approximation (86) from Appendix A.2 for K(λ) and K(λ,y). Eq. (42) then gives

$$\rho(0,y,\lambda) \approx \frac{1}{2\pi}\, \exp\left(-\frac{\lambda^{1-1/\alpha}\, y}{A}\right), \qquad A = \frac{2\pi}{\alpha\, c^{1/\alpha} \sin(\pi/\alpha)}. \qquad (43)$$

To calculate the time dependence of the distribution we take the inverse Laplace transform:

$$\rho(0,y,t) \approx \frac{1}{2\pi} \int_{a-i\infty}^{a+i\infty} \frac{d\lambda}{2\pi i}\, e^{\lambda t - \lambda^{1-1/\alpha} y/A}. \qquad (44)$$

As usual, the integration contour in the last equation goes along a vertical line Re λ = a, where a should be greater than the real part of any singularity of the integrand. Changing the integration variable to λt we obtain an answer which, apart from an overall prefactor 1/t, has acquired the form of a scaling function:

$$\rho(0,y,t) \approx \frac{1}{2\pi t}\, F(\hat y), \qquad \hat y \equiv \frac{y}{Y(t)}, \qquad Y(t) = \frac{2\pi}{\alpha\, c^{1/\alpha} \sin(\pi/\alpha)}\, t^{1-1/\alpha}, \qquad (45)$$

$$F(\hat y) = \int \frac{d\lambda}{2\pi i}\, e^{\lambda - \lambda^{1-1/\alpha} \hat y}. \qquad (46)$$

Since the scaling function F(ŷ) depends only on the combination y t^{−1+1/α}, its derivatives with respect to y and t are related:

$$\partial_y F(\hat y) = -\frac{\alpha}{\alpha-1}\, \frac{t}{y}\, \partial_t F(\hat y). \qquad (47)$$

The integrand in Eq. (46) contains a branch cut, which we choose to run along the negative real axis. The integration contour can be deformed to go from −∞ to 0 along the lower side of the cut, and then from 0 to −∞ along the upper side, which leads to the final expression (48) for the scaling function F(ŷ).

The overall prefactor 1/t in ρ(0,y,t) can be understood as follows. The distribution ρ(x,y,t) at long times spreads in the x direction up to the scale X(t), and in the y direction up to the scale Y(t). The total area "covered" by the distribution scales with time as X(t)Y(t) ∝ t. Therefore, at the particular value x = 0 the density ρ(0,y,t) decays with time as 1/t.
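Two of the explicit results that follow can be checked numerically: the real-integral form (48) of the scaling function against its small-ŷ asymptote (52), and the x = 0 to global height ratio of Eq. (56) against the limits quoted for it. The quadrature grid, tolerances and function names below are our own illustrative choices.

```python
import math
import numpy as np

alpha = 1.5

# --- Eq. (48) versus the small-yhat asymptote, Eq. (52) ---
yhat = 0.01
cpa, spa = math.cos(math.pi / alpha), math.sin(math.pi / alpha)
lam = np.logspace(-12.0, math.log10(60.0), 400_001)
u = lam ** (1.0 - 1.0 / alpha) * yhat
f = np.exp(-lam - abs(cpa) * u) * np.sin(spa * u)
F_num = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(lam))) / math.pi
F_asym = (alpha - 1.0) / (alpha * math.gamma(1.0 / alpha)) * yhat   # Eq. (52)

# --- Eq. (56): ratio of the x = 0 average height to the global one ---
def height_ratio(a):
    return (abs(math.sin(2.0 * math.pi / a)) / math.pi
            * math.gamma(1.0 - 1.0 / a)
            * math.gamma(2.0 / a - 1.0)
            * math.gamma(2.0 - 1.0 / a))

r_low, r_high, r_13 = height_ratio(1.001), height_ratio(1.999), height_ratio(1.3)
```

The computed ratio tends to 2 as α → 1+ and to π/2 as α → 2−, the two limits quoted after Eq. (56), and r_13 ≈ 1.9 matches the value cited for α = 1.3 in the caption of Fig. 9, a useful consistency check on the expression.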
Carrying out the contour deformation described above explicitly gives

$$F(\hat y) = \frac{1}{\pi} \int_0^{\infty} d\lambda\, e^{-\lambda - |\cos(\pi/\alpha)|\, \lambda^{1-1/\alpha} \hat y}\, \sin\!\big(\sin(\pi/\alpha)\, \lambda^{1-1/\alpha} \hat y\big). \qquad (48)$$

However, if we are looking at the distribution of the y coordinate at x = 0, and at its moments ⟨y^n⟩, we should divide ρ(0,y,t) by the normalization

$$\int_0^{\infty} dy\, \rho(0,y,t) = \frac{Y(t)}{2\pi t\, \Gamma(1-1/\alpha)}. \qquad (49)$$

The normalized distribution is then

$$\rho_n(0,y,t) \approx \frac{\Gamma(1-1/\alpha)}{Y(t)}\, F(\hat y). \qquad (50)$$

Moreover, the integrated distribution p(y,t) exhibits the same scaling as ρ(0,y,t). Indeed, using the relation (47) in Eq. (34) we obtain

$$p(y,t) = p_0(y) + \frac{1}{Y(t)}\, \frac{\alpha}{\alpha-1}\, \frac{F(\hat y)}{\hat y}. \qquad (51)$$

This result is valid for y_0 ≪ y ≪ t^{1/2}.

Fig. 6 shows the scaling collapse of the numerically calculated distributions p(y,t) for α = 1.3 and three different times. We see that, indeed, p(y,t) is a scaling function of y/Y(t), in agreement with our predictions.

We can calculate the asymptotics of the function F(ŷ). For small values of ŷ we can neglect the term with ŷ in the exponential in Eq. (48), as well as replace the sine function under the integral by its (small) argument:

$$F(\hat y \ll 1) \approx \frac{\alpha-1}{\alpha}\, \frac{\hat y}{\Gamma(1/\alpha)}. \qquad (52)$$

For large ŷ we need to use the steepest descent method for the contour integral in Eq. (46), which results in Eq. (53). We have to remember that we can only trust this result for y_0 ≪ y ≪ t^{1/2}.

Figure 7 shows a comparison between the numerical data and the theoretical prediction of Eq. (51) for the distribution p(y,t). In that comparison Y(t) = 50, t^{1/2} ≈ 300 and t_0^{1/2} ≈ 1, so the region of validity of Eq. (51) in the scaled variable is 0.02 ≪ y/Y(t) ≪ 6. This explains the disagreement between the theory and the numerics for y/Y(t) > 2. Also, for small values of y/Y(t), where we should not trust Eq. (51), the theory still gives a significant weight to the distribution p(y,t); this is, presumably, the reason for the discrepancy between the numerics and the theory in the range y/Y(t) < 2. While the overall dependence on y is similar between the two, we would obtain a better fit for y_0 ≪ y ≪ t^{1/2} if we redistributed the weight outside this region to the range where Eq. (51) is valid.
Explicitly, the steepest-descent evaluation of the integral (46) gives

$$F(\hat y \gg 1) \approx \left(\frac{\alpha}{2\pi}\right)^{1/2} \left(\frac{\alpha-1}{\alpha}\,\hat y\right)^{\alpha/2} \exp\!\left[-\frac{1}{\alpha-1}\left(\frac{\alpha-1}{\alpha}\,\hat y\right)^{\alpha}\right]. \qquad (53)$$

Next, we calculate the time evolution of the average height of the growing trees, ⟨y⟩, from Eqs. (35) and (49):

$$\langle y \rangle = y_0 + \frac{1}{\Gamma(1-1/\alpha)} \int_0^t dt'\, \frac{Y(t')}{t'} = y_0 + \frac{2\,\Gamma(1+1/\alpha)}{c^{1/\alpha}\,(1-1/\alpha)}\, t^{1-1/\alpha}. \qquad (54)$$

Here, all short-time contributions are included in y_0. This nicely fits the numerics, see Fig. 8, and reproduces the result (29) of the simple argument using the Langevin equation.

We also want to compare the distribution at x = 0 to the distribution averaged over all x. We calculate the average value y^0 (the superscript indicates that this average is calculated at x = 0) from the distribution (50):

$$y^0 = \frac{4\,|\cos(\pi/\alpha)|}{\alpha\, c^{1/\alpha}}\, \Gamma(1-1/\alpha)\, \Gamma(2/\alpha-1)\, t^{1-1/\alpha}. \qquad (55)$$

The ratio of the two averages (neglecting y_0) is

$$\frac{y^0}{\langle y \rangle} = \frac{1}{\pi}\, \left|\sin\frac{2\pi}{\alpha}\right|\, \Gamma\!\left(1-\frac{1}{\alpha}\right) \Gamma\!\left(\frac{2}{\alpha}-1\right) \Gamma\!\left(2-\frac{1}{\alpha}\right). \qquad (56)$$

This tends to 2 as α → 1 from above, and to π/2 as α → 2 from below. We observe similar behavior in our numerical results, where the average of y at x = 0 is higher than the overall average (Fig. 9). However, the ratio (56) is not matched exactly. Presumably, this is because we do not have enough data close to x = 0, and we cannot reach long enough times for the various constants (like y_0) to become negligible, so that Eq. (56) would be accurate.

Results for α = 1

Now we use the approximations (79, 84) from Appendix A.2 in Eq. (42). The resulting expression for ρ(0,y,λ) is difficult to analyze without further approximations. We will evaluate it, as well as its inverse Laplace transform, with logarithmic accuracy, which amounts to three assumptions. First, we assume that all the logarithms that appear are large compared to constants of order one, such as π, c, etc., which will be neglected. Secondly, the logarithms are assumed to be small compared to power laws at large arguments: ln t ≪ t.
Finally, the logarithms are slow functions compared to power laws and exponentials, and in integrals they can be replaced by their values at the typical scale of variation of the fastest function under the integral. All subsequent equations in this section are obtained with logarithmic accuracy using these assumptions.

The time dependence now follows from the inverse Laplace transform, using the same contour integral described in the previous section, and leads to a normalized distribution at x = 0. The mean value of the height of the trees near x = 0 follows from ρ_n(0,y,t) using the same arguments as before and grows logarithmically,

$$y^0 \approx \frac{4}{c}\, \ln t, \qquad (61)$$

with logarithmic accuracy. The average height (over all x) is also found easily from Eq. (35); evaluating the time integral with logarithmic accuracy gives

$$\langle y \rangle \approx \frac{2}{c}\, \ln t + \mathrm{const}. \qquad (62)$$

As shown in Fig. 10, this is in good agreement with the numerics. The ratio of the two averages in the long-time limit is y^0/⟨y⟩ = 2, consistent with the α → 1 limit of Eq. (56). The asymptotics of the distribution ρ_n(0,y,t) for small and large values of y/ln(ct) can be found similarly to the case α > 1, and the asymptotics of p(y,t) − p_0(y) follow in the same way; the large-y expressions hold in the range ln ct ≪ y ≪ t^{1/2}.

Results for α < 1

In this case we use the approximation (87) from Appendix A.2, leading to

$$\rho(0,y,\lambda) \approx N\, K(\lambda)\, y^{2-2\alpha}\, \exp\!\left(-\frac{y^{2-\alpha}}{C(2-\alpha)}\right), \qquad (66)$$

$$C = \frac{2}{c}\,\Gamma(1-\alpha), \qquad N = \frac{y_0^{\alpha-1}}{2\pi C}. \qquad (67)$$

The inverse Laplace transform of this expression gives the leading approximation

$$\rho(0,y,t) \approx \frac{N}{X(t)}\, y^{2-2\alpha}\, \exp\!\left(-\frac{(1-\alpha)\,c}{2\,\Gamma(3-\alpha)}\, y^{2-\alpha}\right). \qquad (68)$$

Figure 11: Distribution fits for α < 1. We compare the numerically calculated distribution p(y,t) to the theoretical curve for ρ_∞ given by Eq. (69), with one free parameter for the normalization. We claim that p(y,t) = ρ_∞(0,y) for y_0 ≪ y ≪ t^{1/2}, where our solution is valid.

The obtained result depends on time only through the overall factor X^{-1}(t). We can understand this as follows. The distribution ρ(x,y,t) at
long times spreads in the x direction up to the scale X(t), but becomes stationary in the y direction. Therefore, at the particular value x = 0 the density ρ(0,y,t) decays with time as X^{-1}(t). However, if we are looking at the distribution of the y coordinate at x = 0, we should normalize Eq. (68), which gives the truly stationary distribution (normalized by the appropriate choice of N_1)

$$\rho_\infty(0,y) \approx N_1\, y^{2-2\alpha}\, \exp\!\left(-\frac{(1-\alpha)\,c}{2\,\Gamma(3-\alpha)}\, y^{2-\alpha}\right), \qquad (69)$$

in agreement with numerics; see Fig. 11, where we actually observe that the integrated distribution p(y,t) coincides with ρ_∞(0,y) at long times. The stationary distribution (69) allows us to calculate the average saturated height of the trees:

$$y_\infty = \int_0^{\infty} dy\, y\, \rho_\infty(0,y) = \left(\frac{2\,\Gamma(3-\alpha)}{(1-\alpha)\,c}\right)^{1/(2-\alpha)} \Gamma^{-1}\!\left(\frac{3-2\alpha}{2-\alpha}\right). \qquad (70)$$

This is in very good agreement with the numerically calculated values shown in Fig. 12.

Let us now discuss the integrated distribution p(y,t) and its mean ⟨y⟩. Unfortunately, in the present case (α < 1), Eqs. (34) and (35) do not give reliable results, simply because the apparent distribution and saturation height are very sensitive to the lower limit t_0, and the results are of the same order as the initial conditions at t_0. Analytically, we can see that the distribution p(y,t) becomes stationary as t → ∞, even though we cannot determine p(y,∞). The time independence of the distribution p(y,t) at long times is checked numerically in Fig. 13. The numerics presented in Fig. 11 indicate that p(y,t) = ρ_∞(0,y) (see Eq. (69)) in the appropriate range y_0 ≪ y ≪ t^{1/2}, and we will discuss why this is true below. We can also see that the average tree height approaches its limiting value according to the power law

$$\langle y \rangle = \langle y \rangle_\infty - D\, c^{-1/\alpha}\, t^{1-1/\alpha}. \qquad (71)$$

We have previously calculated D in Eq. (29) using the Langevin formulation of the process. This result agrees well with the numerics, as demonstrated in Fig. 14.
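The closed form (70) can be cross-checked by taking the first moment of (69) numerically. The quadrature sketch below is ours; c and α are chosen to match the numerics of Figs. 13-14.

```python
import math
import numpy as np

alpha, c = 0.7, 1.0

# Decay constant of Eq. (69): rho_inf ~ y^{2-2a} exp(-(1-a) c y^{2-a} / (2 Gamma(3-a)))
b = (1.0 - alpha) * c / (2.0 * math.gamma(3.0 - alpha))

# Closed form, Eq. (70)
y_inf = ((2.0 * math.gamma(3.0 - alpha) / ((1.0 - alpha) * c)) ** (1.0 / (2.0 - alpha))
         / math.gamma((3.0 - 2.0 * alpha) / (2.0 - alpha)))

# First moment of rho_inf by trapezoid quadrature
y = np.linspace(0.0, 100.0, 400_001)
w = y ** (2.0 - 2.0 * alpha) * np.exp(-b * y ** (2.0 - alpha))
trap = lambda f: float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(y)))
y_inf_num = trap(y * w) / trap(w)
```

For α = 0.7 and c = 1 both routes give y_∞ ≈ 5.32, the saturation scale visible in Fig. 14.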
We can argue that ⟨y⟩_∞ = y_∞ and that p(y,∞) = ρ_∞ if we return to the initial description of the process, seen as SLE trees growing forward in time [1]. For α < 1 the jumps of the Lévy process are large, and we know that any new tree is most likely to grow starting from the real axis. The trees are sparse, and a new tree will grow isolated from its neighbors; hence it will be identical to any other tree, including the trees that grow close to the origin at x = 0. Therefore, we expect the distribution of y's at x = 0 to be identical to the distribution at any other x. Numerics also support this argument. In Fig. 15 we observe that the average height of the trees is practically independent of the value of x. Also, in Fig. 11 we compare p(y,∞) and ρ_∞, while in Fig. 12 we show that ⟨y⟩_∞ = y_∞.

Conclusions

In this paper we have analyzed the global properties of growth in the complex plane described by a generalized stochastic Loewner evolution driven by a symmetric stable Lévy process L_α(t), introduced in our previous paper [1]. The phase transition at α = 1, whose implications for the local properties of growth were the subject of Ref. [1], also manifests itself on the whole plane, resulting in a rich scaling behavior. We have used a Fokker-Planck equation to study the joint distribution ρ(x,y,t) of the real and imaginary parts of the tip of the growing trace. The presence of Lévy flights in the driving force imposes very different dynamics in the x and y directions. While in the x direction the process spreads similarly to the Lévy forcing, x ∼ X(t) ∼ t^{1/α}, the SLE dictates ⟨y⟩ ≪ X(t) for all values of α. This separation of the horizontal and vertical scales in the process allows us to make sensible approximations and explore the geometric properties of the stochastic growth in all phases, α < 1, α = 1, and α > 1, both qualitatively and quantitatively. For α < 1, the vertical growth saturates at a finite height y_∞.
In terms of the picture presented in [1], long jumps occur often, so that new trees grow isolated and there is only a small chance that the trace grows on an already existing tree. For α > 1, the average height of the process grows as a power law t^{1−1/α} with time. New trees grow close to old ones, so that when the process returns to a previously visited part of the real axis it has to grow on top of already existing trees. Eventually the trace will grow past any point on the plane. At the boundary between the two phases, α = 1, the height of the process grows logarithmically with time.

Figure 3: Top panel: two trajectories in the forward flow. Bottom panel: the corresponding trajectories in the backward flow. Curved arrows indicate the flow of time. In both cases the flow trajectories are shown in grey, while the black line represents an SLE trace. The trajectories on the bottom panel have been produced with the same noise realization as used for the forward evolution, reversed in time as described in the main text. The grey trajectories in the top and bottom panels are thus identical.

Figure 4: Growth of the Lévy-SLE parallel to the boundary. Here we plot ⟨|x|^{3α/4}⟩ for various values of α (Brownian motion is set to zero, κ = 0). The average follows the predicted behavior t^{3/4}. Data collected by averaging realizations of equation (16).

Figure 4 compares the estimate of Eq. (25) with numerical calculations of the trace via simulations of Eq. (16). The agreement is excellent.

Figure 5: Growth of the Lévy-SLE perpendicular to the boundary. We plot ⟨y⟩ for various values of α; same details as in the previous figure. Initially, the trace grows as √t for all values of α. This behavior changes around the characteristic time t_0 ∼ 1. The height of the trace saturates for α < 1, while it grows indefinitely for α > 1. This change of behavior demonstrates the global implications of the phase transition at α = 1.

Figure 6: The distribution of heights scales as y/Y(t), where Y(t) is given by Eq. (45), for α = 1.3. The distribution is shown at three different times (black, red and green curves), all within the limiting region of large times where the asymptotic behavior ⟨y⟩ ∝ t^{1−1/α} holds.

Figure 7: The distribution p(y,t) as a function of y/Y(t). The theoretical prediction of Eq. (51) (solid curve) and the numerical distribution (black dots) are different; however, they have a similar dependence on y for the values where we believe the solution is valid, y_0 ≪ y ≪ t^{1/2}.

Figure 8: The average height ⟨y⟩ for SLE driven by Lévy flights with α = 1.3 grows as a power law t^{1−1/α}. The red dashed line is a fit to Eq. (54) for t > 1, where we only vary the parameter y_0.

Figure 9: Average height of the trace ⟨y⟩ = ⟨Im γ(t)⟩ as a function of x/(ct)^{1/α} for SLE driven by Lévy flights with α = 1.3. The y data are binned logarithmically. Here, (ct)^{1/α} = 1827.15, 6217.82, 21156.6 for the three values of time. The average height close to x = 0 is 2 times bigger than the height at large x, and roughly 1.4 times higher than the global average ⟨y⟩. The theoretically predicted value of the ratio between the height at x = 0 and the average is 1.9 (Eq. (56)). This discrepancy is most probably due to a finite-time effect and the limited amount of data close to x = 0.

Figure 10: The average height ⟨y⟩ for SLE driven by Lévy flights with α = 1.0 grows logarithmically with time. The red dashed line is a one-parameter fit for t > 10 to the predicted function A + (2/c) ln t.

Figure 12: (Left) The ratio of the numerically calculated over the theoretically predicted value of the saturated height y_∞ for α < 1. (Right) The saturated height of the trees vs. the strength. The dashed lines are the theoretical values for y_∞, Eq. (70).

Figure 13: Distribution of heights averaged over all x for SLE driven only by Lévy flights with α = 0.7. The distribution is shown at three different times (black, red and green curves), corresponding to the limit of large times. We see how the distribution of the height of trees is stationary.

Figure 14: The average height ⟨y⟩ for SLE driven by Lévy flights with α = 0.7 saturates to y_∞ as t^{1−1/α}. The red dashed line is the analytic result, Eq. (71), with the value of D = 2Γ(1+1/α)/(c^{1/α}(1−1/α)) obtained in Eq. (29). y_∞ was calculated from the two numerical points of ⟨y⟩ at the largest times.

Figure 15: Average height of the trace ⟨y⟩ = ⟨Im γ(t)⟩ as a function of x/(ct)^{1/α} for SLE driven by Lévy flights with α = 0.7. The y data are binned logarithmically and the average of every bin is plotted. (ct)^{1/α} = 2.9·10^5, 2.8·10^6, 2.75·10^7.

Acknowledgements

This research was supported in part by the NSF MRSEC Program under DMR-0213745. IG was also supported by an award from Research Corporation and the NSF Career Award under DMR-0448820. We wish to acknowledge many helpful discussions with Paul Wiegmann, Eldad Bettelheim, and Seung Yeop Lee. IG also acknowledges useful communications with Steffen Rohde.

A Appendices

A.1 Numerical calculations

The interpretation of equation (16) is very helpful to our calculations. z_t and the tip of the trace have the same distribution. This allows, instead of calculating the trace γ(t) for every time t and noise realization (O(n²)), to efficiently collect statistics for the position of the tip by integrating the Langevin equation (16) (O(n)). Following Ref. [1] we approximate ξ(t) by a piecewise constant function with jumps appropriately distributed: ξ(t) = ξ_j for (j−1)τ < t < jτ. For such a driving function the process z_t in Eq. (16) can then be calculated numerically as an iteration of infinitesimal maps [24], starting from the condition z = 0, as follows:

$$z_n = z(n\tau) = f_n \circ f_{n-1} \circ \dots \circ f_1(0) - \xi_n. \qquad (72)$$

The infinitesimal conformal map f_n at each time interval n is defined by

$$f_n(z) = w_n^{-1}(z) = \sqrt{(z - \xi_n)^2 - 4\tau} + \xi_n. \qquad (73)$$

The value of ξ_n is randomly drawn from the appropriate distribution. The number of steps necessary to produce an SLE trace up to step n grows only as O(n). All numerical results in the next section have been calculated by averaging many noise realizations of the iteration (72)-(73). The trace can also be produced directly [1], as g^{-1}(ξ(t), t), in which case we approximate

$$\gamma_j = \gamma(j\tau) = f_1 \circ \dots \circ f_{n-1} \circ f_n(\xi_n). \qquad (74)$$

However, the number of steps in this method grows as O(n²). We used this method to verify that the numerically calculated z and γ have identical distributions. Eq. (74) was also used to calculate the traces shown in Fig. 2.

Here we assume κ = 0 for simplicity, that is, the driving force is pure Lévy flights, ξ(t) = c^{1/α} L_α(t). The addition of a Brownian motion will not affect our conclusions. For all realizations of the Lévy-SLE process we take c = 1 and τ = 10^{-4} unless otherwise noted.

A.2 Asymptotics for K(λ, y)

Let us consider (we need to use the lower cut-off t_0 here to have a convergent result for α < 1)

$$K(\lambda) = \int_{t_0}^{\infty} dt\, \frac{e^{-\lambda t}}{X(t)} = \frac{2\,\Gamma(1+1/\alpha)}{c^{1/\alpha}} \int_{t_0}^{\infty} dt\, t^{-1/\alpha}\, e^{-\lambda t} = \begin{cases} \dfrac{2\,\Gamma(1+1/\alpha)}{c^{1/\alpha}}\, \lambda^{-1+1/\alpha}\, \Gamma(1-1/\alpha, \lambda t_0), & \alpha \neq 1, \\[4pt] \dfrac{2}{c}\, E_1(\lambda t_0), & \alpha = 1, \end{cases} \qquad (75)$$

where Γ(a, x) is the incomplete gamma function and E_1(x) is the exponential integral. Since λ has the dimension and the meaning of a frequency, and we are interested in t ≫ t_0, we will only need the small-argument asymptotics of these functions, expanded for λt_0 ≪ 1 (Eqs. (76)-(79)). For α > 1 we can set t_0 = 0 and obtain (Eq. (80))

$$K(\lambda) \approx A\, \lambda^{-1+1/\alpha}, \qquad A = \frac{2\,\Gamma(1+1/\alpha)\,\Gamma(1-1/\alpha)}{c^{1/\alpha}} = \frac{2\pi}{\alpha\, c^{1/\alpha} \sin(\pi/\alpha)},$$

and for α < 1 we can set λ = 0, which leaves a constant determined by the cut-off t_0. We now turn to the Laplace transform K(λ, y), given by the k-integral of Eq. (41) (Eq. (82)). Since in the Laplace transform the important values of λ are the inverse typical time scales, the relevant asymptotics of K(λ, y) are those with λy^α/c ≪ 1. The opposite case, λy^α/c ≫ 1, corresponds to short times, where our basic approximation is invalid, so from now on we focus on the limit λy^α/c ≪ 1. This integral can be evaluated exactly in a number of cases. First, when y = 0, the integral converges for α > 1 and gives the same expression as K(λ) in Eq. (80). Secondly, for λ = 0 the integral converges (for y > 0) for α < 1 and then gives K(0, y) = C y^{α−1} (Eq. (83)). All the constants A, B, and C defined above diverge as 1/(α−1) as α → 1. Finally, for α = 1 we get a logarithmic expression (Eq. (84)).

In general, for λy^α/c ≪ 1, a good approximation for K(λ, y) is the sum of the expressions in Eqs. (80, 83):

$$K(\lambda, y) \approx A\, \lambda^{-1+1/\alpha} + C\, y^{\alpha-1}. \qquad (85)$$

Not only does this approximation reproduce the correct limits in Eqs. (80) and (83), but in the limit α → 1 it also reduces to Eq. (84). This approximation can be obtained by splitting the k interval in the integral in Eq. (82) into two at the value k_0 = (λ/c)^{1/α} and, in each resulting integral, replacing the denominator by its largest term. Notice that for α > 1, in the limit of interest λy^α/c ≪ 1, the first term in Eq. (85) is much greater than the second, and we can use Eq. (80) for both K(λ) and K(λ, y):

$$K(\lambda) \approx K(\lambda, y) \approx A\, \lambda^{-1+1/\alpha}, \qquad \alpha > 1. \qquad (86)$$

For α < 1 the opposite is true, and we can use Eq. (83) as a valid approximation:

$$K(\lambda, y) \approx C\, y^{\alpha-1}, \qquad C = \frac{2}{c}\,\Gamma(1-\alpha). \qquad (87)$$
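The iteration (72)-(73) of Appendix A.1 is straightforward to implement; below is a minimal sketch (our own code, not the authors'). The only subtlety is the branch of the square root in Eq. (73), which must be taken in the upper half plane. With the driving switched off (all ξ_j = 0) the composition gives the growing vertical slit z_n = 2i√(nτ) exactly, which validates the implementation; for a Lévy-SLE run the ξ_j would instead be increments of c^{1/α}L_α(t) sampled on the grid τ.

```python
import cmath

def slit_map(z, xi, tau):
    """Inverse incremental Loewner map, Eq. (73):
    f(z) = sqrt((z - xi)^2 - 4 tau) + xi, with the sqrt branch in Im >= 0."""
    w = cmath.sqrt((z - xi) ** 2 - 4.0 * tau)
    if w.imag < 0.0:
        w = -w
    return w + xi

def tip(xi_seq, tau):
    """Eq. (72): z_n = f_n o f_{n-1} o ... o f_1 (0) - xi_n.
    f_1 is applied first, f_n last (outermost)."""
    z = complex(0.0, 0.0)
    for xi in xi_seq:
        z = slit_map(z, xi, tau)
    return z - xi_seq[-1]

tau, n = 1e-3, 1000
z_undriven = tip([0.0] * n, tau)   # expect 2i * sqrt(n * tau) = 2i
```

Because each step of the iteration only applies one more map, the cost of collecting tip statistics this way is O(n), as stated in the appendix.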
G F Lawler ; Providence, R I , Mathematical Surveys and Monographs. 114American Mathematical SocietyG. F. Lawler. Conformally invariant processes in the plane. Mathe- matical Surveys and Monographs, 114. Providence, R.I.: American Mathematical Society, 2005. Random planar curves and Schramm-Loewner evolutions. W Werner, arXiv:math.PR/0303354Lecture Notes in Mathematics, 1840. Berlin, New YorkSpringer-VerlagW. Werner, Random planar curves and Schramm-Loewner evolutions, in Lecture Notes in Mathematics, 1840. Berlin, New York: Springer- Verlag, 2004; arXiv: math.PR/0303354. Conformally invariant scaling limits: an overview and a collection of problems. O Schramm, arXiv:math.PR/0602151International Congress of Mathematicians. ZürichIO. Schramm, Conformally invariant scaling limits: an overview and a collection of problems, in International Congress of Mathematicians. Vol. I, 513, Eur. Math. Soc., Zürich, 2007; arXiv: math.PR/0602151. . M Bauer, D Bernard, arXiv:math-ph/0602049Phys. Rep. 432115M. Bauer and D. Bernard, Phys. Rep. 432, 115 (2006); arXiv: math- ph/0602049. . J Cardy, arXiv:cond-mat/0503313Ann. Phys. 31881J. Cardy, Ann. Phys. 318, 81 (2005); arXiv: cond-mat/0503313. . I A Gruzberg, arXiv:math-ph/0607046J. Phys. A: Math. Gen. 3912601I. A. Gruzberg, J. Phys. A: Math. Gen. 39, 12601 (2006); arXiv: math- ph/0607046. . I A Gruzberg, L P Kadanoff, arXiv:cond-mat/0309292J. Stat. Phys. 1141183I. A. Gruzberg and L. P. Kadanoff, J. Stat. Phys. 114, 1183 (2004); arXiv: cond-mat/0309292. . W Kager, B Nienhuis, arXiv:math-ph/0312056J. Stat. Phys. 1151149W. Kager and B. Nienhuis, J. Stat. Phys. 115, 1149 (2004); arXiv: math-ph/0312056. . Q Guan, M Winkel, arXiv:math.PR/0606685Q. Guan and M. Winkel, arXiv: math.PR/0606685. . Q Guan, arXiv:0705.2321math.PRQ. Guan, arXiv: 0705.2321 [math.PR]. . Z.-Q Chen, S Rohde, arXiv:0708.1805v2math.PRZ.-Q. Chen and S. Rohde, arXiv:0708.1805v2 [math.PR]. D Beliaev, S Smirnov, arXiv:0801.1792v1Harmonic measure and SLE. math.CVD. 
Beliaev and S. Smirnov, Harmonic measure and SLE, arXiv: 0801.1792v1 [math.CV]. . B Duplantier, arXiv:cond-mat/9908314Phys. Rev. Lett. 841363B. Duplantier, Phys. Rev. Lett. 84, 1363 (2000); arXiv: cond- mat/9908314. B Duplantier ; Providence, R I , arXiv:math-ph/0303034Fractal geometry and applications: a jubilee of Benot Mandelbrot. American Mathematical Society2B. Duplantier, in Fractal geometry and applications: a jubilee of Benot Mandelbrot, Part 2, 365, Proc. Sympos. Pure Math., 72, Part 2, Prov- idence, R.I.: American Mathematical Society, 2004; arXiv: math- ph/0303034. . E Bettelheim, I Rushkin, I A Gruzberg, P Wiegmann, arXiv:hep-th/0507115Phys. Rev. Lett. 95170602E. Bettelheim, I. Rushkin, I. A. Gruzberg, P. Wiegmann, Phys. Rev. Lett. 95, 170602 (2005); arXiv: hep-th/0507115. . I Rushkin, E Bettelheim, I A Gruzberg, P Wiegmann, arxiv: cond-mat/0610550J. Phys. A: Math. Theor. 402165I. Rushkin, E. Bettelheim, I. A. Gruzberg, and P. Wiegmann, J. Phys. A: Math. Theor. 40, 2165 (2007); arxiv: cond-mat/0610550. . S Rohde, O Schramm, arXiv:math.PR/0106036Ann. of Math. 1612883S. Rohde and O. Schramm, Ann. of Math. (2), 161(2), 883 (2005); arXiv: math.PR/0106036. Lévy processes and stochastic calculus. D Appelbaum, Cambridge University PressCambridge, U.K.; New YorkD. Appelbaum. Lévy processes and stochastic calculus, Cambridge, U.K.; New York: Cambridge University Press, 2004. . R Metzler, J Klafter, Phys. Reports. 3391R. Metzler and J. Klafter, Phys. Reports 339, 1 (2000). Stable non-Gaussian random processes: stochastic models with infinite variance. G Samorodnitsky, M S Taqqu, Chapman & HallNew YorkG. Samorodnitsky and M. S. Taqqu. Stable non-Gaussian random pro- cesses: stochastic models with infinite variance, New York: Chapman & Hall, 1994. Lévy processes and infinitely divisible distributions. K Sato, Cambridge University PressCambridge, U.K.; New YorkK. Sato. 
Lévy processes and infinitely divisible distributions, Cambridge, U.K.; New York: Cambridge University Press, 1999. . M B Hastings, arXiv:cond-mat/9607021Phys. Rev. Lett. 8855506M. B. Hastings, Phys. Rev. Lett. 88, 055506 (2002); arXiv: cond- mat/9607021.
Handover Rate Characterization in 3D Ultra-Dense Heterogeneous Networks

Rabe Arshad (University of British Columbia, Canada), Hesham Elsawy (King Abdullah University of Science and Technology, Saudi Arabia), Lutz Lampe (University of British Columbia, Canada), Md Jahangir Hossain (University of British Columbia, Canada)

Abstract: Ultra-dense networks (UDNs) envision the massive deployment of heterogeneous base stations (BSs) to meet the desired traffic demands. Furthermore, UDNs are expected to support diverse devices, e.g., personal mobile devices and unmanned aerial vehicles. User mobility and the resulting excessive changes in user-to-BS associations in such highly dense networks may, however, nullify the capacity gains foreseen through BS densification. Thus there exists a need to quantify the effect of user mobility in UDNs. In this article, we consider a three-dimensional N-tier downlink network and determine the association probabilities and inter/intra-tier handover rates using tools from stochastic geometry. In particular, we incorporate user and BS antenna heights into the mathematical analysis and study the impact of user height on the association and handover rate. The numerical trends show that intra-tier handovers are dominant for the tiers with the shortest relative elevation w.r.t. the user, and this dominance is more prominent when there exists a high discrepancy among the tiers' heights. However, biasing can be employed to balance the handover load among the network tiers.

doi: 10.1109/tvt.2019.2932401
arXiv: 1807.02565
https://arxiv.org/pdf/1807.02565v1.pdf
Index Terms: 3-Dimensional Networks, Association Probabilities, Handover Rate, Stochastic Geometry, Ultra-Dense Networks.

I. INTRODUCTION

Extreme densification of base stations (BSs), realizing ultra-dense networks (UDNs), is considered a key enabler to meet the spectral efficiency requirements of fifth generation (5G) cellular systems. UDNs face various challenges in supporting user mobility while offering enhanced user capacity.
The deployment of more BSs within the same geographical region shrinks the service area of each BS. Thus user mobility in such a highly dense network may result in frequent user-to-BS association changes, which may jeopardize the key performance indicators (KPIs) [1]. Several studies, including [2] and [3], have been conducted in the literature to capture/manage the effect of user mobility on UDN performance metrics. However, none of the aforementioned studies quantified the effect of user mobility on user-to-BS associations.

Handover (HO) is the process of changing the user association from one BS to another, which is triggered to maintain the best connectivity or signal-to-interference-plus-noise ratio (SINR). The HO frequency/rate depends on various factors including BS intensity, BS transmit power, and user velocity. Several researchers have characterized HO rates in wireless networks by exploiting tools from stochastic geometry. In contrast to the classical works involving coverage-oriented BS deployment that implies hexagonal coverage areas, stochastic geometry has enabled the characterization of HO rates in capacity-oriented networks encompassing irregular BS coverage regions. For instance, [4] exploits stochastic geometry tools to characterize the HO rate in a Poisson point process (PPP) based single-tier network with the random waypoint user mobility model. The HO rates for multi-tier networks are characterized in [5] with an arbitrary user mobility model. The model in [5] is extended in [6] for BSs that are deployed according to a Poisson cluster process (PCP). The authors in [7] conducted the HO rate analysis with different path loss exponents for each tier. However, none of the aforementioned studies incorporated user/BS antenna heights into the mathematical analysis.
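The effect of neglecting antenna heights can be seen with a back-of-envelope example. The sketch below is an added illustration with assumed power, height, and distance values (not numbers from the paper): with a power-law path loss, ranking BSs by 2D horizontal distance and by true 3D distance can disagree close to a short small cell.

```python
# Illustrative numbers (author's own assumptions): a user very close to a
# low-power small BS (SBS) but at a large vertical offset from it, versus a
# high-power macro BS (MBS) farther away horizontally.
ETA = 4.0                          # path loss exponent
P_M, P_S = 10 ** 4.6, 10 ** 2.4    # 46 dBm vs 24 dBm, in mW
H_M, H_S, H_U = 40.0, 25.0, 1.5    # MBS, SBS, and user heights (m)
R_M, R_S = 50.0, 2.0               # horizontal distances to MBS and SBS (m)

def rss(power, d2):
    """Received signal strength under power-law path loss, d2 = distance^2."""
    return power * d2 ** (-ETA / 2)

# The 2D model ignores the height difference entirely.
best_2d = "s" if rss(P_S, R_S**2) > rss(P_M, R_M**2) else "m"

# The 3D model uses the full Euclidean distance including heights.
best_3d = ("s" if rss(P_S, R_S**2 + (H_U - H_S)**2)
                  > rss(P_M, R_M**2 + (H_U - H_M)**2) else "m")
```

With these (assumed) numbers the 2D model picks the SBS while the 3D model picks the MBS, which is why height-aware analysis matters in dense deployments.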
A recent study [8] incorporated user height in the network analysis and found that a decrease in the absolute height difference between the user and the BS in a UDN leads to increased area spectral efficiency. This dictates that the network elements' heights should be carefully incorporated in the UDN performance analysis. To the best of the authors' knowledge, no study exists that quantifies the interplay between user/BS antenna heights, association probabilities, and HO rates as a function of BS intensity, which is the main contribution of this work. In particular, we present a height-aware analytical model that characterizes the association probabilities and HO rates in a three-dimensional PPP based N-tier UDN. In the developed model, a user could be a pedestrian, a land vehicle, or an unmanned aerial vehicle (UAV).

We consider an N-tier UDN where the BSs belonging to the k-th tier, k ∈ {1, ..., N}, are modeled via a homogeneous PPP Φ_k with intensity λ_k, transmission power P_k, bias factor B_k, and antenna height h_k. As in [5], [6], we consider a power-law path loss model with path loss exponent η > 2. The disparity in the BS transmit powers and heights yields a weighted Voronoi tessellation [9]. A mobile user following an arbitrary horizontal mobility model with velocity v changes its association as soon as it crosses the coverage boundary of the serving BS. In the next section, we characterize the user-to-BS association probabilities and the service distance distributions, which are utilized to eventually derive the HO rates.

II. HANDOVER RATES

Without loss of generality, we compute the inter and intra-tier HO rates by considering any two tiers, say, k, j ∈ {m, s}, where 'm' and 's' denote macro BS (MBS) and small BS (SBS), respectively. Let T_kj be the set of cell boundaries between the k-th and the j-th tier BSs formed by virtue of the weighted Voronoi tessellation (see Fig. 1).
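The two-tier system model above can be mimicked numerically. The following sketch is an illustrative, assumption-laden setup (not the paper's simulator): the PPP is approximated by a binomial point process with the expected number of points, the window size and user grid are arbitrary, and the parameter values follow the spirit of Table I. It deploys both tiers, applies maximum biased received power association in 3D, and checks that a positive SBS bias enlarges the SBS association share.

```python
import random

# Hypothetical two-tier deployment with biased RSS association (sketch only).
random.seed(7)

ETA = 4.0                                # path loss exponent
SIDE = 4.0                               # square window side (km)
LAM_M, LAM_S = 3.0, 10.0                 # tier intensities (BS/km^2)
P_M, P_S = 10**4.6, 10**2.4              # 46 dBm and 24 dBm, in mW
H_M, H_S, H_U = 0.040, 0.025, 0.0015     # antenna/user heights (km)

def deploy(lam):
    """Approximate a homogeneous PPP by its expected number of uniform points."""
    n = round(lam * SIDE * SIDE)
    return [(random.uniform(0, SIDE), random.uniform(0, SIDE)) for _ in range(n)]

MBS, SBS = deploy(LAM_M), deploy(LAM_S)

def best_brp(tier, ux, uy, power, bias, h_bs):
    """Largest biased received power from one tier at user location (ux, uy)."""
    best = 0.0
    for (bx, by) in tier:
        d2 = (bx - ux)**2 + (by - uy)**2 + (H_U - h_bs)**2  # squared 3D distance
        best = max(best, bias * power * d2 ** (-ETA / 2))
    return best

def sbs_share(bias_s):
    """Fraction of user grid locations served by the SBS tier."""
    hits = total = 0
    for i in range(30):
        for j in range(30):
            ux, uy = (i + 0.5) * SIDE / 30, (j + 0.5) * SIDE / 30
            total += 1
            if best_brp(SBS, ux, uy, P_S, bias_s, H_S) > \
               best_brp(MBS, ux, uy, P_M, 1.0, H_M):
                hits += 1
    return hits / total

share_unbiased = sbs_share(1.0)    # B_s = 0 dB
share_biased = sbs_share(10**0.6)  # B_s = 6 dB expands the SBS cells
```

Since biasing only scales the SBS received power upward, the biased SBS share can never be smaller than the unbiased one, mirroring the load-balancing role of biasing discussed later.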
We conduct our analysis on a test user that follows an arbitrarily long trajectory T_u and performs a type-kj HO (from the k-th to the j-th tier) as soon as it crosses a kj cell boundary. Let H_kj be the HO rate per unit time experienced by the test user along its trajectory, which depends on the number of kj boundary crossings per unit trajectory length and the user velocity. In order to determine the HO rate H_kj, we need to calculate the total number of intersections N_kj between T_u and T_kj. Note that the number of intersections between T_u and T_kj is identical to that of intersections between T_u and T_jk, i.e., N_kj = N_jk. It is worth stating that the number of intersections between the user trajectory and the cell boundaries can be quantified by determining the intensities of cell boundaries. Let μ(T_kj) denote the length intensity of the kj cell boundaries, which is the expected length of kj cell boundaries in a unit square. From [10], we have a general HO rate expression as a function of μ(T_kj), which is given by

H_kj = (2/π) μ(T_kj) v, for k ≠ j,
H_kj = (1/π) μ(T_kj) v, for k = j,    (1)

where H_kj/v denotes the HO rate per unit length (HOL) and v represents the user velocity. Thus the total HO rate is given by

H_Total = Σ_k Σ_j H_kj.    (2)

It is evident from (1) that the length intensity of cell boundaries is required to compute the HO rates, which is obtained by determining the probability of having the test user on the cell boundary. Note that a higher intensity of cell boundaries leads to higher HO rates. Since it is difficult to conduct the analysis on the boundary line, we follow [5] and extend the cell boundaries by an infinitesimal width ∆d, as illustrated in Fig. 1. Note that the probability of having the test user on the ∆d-extended cell boundary is equivalent to the expected area of the boundary in a unit square, which can be termed the area intensity μ(T^(∆d)_kj).
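The boundary-crossing view of handovers behind (1) can also be estimated empirically: move a user along a trajectory, re-evaluate the serving BS at closely spaced samples, and count association changes by type. The following is a minimal sketch under assumed parameters (not the paper's simulation code).

```python
import random

# Empirical handover counting along a straight trajectory (sketch only; all
# parameter values are assumptions in the spirit of Table I).
random.seed(11)

ETA = 4.0
SIDE = 4.0                               # window side (km)
P_M, P_S = 10**4.6, 10**2.4              # mW
H_M, H_S, H_U = 0.040, 0.025, 0.0015     # km
V = 30.0                                 # user velocity (km/h)

mbs = [(random.uniform(0, SIDE), random.uniform(0, SIDE)) for _ in range(48)]
sbs = [(random.uniform(0, SIDE), random.uniform(0, SIDE)) for _ in range(160)]

def serving(ux, uy):
    """Return (tier, index) of the BS with the largest received power in 3D."""
    best, who = -1.0, None
    for tier, bss, pw, h in (("m", mbs, P_M, H_M), ("s", sbs, P_S, H_S)):
        for i, (bx, by) in enumerate(bss):
            d2 = (bx - ux)**2 + (by - uy)**2 + (H_U - h)**2
            rss = pw * d2 ** (-ETA / 2)
            if rss > best:
                best, who = rss, (tier, i)
    return who

# Walk across the window and count association changes by type (mm/ss/ms/sm).
steps = 2000
counts = {"mm": 0, "ss": 0, "ms": 0, "sm": 0}
prev = serving(0.0, SIDE / 2)
for t in range(1, steps + 1):
    cur = serving(t * SIDE / steps, SIDE / 2)
    if cur != prev:
        counts[prev[0] + cur[0]] += 1
    prev = cur

total_hos = sum(counts.values())
hours = SIDE / V                       # time to traverse the window
ho_rate_per_hour = total_hos / hours   # empirical counterpart of H_Total
```

A change within the same tier but to a different BS counts as an intra-tier handover, matching the k = j case of (1).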
Once we calculate the area intensity, we can then determine the length intensity by letting ∆d → 0, i.e.,

μ(T_kj) = lim_{∆d→0} μ(T^(∆d)_kj) / (2∆d).    (3)

It is worth stating that the characterization of the area intensity of cell boundaries involves determining the user-to-BS association probabilities and the service distance distributions. These probabilities and distributions depend on the relative difference between the user and the BS (MBS and SBS) antenna heights, which is not considered in the existing literature. In what follows, we calculate the association probabilities and the probability density functions (PDFs) of the distances between the user and the serving BS, which are then exploited to determine the area intensity and finally the HO rate. Note that the mathematical analysis in this paper follows the two-dimensional approach parameterized with the network elements' heights.

A. Association Probabilities

Let Z_m and Z_s be the Euclidean distances between a test user and the closest MBS and SBS, respectively. The user, located in a 3-dimensional plane at height h_u, associates with the MBS if it provides the highest biased received signal strength (RSS)¹, i.e., B_m P_m Z_m^{-η} > B_s P_s Z_s^{-η}. The association probabilities in a 3-dimensional scenario are expressed in the following lemma.

Lemma 1: The association probabilities in a PPP based two-tier UDN with user height h_u are given by

A_k = (1 - e^{πλ_m [h_um² - β_mj h_uj²]}) + λ_kj e^{πλ_m [h_um² - β_ms h_us²]}, for h_um ≤ h_us,    (4)
A_k = (1 - e^{πλ_s [h_us² - β_sj h_uj²]}) + λ_kj e^{πλ_s [h_us² - β_sm h_um²]}, for h_um > h_us,    (5)

where (4) and (5) hold for k, j ∈ {m, s}, k ≠ j, with λ_kj = λ_k / (λ_k + λ_j β_jk), β_kj = 1/β_jk = (B_k P_k / (B_j P_j))^{2/η}, h_um = |h_u − h_m|, and h_us = |h_u − h_s|.

Proof: See Appendix A.
B. Service Distance Distribution

In this section, we calculate the distributions of the distances between the test user and the serving macro and small BSs, which are subsequently used to obtain the area intensities. Note that the distance distributions are characterized here by ordering the BSs w.r.t. their heights. The service distance distributions are given by the following lemma.

Lemma 2: Let X_k, k ∈ {m, s}, be the horizontal distance between the test user and the serving BS given that the association is with the k-th tier BS. Then the PDFs of the horizontal distances between the test user and the serving macro and small BSs are given by

f_Xm(x) = (2πλ_m / A_m) x e^{−πλ_m x²}, for 0 ≤ x ≤ L_m,
f_Xm(x) = (2πλ_m / A_m) x e^{−πx²(λ_m + λ_s β_sm) − πλ_s (h_um² β_sm − h_us²)}, for L_m ≤ x < ∞,    (6)

f_Xs(x) = (2πλ_s / A_s) x e^{−πλ_s x²}, for 0 ≤ x ≤ L_s,
f_Xs(x) = (2πλ_s / A_s) x e^{−πx²(λ_s + λ_m β_ms) − πλ_m (h_us² β_ms − h_um²)}, for L_s ≤ x < ∞,    (7)

where f_Xm(x) and f_Xs(x) represent the service distance distributions for the MBS and SBS cases, respectively, while L_m and L_s are given by

L_m = √(h_us² β_ms − h_um²) for h_um ≤ h_us, and L_m = 0 otherwise,
L_s = √(h_um² β_sm − h_us²) for h_um > h_us, and L_s = 0 otherwise.

Proof: See Appendix B.

Since we have computed the service distance distributions, we can now characterize the area intensity of the cell boundaries, which is required to calculate the HO rates. Note that the area intensity of a cell boundary refers to the probability of having an arbitrary point on the extended cell boundary in a unit square.

C. Area Intensities

In this section, we calculate the area intensity of the cell boundaries. As mentioned earlier, T_kj corresponds to the set of points where the biased received powers from two neighboring BSs are the same. Let us assume that the test user located at 0 = (0, 0, h_u) is connected to the MBS located at (r_0, 0, h_m). Let (x_0, y_0, h_s) denote the position of the neighboring SBS that provides the best biased RSS among all SBSs.
Then T_kj could be any point (x, y) on the trace of the cell boundary between the macro and small BSs, which can be defined as

T_kj = {(x, y) | B_m P_m / [(x − r_0)² + y² + h_um²]^{η/2} = B_s P_s / [(x − x_0)² + (y − y_0)² + h_us²]^{η/2}}.    (8)

We can now extend the cell boundaries by ∆d, which leads to the extended cell boundaries defined as

T^(∆d)_kj = {u | ∃ v ∈ T_kj s.t. |u − v| < ∆d}.    (9)

Now, we calculate the probability of having the test user on the extended cell boundary given that the user is connected to the tier-k BS, which is given by the following lemma.

Lemma 3: Let a test user located at 0 lie on T^(∆d)_kj while being served by the tier-k BS located at r_k. Then the conditional probability that 0 ∈ T^(∆d)_kj is given by

P[0 ∈ T^(∆d)_kj | X = r_k, n = k] = 2λ_j ∆d ϑ(α_kj, r_k) + O(∆d²),    (10)

where n ∈ {m, s} represents the associated tier, O(·) denotes the Big O function, and ϑ(α_kj, r_k) is given in (11).

Proof: See Appendix C.

The special case of having the user and the BS antennas at the same height reduces Lemma 3 to a much simpler expression, as shown in the next corollary.

Corollary 1: In the special case of h_um = h_us = 0, P[0 ∈ T^(∆d)_kj | X = r_k, n = k] is given by

P[0 ∈ T^(∆d)_kj | X = r_k, n = k] = 2λ_j ∆d (r_k / β_kj) ∫_0^π √(β_kj + 1 − 2√β_kj cos θ) dθ + O(∆d²).    (12)

Note that (12) corresponds to the two-tier case in [5]. Since we have calculated the probability that 0 ∈ T^(∆d)_kj conditioned on the current association (Lemma 3), we can now determine the area intensity μ(T^(∆d)_kj) of the ∆d-extended cell boundaries, which is given in the next theorem.

Theorem 1: The area intensity, i.e., the probability of having an arbitrary point on the ∆d-extended inter-tier cell boundaries, is given in (13).²

Proof: See Appendix D.

Corollary 2: The area intensity of the ∆d-extended intra-tier cell boundaries can be obtained by setting k = j in (13) and is given in (14). Note that (14) refers to the intra-tier HO scenario where the user height is different from that of the serving/target BS.
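A quick numerical sanity check of these expressions can be scripted. The block below (an author-style illustration with assumed parameters, stdlib Python only) verifies internal consistency: the closed form (4) should match direct quadrature of the defining expectation (17), A_m + A_s should equal one, and ϑ in (11) should reduce to Corollary 1 when h_um = h_us = 0 and to 4x for k = j, since the integral of √(2 − 2 cos θ) over [0, π] equals 4.

```python
import math

# All parameter values are illustrative assumptions with h_um <= h_us.
ETA = 4.0
LAM_M, LAM_S = 3.0, 10.0            # BS/km^2
BETA_MS = (10 ** 2.2) ** (2 / ETA)  # (B_m P_m / (B_s P_s))^{2/eta}, no bias
BETA_SM = 1.0 / BETA_MS
H_UM, H_US = 0.005, 0.010           # |h_u - h_m|, |h_u - h_s| in km

# --- Check 1: Lemma 1 with k = m against quadrature of the expectation (17).
E = math.exp(math.pi * LAM_M * (H_UM**2 - BETA_MS * H_US**2))
A_m_closed = (1 - E) + LAM_M / (LAM_M + LAM_S * BETA_SM) * E
A_s_closed = LAM_S / (LAM_S + LAM_M * BETA_MS) * E  # first term vanishes for k = s

def integrand(z):
    """P[Z_s > sqrt(beta_sm) * z] times the pdf of Z_m."""
    f_zm = 2 * math.pi * LAM_M * z * math.exp(-math.pi * LAM_M * (z**2 - H_UM**2))
    p = 1.0 if BETA_SM * z**2 <= H_US**2 else \
        math.exp(-math.pi * LAM_S * (BETA_SM * z**2 - H_US**2))
    return p * f_zm

dz, z, A_m_num = 1e-4, H_UM, 0.0
while z < 2.0:  # the integrand is negligible beyond 2 km at these intensities
    A_m_num += 0.5 * (integrand(z) + integrand(z + dz)) * dz
    z += dz

# --- Check 2: (11) reduces to Corollary 1 at zero heights and to 4x for k = j.
def vartheta(beta, r, h_uk, h_uj, n=20000):
    """Trapezoidal evaluation of the boundary function in (11)."""
    inner = math.sqrt((r**2 + h_uk**2) / beta - h_uj**2)
    def g(t):
        arg = (r**2 * (beta + 1) + h_uk**2 * beta - h_uj**2 * beta**2
               - 2 * beta * r * math.cos(t) * inner)
        return math.sqrt(max(arg, 0.0))  # guard against tiny negative rounding
    s = 0.5 * (g(0.0) + g(math.pi)) + sum(g(i * math.pi / n) for i in range(1, n))
    return (1.0 / beta) * s * math.pi / n

def corollary1(beta, r, n=20000):
    """Corollary 1 form for h_um = h_us = 0."""
    def g(t):
        return math.sqrt(beta + 1 - 2 * math.sqrt(beta) * math.cos(t))
    s = 0.5 * (g(0.0) + g(math.pi)) + sum(g(i * math.pi / n) for i in range(1, n))
    return (r / beta) * s * math.pi / n

flat_generic = vartheta(3.0, 0.2, 0.0, 0.0)
flat_special = corollary1(3.0, 0.2)
intra = vartheta(1.0, 0.2, 0.015, 0.015)  # k = j case: expect 4 * 0.2 = 0.8
```

The two degenerate cases pin down the structure of (11), while the quadrature check ties Lemma 1 back to its defining expectation.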
The area intensities of the cell boundaries computed in Theorem 1 and Corollary 2 can now be exploited to determine the inter and intra-tier HO rates, respectively. First, we calculate the length intensity from the area intensity as shown in (3) and then substitute the result in (1) to obtain the HO rates. The expressions referenced above are

ϑ(α_kj, r_k) = (1/β_kj) ∫_0^π √( r_k²(β_kj + 1) + h_uk² β_kj − h_uj² β_kj² − 2 β_kj r_k cos θ √((r_k² + h_uk²)/β_kj − h_uj²) ) dθ, for k ≠ j,
ϑ(α_kj, r_k) = x ∫_0^π √(2 − 2 cos θ) dθ = 4x, for k = j,    (11)

μ(T^(∆d)_kj) = ∫_{√(h_us² β_ms − h_um²)}^∞ 2 λ_j ∆d ϑ(α_kj, x) f_Xm(x) dx + ∫_0^∞ 2 λ_k ∆d ϑ(α_jk, x) f_Xs(x) dx + O(∆d²), for h_um ≤ h_us,
μ(T^(∆d)_kj) = ∫_0^∞ 2 λ_j ∆d ϑ(α_kj, x) f_Xm(x) dx + ∫_{√(h_um² β_sm − h_us²)}^∞ 2 λ_k ∆d ϑ(α_jk, x) f_Xs(x) dx + O(∆d²), for h_um > h_us,    (13)

μ(T^(∆d)_kk) = ∫_0^{L_k} 2 λ_k ∆d ϑ(α_kk, x) f_Xk(x) dx + ∫_{L_k}^∞ 2 λ_k ∆d ϑ(α_kk, x) f_Xk(x) dx.    (14)

III. NUMERICAL RESULTS

Figs. 2 and 3 show the HO rates, with lines and markers representing analysis and simulation results, respectively, for the parameters shown in Table I. Fig. 2 depicts the HO rates for constant and varying user heights. Note that HOL_Total refers to the conventional total HO rate where there exists no disparity among the user and BS heights. Despite the decreasing overall HO rate with increasing user height, Fig. 2 marks the MBSs as bearing the increasing HO trend. This is because we assume that h_m > h_s and P_m > P_s, which is the practical scenario. Thus there arises a need to provide additional signalling capacity to facilitate macro-to-macro HOs in scenarios where the relative difference between the user and the BS antenna heights is not negligible. However, biasing can be employed to balance the HO signalling load between the two tiers. Fig. 3 epitomizes the HO rates with varying SBS intensity and biasing factors of 0 dB and 6 dB. It is evident from the trends that the 6 dB bias in SBSs results in balancing the inter and intra-tier HO rates.

Fig. 1. Two-tier Voronoi tessellation showing inter/intra-tier HO boundaries between MBSs (red) and SBSs (blue). Red and blue solid lines represent MBS and SBS coverage boundaries, respectively, while black dotted lines represent the ∆d-extended boundaries. T_kj, k, j ∈ {m, s}, represents the HO boundaries between the k-th and j-th tier BSs, T_u represents the user trajectory, and h_x, x ∈ {m, s, u}, represents the height of the MBS, SBS, and user, respectively.

Fig. 2. Handover rates per unit length versus user height without biasing.

Fig. 3. Handover rates per unit time versus SBS intensity, with B_sm = B_s/B_m.

TABLE I. SIMULATION PARAMETERS IN ACCORDANCE WITH [13]
MBS power P_m: 46 dBm          SBS power P_s: 24 dBm
MBS intensity λ_m: 3 BS/km²    SBS intensity λ_s: 10 BS/km²
MBS height h_m: 40 m           SBS height h_s: 25 m
User velocity v: 30 km/h       Path loss exponent η: 4

IV. CONCLUSION

This article characterizes the association probabilities and inter/intra-tier HO rates in an N-tier UDN by incorporating user/BS antenna heights into the mathematical analysis. The proposed analytical model can be applied to various practical scenarios where the user could be a land vehicle or an aerial vehicle. In particular, we study the impact of user height on the association probabilities and HO rate, and validate our model via Monte Carlo simulations. The numerical results show a decrease in the overall HO rate with the increase in user height while giving rise to the macro-to-macro HO rate, which can be balanced by adding a positive bias to the SBSs. By virtue of this model, HO management techniques could be investigated to minimize the effect of the HO rate on the UDN KPIs. As future work, we will study more advanced propagation and fading models for user-to-BS associations and HO rates.

APPENDIX A: PROOF OF LEMMA 1

Let R_m = √(Z_m² − h_um²) be the horizontal distance between the MBS and the test user. Then the distribution of Z_m can be calculated as

F_Zm(x) = P[R_m ≤ √(x² − h_um²)] (a)= 1 − e^{−πλ_m (x² − h_um²)}, h_um < x < ∞,    (15)

where (a) follows from the null probability of the PPP. Similarly, the distribution of Z_s is given by

F_Zs(x) = 1 − e^{−πλ_s (x² − h_us²)}, h_us < x < ∞.    (16)

For the macro association probability A_m, we first write

A_m = P[B_m P_m Z_m^{−η} > B_s P_s Z_s^{−η}] = E_{Z_m}[ P[Z_s > (B_s P_s/(B_m P_m))^{1/η} Z_m] ].    (17)

Then we solve (17) by exploiting the fact that P[Z_s > (B_s P_s/(B_m P_m))^{1/η} Z_m] = 1 over the range h_um ≤ Z_m ≤ h_us (B_m P_m/(B_s P_s))^{1/η} and P[Z_s > (B_s P_s/(B_m P_m))^{1/η} Z_m] = e^{−πλ_s [(B_s P_s/(B_m P_m))^{2/η} Z_m² − h_us²]} over the range h_us (B_m P_m/(B_s P_s))^{1/η} < Z_m < ∞, given that h_um ≤ h_us. In case of h_um > h_us, P[Z_s > (B_s P_s/(B_m P_m))^{1/η} Z_m] = e^{−πλ_s [(B_s P_s/(B_m P_m))^{2/η} Z_m² − h_us²]} over the range h_um < Z_m < ∞. A similar approach is followed to calculate the SBS association probability A_s.

APPENDIX B: PROOF OF LEMMA 2

In order to calculate the distance distribution, we first calculate the complementary cumulative distribution function (CCDF) given that the user associates with the k-th tier, k ∈ {m, s}:

P[X_k > x] = P[R_k > x | n = k] = P[R_k > x, n = k] / P[n = k],    (18)

where P[n = k] = A_k, which is given in Lemma 1. The joint distribution P[R_k > x, n = k] is calculated as follows:

P[R_k > x, n = k] = P[R_k > x, B_k P_k Z_k^{−η} > B_j P_j Z_j^{−η}] (b)= ∫_x^∞ e^{−πλ_j [β_jk (x² + h_uk²) − h_uj²]} f_Rk(x) dx,    (19)

where (b) follows from the null probability of the PPP with f_Rk(x) = 2πλ_k x e^{−πλ_k x²}. Now f_Xk(x) is calculated by substituting (19) in (18), invoking A_k given in (4) and (5) conditioned on the heights, i.e., h_um and h_us, and then taking the derivative w.r.t. x.

APPENDIX C: PROOF OF LEMMA 3

Let S be the location set of the tier-j SBS such that the condition 0 ∈ T^(∆d)_kj is satisfied. Mathematically, we can define S as

S = {x_0, y_0 | d < ∆d},    (20)

where d is the distance between 0 and T^(∆d)_kj, which is calculated by mathematical manipulation of (8). It is worth mentioning that T^(∆d)_kj represents a circle centered at ((β_ms x_0 − r_0)/(β_ms − 1), β_ms y_0/(β_ms − 1)) with radius √(β_ms (r_0² + x_0² + y_0² − 2 x_0 r_0 + h_um² + h_us² − h_um²/β_ms − h_us² β_ms)) / (β_ms − 1). Thus the distance from 0 to the trace is given by

d = √((β_ms x_0 − r_0)² + β_ms² y_0²) / (β_ms − 1) − √(β_ms (r_0² + x_0² + y_0² − 2 x_0 r_0 + h_um² + h_us² − h_um²/β_ms − h_us² β_ms)) / (β_ms − 1).    (21)

Substituting (21) in (20) and transforming (x_0, y_0) to the polar coordinates (r, θ) yields

S = {(r, θ) | r² − r_0²/β_ms − h_um²/β_ms + h_us² < (2∆d/β_ms) √( r_0² (1 + β_ms) + h_um² β_ms − h_us² β_ms² − 2 β_ms r_0 cos θ √((r_0² + h_um²)/β_ms − h_us²) ) + O(∆d²)}.

This implies that S represents a ring region with area given by

S_A = (2∆d/β_ms) ∫_0^π √( r_0² (1 + β_ms) + h_um² β_ms − h_us² β_ms² − 2 β_ms r_0 cos θ √((r_0² + h_um²)/β_ms − h_us²) ) dθ + O(∆d²),

where S_A can be written as

S_A = 2∆d ϑ(α_kj, r_k) + O(∆d²).    (22)

It is worth stating that there is no tier-j BS between 0 and T^(∆d)_kj. Now P[0 ∈ T^(∆d)_kj | X = r_k, n = k] is calculated using the fact that there is at least one tier-j BS in S, which is given by

P[0 ∈ T^(∆d)_kj | X = r_k, n = k] = 1 − e^{−λ_j S_A} = 2 λ_j ∆d ϑ(α_kj, r_k) + O(∆d²).    (23)

APPENDIX D: PROOF OF THEOREM 1

Let μ(T^(∆d)_kj) be the area intensity of the cell boundaries, which is equal to the probability of having an arbitrary point on the boundary in a unit square. This implies that

μ(T^(∆d)_kj) = ∫_{S_m} P[0 ∈ T^(∆d)_kj | X = r_m, n = k] f_X(r_m | n = k) P[n = k] dr_m + ∫_{S_s} P[0 ∈ T^(∆d)_kj | X = r_s, n = j] f_X(r_s | n = j) P[n = j] dr_s,    (24)

where the integration regions S_m and S_s correspond to the height-dependent boundaries given in (6) and (7), respectively. The theorem is proved by substituting (4), (5), (6), (7), and (10) in (24).

FOOTNOTES

¹ As in [5]-[7], [11], [12], we consider a widely accepted maximum biased received power (BRP) based association rule that does not depend on the BS antenna characteristics, e.g., aperture and radiation pattern. Also, we assume that the BS antennas are properly designed to cover a wide range of user heights.

² As in [8], it is difficult to find a closed-form solution for height-aware models. Therefore, numerical evaluation is performed to solve (13).

arXiv:1807.02565v1 [cs.NI] 27 Jun 2018

REFERENCES

[1] X. Ge, S. Tu, G. Mao, C.-X. Wang, and T. Han, "5G ultra-dense cellular networks," IEEE Wireless Communications, vol. 23, no. 1, pp. 72-79, 2016.
[2] W. Sun and Y. Teng, "Impact of user mobility on transmit power control in ultra dense networks," in IEEE International Conference on Communications Workshops (ICC Workshops), 2017, pp. 1165-1170.
[3] J. Park, S. Y. Jung, S.-L. Kim, M. Bennis, and M. Debbah, "User-centric mobility management in ultra-dense cellular networks under spatio-temporal dynamics," in IEEE Global Communications Conference (GLOBECOM), 2016, pp. 1-6.
[4] X. Lin, R. K. Ganti, P. J. Fleming, and J. G. Andrews, "Towards understanding the fundamentals of mobility in cellular networks," IEEE Trans. Wireless Commun., vol. 12, no. 4, pp. 1686-1698, 2013.
[5] W. Bao and B. Liang, "Stochastic geometric analysis of user mobility in heterogeneous wireless networks," IEEE J. Sel. Areas Commun., vol. 33, no. 10, pp. 2212-2225, 2015.
[6] W. Bao and B. Liang, "Handoff rate analysis in heterogeneous wireless networks with Poisson and Poisson cluster patterns," in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2015, pp. 77-86.
[7] Y. Ren, Y. Li, and C. Qi, "Handover rate analysis for K-tier heterogeneous cellular networks with general path-loss exponents," IEEE Communications Letters, vol. 21, no. 8, pp. 1863-1866, 2017.
[8] M. Ding and D. L. Pérez, "Please lower small cell antenna heights in 5G," in IEEE Global Communications Conference (GLOBECOM), 2016, pp. 1-6.
[9] P. F. Ash and E. D. Bolker, "Generalized Dirichlet tessellations," Geometriae Dedicata, vol. 20, no. 2, pp. 209-243, 1986.
[10] S. N. Chiu, D. Stoyan, W. Kendall, and J. Mecke, Stochastic Geometry and its Applications. John Wiley & Sons, 2013.
[11] H.-S. Jo, Y. J. Sang, P. Xia, and J. G. Andrews, "Heterogeneous cellular networks with flexible cell association: A comprehensive downlink SINR analysis," IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3484-3495, 2012.
[12] J. G. Andrews, F. Baccelli, and R. K. Ganti, "A tractable approach to coverage and rate in cellular networks," IEEE Trans. Commun., vol. 59, no. 11, pp. 3122-3134, 2011.
[13] 3GPP TR 36.931 v14.0.0, "Radio Frequency (RF) requirements for LTE pico Node (release 12)," 3GPP TSG RAN, Tech. Rep.
Efficient Processing of Very Large Graphs in a Small Cluster

Da Yan, Yuzhen Huang, James Cheng, Huanhuan Wu
Department of Computer Science and Engineering, The Chinese University of Hong Kong

arXiv:1601.05590 (https://arxiv.org/pdf/1601.05590v1.pdf)

ABSTRACT

Inspired by the success of Google's Pregel, many systems have been developed recently for iterative computation over big graphs. These systems provide a user-friendly vertex-centric programming interface, where a programmer only needs to specify the behavior of one generic vertex when developing a parallel graph algorithm. However, most existing systems require the input graph to reside in memories of the machines in a cluster, and the few out-of-core systems suffer from problems such as poor efficiency for sparse computation workload, high demand on network bandwidth, and expensive cost incurred by external-memory join and group-by.

In this paper, we introduce the GraphD system for a user to process very large graphs with ordinary computing resources. GraphD fully overlaps computation with communication, by streaming edges and messages on local disks, while transmitting messages in parallel. For a broad class of Pregel algorithms where message combiner is applicable, GraphD eliminates the need of any expensive external-memory join or group-by. These key techniques allow GraphD to achieve comparable performance to in-memory Pregel-like systems without keeping edges and messages in memories. We prove that to process a graph G = (V, E) with n machines using GraphD, each machine only requires O(|V|/n) memory space, allowing GraphD to scale to very large graphs with a small cluster.
Extensive experiments show that GraphD beats existing out-of-core systems by orders of magnitude, and achieves comparable performance to in-memory systems running with enough memories.

1. INTRODUCTION

Google's Pregel [12] and Pregel-like systems (e.g., Giraph [4], GraphLab [10, 6] and Pregelix [1]) have become popular for iterative computation over big graphs recently, with numerous applications including social network analysis [13], webpage ranking [12], and graph matching [5, 18]. These systems provide a user-friendly programming model, where a user thinks like a vertex when developing a parallel graph algorithm.

However, most existing systems require an entire input graph to reside in memories, as well as the huge amounts of messages generated during the computation. While this assumption is proper for big companies and researchers with powerful computing resources, it neglects the need of an average user who wants to process very large graphs, such as small businesses and researchers that cannot afford a large cluster. For example, [1] reported that in the Giraph user mailing list there are 26 cases (among 350 in total) of out-of-memory related issues from March 2013 to March 2014.

Another problem with in-memory systems is that they often use much more memory than the actual size of the input graph in order to keep the supporting data structures, vertex states, edges and messages. For example, [24] reported that to process a graph dataset that takes only 28GB disk space, Giraph and GraphLab need 370GB and 800GB memory space, respectively; and when memory resources become exhausted, the performance of Giraph degrades seriously while GraphLab simply crashes. Thus, even using a cluster with terabytes of memory space, we may only be able to process a graph of a few hundred GB. However, graphs in real-world applications can easily exceed this size, such as web graphs and the Semantic Web.
In fact, the file size of ClueWeb, a web graph used in our experiments, already exceeds 400GB, let alone those web graphs maintained by existing search engine companies, as well as other large graphs from online social networks and telecom operators. One may, of course, increase the memory space in a cluster by adding more machines. However, since there are n² communication pairs in a cluster of n machines and they all contend for the shared network resource, the communication overhead outweighs the increased computing power when n becomes too large.

Due to the above reasons, researchers have recently developed out-of-core systems for processing big graphs. For example, Pregelix [1] models the semantics of Pregel by relational operations like join and group-by, and leverages a general-purpose dataflow engine for out-of-core execution. Thus, it requires expensive external-memory join and group-by, which degrade the performance of distributed graph processing. Giraph also supports out-of-core execution, but [1] reported that it does not function properly. Other out-of-core systems adopt the edge-centric Gather-Apply-Scatter model of PowerGraph [6], which is a special case of the vertex-centric model with a narrower application scope. Specifically, a vertex can only communicate with its adjacent vertex along an adjacent edge, which makes it unsuitable for algorithms that require pointer jumping (or path doubling) [23]. Moreover, the Gather-Apply phase is essentially message combining in Pregel. We further categorize these systems into two types as follows.

Type 1: Single-Machine Systems. Such systems include GraphChi [9], X-Stream [15], and VENUS [3], which are designed to process a graph on a single PC. These systems require the IDs of vertices in a graph to be numbered as 1, 2, ..., |V|, and vertices are partitioned into P disjoint ID intervals, so that each partition can be loaded into memory for processing at a time.
Besides the strict requirement on vertex ID format, these systems are also inefficient if only a small fraction of vertices need to perform computation in an iteration. This is because a whole partition needs to be loaded for processing as long as one vertex in it needs computation.

Type 2: Distributed Systems. We are only aware of one such system, Chaos [14], which scales X-Stream out to multiple machines. However, Chaos represents the other extreme of hardware requirements with respect to single-PC systems. Specifically, its computation model is built upon the assumption that network bandwidth far outstrips storage bandwidth. In fact, [14] reported that Chaos only achieves good performance by using large-SSD machines connected with 40 Gigabit Ethernet, and the performance is undesirable when Gigabit Ethernet is used, which is far more common in most small to medium size companies and most research institutes.

While there also exist some in-memory graph processing systems designed to run in a single big-memory machine, they are not designed to process very large graphs. For example, the largest graph tested with GRACE [21] has less than 300 million edges.

The high startup overhead is another problem, which we explain by considering the loading of a graph of 100GB size. In a distributed system running with 100 PCs, each PC only needs to load around 1GB of data from HDFS (Hadoop Distributed File System). In contrast, a single-machine in-memory system needs to load all the 100GB of data from its local disk, and with its loading time alone a distributed system may have already finished many graph jobs.

In this paper, we introduce our GraphD system, which supports efficient out-of-core vertex-centric computation even with a small cluster of commodity PCs connected by Gigabit Ethernet, which is affordable to most users. We remark that GraphD aims at providing an efficient solution when memory space is insufficient; otherwise, one may simply use an in-memory system.
GraphD specifically provides the following desirable features:

(1) When a graph G = (V, E) is processed with n machines using GraphD, we prove that each machine only requires O(|V |/n) memory space, which allows GraphD to scale to very large graphs with a small cluster.

(2) By maintaining O(|V |/n) vertex states in each machine, GraphD is able to automatically adapt the amount of edges streamed from local disks to the number of vertices that perform computation in an iteration, achieving high performance even under sparse computation workloads (which is not possible in existing out-of-core systems).

(3) In a common cluster (connected with Gigabit Ethernet), network transmission is much slower than local disk streaming, and GraphD takes this insight into account in its design by a technique called "outgoing message buffering" to hide the disk I/O cost inside the network communication cost, leading to high performance as verified by our experiments.

(4) For a broad class of Pregel algorithms where message combiner is applicable, GraphD uses a novel ID-recoding technique to eliminate the need of any expensive external-memory join or group-by. In this case, the only external-memory operation is the streaming of edges and messages on local disks, and GraphD is able to achieve almost the same performance as an in-memory Pregel-like system when disk I/O is fully hidden inside network communication.

The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 presents the distributed semi-streaming computation model of GraphD, and analyzes its space cost. Section 4 discusses the parallel framework of GraphD to fully overlap computation with communication. Section 5 describes the ID recoding technique and Section 6 reports experimental results. Finally, Section 7 concludes the paper.

2. BACKGROUND AND RELATED WORK

We first review the computation model of Pregel, and then review other vertex-centric systems for processing big graphs.
2.1 Pregel Review

In this paper, we assume that the input graph G = (V, E) is stored on HDFS, where each vertex v ∈ V has a unique ID id(v) and an adjacency list Γ(v). For simplicity, we use v and id(v) interchangeably. If G is undirected, Γ(v) contains all v's neighbors; while if G is directed, Γ(v) contains all v's out-neighbors. The degree (or out-degree) of v (i.e., |Γ(v)|) is denoted by d(v). Each vertex v also maintains a value a(v) which gets updated during computation. A Pregel program is run on a cluster of machines, W, deployed with HDFS.

Computation Model. A Pregel program starts by loading an input graph from HDFS into the memories of all machines, where each vertex v is distributed to a machine W = hash(v) along with Γ(v), with hash(.) being a partitioning function on vertex ID. Let V (W ) be the set of all vertices assigned to W . Each vertex v also maintains a boolean field active(v) indicating whether v is active or halted.

A Pregel program proceeds in iterations, where an iteration is called a superstep. In Pregel, a user needs to specify a user-defined function (UDF) compute(msgs) to be called by a vertex v, where msgs is the set of incoming messages received by v (sent in the previous superstep). In v.compute(.), v may update a(v), send messages to other vertices, and vote to halt (i.e., deactivate itself). Only active vertices call compute(.) in a superstep, but a halted vertex is reactivated if it receives a message. The program terminates when all vertices are halted and there is no pending message for the next superstep. Finally, the results are dumped to HDFS.

To illustrate how to write compute(.), we consider the PageRank algorithm of [12], where a(v) stores the PageRank value of vertex v, and a(v) gets updated until convergence. In Step 1, each vertex v initializes a(v) = 1/|V | and distributes a(v) to its out-neighbors by sending each out-neighbor a message a(v)/d(v).
In Step i (i > 1), each vertex v sums up the received message values, denoted by sum, and computes a(v) = 0.15/|V | + 0.85 · sum. It then distributes a(v)/d(v) to each of its out-neighbors.

Combiner. Users may also implement a message combiner to specify how to combine messages targeted at the same vertex v t , so that messages generated on a machine W towards v t will be combined into a single message by W locally, and then sent to v t . A message combiner effectively reduces the number of messages transmitted through the network. In the example of PageRank computation, the combiner can be implemented as computing sum, since only the sum of the incoming messages is of interest in compute(.).

Aggregator. Pregel also allows users to implement an aggregator for global communication. Each vertex can provide a value to an aggregator in compute(.) in a superstep. The system aggregates those values and makes the aggregated result available to all vertices in the next superstep.

For each vertex v, machine W = hash(v) keeps the following information in main memory: (1) the vertex state, which consists of id(v), a(v) and active(v), and (2) the adjacency list Γ(v). Since vertex degree is required by out-of-core systems to demarcate the adjacency lists of different vertices, to be consistent, we include d(v) into the vertex state of v, which is given as follows:

state(v) = (id(v), a(v), active(v), d(v)).    (1)

2.2 Vertex-Centric Graph Processing Systems

As discussed in Section 1, existing vertex-centric systems for big graph processing can be categorized into (1) distributed in-memory systems, (2) single-PC out-of-core systems, and (3) distributed out-of-core systems. We now review them in more detail.

Distributed In-Memory Systems. Since Pregel [12] is only for internal use in Google, many open-source Pregel-like systems have been developed, including Giraph [4], GPS [16], GraphX [7], and Pregel+ [22].
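The PageRank logic above (Step 1, Step i, and the sum-combiner) can be sketched in a few lines; the Vertex class and function names below are illustrative only, not GraphD's actual C++ API:

```python
# Sketch of the PageRank compute(.) UDF and its sum-combiner described above.
# All names (Vertex, compute, combiner) are illustrative, not GraphD's API.

class Vertex:
    def __init__(self, vid, out_neighbors):
        self.id = vid
        self.gamma = out_neighbors   # adjacency list of out-neighbors
        self.value = 0.0             # a(v)

def compute(v, msgs, superstep, num_vertices):
    """Update a(v) and return the outgoing messages as (target, value) pairs."""
    if superstep == 1:
        v.value = 1.0 / num_vertices                      # Step 1
    else:
        v.value = 0.15 / num_vertices + 0.85 * sum(msgs)  # Step i > 1
    d = len(v.gamma)
    return [(u, v.value / d) for u in v.gamma] if d > 0 else []

def combiner(msgs_to_same_target):
    """Only the sum of incoming messages matters, so combine by summing."""
    return sum(msgs_to_same_target)
```

On a two-vertex cycle, for example, two supersteps already restore the invariant that the PageRank values sum to 1.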
Like Pregel, these systems keep an entire input graph in memories during computation, and also buffer intermediate messages in memories. While these systems adopt a synchronous execution model where vertices communicate by message passing, GraphLab [10] adopts a different design. Specifically, a shared-memory abstraction is adopted where a vertex directly pulls data from its adjacent vertices/edges, and asynchronous execution is supported to allow faster convergence for algorithms where vertex values converge asymmetrically. A subsequent version of GraphLab, PowerGraph [6], partitions the graph by edges instead of vertices, in order to achieve more balanced workloads. Since our work is more related to out-of-core systems, we refer interested readers to [8, 11] for more discussions of existing in-memory systems.

Single-PC Out-of-Core Systems. These systems load one vertex partition into memory at a time for processing. In GraphChi [9], all vertices in a vertex partition and all their adjacent edges need to be loaded into memory before processing begins. X-Stream adopts a different design, which only needs to load all vertices in a partition into memory, while edges are streamed from local disk. Note that sequential streaming only requires a small in-memory buffer in order to achieve sequential I/O bandwidth, whose memory cost is negligible. In both GraphChi and X-Stream, vertices communicate with each other by writing/reading data on adjacent edges. VENUS [3] avoids the cost of writing data to edges, by letting a vertex obtain values directly from its in-neighbors, but it is not open source.

However, all these systems need to scan the whole disk-resident input graph in each iteration, leading to undesirable performance for sparse computation workloads. We remark that although GraphChi supports selective scheduling, it is ineffective since a whole partition (including all adjacent edges) needs to be loaded even if just one vertex in the partition needs computation.
Distributed Out-of-Core Systems. Compared with single-machine systems, these systems only require a machine to process a partition of the graph, and thus the disk bandwidths of all machines are utilized in parallel. HaLoop [2] improves the performance of Hadoop for iterative computation by allowing a job to cache data to local disks to avoid remote reads in each iteration, but for vertex-centric graph computation, users need to explicitly program the interaction between vertices and messages using the MapReduce model. Pregelix [1] formulates the computation model of Pregel using relational operations like join and group-by, and requires expensive external-memory join and group-by operations. Chaos [14] scales out X-Stream by partitioning the input graph on the disks of multiple machines, each of which streams its own portion of edges but may steal workload from other machines when it becomes idle. However, the system requires high-speed networks to synchronize vertex values and to steal workloads, and is inefficient when Gigabit Ethernet is used.

3. DATA ORGANIZATION AND STREAMS

In this section, we describe the distributed semi-streaming (DSS) model of GraphD, analyze its memory cost, and introduce its disk stream designs.

3.1 Distributed Semi-Streaming Model

We first consider the memory requirement of Pregel. For ease of analysis, we assume that the types of vertex ID, vertex value, adjacency list item, and message all have constant size. Accordingly, a vertex state as given in Eq (1) also has constant size (as active(v) and d(v) have constant size). We remark that these data types are specified by users through C++ template arguments, and can have variable sizes in reality (e.g., vertex ID can be a string).

Recall that Pregel keeps the O(|V |) vertex states and O(|E|) edges (i.e., adjacency list items) in memories. Let us denote the set of messages currently in the system by M, where a message is either on the sender-side or on the receiver-side.
Then, O(|M|) memory space is also required for keeping messages. Therefore, the total memory space required by Pregel is O(|V | + |E| + |M|). Note that O(|E|) is typically much larger than O(|V |). For example, a user in a social network can easily have tens of friends. In many Pregel algorithms such as PageRank computation, only one message is transmitted along each edge in a superstep, and thus O(|M|) = O(|E|). However, |M| can be much larger in some Pregel algorithms. For example, in the triangle finding algorithm of [13], to confirm a triangle v 1 v 2 v 3 where v 1 < v 2 < v 3 , v 1 needs to send v 2 a message asking about whether v 3 ∈ Γ(v 2 ) (note that v 1 has access to v 2 and v 3 in Γ(v 1 )). Since there are O(|E|^1.5) triangles in a graph [17], O(|M|) is at least O(|E|^1.5).

According to the above analysis, the dominating memory cost is contributed by adjacency lists (O(|E|)) and messages (O(|M|)). GraphD streams adjacency lists and messages on local disks, leaving only the O(|V |) vertex states in memories, and thus significantly reduces the memory requirement. However, the O(|V |) vertex states can still be too large to fit in the memory of a single machine. In fact, if all vertex states can fit in memory, single-PC systems such as GraphChi often provide an alternative model for more efficient semi-streaming graph processing.

Since GraphD is a distributed system, each machine only needs to keep a portion of the vertex states. GraphD follows the distributed semi-streaming (DSS) model, where each machine W only keeps the states of all vertices in V (W ) in its memory, and treats their adjacency lists and incoming and outgoing messages as local disk streams. It remains to show that DSS distributes the vertex states evenly among the |W| machines, i.e., each machine holds no more than O(|V |/|W|) vertex states with a small constant (e.g., 2).
This memory requirement is very reasonable given the RAM size of a commodity PC today, allowing a small cluster to scale to very large graphs. We now prove this property below, where we regard the machine number |W| as a constant.

LEMMA 1. Assume that hash(.) is well chosen so that a vertex is assigned to every machine with equal probability. Then, with probability of at least (1 − O(1/|V |)), it holds that max W ∈W |V (W )| is less than 2|V |/|W|.

PROOF. First, consider a particular machine W . Since every vertex is hashed to W with probability p = 1/|W|, the total number of vertices that are hashed to W (i.e., |V (W )|) conforms to a binomial distribution with mean µ = |V |p and variance σ^2 = |V |p(1 − p) < |V |p = µ. According to Chebyshev's inequality, we have

Pr( | |V (W )| − µ | ≥ µ ) ≤ σ^2/µ^2.

Since σ^2 < µ, the R.H.S. is less than 1/µ. Moreover, since |V (W )| is positive, the L.H.S. is equivalent to Pr(|V (W )| ≥ 2µ). Therefore,

Pr(|V (W )| ≥ 2µ) < 1/µ.    (2)

Since µ = |V |/|W|, 1/µ = |W|/|V | = O(1/|V |) is a very small number. For example, when we process a billion-node graph using a cluster of 20 PCs, |W| is only 20 but |V | is on the order of 10^9, and thus 1/µ is on the order of 10^−7 – 10^−8.

We now consider all machines in W, and proceed to prove our lemma:

Pr( max W ∈W |V (W )| < 2|V |/|W| ) = Pr( max W ∈W |V (W )| < 2µ )
  = Pr( |V (W )| < 2µ for all W ∈ W )
  ≥ 1 − ∑ W ∈W Pr( |V (W )| ≥ 2µ )    (using the union bound)
  > 1 − |W|/µ.    (using Eq (2))

The lemma is proved by noticing that |W|/µ = |W|^2/|V | = O(1/|V |). For example, when |W| is 20 and |V | is on the order of 10^9, |W|^2/|V | is on the order of 10^−6 – 10^−7.

[Figure 1: the in-memory vertex-state array A — columns: ID, Value, Active, Degree]

We additionally require that the main memory of a machine be large enough to hold the state state(v) and adjacency list Γ(v) of any single vertex v, so that v can access them in v.compute(.).
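As a quick sanity check of Lemma 1, one can hash a million vertex IDs to 20 machines and verify that the largest partition stays below 2|V |/|W|; the multiplicative hash below is only an illustrative choice of hash(.):

```python
# Empirical check of Lemma 1: hash |V| vertex IDs to |W| machines and verify
# max_W |V(W)| < 2|V|/|W|. The Knuth-style multiplicative hash is illustrative;
# GraphD's hash(.) is a user-supplied partitioning function.

def partition_sizes(num_vertices, num_workers):
    counts = [0] * num_workers
    for v in range(num_vertices):
        w = (v * 2654435761) % (2 ** 32) % num_workers  # illustrative hash(.)
        counts[w] += 1
    return counts
```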
We add this constraint because Γ(v) of a high-degree vertex v could require more memory space than O(|V |/|W|) (i.e., the bound of Lemma 1), but this constraint is reasonable given the RAM size of a commodity PC today, and it is also required by existing out-of-core systems such as GraphChi and Pregelix.

3.2 Graph Organization and Edge Streaming

While GraphD may load data from HDFS and write results to HDFS, during the iterative computation, GraphD only sequentially reads and/or writes binary streams on local disks for efficiency. When users specify GraphD to load an input graph from HDFS, the graph gets partitioned among all machines, where each machine W saves the adjacency lists of vertices in V (W ) to local disk as an edge stream denoted by S E . Meanwhile, the states of vertices in V (W ) are kept in memory (for computation) and also written to local disk (for subsequent local loading, see below). Optionally, if the graph was previously loaded from HDFS by another job, users may also specify GraphD to load the graph from local disks, in which case each machine directly loads the previously saved vertex states to memory.

In GraphD, each machine organizes its in-memory vertex states with an array A, as illustrated in Figure 1. Vertices in A are ordered by vertex ID (i.e., 2, 22, 32, 42, · · · in Figure 1), and the edge stream S E simply concatenates their adjacency lists in the same order. In a superstep, compute(.) is scheduled to be called on the active vertices in A in order. Since a vertex v needs to access Γ(v) in v.compute(.), the next d(v) items are sequentially read from S E to form Γ(v). Thus, each superstep only sequentially reads S E once. If topology mutation is enabled, each superstep (say, Step i) should digest an old edge stream S E i−1 and generate a new edge stream S E i (for use in Step (i + 1)), where the subscripts denote the corresponding superstep number.
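The layout just described — the vertex-state array A in memory and the adjacency lists concatenated in S E on disk — can be sketched as follows (Python lists stand in for the disk stream; all names are illustrative):

```python
# Sketch of the per-machine layout: an in-memory vertex-state array A sorted
# by ID, and an edge stream S_E concatenating the adjacency lists in the same
# order; one sequential pass over S_E recovers each Γ(v).

from collections import namedtuple

# state(v) = (id(v), a(v), active(v), d(v)), cf. Eq (1)
State = namedtuple("State", ["vid", "value", "active", "degree"])

def one_pass(A, S_E):
    """Yield (state, Γ(v)) pairs by reading the next d(v) items from S_E."""
    pos = 0
    for st in A:
        gamma = S_E[pos: pos + st.degree]
        pos += st.degree
        yield st, gamma
```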
However, this method streams the whole edge stream once in each superstep, even if only a few vertices are active. Note that sparse computation workload is not a problem for in-memory systems, since the adjacency lists are stored in RAM, but it often causes a performance bottleneck for disk-based systems like X-Stream. For example, [15] admitted that X-Stream is inefficient for graphs whose structure requires a large number of iterations, as each iteration has to stream all edges of the graph.

For the example of Figure 1, since Vertices 22 and 32 are not active, if they also receive no message, then their edges can be skipped. For this purpose, our streaming algorithm should support a function skip(num items), to skip the next num items items from the stream. Referring to Figure 1 again, after Vertex 2 is processed, we may skip the edges of Vertices 22 and 32 by calling skip(4), where 4 is computed by adding their degrees d(v) (i.e., 3 and 1 in array A).

However, it is inefficient to perform a random disk read each time skip(.) is called. This is because, if there are many small series of inactive vertices in A, too many random disk I/Os are incurred, which may be even more costly than streaming the whole S E . We want our streaming algorithm to automatically adapt to the fraction of active vertices, i.e., (1) it should achieve sequential disk bandwidth when the workload is dense, and (2) it should be able to skip a large number of inactive vertices with a few random reads when the workload is sparse. We also need to guarantee that (3) the worst-case cost is no larger than streaming the whole S E once.

Before describing our streaming algorithm, we first consider how a stream (i.e., a file) is normally read. Specifically, an in-memory buffer B of size b is maintained throughout the streaming of a file.
To read data from the stream, we continue reading from the latest read-position in B, and if we reach the end of B, we refill B with the next b bytes of data from the stream file on disk. Since each batch of b bytes of data is read into B using one random disk read, the cost of the random read (e.g., seek time and rotational latency) is amortized over all b bytes, and as long as b is not too small, the disk reads become sequential. GraphD sets b to 64 KB by default, which is more than enough for achieving sequential bandwidth on most platforms, but is negligible given the RAM size of a modern PC.

To achieve the aforementioned 3 requirements, skip(.) avoids reading data from the file if, after the skipping, the position to read data from is still in the buffer B. Obviously, this approach limits the number of random reads to be no more than that incurred when streaming the whole S E . More specifically, skip(k) is implemented as follows: we move the latest read-position in buffer B forward for k adjacency list items, to the position pos. If pos is still inside B, we are done and no random read is incurred. Otherwise, pos has exceeded the end of B, and we move the read-position in the stream file forward for (pos − b) bytes (i.e., the amount to skip right after the end of B), to locate the start position for reading the next b bytes from the file; we then refill B with the next b bytes of data read from the file. When the workload is sparse, this method is able to avoid sequentially reading a lot of useless items (with one random disk read).

[Figure 2: Message Streams in GraphD — (a) message flow of a worker; (b) merging of sorted streams]

Note that an important reason for maintaining vertex states in main memory is to quickly access vertex degrees for computing the number of bytes to skip.
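A minimal sketch of this buffered stream with skip(.), counting positions in items rather than bytes for simplicity (the "disk" is a list, and all names are illustrative):

```python
# Sketch of the buffered edge stream with skip(.): reads refill a buffer of b
# items at a time; skip(k) avoids a disk access when the target position is
# still inside the current buffer. A list stands in for the disk file.

class BufferedStream:
    def __init__(self, disk, b):
        self.disk, self.b = disk, b
        self.file_pos = 0        # next position on "disk" to refill from
        self.buf, self.cur = [], 0
        self.refills = 0         # counts random disk reads

    def _refill(self):
        self.buf = self.disk[self.file_pos: self.file_pos + self.b]
        self.file_pos += len(self.buf)
        self.cur = 0
        self.refills += 1

    def next(self):
        if self.cur >= len(self.buf):
            self._refill()
        item = self.buf[self.cur]
        self.cur += 1
        return item

    def skip(self, k):
        pos = self.cur + k
        if pos <= len(self.buf):     # target still inside the buffer
            self.cur = pos           # no disk access incurred
        else:                        # move the file read-position forward
            self.file_pos += pos - len(self.buf)
            self.buf, self.cur = [], 0
```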
Otherwise, the vertex states can be treated as a disk stream that gets streamed along with S E during computation, in which case we only require a minimal amount of memory.

3.3 Message Streams

We now consider the message streams in our DSS model. As Figure 2(a) shows, each machine can have an incoming message stream (IMS) S I , and |W| outgoing message streams (OMSs) S O i (i = 1, 2, · · · , |W|) on disk. Here, S O i is used to buffer those messages towards vertices on the i-th machine, denoted by W i . We now describe these streams.

3.3.1 Outgoing Message Streams

When a vertex v sends a message to another vertex u in v.compute(.), we may either (1) buffer it in memory for sending, or (2) append it to an OMS S O hash(u) on local disk, to be loaded later for sending. The actual decision depends on whether the disk streaming bandwidth or the network bandwidth is larger. Option (2) appears a bit strange at first glance, since it needs to write each message to disk and then load it back, leading to additional disk I/O. However, we need it because messages are generated by vertex-centric computation quickly, and if the speed of message sending cannot catch up but we keep buffering new messages, we might end up buffering too many messages in memory (or even causing memory overflow), breaching the bound of O(|V |/|W|) established in Lemma 1. We explain when and why we adopt option (1) or option (2) below.

Design Philosophy. If the network bandwidth is higher than the disk streaming bandwidth (e.g., when the 40 Gigabit Ethernet used in [14] is adopted), GraphD does not create OMSs. However, since vertex-centric computation generates messages quickly, the memory budget for buffering messages will soon be reached. To avoid memory overflow, vertex-centric computation is stalled while the buffered messages are being sent. Then, these messages are removed from the in-memory message buffer, allowing vertex-centric computation to continue (to generate and buffer more messages).
The stalling degrades performance, since it leads to repeated serial execution of message sending followed by vertex-centric computation (i.e., buffer refilling), and thus computation and communication do not overlap with each other. Note that the cost of computation in GraphD is not negligible, since a machine needs to stream S E . However, it makes no sense to write messages to OMSs in this setting, since writing a message to disk is even slower than sending it.

In contrast, if the disk streaming bandwidth is higher than the network bandwidth (e.g., when the commonly used Gigabit Ethernet is adopted), GraphD uses OMSs to buffer the generated messages that are to be sent out. Since messages are buffered to local disks, vertex-centric computation is never stalled, allowing computation to be performed in parallel with message sending. The resulting parallelism of OMS appending and message sending, in turn, hides the cost of the former inside that of the latter, as the disk bandwidth is higher than the network bandwidth.

In fact, in a cluster assembled with PCs and 1 Gbps switches that are commonly available to average users, the local disk streaming bandwidth is much larger than the network bandwidth. This observation has been reported by existing work such as [19], which proposes a faster fault-recovery method for the framework of Pregel, but requires every machine W to log all messages sent by W also to the local disk. Experiments of [19] (using a 1 Gbps switch) reported that execution with message logging is almost as fast as execution without logging, which shows that the cost of sequentially writing messages to disks is negligible compared with message transmission. This is also confirmed by our experimental results reported in Table 4 of Section 6, which show that the total time of vertex-centric computation (which performs message streaming) accounts for a very small fraction of the running time of a superstep, while message transmission lasts throughout the whole superstep.
We studied the reasons behind this observation, and found that (i) disk streaming is significantly accelerated by the OS memory cache, and that (ii) the network resource is contended for by all the |W| machines, limiting the connection throughput between any pair of machines. In this common setting, our use of OMSs is able to hide the disk I/O cost inside the communication cost, leading to full overlapping between computation and communication.

OMS Structure. Recall that vertex-centric computation appends messages to an OMS S O i , and meanwhile, earlier messages in S O i are loaded to memory for sending, and should then be garbage collected from S O i . One may organize an OMS as an append-only streaming file, where new data are always written to its in-memory buffer B (of size b = 64 KB), and when B becomes full, the data in B gets flushed to the stream file and B is emptied for appending more messages. However, this solution has several weaknesses. Firstly, since vertex-centric computation continually appends data to the OMS file, it is difficult to track whether sufficient new messages have been written to the file so that sending them will not underutilize the network bandwidth. Secondly, a message that gets sent cannot be garbage collected from its OMS. In short, it is not desirable to obtain messages from a file that is appending new data.

To solve this problem, we implement an OMS as a splittable stream that supports concurrent data appending (at the tail) and data fetching (at the head). Specifically, a splittable stream S breaks a long stream of data items into multiple files F 1 , F 2 , . . ., where each file F j either has at most B bytes, or contains only one data item whose size is larger than B. Here, B is a parameter of a splittable stream, and we shall discuss how to set it shortly. A splittable stream S appends data items to each of its files in a streaming manner, which only requires an in-memory buffer B to achieve sequential disk I/O.
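A minimal sketch of such a splittable stream, with in-memory lists standing in for files and explicit item sizes (all names are illustrative):

```python
# Sketch of a splittable stream: append(.) closes the current file F_j and
# starts F_{j+1} when F_j would exceed B bytes; an oversized item still gets
# a file of its own. Lists stand in for the on-disk files.

class SplittableStream:
    def __init__(self, B):
        self.B = B
        self.files = [[]]   # F_1, F_2, ...; only the last file is appendable
        self.sizes = [0]    # current size of each file in "bytes"

    def append(self, item, size):
        if self.sizes[-1] > 0 and self.sizes[-1] + size > self.B:
            self.files.append([])   # close F_j, create F_{j+1}
            self.sizes.append(0)
        self.files[-1].append(item)
        self.sizes[-1] += size

    def fully_written(self):
        """Files F_1 .. F_{j-1}, which are ready to be fetched for sending."""
        return self.files[:-1]
```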
Let us assume that S is currently writing F j . To append a data item o to S, S checks whether F j will have more than B bytes after appending o: (1) if so, F j is closed and a new file F j+1 is created for appending o; (2) otherwise, o is directly appended to F j . Since S writes to only one file at a time, S requires only b = 64 KB of memory. In GraphD, since each OMS is organized as a splittable stream, the |W| OMSs in a machine take |W| · b bytes of memory in total. Even when |W| = 1000, all OMSs take merely 64 MB of RAM.

Sending Messages in OMSs. When an OMS S O i is writing F j , messages in F 1 , . . . , F j−1 can be sent to machine W i to utilize the network bandwidth. We now describe how GraphD sends messages in OMSs. As Figure 3 shows, each machine W i maintains an in-memory sending buffer B send , and a fully-written file F k of an OMS S O j is sent to W j by first loading the messages in F k to B send , which are then sent to W j in one batch. Obviously, the buffer size |B send | should be as large as the largest possible size of F j , which is at least B.

[Figure 3: Sending Messages in OMSs]

We will discuss how to set |B send | properly shortly. Now, we first discuss how we set B. Obviously, the smaller B is, the finer-grained each message file is, and thus the less likely that message sending will be stalled on a file that is being appended. However, since messages are sent in batches of size around B, B cannot be too small, as sending messages in small batches is inefficient. GraphD sets B to 8 MB by default, which is large enough to fully utilize the network bandwidth (and keep the number of files tractable), while small enough to avoid file collision for both message appending and message sending. We remark that maintaining a sending buffer of 8 MB (or larger, as we shall explain) is well acceptable given the RAM size of a modern PC.

Sending Strategies.
Referring to Figure 3 again, each machine W i orders the |W| OMSs into a ring, where each OMS keeps track of the batch number of the last file that has been sent (resp. fully written), denoted by no s (resp. no w ). For example, for S O j in Figure 3, which is currently appending messages to F 5 , no s = 2 and no w = 4. Moreover, each machine keeps track of the position in the ring, denoted by p, from whose OMS (i.e., S O p ) the previous message file was selected to be loaded to B send for sending.

If message combiner is not used, we scan through the ring from position p, until an OMS S O j is reached whose no s < no w (i.e., there is at least one file to send). There are two possible cases.

Case 1: if such an OMS is found before the scan reaches p again, we load F no s +1 to B send for sending, and then update p as j. For example, for S O j in Figure 3, we only send F 3 . Then, the same scan operation is repeated starting from the updated position p in the ring. Note that even if S O j has more than one file to send to W j (e.g., F 4 in Figure 3), the next scan will pick a file from another OMS S O j′ ( j′ ≠ j) for sending to W j′ (if one exists), to avoid a communication bottleneck on the receiver-side. For the same reason, different machines initialize p to different values when a job begins.

Case 2: if the scan reaches p again without finding a valid OMS, then no OMS has a file to send, and thus the scanning thread goes to sleep. The thread is awakened to repeat the scan whenever a new message file is written.

On the other hand, if message combiner is used, we adopt a different scanning strategy to maximize the effect of message combining: if the scan locates a valid OMS, all its message files from F no s +1 to F no w are combined for sending in one batch.
Specifically, the messages are first merge-sorted (i.e., grouped) by destination vertex ID, and then another pass over the sorted messages combines each group into one message and appends this message to B send for sending. The strategy is effective, since (1) when all active vertices have called compute(.) in the current superstep, the OMSs are finalized and our strategy essentially combines all remaining messages in each OMS, while (2) otherwise, message combining runs in parallel with vertex-centric computation, and thus does not increase the computation time.

Here, combined messages are appended to B send , and since the messages come from multiple files, |B send | may need to increase beyond B. However, since there can be at most one combined message for each vertex in the target machine, |B send | is upper bounded by O(max W ∈W |V (W )|). While GraphD sets |B send | as B by default, if combiner is used, GraphD increases |B send | to O(max W ∈W |V (W )|) (if it is larger than B). According to Lemma 1 of Section 3.1, the memory bound of O(|V |/|W|) is still kept.

Finally, we show that merge-sorting message files takes only constant memory space, which is well affordable to a modern PC. Assume that we sort files F 1 , F 2 , . . . , F n by k-way merge-sort; then it takes log_k n sequential passes over all the messages. At any time during the merge-sort, only one merge operation is running, where (at most) k sorted message files are being merged into one larger message file, as Figure 2(b) illustrates. Since we treat each sorted message file as a stream when reading/appending messages, the merge-sort uses (k + 1) in-memory buffers, which take (k + 1)b memory space. GraphD sets k to 1000, and thus a merge-sort operation takes merely (64 MB + 64 KB) of RAM despite the large k. Moreover, the large value of k allows merge-sort to take only one pass even for very large graphs, since the number of message files to combine is usually smaller than k = 1000.
To see this, recall that each message file has size around B = 8 MB, and thus k files have size around 8 GB, which is quite large for an OMS (which only contains messages transmitted between one pair of machines). Also note that this strategy is just a baseline, and in Section 5 we shall see that when our ID recoding technique is used, messages can be combined in memory without performing merge-sort first. Incoming Message Stream We now consider the IMS S I . Since outgoing messages are loaded to B send and sent in batches, each machine also needs to maintain an in-memory receiving buffer B recv with |B recv | = |B send |. In each machine, a receiving thread listens on the network, and uses B recv to receive one message batch at a time and adds the messages to S I . Next, we discuss how to add received messages to S I . In a superstep, each active vertex v calls compute(msgs), where msgs is obtained from S I . Since the vertex-state array A and edge stream S E are already ordered by vertex ID, we require messages in S I also to be ordered by destination vertex ID, so that vertex-centric computation may simply proceed in one pass over A by sequentially reading from both S I and S E . Specifically, to call v.compute(msg), v may read the next d(v) items from S E , and sequentially read messages targeted at v from S I and append them to msgs until a message targeted at u > v (or the end of S I ) is reached. However, the order that messages in S I are received depends on the actual communication process. We adopt the following approach to make S I ordered. Specifically, whenever a machine receives a batch of messages in B recv , it sorts the messages by destination vertex ID, and then writes the sorted messages to a file on disk. Finally, when all incoming messages for the current superstep are received, the sorted message files are further merged (or merged-sorted) into one sorted message file, which becomes S I . 
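The sorted-IMS reading scheme described above can be sketched as follows (hypothetical types and names): since S I is sorted by destination vertex ID, a cursor advances sequentially, collecting messages for the current vertex v and stopping at the first message for some u > v.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

using Msg = std::pair<int, double>;  // (destination vertex ID, value)

// Collect the messages for vertex v from the sorted stream S_I.
// The cursor only moves forward, so one pass over S_I serves all vertices.
std::vector<double> msgs_for(const std::vector<Msg>& S_I,
                             std::size_t& cur, int v) {
    std::vector<double> msgs;
    while (cur < S_I.size() && S_I[cur].first == v)
        msgs.push_back(S_I[cur++].second);  // sequential read, never rewinds
    return msgs;
}
```

Calling this with increasing v mirrors the one-pass co-scan of A, S E and S I : a vertex with no entries in S I simply gets an empty msgs.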
As we have discussed, GraphD uses 1000-way merge-sort which takes merely (64 MB + 64 KB) RAM. Moreover, since each received message batch has size around 8 MB, when there are no more than 8 GB messages, the message files are simply merged. Moreover, merge-sort is unlikely to take more than 2 passes since this requires a machine to receive over 8 TB messages. Again, this solution is just a baseline, as the use of the ID recoding technique allows incoming messages to be digested in memory, and thus there is no S I (and no merge-sort). Cost Analysis We now analyze the total memory cost of GraphD when both IMS and OMSs are maintained, assuming that |W| < 1000. For communication, each machine maintains two buffers B send and B recv which take 2B = 16 MB memory space. For computation, the OMSs need |W| · b < 64 MB memory for appending messages, and 2b = 128 KB memory for reading the edge stream and the IMS. When combiner is used, the merge-sort for combining messages before sending takes (64 MB + 64 KB) memory. After all messages are received, the merge-sort for constructing S I needs (64 MB + 64 KB) memory. Therefore, each machine requires only around 200 MB additional memory space besides the vertex-state array A. Therefore, the space bound of O(|V |/|W|) established by Lemma 1 is still kept. As for the disk I/O cost of a superstep, all the streams S E , S I and S O i are sequentially read and/or written for only one pass, while the merge-sort for combining messages (resp. for constructing S I ) additionally takes one pass (or at most two passes for an enormous graph) over the outgoing (resp. incoming) messages. Other Issues Data Loading. Data loading from HDFS is similarly processed, except that now data items in an OMS and an IMS becomes vertices (along with their adjacency lists) rather than messages. 
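As a sanity check on the cost analysis above, the extra memory footprint can be tallied from the stated defaults (B = 8 MB, b = 64 KB, k = 1000); the function below is only an illustrative back-of-envelope calculation, not part of GraphD.

```cpp
#include <cassert>
#include <cstdint>

// Additional memory (beyond the vertex-state array A) per machine,
// following the cost analysis: send/receive buffers, OMS tails,
// stream-reading buffers, and two k-way merge-sorts.
int64_t extra_memory_bytes(int64_t num_machines) {
    const int64_t MB = 1 << 20, KB = 1 << 10;
    const int64_t B = 8 * MB, b = 64 * KB, k = 1000;
    int64_t buffers   = 2 * B;              // B_send + B_recv
    int64_t oms_tails = num_machines * b;   // in-memory tail of each OMS
    int64_t streams   = 2 * b;              // reading the edge stream and IMS
    int64_t sorts     = 2 * (k + 1) * b;    // combining + IMS merge-sorts
    return buffers + oms_tails + streams + sorts;
}
```

Even at the extreme of |W| approaching 1000 machines, the total stays close to the "around 200 MB" figure quoted above.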
Specifically, each machine parses a portion of the input file from HDFS, and each vertex v parsed is appended to the OMS S O i (i = hash(v)) to be directed to W i . Since there could be high-degree vertices, we require |B send | and |B recv | to be large enough to hold any single vertex and its adjacency list (which could be larger than O(|V |/|W|)) at the loading stage. The received vertices are merge-sorted by vertex ID into S I , which is then split into A and S E using another pass over S I . Topology Mutation. GraphD also supports algorithms that perform topology mutation, by associating a type with each message indicating whether the message is an ordinary one or is for vertex mutation. Edge mutations are performed in v.compute(.) by directly updating Γ(v), which is written to a new local edge stream for the next superstep. Vertex mutations are performed after vertex-centric computation, where new vertices are appended to the vertex-state array A, and deleted vertices are simply marked in A without actual deletion. This design guarantees that existing vertices never change their positions in A, which is required for our ID recoding technique to be described in Section 5. Fault Tolerance. Our design of streams naturally supports checkpointing. The vertex states and edge streams are backed up to HDFS at the beginning of a job. To checkpoint a superstep, the IMSs of all machines are backed up to HDFS; and if topology mutation happens, the locally-logged incremental updates since the last checkpoint are also backed up to HDFS. When failure happens, a machine loads its vertex states and edge stream from HDFS, replays the mutation operations, and loads incoming messages from the latest checkpoint to resume execution. Our DSS model straightforwardly supports the message-log based fast recovery approach of [19] mentioned in Section 3.3.1, since every machine writes outgoing messages to OMSs on local disk.
The only change required is the timing of garbage collecting OMSs: each machine keeps all its OMSs on local disk (for use during recovery) until a new checkpoint is written to HDFS, instead of deleting a message file immediately after its messages are sent. PARALLEL FRAMEWORK OF DSS In Section 3, we have discussed the graph and stream data organization of our DSS model. In this section, we introduce how these structures are actually used in parallel graph computation. We focus on both the parallelism between machines, and that within a machine. Due to the space limitation, we only discuss the most complicated case when all streams S E , S I and S O i are used by GraphD, i.e., when local disk streaming bandwidth is larger than network bandwidth as is common in a commodity cluster. In this setting, the intra-machine parallelism mainly refers to the overlapping of computation (local disk streaming) with communication (message transmission). We now present our parallel framework. We use FIFO communication channels, i.e., if a machine W i sends message m a and then m b to another machine W j , W j is guaranteed to receive m a before receiving m b . We also use condition variables to avoid a blocking thread from occupying CPU resources: suppose that a thread w a needs to block until a condition holds, it may wait on a condition variable cond-var and will no longer occupy CPU, when another thread w b updates the condition, it may wake up w a to continue execution using cond-var. Each machine runs three units in parallel: (1) a sending unit U s that sends outgoing messages; (2) a receiving unit U r that receives incoming messages; and (3) a computing unit U c that performs vertex-centric computation (to generate messages). We now explain how they interact with each other. Synchronization Between Supersteps. 
Since Pregel adopts synchronous execution, it is unreasonable to delay the transmission of messages generated in Step i, by transmitting messages generated in Step (i + 1), especially in a commodity cluster where network bandwidth is limited. Therefore, U s of all machines should block the sending of messages generated by their U c in Step (i + 1), until all messages generated in Step i have been received by U r in all machines. Our framework guarantees this property, by letting U r in each machine to synchronize with the receiving units of all other machines, after it has received all the messages towards its machine (generated in Step i). We will discuss how U r determines this condition shortly. After the synchronization, U r guarantees that all messages generated in Step i have been transmitted, and thus it notifies U s to continue sending messages generated in Step (i + 1). Message Receiving. We now explain how U r decides whether it has received all messages of Step i. Specifically, whenever U s in a machine W j has sent all its messages towards another machine W k (i.e., W j 's OMS S O k is exhausted), it will send an end tag (a special message) to W k . As a result, a machine W k just needs to count the number of end tags received, and if it reaches |W|, messages from all machines must have been received. This is because GraphD guarantees the property that all messages (including end tags) generated in Step i must be transmitted before any message (including an end tag) generated in Step (i + 1), as we have described before. Here, U s decides that it has exhausted its OMS S O k (and sends an end tag to W k ) if the following two conditions are met: (1) U c has finished vertex-centric computation for Step i, and will thus generate no more messages of Step i; and (2) there is no more message file in OMS S O k for sending. Vertex-Centric Computation. 
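The end-tag counting and the condition-variable blocking described above can be sketched as follows (illustrative names, not GraphD's actual API): U r counts end tags to detect that all messages of the superstep have arrived (sound because channels are FIFO), and a gate lets U s block until U r permits sending for the next superstep.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// U_r side: one end tag arrives from each machine once its messages
// for the current superstep have all been sent.
struct EndTagCounter {
    int num_machines;
    int tags_seen = 0;
    bool on_end_tag() { return ++tags_seen == num_machines; }  // true: all received
};

// Shared gate: U_s blocks in wait_for() until U_r, after synchronizing
// with the other receiving units, calls signal() for that superstep.
struct StepGate {
    std::mutex m;
    std::condition_variable cv;
    int ready_step = 0;
    void wait_for(int step) {                 // called by U_s
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready_step >= step; });
    }
    void signal(int step) {                   // called by U_r
        { std::lock_guard<std::mutex> lk(m); ready_step = step; }
        cv.notify_all();
    }
};
```

Waiting on a condition variable (rather than spinning) is what keeps a blocked unit from occupying CPU, as noted earlier.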
When U c finishes its computation of Step i, it has to block until U r has received all messages towards it in Step i, before starting to compute Step (i + 1). This is because, to call v.compute(msg) in Step (i + 1), we need to guarantee that msg contains all the messages sent to v from Step i. However, unlike U s , U c does not need to wait till all receiving units are synchronized, and may start generating messages of Step (i + 1) earlier, although these messages will only be sent by U s after the synchronization. To summarize, in Step i, U r first keeps receiving messages until |W| end tags are received, then notifies U c that it is allowed to compute Step (i + 1), then synchronizes with the receiving units of the other machines; and if the job should continue, U r then notifies U s that it is allowed to send messages for Step (i + 1). The benefit of letting U c start computing Step (i + 1) earlier is that, when U s starts to send messages of Step (i + 1), it can readily find fully-written OMS files for sending, and thus can fully utilize the network bandwidth. Synchronization of Global Information. When U c of a machine W performs vertex-centric computation in Step i, it will aggregate data to its local aggregator, and update local control information such as whether W has sent any message and whether any vertex is active after calling compute(.). This data needs to be synchronized to decide whether to continue computing Step (i + 1), and to obtain the global aggregator value for use by compute(.) in Step (i + 1). Although we can do that during the previously-described synchronization among the receiving units, this may delay the computation of Step (i + 1) since U c needs to wait for the global aggregator value, which in turn needs to wait for the transmission of all messages generated in Step i (recall that we assume communication bandwidth is limited).
Instead, we let the computing units of all machines synchronize these global data as soon as they finish their vertex-centric computation, and there is no need to wait for the slower message transmission to complete. This allows U c to start computing a new superstep much earlier than the synchronization among receiving units. If U c decides that the job should terminate after synchronizing with other computing units, it signals U s and U r to terminate after they finish processing their current superstep, and then terminates itself. THE ID-RECODING TECHNIQUE For Pregel algorithms where message combiner is applicable, GraphD supports a more efficient computation model which uses a novel ID-recoding technique to (1) directly digest incoming messages in memory, which eliminates S I , and to (2) combine outgoing messages in memory, which eliminates the need of externalmemory merge-sort on OMS files. Recall the disk I/O cost from Section 3.3.3, then the two aforementioned improvements essentially means that in a superstep, the only disk I/O cost is to streaming S E , and to sequentially appending messages to OMSs. In other words, each superstep only requires one sequential pass over the edge stream, and one sequential pass over the generated messages, which is almost the minimum possible I/O cost that any out-ofcore Pregel-like system can achieve (if edges and messages are streamed on disks). In contrast, the state-of-the-art out-of-core system, Pregelix, still performs expensive external-memory sort and group-by operations even for algorithms where combiner applies. We remark that this kind of algorithms cover a broad number of Pregel algorithms. In fact, many systems adopt a narrower edgecentric Gather-Apply-Scatter (GAS) computation model, such as PowerGraph [6], GraphChi [9] and X-Stream [15], and this model is essentially Pregel algorithms with message combining. 
Specifically, in the GAS model, each vertex v gathers a message from every in-edge, and combines them to update the value of v. Vertex ID Recoding. Vertex ID recoding is required by many graph systems to enable efficient computation, although their purposes are different. For example, single-machine systems like GraphChi [9], X-Stream [15] and VENUS [9] all require the vertex IDs to be numbered as 1, 2, · · · , since their computation model partitions vertices based on vertex ID intervals. While distributed Pregel-like systems should allow users to specify the type of vertex ID (e.g., using a C++ template argument), one such system, GPS [16], still requires the vertex IDs to be numbered as 1, 2, · · · . Since the vertex IDs are dense, when a message targeting vertex v is received, the machine can directly locate the incoming message queue of v to append the message, without looking up its location from a lookup table (which incurs much overhead since the lookup is needed for every message). As a result, GPS can achieve better performance than other systems like Giraph and GraphLab, as reported in [11]. Giraph++ [20] partitions a graph to allow more efficient computation, but in order to allow the vertex-to-machine mapping to still be captured by a simple function hash(.) during subsequent computation, it also recodes the vertex IDs. We design a new ID recoding for GraphD to allow in-memory message combining and digesting, while retaining the memory bound of O(|V |/|W|) established by Lemma 1. The key idea is to establish a one-to-one mapping between the ID of a vertex and its position in the state array A, which is efficient to compute. We now explain how GraphD establishes this mapping, assuming that vertex IDs are numbered as 0, 1, · · · , |V | − 1. In GraphD, machines are numbered as 0, 1, · · · , |W| − 1. When GraphD is running in recoded mode, it uses the vertex partitioning function hash(v) = id(v) modulo |W|.
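This partitioning induces a simple one-to-one mapping between a dense vertex ID and its (machine, position-in-A) pair, which can be sketched as:

```cpp
#include <cassert>

// Recoded-mode mapping: a vertex with dense ID id lives on machine
// id mod |W|, at position id / |W| (integer division) of that machine's
// state array A; conversely, id = |W| * pos + machine.
int machine_of(int id, int W) { return id % W; }
int pos_of(int id, int W)     { return id / W; }
int id_of(int pos, int machine, int W) { return W * pos + machine; }
```

Both directions are a single arithmetic operation, which is why no per-message lookup table is needed when locating a destination vertex.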
As an illustration, Figure 4 shows the vertex state arrays A of a cluster of 3 machines processing a graph with 12 vertices, where we only show the old IDs and the new (i.e., recoded) IDs of the vertices in A. We can see that the old IDs are sparse, and are now recoded into dense IDs numbered as 0, 1, · · · , 11. In the recoded mode, the new IDs are treated as the actual vertex ID, and obviously we have hash(v) = id(v) modulo 3 (e.g., Vertex 5 is assigned to Machine 2 since 5 modulo 3 is equal to 2). For a vertex at position pos of array A in Machine i, we can compute its new ID as (|W| · pos + i). For example, in Figure 4, the vertex whose old ID is 102 is at position 1 of array A in Machine 2, and thus its new ID is computed as (3 · 1 + 2) = 5. Moreover, given the new ID of a vertex, id, on Machine i, we can compute its position in A as id/|W| . For example, in Figure 4, the vertex whose new ID is 5 (in Machine 2) is at position 5/3 = 1. Preprocessing. To run a job in recoded mode, either the vertices already have their IDs numbered as 0, 1, · · · , |V | − 1, or we need to preprocess the graph to assign its vertices with new IDs 0, 1, · · · , |V |− 1. We now describe our algorithm for the preprocessing, which is essentially a GraphD job running in normal mode (and thus requires only O(|V |/|W|) memory on each machine). In preprocessing, the old IDs are used as the input to hash(.) called during vertex assignment and message passing. After the input graph is loaded, each machine scans array A and assigns each vertex a new ID which is computed from its position in A. However, for each vertex v, the neighbor IDs in Γ(v) (stored in S E ) are still the old IDs, and we need to replace them with their new IDs (which are required for sending messages in recoded mode later). For a directed graph, recoding the IDs in adjacency lists takes 3 supersteps. Let us denote the old (resp. new) ID of a vertex v by id old (v) (resp. id new (v)). 
In Step 1, each vertex v sends id old (v) to every out-neighbor u ∈ Γ(v) asking for id new (u). In Step 2, a vertex u responds to each requester id old (v) (recall that hash(.) takes the old ID) by sending it id new (u). Finally, in Step 3, each vertex v simply appends the received new neighbor IDs to a new edge stream S E rec , which is treated as the edge stream for streaming in recoded mode later. For an undirected graph, we skip Step 1 since a vertex u can directly send id new (u) to each neighbor v ∈ Γ(u). The whole recoding process sends only O(|E|) messages, and our experimental results in Section 6 show that the preprocessing time is comparable to that of parallel graph loading from HDFS. If graph recoding is performed right after we put G onto HDFS, it adds very little additional time compared with the time for putting G. Execution in Recoded Mode. If the vertex IDs of the original graph are already numbered as 0, 1, · · · , |V | − 1, our recoded mode can directly load it from HDFS. Otherwise, we need to preprocess the input graph as described above. After the graph is recoded, state array A and stream S E rec of each machine are already on its local disks, and thus our recoded mode simply let each machine load A to memory, and stream S E rec on local disk (instead of S E ). Our recoded mode additionally requires users to specify an identity element e 0 , such that when we combine e 0 with any message m, the combined message is still m. For example, e 0 = 0 for PageRank computation since e 0 + m = m; while if the operation of the combiner is to take minimum, e 0 can be set as ∞. In-Memory Message Digesting. In recoded mode, U r now directly digests messages in memory. Specifically, in Step i, before receiving messages, U r first creates an in-memory array with |V (W )| message elements, denoted by A r . Here, A r [pos] refers the combined message towards the corresponding vertex of A[pos]. Each element in A r is initialized as e 0 . 
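Continuing the digesting scheme just introduced, a minimal sketch (hypothetical names; a min-combiner is assumed, e.g. for shortest paths, with e 0 chosen as a large sentinel so that combining e 0 with any message m yields m):

```cpp
#include <cassert>
#include <vector>

struct Digest {
    enum { E0 = 1 << 30 };     // identity element e_0 for min-combining
    std::vector<int> A_r;      // one slot per local vertex, same order as A
    int W;                     // number of machines |W|

    Digest(int local_vertices, int num_machines)
        : A_r(local_vertices, E0), W(num_machines) {}

    // Combine a received message into the slot of its destination vertex.
    void receive(int dest_id, int value) {
        int pos = dest_id / W;                    // position in A (and A_r)
        if (value < A_r[pos]) A_r[pos] = value;   // in-place combine (min)
    }
    // A vertex has received messages iff its slot differs from e_0.
    bool has_msg(int pos) const { return A_r[pos] != E0; }
};
```

No sorting or disk file is involved: each received message costs one array access, which is the point of eliminating S I in recoded mode.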
When a batch of messages is received into B recv , for each message, we compute the position of its destination vertex u in array A from u's ID, which is pos = ⌊id(u)/|W|⌋, and then combine the message into A r [pos]. After all messages generated in Step i are received and U c starts processing Step (i + 1), the corresponding vertex of A[pos] is regarded as having received messages only if A r [pos] ≠ e 0 , in which case compute(msgs) is called on the vertex with msgs containing only the combined message A r [pos]. When U c finishes computing Step (i + 1), it frees A r from memory. Let us define A r (i) as the array A r that is created by U r for receiving messages generated in Step (i − 1) and then freed by U c after it finishes computing Step i. Then, two arrays A r coexist in any superstep: in Step i, U r creates A r (i+1) and updates it with received messages (for use by U c in Step (i + 1)), while U c obtains incoming messages from A r (i) and frees it when its computation of Step i finishes. Since the two arrays require O(|V (W )|) memory, according to Lemma 1, the memory bound of O(|V |/|W|) still holds. In-Memory Message Combining. Similarly, U s always maintains an in-memory array with max W ∈W |V (W )| message elements, denoted by A s , for combining outgoing messages; according to Lemma 1, maintaining A s does not breach the memory bound of O(|V |/|W|). Each element of A s is initialized as e 0 . Recall that U s combines and sends the messages from one OMS (i.e., towards one destination machine) at a time. To combine a set of messages towards machine W i , for each message that targets a vertex u, U s computes its position in array A of the destination machine, which is pos = ⌊id(u)/|W|⌋, and then combines the message into A s [pos]. After all messages in an OMS are combined into A s , for each message element A s [pos] ≠ e 0 , U s attaches to the message value the ID of its target vertex, which is |W| · pos + i; U s then appends the target-labeled message to B send for sending. To guarantee that all elements of A s are e 0 before combining the next batch of message files, U s also sets A s [pos] back to e 0 after the corresponding message is appended to B send . Topology Mutation. Topology mutation is handled similarly as described in Section 3.4, with a change for vertex addition.
Specifically, in a superstep, after vertex-centric computation, U c first recodes the IDs of the newly added vertices by synchronizing with the computing units of other machines, using the same method as in preprocessing; U c then appends these recoded vertices to A (implemented using STL vector). The cost of the above intra-superstep ID-recoding operation is proportional to the number of vertices added.

EXPERIMENTS

We evaluate the performance of GraphD by comparing it with distributed out-of-core systems Pregelix (Release 0.2.12) and HaLoop, and single-PC out-of-core systems GraphChi and X-Stream (v1.0). We also report the performance of an in-memory system, Pregel+, as a reference to measure the disk I/O overhead incurred by out-of-core execution. Pregel+ is a fair choice since it has been shown to outperform other in-memory graph systems for various algorithm-graph combinations in a recent performance study [11]. The source code of our GraphD system and all the applications used in our evaluation is available at: http://www.cse.cuhk.edu.hk/systems/graphd. All experiments were conducted on two clusters, both connected by Gigabit Ethernet. The first cluster consists of 16 commodity PCs, each with four 3.40GHz processors (Intel Core i5-4670), 8GB RAM and a 320GB disk. The PCs are connected by an unmanaged switch that provides a relatively low network speed. The second cluster consists of 15 servers, each with twelve 2.0GHz cores (two Intel Xeon E5-2620 CPUs), 48GB RAM and a 200GB disk. In addition, one server has access to another 2TB disk. These servers are connected by a Cisco C2960 switch which provides a relatively high network speed. We denote the first cluster by W PC and the second one by W high . For distributed systems, all machines in a cluster were used; while for single-PC systems, only one of the machines was used.
Notably, W high has 0.72TB memory space in total, and we use it in order to compare the out-of-core systems with the in-memory Pregel+ system running with enough memory (as the memory space of W PC is insufficient to run Pregel+ on most graphs we tested). Table 1 lists the five real graph datasets that we used: two directed web graphs, WebUK and ClueWeb; two social networks, Twitter and Friendster; and an RDF graph, BTC. Notably, ClueWeb has 42 billion edges and its input file size exceeds 400GB, and thus single-PC systems can only process it on the machine of W high that has access to the 2TB disk. Three Pregel algorithms were used in our evaluation: PageRank and single-source shortest path (SSSP) computation [12], and the Hash-Min algorithm of [23] for computing connected components. Performance of PageRank. The experiments were run on the three directed graphs shown in Table 1. We only ran 10 iterations on WebUK and Twitter and 5 supersteps on ClueWeb, since each iteration takes roughly the same time, and while the iterations are efficient for GraphD and Pregel+, they are time-consuming for all the other out-of-core systems that we compared with. Table 2 (resp. Table 3) reports the running time of various systems on W PC (resp. W high ), where row [IO-Basic] (resp. row [IO-Recoded]) reports the performance of the normal mode (resp. recoded mode) of GraphD. Row [IO-Recoding] reports the preprocessing time of ID recoding, and we use grey font to differentiate it from other rows that refer to PageRank computation. The other rows report the performance of the systems we compared with, whose header names are self-explanatory. Column [Load] refers to the time of graph loading. For IO-Basic, Pregel+ and Pregelix, the time is for loading from HDFS; while for IO-Recoded, the time is for loading from local disks (each machine simply loads the recoded state array A). In all our tables, an entry "-" means "not applicable".
HaLoop has no loading time since it scans the graph on HDFS in every iteration, and neither do single-PC systems which scan the graph on the local disk in every iteration. Moreover, GraphChi needs to preprocess a graph first by partitioning it into shards, whose time is reported in Column [Preprocess]. Finally, Column [Compute] reports the total time of iterative computation. From Tables 2 and 3, we can see that computation on W high is much faster than on W PC due to the more powerful machines and the faster switch. Also, the time of IO-Recoding is consistently less than twice the data loading time, and ID recoding is thus an efficient preprocessing step. IO-Recoded only slightly improves the performance of IO-Basic on W PC , since W PC has a lower network bandwidth that forms the bottleneck, and the cost of merge-sort in IO-Basic is mostly hidden inside the cost of message transmission. In contrast, significant improvement is observed on W high (e.g., over 7 times on ClueWeb), since IO-Recoded eliminates merge-sort, whose cost cannot be fully hidden when the network bandwidth is higher. As Table 2 shows, Pregel+ can only process Twitter on W PC due to its limited RAM space, and it is even slightly slower than IO-Basic and IO-Recoded. This is because network bandwidth is the bottleneck in W PC rather than disk I/O, and GraphD's parallel framework fully hides the computation cost inside the communication cost; while in Pregel+'s implementation, message transmission starts after computation finishes (i.e., all messages are generated). In contrast, as Table 3 shows, Pregel+ is faster than IO-Basic on W high for both WebUK and Twitter since the cost of merge-sort in IO-Basic is not hidden. However, IO-Recoded still beats Pregel+ on Twitter since the high parallelism of GraphD's execution framework hides the cost of streaming S E and OMSs.
Among the other systems, Pregelix is much slower than IO-Basic since it performs costly relational operations, and X-Stream is generally much slower than GraphChi, as was also observed by [3]. However, preprocessing in GraphChi is expensive: sharding ClueWeb takes 26604 seconds on W high , in which time IO-Recoded can already finish over 100 supersteps. Finally, HaLoop is sometimes even slower than X-Stream even though HaLoop uses all machines. Among the 6 data-cluster combinations reported by Table 2 and Table 3, IO-Recoded beats Pregel+ (and is the fastest system) in 5 of them, which is quite amazing given that GraphD is an out-of-core system while Pregel+ is an in-memory system. Message Generation and Transmission Costs. Table 4 shows the time taken by both IO-Basic and IO-Recoded to transmit messages (Column [M-Send]), and the fraction of time that U c spent on generating messages (Column [M-Gene]), for the previous experiments on PageRank computation. Since the behavior of U c on different machines may vary, we only report the time for U c on the first machine, summed over the vertex-centric computation time of all the 10 (or 5) supersteps. Since network bandwidth is the bottleneck, we can see from Table 4 that in all the 6 data-cluster combinations, message transmission happens during the whole period of each superstep, but U c only computes in the early stage (often less than half) of each superstep. Performance of Hash-Min. The experiments were run on the two undirected graphs shown in Table 1, and the results are reported in Tables 5 and 6. Similar to the experiments on PageRank computation, we observe that IO-Basic, IO-Recoded and Pregel+ have similar performance on W PC , whose network bandwidth is low, and IO-Recoded even beats Pregel+ over Friendster on W high .
However, the other out-of-core systems do not have effective support for sparse workload, and thus, as Tables 5 and 6 show, their computation times are much longer than those of GraphD and Pregel+. Performance of SSSP. The experiments were run on the graphs in Table 1 except for ClueWeb, for which we could not find a source vertex that can reach a relatively large number of other vertices, despite a long period of trials. All edges were given weight 1, and thus the computation is essentially breadth-first search (BFS). Unlike PageRank computation and the Hash-Min algorithm discussed before, the computation workload of every superstep is sparse for BFS (or more generally, SSSP). To see this, consider BFS, where a vertex will only send messages to its neighbors when it is reached from the source vertex for the first time. Since every vertex sends messages along adjacent edges only once during the whole computation, the total workload is merely O(|E|), which amounts to the workload of just one superstep in PageRank computation. We remark that BFS (or more generally, SSSP) represents the class of Pregel algorithms that are the most challenging to out-of-core systems which scan disk-resident graphs. The experimental results are reported in Tables 7 and 8, where we can see that Pregel+ beats all the out-of-core systems in 6 out of the 8 data-cluster combinations. This is, however, not surprising since Pregel+ keeps all adjacency lists in memory. GraphD is not much slower than Pregel+, and even wins in 2 data-cluster combinations, thanks to the use of the streaming function skip(num items). Surprisingly, on BTC and WebUK, IO-Basic even outperforms IO-Recoded. This is because, if there are too few messages to send in each superstep, the overhead of manipulating the additional arrays (i.e., A r and A s ) in recoded mode backfires. Note that all computations on BTC finished in seconds for both modes of GraphD, since the workload is really low.
While computations on WebUK took longer, this is mainly because of the large number of supersteps (i.e., 665). After all, IO-Recoded needs to create/update/free those large additional arrays for 665 supersteps. Also surprisingly, on WebUK, Pregelix is over two orders of magnitude slower than GraphD on W PC . We found that Pregelix incurs a fixed cost of at least 35 seconds for each superstep, while a superstep of IO-Basic can be as low as 0.02-0.03 seconds. In contrast, Pregelix is much faster on W high due to the faster network, and the fixed cost for a superstep is reduced to 3-4 seconds. Tables 7 and 8 also show that X-Stream is impractical for jobs that run many iterations of sparse-workload vertex computation, since it needs to stream all edges in each iteration. For example, X-Stream could not finish on WebUK on either W PC or W high within a whole day. In fact, the authors of X-Stream themselves admitted this problem at the end of Section 5.3 in [15]. Finally, graph loading in IO-Recoding is faster than in IO-Basic in Tables 7 and 8. This is because, during IO-Recoding, S E does not include edge weights; we only attach edge weights when we append recoded adjacency list items to S E rec .

CONCLUSIONS

We presented an efficient Pregel-like system, called GraphD, for processing very large graphs in a small cluster with ordinary computing resources that are available to most users. To process a graph G = (V, E) with n machines using GraphD, we proved that each machine only requires O(|V |/n) memory space. While sparse computation workload is not well supported by previous out-of-core systems, GraphD adopts a new streaming function skip(num items) to handle sparse computation workload efficiently, while attaining sequential I/O bandwidth when the computation workload becomes dense.
For the common cluster setting where machines are connected by Gigabit Ethernet, GraphD fully overlaps computation with communication by buffering outgoing messages to local disks, whose cost is, in turn, hidden inside the cost of message transmission. When message combining is applicable, GraphD further uses an effective ID-recoding technique to eliminate the need for expensive external-memory operations such as merge-sort, achieving almost the minimum possible I/O cost that can be expected from any out-of-core Pregel-like system that streams edges and messages on secondary storage. An open-source implementation of GraphD is provided, and extensive experiments show that GraphD is an order of magnitude faster than existing out-of-core systems, and is competitive even when compared with an in-memory Pregel-like system running with sufficient memory.

Figure 4: Example of ID Recoding.

Since the two arrays require O(|V(W)|) memory, according to Lemma 1, the memory bound of O(|V|/|W|) still holds.

In-Memory Message Combining. Similarly, U_s always maintains an in-memory array with max_{W in W} |V(W)| message elements, denoted by A_s, for combining outgoing messages. According to Lemma 1, maintaining A_s does not breach the memory bound of O(|V|/|W|).

Table 4 reports the fraction of time that U_s spent on sending messages (Column [M-Send]) and the fraction of time that U_c spent on generating messages (Column [M-Gene]) for the previous experiments on PageRank computation. Since the behavior of U_c on different machines may vary, we only report the time for U_c on the first machine, summed over the vertex-centric computation time of all 10 (or 5) supersteps.
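The in-memory combining described above (an array A_s with max_W |V(W)| slots, indexed by recoded target ID) can be sketched as follows. The min-combiner and the function name are illustrative assumptions; the point is that a dense array indexed by recoded IDs replaces any external-memory merge-sort.

```python
def combine_into_array(out_msgs, num_local_targets, combine=min):
    """Combine messages destined for one machine into a dense array A_s
    indexed by recoded vertex ID, avoiding external-memory merge-sort.
    A_s has max_W |V(W)| slots, so the O(|V|/n) memory bound is kept.
    `combine` is an assumed commutative/associative combiner (here: min)."""
    A_s = [None] * num_local_targets
    for recoded_id, value in out_msgs:
        slot = A_s[recoded_id]
        A_s[recoded_id] = value if slot is None else combine(slot, value)
    return A_s
```

Because recoded IDs are contiguous per machine, a message's slot is found by direct indexing rather than by sorting or hashing, which is why the recoded mode can approach the minimum possible I/O cost.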
Table 1: Graph Datasets

| Data       | Type       | |V|         | |E|            | AVG Deg | MAX Deg   |
| WebUK      | directed   | 133,633,040 | 5,507,679,822  | 41.21   | 22,429    |
| ClueWeb    | directed   | 978,408,098 | 42,574,107,469 | 43.51   | 7,447     |
| Twitter    | directed   | 52,579,682  | 1,963,263,821  | 37.34   | 779,958   |
| Friendster | undirected | 65,608,366  | 3,612,134,270  | 50.06   | 5,214     |
| BTC        | undirected | 164,732,473 | 772,822,094    | 4.69    | 1,637,619 |

Table 2: Performance of PageRank Computation on W_PC (time marked with *: smallest among all systems; each dataset column: Preprocess | Load | Compute)

| System      | WebUK (10 supersteps)           | ClueWeb (5 supersteps)          | Twitter (10 supersteps)          |
| IO-Basic    | - | 628.9 s | 1189 s            | - | 5835 s | 7920 s             | - | 188.7 s | 458.2 s            |
| IO-Recoding | - | 651.4 s | 841.7 s           | - | 6020 s | 10956 s            | - | 189.4 s | 288.0 s            |
| IO-Recoded  | ID-Recoding | 1.74 s | 982.3 s * | ID-Recoding | 23.0 s | 4639 s *  | ID-Recoding | 1.02 s | 434.6 s *  |
| Pregel+     | Insufficient Main Memories      | Insufficient Main Memories      | - | 187.7 s | 480.6 s            |
| Pregelix    | - | 426.3 s | 7390 s            | - | 3221 s | 13861 s            | - | 119.5 s | 1419 s             |
| HaLoop      | - | - | 19954 s                 | Insufficient Disk Space         | - | - | 3218 s                   |
| GraphChi    | 2114 s | - | 3614 s             | Insufficient Disk Space         | 622.2 s | - | 1488 s             |
| X-Stream    | - | - | 17669 s                 | Insufficient Disk Space         | - | - | 5989 s                   |

Table 3: Performance of PageRank Computation on W_high (time marked with *: smallest among all systems; each dataset column: Preprocess | Load | Compute)

| System      | WebUK (10 supersteps)           | ClueWeb (5 supersteps)          | Twitter (10 supersteps)          |
| IO-Basic    | - | 141.5 s | 1093 s            | - | 1271 s | 7422 s             | - | 44.7 s | 424.6 s             |
| IO-Recoding | - | 157.8 s | 213.8 s           | - | 1786 s | 3306 s             | - | 66.4 s | 86.2 s              |
| IO-Recoded  | ID-Recoding | 3.92 s | 331.6 s   | ID-Recoding | 22.1 s | 1003 s *  | ID-Recoding | 2.71 s | 121.2 s *  |
| Pregel+     | - | 70.3 s | 234.9 s *         | Insufficient Main Memories      | - | 28.5 s | 135.7 s             |
| Pregelix    | - | 144.0 s | 1744 s            | - | 1847 s | 13820 s            | - | 59.2 s | 877.6 s             |
| HaLoop      | - | - | 17532 s                 | Insufficient Disk Space         | - | - | 3607 s                   |
| GraphChi    | 2048 s | - | 1768 s             | 26604 s | - | 15966 s           | 729.4 s | - | 1166 s             |
| X-Stream    | - | - | 11198 s                 | - | - | 75637 s                 | - | - | 4542 s                   |

... two social networks, Twitter^4 and Friendster^5; and an RDF graph, BTC^6.
Table 5: Performance of Hash-Min on W_PC (each dataset column: Preprocess | Load | Compute)

| System      | BTC (30 supersteps)            | Friendster (22 supersteps)       |
| IO-Basic    | - | 116.8 s | 81.7 s *         | - | 367.0 s | 309.5 s            |
| IO-Recoding | - | 112.4 s | 51.3 s           | - | 380.5 s | 273.3 s            |
| IO-Recoded  | ID-Recoding | 1.25 s | 82.4 s   | ID-Recoding | 1.08 s | 279.9 s *  |
| Pregel+     | - | 115.9 s | 88.9 s           | - | 388.7 s | 294.7 s            |
| Pregelix    | - | 96.3 s | 337.9 s           | - | 204.2 s | 1397 s             |
| HaLoop      | - | - | 8152 s                 | - | - | 11534 s                  |
| GraphChi    | 217.3 s | - | 353.4 s          | 1240 s | - | 6815 s              |
| X-Stream    | - | - | 2518 s                 | - | - | 12012 s                  |

Table 6: Performance of Hash-Min on W_high (each dataset column: Preprocess | Load | Compute)

| System      | BTC (30 supersteps)            | Friendster (22 supersteps)       |
| IO-Basic    | - | 30.4 s | 59.2 s            | - | 75.2 s | 197.3 s             |
| IO-Recoding | - | 33.5 s | 14.3 s            | - | 75.6 s | 96.9 s              |
| IO-Recoded  | ID-Recoding | 2.44 s | 34.5 s   | ID-Recoding | 2.58 s | 94.8 s *   |
| Pregel+     | - | 18.6 s | 20.7 s *          | - | 47.2 s | 104.8 s             |
| Pregelix    | - | 102.5 s | 503.3 s          | - | 95.6 s | 1236 s              |
| HaLoop      | - | - | 9507 s                 | - | - | 5817 s                   |
| GraphChi    | 169.2 s | - | 550.0 s          | 1430 s | - | 1808 s              |
| X-Stream    | - | - | 124.8 s                | - | - | 6513 s                   |

Table 4: Message Generation v.s.
Message Transmission

Table 7: Performance of SSSP Computation on W_PC (each dataset column: Preprocess | Load | Compute)

| System      | BTC (16 supersteps)           | Friendster (23 supersteps)       | WebUK (665 supersteps)           | Twitter (16 supersteps)         |
| IO-Basic    | - | 170.3 s | 1.70 s *        | - | 642.6 s | 150.8 s            | - | 1152.8 s | 191.6 s *         | - | 335.2 s | 69.1 s            |
| IO-Recoding | - | 116.9 s | 57.5 s          | - | 403.5 s | 300.1 s            | - | 667.1 s | 914.2 s            | - | 199.5 s | 286.0 s           |
| IO-Recoded  | ID-Recoding | 1.26 s | 3.28 s  | ID-Recoding | 1.08 s | 143.9 s *  | ID-Recoding | 2.86 s | 223.8 s    | ID-Recoding | 1.04 s | 65.8 s   |
| Pregel+     | - | 177.1 s | 2.24 s          | Insufficient Main Memories       | Insufficient Main Memories       | - | 334.0 s | 54.1 s *          |
| Pregelix    | - | 193.5 s | 60.1 s          | - | 405.9 s | 1648 s             | - | 620.0 s | 24108 s            | - | 197.8 s | 236.9 s           |
| HaLoop      | - | - | 3729 s                | - | - | 10663 s                  | - | - | > 24 hr                  | - | - | 3790 s                  |
| GraphChi    | 235.7 s | - | 72.8 s          | 1150 s | - | 10230 s             | 1884 s | - | 41538 s             | 583.3 s | - | 2017 s            |
| X-Stream    | - | - | 1025 s                | - | - | 11803 s                  | - | - | > 24 hr                  | - | - | 3102 s                  |

Table 8: Performance of SSSP Computation on W_high (each dataset column: Preprocess | Load | Compute)

| System      | BTC (16 supersteps)           | Friendster (23 supersteps)      | WebUK (665 supersteps)          | Twitter (16 supersteps)         |
| IO-Basic    | - | 31.6 s | 4.75 s          | - | 135.0 s | 118.3 s           | - | 252.8 s | 166.2 s           | - | 88.9 s | 35.0 s            |
| IO-Recoding | - | 30.7 s | 29.0 s          | - | 55.1 s | 78.8 s             | - | 134.3 s | 202.2 s           | - | 65.7 s | 72.3 s            |
| IO-Recoded  | ID-Recoding | 2.57 s | 9.06 s | ID-Recoding | 2.23 s | 66.2 s   | ID-Recoding | 2.65 s | 253.6 s  | ID-Recoding | 2.33 s | 25.3 s  |
| Pregel+     | - | 25.3 s | 1.59 s *        | - | 67.9 s | 43.4 s *           | - | 102.7 s | 74.4 s *          | - | 39.3 s | 19.8 s *          |
| Pregelix    | - | 172.1 s | 200.5 s        | - | 137.3 s | 568.3 s           | - | 186.6 s | 3586 s            | - | 119.1 s | 462.5 s           |
| HaLoop      | - | - | 9016 s                | - | - | 3781 s                  | - | - | > 24 hr                 | - | - | 1828 s                  |
| GraphChi    | 161.6 s | - | 155.9 s        | 1478 s | - | 2041 s             | 1922 s | - | 14740 s            | 631.2 s | - | 637.4 s           |
| X-Stream    | - | - | 105.5 s               | - | - | 4943 s                  | - | - | > 24 hr                 | - | - | 2413 s                  |

1 We name the model as DSS due to its similarity to semi-streaming computation of external-memory graph algorithms.
2 http://law.di.unimi.it/webdata/uk-union-2006-06-2007-05
3 http://law.di.unimi.it/webdata/clueweb12
4 http://konect.uni-koblenz.de/networks/twitter mpi
5 http://snap.stanford.edu/data/com-Friendster.html
6 http://km.aifb.kit.edu/projects/btc-2009/

REFERENCES
[1] Y. Bu, V. R. Borkar, J. Jia, M. J. Carey, and T. Condie. Pregelix: Big(ger) graph analytics on a dataflow engine. PVLDB, 8(2):161-172, 2014.
[2] Y. Bu, B. Howe, M. Balazinska, and M. D. Ernst. HaLoop: Efficient iterative data processing on large clusters. PVLDB, 3(1):285-296, 2010.
[3] J. Cheng, Q. Liu, Z. Li, W. Fan, J. C. S. Lui, and C. He. VENUS: Vertex-centric streamlined graph computation on a single PC. In ICDE, pages 1131-1142, 2015.
[4] A. Ching, S. Edunov, M. Kabiljo, D. Logothetis, and S. Muthukrishnan. One trillion edges: Graph processing at Facebook-scale. PVLDB, 8(12):1804-1815, 2015.
[5] J. Gao, C. Zhou, J. Zhou, and J. X. Yu. Continuous pattern detection over billion-edge graph using distributed framework. In ICDE, pages 556-567, 2014.
[6] J. E. Gonzalez, Y. Low, H. Gu, D. Bickson, and C. Guestrin. PowerGraph: Distributed graph-parallel computation on natural graphs. In OSDI, pages 17-30, 2012.
[7] J. E. Gonzalez, R. S. Xin, A. Dave, D. Crankshaw, M. J. Franklin, and I. Stoica. GraphX: Graph processing in a distributed dataflow framework. In OSDI, pages 599-613, 2014.
[8] M. Han, K. Daudjee, K. Ammar, M. T. Özsu, X. Wang, and T. Jin. An experimental comparison of Pregel-like graph processing systems. PVLDB, 7(12):1047-1058, 2014.
[9] A. Kyrola, G. E. Blelloch, and C. Guestrin. GraphChi: Large-scale graph computation on just a PC. In OSDI, pages 31-46, 2012.
[10] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning in the cloud. PVLDB, 5(8):716-727, 2012.
[11] Y. Lu, J. Cheng, D. Yan, and H. Wu. Large-scale distributed graph computing systems: An experimental evaluation. PVLDB, 8(3), 2015.
[12] G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: A system for large-scale graph processing. In SIGMOD, pages 135-146, 2010.
[13] L. Quick, P. Wilkinson, and D. Hardcastle. Using Pregel-like large scale graph processing frameworks for social network analysis. In ASONAM, pages 457-463, 2012.
[14] A. Roy, L. Bindschaedler, J. Malicevic, and W. Zwaenepoel. Chaos: Scale-out graph processing from secondary storage. In SOSP, 2015.
[15] A. Roy, I. Mihailovic, and W. Zwaenepoel. X-Stream: Edge-centric graph processing using streaming partitions. In SOSP, pages 472-488, 2013.
[16] S. Salihoglu and J. Widom. GPS: A graph processing system. In SSDBM, page 22, 2013.
[17] T. Schank. Algorithmic aspects of triangle-based network analysis. PhD thesis, University Karlsruhe, 2007.
[18] Y. Shao, B. Cui, L. Chen, L. Ma, J. Yao, and N. Xu. Parallel subgraph listing in a large-scale graph. In SIGMOD, pages 625-636, 2014.
[19] Y. Shen, G. Chen, H. V. Jagadish, W. Lu, B. C. Ooi, and B. M. Tudor. Fast failure recovery in distributed graph processing systems. PVLDB, 8(4):437-448, 2014.
[20] Y. Tian, A. Balmin, S. A. Corsten, S. Tatikonda, and J. McPherson. From "think like a vertex" to "think like a graph". PVLDB, 7(3):193-204, 2013.
[21] W. Xie, G. Wang, D. Bindel, A. J. Demers, and J. Gehrke. Fast iterative graph computation with block updates. PVLDB, 6(14):2014-2025, 2013.
[22] D. Yan, J. Cheng, Y. Lu, and W. Ng. Effective techniques for message reduction and load balancing in distributed graph computation. In WWW, pages 1307-1317, 2015.
[23] D. Yan, J. Cheng, K. Xing, Y. Lu, W. Ng, and Y. Bu. Pregel algorithms for graph connectivity problems with performance guarantees. PVLDB, 7(14):1821-1832, 2014.
[24] C. Zhou, J. Gao, B. Sun, and J. X. Yu. MOCgraph: Scalable distributed graph processing using message online computing. PVLDB, 8(4):377-388, 2014.
[]
[ "Inclusive Diffraction at HERA", "Inclusive Diffraction at HERA" ]
[ "Paul Laycock \nUniversity of Liverpool -Dept of High Energy Physics Oliver Lodge Laboratory\nL69 7ZELiverpoolUK\n" ]
[ "University of Liverpool -Dept of High Energy Physics Oliver Lodge Laboratory\nL69 7ZELiverpoolUK" ]
[ "34 th International Conference on High Energy Physics" ]
The H1 and Zeus collaborations have measured the inclusive diffractive DIS cross section ep → eXp and these measurements are in good agreement within a normalisation uncertainty. Diffractive parton density functions (DPDFs) have been extracted from NLO QCD fits to inclusive measurements of diffractive DIS and the predictions of these DPDFs are compared with measurements of diffractive dijets in DIS, testing the validity of the factorisation approximations used in their extraction. H1 then use these diffractive dijets in DIS data to provide further constraints in a combined QCD fit, resulting in the next generation of DPDFs which have constrained the diffractive gluon at large momentum fractions. Finally, the predictions of DPDFs are compared to diffractive dijets in photoproduction where the issue of survival probability in a hadron-hadron environment can be studied.
null
[ "https://arxiv.org/pdf/0809.3605v2.pdf" ]
16,644,082
0809.3605
e5b58fbd5c580d1e7ef6024f86104a1f54541173
Inclusive Diffraction at HERA

Paul Laycock (for the H1 and Zeus collaborations)
University of Liverpool, Dept. of High Energy Physics, Oliver Lodge Laboratory, Liverpool L69 7ZE, UK

34th International Conference on High Energy Physics, Philadelphia, 2008

Diffraction at HERA

It has been shown by Collins [1] that the NC diffractive DIS process ep → eXp at HERA factorises; a useful additional assumption is often made whereby the proton vertex dynamics factorise from the vertex of the hard scatter — proton vertex factorisation. The kinematic variables used to describe inclusive DIS are the virtuality of the exchanged boson Q^2, the Bjorken scaling variable x, and the inelasticity y. In addition, the kinematic variables x_IP and β are useful in describing the diffractive DIS interaction: x_IP is the fractional longitudinal momentum of the proton carried by the diffractive exchange, and β is the longitudinal momentum fraction of the struck parton with respect to the diffractive exchange, so that x = x_IP β.
The data are discussed in terms of a reduced diffractive cross section, σ_r^{D(3)}(β, Q^2, x_IP), which is related to the measured differential cross section by

\frac{d^3\sigma^{ep \to eXp}}{d\beta\, dQ^2\, dx_{IP}} = \frac{4\pi\alpha_{em}^2}{\beta Q^4}\left(1 - y + \frac{y^2}{2}\right)\sigma_r^{D(3)}(\beta, Q^2, x_{IP}).   (1)

In the proton vertex factorisation scheme, the Q^2 and β dependences of the reduced cross section factorise from the x_IP dependence. Measurements of the reduced diffractive cross section from both H1 and Zeus are shown in Figure 1, where the new Zeus preliminary measurement has been scaled by a factor of 0.87, a factor consistent with the normalisation uncertainties of the two analyses. The measurements agree rather well.

Diffractive PDFs from Inclusive data

Using the approximation of proton vertex factorisation, the H1 and Zeus collaborations have extracted DPDFs using NLO QCD fits to the β and Q^2 dependences of the reduced cross section [2,3]. H1 obtained two fits of approximately equal quality, Fit A and Fit B, differing only in the number of terms used to parameterise the gluon. The two fits, while fully consistent at low fractional momentum, yield very different results for the diffractive gluon at high fractional momentum. This is because quark-driven evolution dominates the logarithmic Q^2 derivative of the reduced cross section at high β, which in turn greatly reduces the sensitivity of this quantity to the gluon.

Diffractive dijets in DIS

Diffractive dijets in DIS provide a sensitive experimental probe of the diffractive gluon, as the dominant production mechanism is boson-gluon fusion. The sensitive variable is

z_{IP} = \frac{Q^2 + M_{12}^2}{Q^2 + M_X^2},

where M_12 is the invariant mass of the dijet system. Both H1 and Zeus have measured the diffractive dijet cross section in DIS [4,5]. In Figure 2, the Zeus measurement is compared to the predictions of a Zeus fit to inclusive data and of H1 Fit A and Fit B.
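As a small numerical aid (not part of the original proceedings), the arithmetic of Eq. (1) and the z_IP definition can be written out in a few lines; the value of α_em is an assumed standard input, and consistent units are the caller's responsibility.

```python
import math

ALPHA_EM = 1.0 / 137.035999  # fine-structure constant (assumed standard value)

def reduced_cross_section(d3sigma, beta, Q2, y):
    """Invert Eq. (1): sigma_r^{D(3)} from the measured
    d^3 sigma / (dbeta dQ^2 dx_IP) at a given (beta, Q^2, y) point."""
    y_plus = 1.0 - y + y**2 / 2.0
    return d3sigma * beta * Q2**2 / (4.0 * math.pi * ALPHA_EM**2 * y_plus)

def z_pomeron(Q2, M12_sq, MX_sq):
    """z_IP = (Q^2 + M_12^2) / (Q^2 + M_X^2) for diffractive dijets."""
    return (Q2 + M12_sq) / (Q2 + MX_sq)
```

Note that Q2**2 is Q^4, matching the prefactor of Eq. (1); z_IP approaches 1 when the dijet system exhausts the diffractive mass M_X.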
At low z_IP, where the inclusive data have sensitivity to the diffractive gluon, the predictions are very similar and agree well with the data. This supports the proton vertex factorisation approximation needed to make the NLO QCD fits. At high z_IP the data clearly prefer the prediction of Fit B. Having shown the sensitivity of the diffractive dijets in DIS data, H1 have included these data in a combined fit with the inclusive diffractive DIS data [4]. The resulting fit is indistinguishable from Fit A and Fit B in its description of the inclusive data and produces a better description of the diffractive dijet data, consistent with that of Fit B. The resulting DPDFs from this combined fit are shown in Figure 3. Both the singlet and the gluon are constrained with similar good precision across the whole kinematic range.

Diffractive dijets in photoproduction

Despite the success of factorisation in diffractive DIS at the HERA experiments, there is a long-standing issue that predictions obtained with HERA DPDFs grossly overshoot the diffractive dijet cross section at the Tevatron. At HERA, photoproduction, where Q^2 ~ 0, provides an environment similar to that of a hadron-hadron collider. The variable x_γ is the fraction of the four-momentum of the photon transferred to the hard interaction; the lower the value of x_γ, the more hadron-like the photon. Both H1 and Zeus have measured diffractive dijets in photoproduction [6,7]. The latest preliminary results from H1 [8] are shown in Figure 4, compared to the predictions of Fit A and Fit B and of the combined fit including dijet data described above. There is a suppression of the cross section with respect to the predictions, and this suppression is independent of x_γ. There is also a suggestion that the suppression depends on the E_T of the jet. This would be consistent with the Zeus analysis at higher E_T, where less suppression is observed.
Conclusions

The H1 and Zeus collaborations have measured the inclusive diffractive DIS cross section ep → eXp, and these measurements are in good agreement within their normalisation uncertainties. The DPDFs from NLO QCD fits to inclusive diffractive DIS data can predict the diffractive dijets in DIS cross section at low z_IP, while at high z_IP the data favour Fit B. Including the diffractive dijet data in a combined fit further constrains the NLO QCD fit, where the inclusive data alone are unable to unambiguously constrain the diffractive gluon. The resulting DPDFs from H1 are constrained with good precision across the whole kinematic range. Finally, when compared to the predictions of DPDFs, diffractive dijets in photoproduction show a suppression of the cross section which is independent of x_γ but consistent with an E_T dependence.

Figure 1: The reduced diffractive cross section as measured by the H1 and Zeus collaborations.
Figure 2: The diffractive dijets in DIS data compared to the predictions of fits to inclusive DIS data.
Figure 3: The H1 DPDFs resulting from the combined fit to the inclusive and dijet diffractive DIS data.
Figure 4: The diffractive dijets in photoproduction data compared to the predictions of fits to inclusive DIS data and a combined fit to inclusive and dijet data.

References
- J. C. Collins. Proof of factorization for diffractive hard scattering. Phys. Rev., D57:3051-3056, 1998.
- A. Aktas et al. Measurement and QCD analysis of the diffractive deep-inelastic scattering cross section at HERA. Eur. Phys. J., C48:715-748, 2006.
- A. Aktas et al. Dijet cross sections and parton densities in diffractive DIS at HERA. JHEP, 10:042, 2007.
- S. Chekanov et al. Dijet production in diffractive deep inelastic scattering at HERA. Eur. Phys. J., C52:813-832, 2007.
- A. Aktas et al. Tests of QCD factorisation in the diffractive production of dijets in deep-inelastic scattering and photoproduction at HERA. Eur. Phys. J., C51:549-568, 2007.
- S. Chekanov et al. Diffractive photoproduction of dijets in ep collisions at HERA. Eur. Phys. J., C55:177-191, 2008.
[]
[ "POWER OF OBSERVATIONAL HUBBLE PARAMETER DATA: A FIGURE OF MERIT EXPLORATION", "POWER OF OBSERVATIONAL HUBBLE PARAMETER DATA: A FIGURE OF MERIT EXPLORATION" ]
[ "Cong Ma ", "Tong-Jie Zhang " ]
[]
[]
We use simulated Hubble parameter data in the redshift range 0 ≤ z ≤ 2 to explore the role and power of observational H(z) data in constraining cosmological parameters of the ΛCDM model. The error model of the simulated data is empirically constructed from available measurements and scales linearly as z increases. By comparing the median figures of merit calculated from simulated datasets with that of current type Ia supernova data, we find that as many as 64 further independent measurements of H(z) are needed to match the parameter constraining power of SNIa. If the error of H(z) could be lowered to 3%, the same number of future measurements would be needed, but then the redshift coverage would only be required to reach z = 1. We also show that accurate measurements of the Hubble constant H 0 can be used as priors to increase the H(z) data's figure of merit.
10.1088/0004-637x/730/2/74
[ "https://arxiv.org/pdf/1007.3787v2.pdf" ]
119,181,595
1007.3787
6e8fa9b2e3ae2ba6efab71d316b75b397c4aa761
POWER OF OBSERVATIONAL HUBBLE PARAMETER DATA: A FIGURE OF MERIT EXPLORATION

Cong Ma and Tong-Jie Zhang
Draft version, December 25, 2010. Preprint typeset using LaTeX style emulateapj v. 8/13/10.
Subject headings: cosmological parameters - dark energy - distance scale - methods: statistical

1. INTRODUCTION

The expansion of the Universe can be quantitatively studied using the results from a variety of cosmological observations, for example the mapping of the cosmic microwave background (CMB) anisotropies (Spergel et al. 2007; Komatsu et al. 2010), the measurement of baryon acoustic oscillation (BAO) peaks (Percival et al. 2010), the linear growth of large-scale structures (LSS) (Wang & Tegmark 2004), strong gravitational lensing (Yang & Chen 2009), and measurements of "standard candles" such as the redshift-distance relationship of type Ia supernovae (SNIa) (Riess et al. 1998; Hicken et al. 2009b) and gamma-ray bursts (GRBs) (Ghirlanda et al. 2004).
Among the various observations is the determination of the Hubble parameter H, which is directly related to the expansion history of the Universe by its definition: H = ȧ/a, where a denotes the cosmic scale factor and ȧ is its rate of change with respect to cosmic time. In practice, the Hubble parameter is usually measured as a function of the redshift z, which is related to a by the formula a(t)/a(t_0) = 1/(1 + z), where t_0 is the current time and is taken to be a constant. In the rest of this paper we will use the abbreviation "OHD" for observational H(z) data.

Though not directly observable, H(z) can nevertheless be deduced from various observational data, such as cosmological ages ("standard clocks") or sizes ("standard rods"). The former method has been illustrated by Jimenez & Loeb (2002) and leads to OHD found from differential ages of galaxies. The latter method has been discussed by Blake & Glazebrook (2003) and Seo & Eisenstein (2005), and leads to H(z) found from BAO peaks. Moreover, the Hubble parameter, as a quantitative measure of the cosmic expansion rate, is closely related to cosmological distances. In particular, it can be reconstructed from the luminosity distances of SNIa (Wang & Tegmark 2005; Shafieloo et al. 2006; Mignone & Bartelmann 2008) and GRBs (Liang et al. 2010).

[email protected]
1 Department of Astronomy, Beijing Normal University, Beijing 100875, China
2 Center for High Energy Physics, Peking University, Beijing 100871, China

On the other hand, given the aforementioned availability of independent determinations of H(z), and the expectation of more available data in the future, we are interested in comparing OHD and SNIa data in terms of their respective merits in constraining cosmological parameters. This interest of ours is expressed by the following two questions: Can future observational determinations of the Hubble parameter be used as a viable alternative to current SNIa data?
If so, how many more datapoints are needed so that the cosmological parameter constraints obtained from H(z) data are as good as those obtained from SNIa distance-redshift relations? We attempt to illustrate possible answers to these two questions via an exploratory, statistical approach. This paper is organized as follows: we first briefly summarize the current status of available OHD results in Section 2. Next, we show how the simulated H(z) datasets are used in our exploration in Section 3, and present the results from the simulated data in Section 4. In Section 5 we turn to the data expected from future observation programs. Finally, in Section 6 we discuss the limitations and implications of our results.

2. AVAILABLE HUBBLE PARAMETER DATASETS

Currently the amount of available H(z) data is scarce compared with SNIa luminosity distance data. Jimenez et al. (2003) first obtained one H(z) data point at z ≈ 0.1 from observations of galaxy ages (henceforward the "JVTS03" dataset). Simon et al. (2005) further obtained 8 additional H(z) values up to z = 1.75 from the relative ages of passively evolving galaxies (henceforward "SVJ05") and used them to constrain the redshift dependence of the dark energy potential. Stern et al. (2010a) obtained an expanded dataset (henceforward "SJVKS10") and combined it with CMB data to constrain dark energy parameters and the spatial curvature. Besides H(z) determinations from galaxy ages, observations of BAO peaks have also been used to extract H(z) values at low redshift (Gaztañaga et al. 2009, henceforward "GCH09"). These datasets are summarized in Table 1 and displayed in Figure 1.

Notes to Table 1: (a) Data in this row are taken from Table 3 of Gaztañaga et al. (2009). (b) Data in this row are taken from JVTS03.

These datasets have seen wide application in cosmological research. In addition to those mentioned above, Yi & Zhang (2007) first used the SVJ05 dataset to constrain cosmological model parameters.
Samushia & Ratra (2006) also used the data to constrain parameters in various dark energy models. Their results are consistent with other observational data, in particular the SNIa. Besides parameter constraints, OHD can also be used as an auxiliary model selection criterion (Li et al. 2009). In this paper, the OHD sets used are taken from the union of JVTS03, GCH09, and SJVKS10. The SVJ05 dataset, having been replaced by SJVKS10, is no longer used, and is listed in Table 1 for reference only.

Using these datasets, we find the parameter constraints for a non-flat ΛCDM universe Ω_m = 0.37^{+0.15}_{-0.16} and Ω_Λ = 0.93^{+0.25}_{-0.29}, assuming the Gaussian prior H_0 = 74.2 ± 3.6 km s^{-1} Mpc^{-1} suggested by Riess et al. (2009). We also use a conservative, "top-hat" prior on H_0, namely a uniform distribution in the range [50, 100], and obtain Ω_m = 0.34^{+0.20}_{-0.27} and Ω_Λ = 0.86^{+0.44}_{-0.64}. These H_0 priors are discussed in detail in Section 3.2, and their effects on the parameter constraints are illustrated in Section 4.

3. PARAMETER CONSTRAINT WITH SIMULATED DATASETS

In the current absence of more OHD, we turn to simulated H(z) datasets in the attempt to explore the answers to the questions raised in Section 1. To proceed with our exploration, we must prepare ourselves with (a) a way of generating simulated H(z) datasets, (b) an "evaluation" model (or class of models) of cosmic expansion in which parameter constraint is performed using the simulated data, and (c) a quantified measure of the datasets' ability to tighten the constraints in the model's parameter space, i.e., a well-defined "figure of merit" (FoM). These topics are discussed in detail in the rest of this section.

3.1. Generation of Simulated Datasets

Our simulated datasets are based on a spatially flat ΛCDM model with Ω_m = 0.27 and Ω_Λ = 0.73. This fiducial model is consistent with the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) (Komatsu et al. 2010), the BAO (Percival et al.
2010), and SNIa (Hicken et al. 2009b) observations. Therefore, it summarizes our current knowledge about the recent history of cosmic expansion fairly well. In this fiducial model the Hubble parameter is expressible as a function of redshift z by the simple formula

H_{fid}(z) = H_0 \sqrt{\Omega_m (1 + z)^3 + \Omega_\Lambda},   (1)

where H_0 is the Hubble constant.

The modelling of the observational data's deviations from the fiducial model, as well as the statistical and systematic uncertainties of the data, can be rather difficult. For SNIa, there are planned projects such as the Wide-Field Infrared Survey Telescope (WFIRST^3) with a well-defined redshift distribution of targets (Aldering et al. 2004) and uncertainty model (Kim et al. 2004; Huterer 2009), based on which simulated data can be generated. However, this is not true for OHD, as there has not been a formal specification of future observational goals known to the authors. Consequently, we have to work around this difficulty by approaching the problem from a phenomenological point of view.

By inspecting the uncertainties on H(z) in the SJVKS10 dataset (Figure 2), we can see the general trend of the errors increasing with z, despite the two outliers at z = 0.48 and 0.88. Excluding the outliers from the dataset, we find that the uncertainties σ(z) are bounded by the two straight lines σ_+ = 16.87z + 10.48 and σ_- = 4.41z + 7.25 from above and below, respectively. If we believe that future observations will also yield data with uncertainties within the strip bounded by σ_+ and σ_-, we can take the midline of the strip, σ_0 = 10.64z + 8.86, as an estimate of the mean uncertainty of future observations. In our code, this is done by drawing a random number σ̃(z) from the Gaussian distribution N(σ_0(z), ε(z)), where ε(z) = (σ_+ − σ_-)/4. The parameter ε is chosen so that the probability of σ̃(z) falling within the strip is 95.4%.
Having found a method of generating the random uncertainty σ̃(z) for a simulated datapoint, we are able to simulate the deviation from H_fid. Namely, we assume that the deviation of the simulated observational value from the fiducial, ΔH = H_sim(z) − H_fid(z), follows the Gaussian distribution N(0, σ̃(z)), from which ΔH can be drawn as a random variable. Thus a complete procedure for generating a simulated H(z) value at any given z is formed. First, the fiducial value H_fid(z) is calculated from equation (1). After that, a random uncertainty σ̃(z) is drawn using the aforementioned method. This uncertainty is in turn used to draw a random deviation ΔH from the Gaussian distribution N(0, σ̃(z)). The final result of this process is a datapoint H_sim(z) = H_fid(z) + ΔH with uncertainty σ̃(z). In addition to the procedures described above, the Hubble constant (in units of km s⁻¹ Mpc⁻¹) is also taken as a random variable and is drawn from the Gaussian distribution N(70.4, 1.4) suggested by the 7-year WMAP results when we evaluate the right-hand side of equation (1). We could have fixed H_0 at a constant value, but as we shall see in Section 3.2, in our analysis we treat H_0 and Ω_m quite differently. Namely, Ω_m is a parameter with a posterior distribution to be inferred, but H_0 is a nuisance parameter that is marginalized over using some independent measurement results as prior knowledge. This treatment can be found in many works, such as Stern et al. (2010a) and Wei (2010). It can be justified by the need to reduce the dimension of the parameter space given limited data, and we usually prioritize other parameters such as Ω_m over H_0. Therefore, when generating simulated datasets we sample H_0 from a random distribution to reflect the uncertainty to be marginalized over.
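For concreteness, the full recipe can be sketched in a few lines of code (an illustrative sketch of our own, not the original analysis code; the coefficients are the strip bounds and fiducial parameters quoted above):

```python
import numpy as np

OM_FID, OL_FID = 0.27, 0.73              # fiducial flat LambdaCDM parameters

def h_fid(z, h0=70.4):
    # Fiducial Hubble parameter, eq. (1)
    return h0 * np.sqrt(OM_FID * (1.0 + z)**3 + OL_FID)

def simulate_datapoint(z, rng):
    # Uncertainty strip sigma- <= sigma(z) <= sigma+ fitted to SJVKS10
    s_plus, s_minus = 16.87 * z + 10.48, 4.41 * z + 7.25
    s0 = 0.5 * (s_plus + s_minus)        # midline: sigma_0 = 10.64 z + 8.86
    eps = 0.25 * (s_plus - s_minus)      # ~95.4% of draws stay inside the strip
    sigma = rng.normal(s0, eps)          # random uncertainty for this point
    h0 = rng.normal(70.4, 1.4)           # H0 scattered as in the WMAP7 result
    return rng.normal(h_fid(z, h0), sigma), sigma   # H_sim = H_fid + Delta H

rng = np.random.default_rng(1)
z_grid = np.linspace(0.1, 2.0, 128)      # one realization of the full dataset
dataset = [simulate_datapoint(z, rng) for z in z_grid]
```

Repeating the last two lines with different seeds produces independent realizations of the kind used in Section 4.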
Otherwise, we could have "cheated" by forcing a δ-function prior on H_0 centered at its fiducial value in the analysis of simulated data, and obtained overly optimistic predictions from the simulated data. The quality of simulated data thus generated is similar to that of SJVKS10. A snapshot realization of this simulation scheme is displayed in Figure 3, showing a total of 128 datapoints with z evenly spaced within the range 0.1 ≤ z ≤ 2.0.

The Evaluation Model

We use a standard non-flat ΛCDM model with a curvature term Ω_k = 1 − Ω_m − Ω_Λ to evaluate the qualities of the simulated datasets. In this model, the Hubble parameter is given by

H(z) = H_0 √(Ω_m (1 + z)^3 + Ω_k (1 + z)^2 + Ω_Λ) = H_0 E(z; Ω_m, Ω_Λ).   (2)

Our model choice is mainly motivated by our desire to reduce unnecessary distractions arising from the intrinsic complexity of certain cosmological models involving dark energy or modified gravitation. We perform a standard maximum likelihood analysis using this evaluation model. In our analysis we intend to marginalize the likelihood function over the Hubble parameter H_0, thus obtaining parameter constraints in the (Ω_m, Ω_Λ) subspace. This marginalization process also allows us to incorporate a priori knowledge about H_0 into our analysis. There is a fair amount of available information from which reasonable priors can be constructed. Samushia & Ratra (2006) used two different Gaussian priors on H_0: one with H_0 = 73 ± 3 km s⁻¹ Mpc⁻¹ from 3-year WMAP data (Spergel et al. 2007), the other with H_0 = 68 ± 4 km s⁻¹ Mpc⁻¹ by Gott III et al. (2001) (for a discussion of the non-Gaussianity of the error distribution in H_0 measurements, see Chen et al. 2003). Lin et al. (2009) used H_0 = 72 ± 8 km s⁻¹ Mpc⁻¹ as suggested by Freedman et al. (2001). In our work we use a more recent determination, H_0 = 74.2 ± 3.6 km s⁻¹ Mpc⁻¹ (Riess et al. 2009), as an update to the ones cited above. We also consider a "top-hat" prior, i.e.
a uniform distribution in the interval [50, 100]. Compared with any of the peaked priors above, this prior shows less preference for a particular central value while still characterizing our belief that any value of H_0 outside the said range is unlikely to be true. Notice that the intrinsic spread of H_0 involved in the generation of simulated datasets is smaller than either prior chosen in this step, consistent with the belief that the prior adopted in the estimation of parameters should not be spuriously optimistic. Having chosen the priors on H_0, it is straightforward to derive, up to a non-essential multiplicative constant, the posterior probability density function (PDF) of the parameters given the dataset {H_i} by means of Bayes' theorem:

P(Ω_m, Ω_Λ | {H_i}) = ∫ P(Ω_m, Ω_Λ, H_0 | {H_i}) dH_0 = ∫ L({H_i} | Ω_m, Ω_Λ, H_0) P(H_0) dH_0,   (3)

where L is the likelihood and P(H_0) is the prior PDF of H_0. Assuming that each measurement in {H_i} has a Gaussian error distribution of H(z) and is independent of the other measurements, the likelihood is given by

L({H_i} | Ω_m, Ω_Λ, H_0) = [∏_i (2πσ_i²)^(−1/2)] exp(−χ²/2),   (4)

where the χ² statistic is defined by

χ² = Σ_i [H_0 E(z_i; Ω_m, Ω_Λ) − H_i]² / σ_i²,   (5)

and the σ_i are the uncertainties quoted from the dataset. The posterior PDF can thus be found by inserting equation (4) and the exact form of P(H_0) into equation (3). We now show that the integral over H_0 in equation (3) can be worked out analytically for our choices of P(H_0).

Gaussian prior. — Let

P(H_0) = (2πσ_H²)^(−1/2) exp[−(H_0 − μ_H)²/(2σ_H²)].   (6)

In this case, equation (3) reduces to

P(Ω_m, Ω_Λ | {H_i}) = (1/√A) [erf(B/√A) + 1] exp(B²/A),   (7)

where

A = 1/(2σ_H²) + Σ_i E²(z_i; Ω_m, Ω_Λ)/(2σ_i²),
B = μ_H/(2σ_H²) + Σ_i E(z_i; Ω_m, Ω_Λ) H_i/(2σ_i²),

and erf stands for the error function. The form shown in equation (7) is not normalized; all multiplicative constants have been discarded.
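Equations (2) and (7) are straightforward to evaluate numerically. The sketch below (our own illustration; variable names are not from the paper) returns the Gaussian-prior posterior of eq. (7) on the log scale, since exp(B²/A) overflows for realistic datasets:

```python
import numpy as np
from math import erf

def E(z, om, ol):
    # Dimensionless expansion rate of the evaluation model, eq. (2)
    ok = 1.0 - om - ol
    return np.sqrt(om * (1.0 + z)**3 + ok * (1.0 + z)**2 + ol)

def log_posterior(om, ol, z, h, sig, mu_h=74.2, sig_h=3.6):
    # Log of the unnormalized posterior of eq. (7): H0 marginalized
    # analytically under the Gaussian prior N(mu_h, sig_h).
    e = E(z, om, ol)
    A = 0.5 / sig_h**2 + np.sum(e**2 / (2.0 * sig**2))
    B = mu_h / (2.0 * sig_h**2) + np.sum(e * h / (2.0 * sig**2))
    # working with logs avoids overflow of exp(B**2 / A)
    return np.log(erf(B / np.sqrt(A)) + 1.0) - 0.5 * np.log(A) + B**2 / A

# small demo on made-up measurements
z_obs = np.array([0.1, 0.5, 1.0])
h_obs = np.array([72.0, 88.0, 120.0])
s_obs = np.array([10.0, 12.0, 15.0])
lp = log_posterior(0.27, 0.73, z_obs, h_obs, s_obs)
```

The top-hat case of eq. (8) is analogous, with the single erf term replaced by the difference U(x, C, D) − U(y, C, D).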
Top-hat prior. — Let P(H_0) = Θ(H_0 − x) Θ(y − H_0)/(y − x), where Θ denotes the Heaviside unit step function. With this prior on H_0, equation (3) becomes

P(Ω_m, Ω_Λ | {H_i}) = [U(x, C, D) − U(y, C, D)] (1/√C) exp(D²/C),   (8)

where

C = Σ_i E²(z_i; Ω_m, Ω_Λ)/(2σ_i²),
D = Σ_i E(z_i; Ω_m, Ω_Λ) H_i/(2σ_i²),

and

U(x, α, β) = erf[(β − xα)/√α].

The normalization constant has been dropped from formula (8) as well.

Figure of Merit

The posterior probability density functions obtained above put statistical constraints on the parameters. As the dataset {H_i} improves in size and quality, the constraints are tightened. To evaluate a dataset's ability to tighten the constraints, a quantified figure of merit (FoM) must be established. We note that the FoM can be defined arbitrarily, as long as it reasonably rewards a tight fit while punishing a loose one. Its definition can be motivated purely statistically, for example as the reciprocal hypervolume of the 95% confidence region in the parameter space (Albrecht et al. 2006). However, a definition that is sensitive to some physically significant structuring of the parameter space can be preferable if our scientific goal requires it (Linder 2006). In this paper we use a statistical FoM definition similar to the ones of Albrecht et al. (2006), Liu et al. (2008), and Bueno Sanchez et al. (2009). Our FoM is defined in terms of the area enclosed by the contour of P(Ω_m, Ω_Λ | {H_i}) = exp(−Δχ²/2) P_max, where P_max is the maximum value of the posterior PDF, and the constant Δχ² is taken to be 6.17. This value is chosen so that the region enclosed by this contour coincides with the 2σ or 95.4% confidence region if the posterior is Gaussian. In the rest of this paper we will use the term "2σ region" to refer to the region defined in this way. The 1σ and 3σ regions can be defined in the same manner by setting the constant Δχ² to 2.3 and 11.8 respectively.
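In practice, the area of the 2σ region can be approximated by evaluating the (log of the unnormalized) posterior on a grid and counting the cells above the contour level. This is a brute-force sketch of our own; the Gaussian `logP` in the demo is only a stand-in for the actual posterior of eqs. (7) or (8):

```python
import numpy as np

def two_sigma_area(logP, om_grid, ol_grid, dchi2=6.17):
    # Area of the "2 sigma region": cells where P >= exp(-dchi2/2) * P_max
    L = np.array([[logP(om, ol) for ol in ol_grid] for om in om_grid])
    level = L.max() - 0.5 * dchi2                    # log of the contour level
    cell = (om_grid[1] - om_grid[0]) * (ol_grid[1] - ol_grid[0])
    return np.count_nonzero(L >= level) * cell

# demo on a toy Gaussian posterior (exact area: pi * 0.1 * 0.2 * 6.17)
logP = lambda om, ol: -0.5 * ((om / 0.1)**2 + (ol / 0.2)**2)
area = two_sigma_area(logP,
                      np.linspace(-0.6, 0.6, 401),
                      np.linspace(-0.8, 0.8, 401))
```

A FoM that grows as the constraints tighten is then obtained from this area (cf. FoM = 1/A in the Fisher matrix section).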
It is important to bear in mind that our definition of "nσ regions" is motivated by nomenclatural brevity rather than mathematical concreteness, since the posterior PDFs (eqs. [7] and [8]) are manifestly non-Gaussian. We make two further remarks on our FoM definition. First, since our definition of the FoM is statistical rather than physical, we do not exclude the unrealistic parts of the confidence regions from the area. Second, the FoM by our definition is obviously invariant under the multiplication of P(Ω_m, Ω_Λ | {H_i}) by a positive constant, which justifies the omission of normalization constants in Section 3.2.

4. RESULTS FROM THE SIMULATED DATASETS

Using the method described in Section 3.1, we generate 500 realizations of the simulated H(z) dataset. Each realization contains 128 datapoints evenly spaced in the redshift range 0.1 ≤ z ≤ 2.0. From each realization, successively shrinking subsamples of 64, 32, and 16 datapoints are randomly drawn. These subsamples are then used in conjunction with the full OHD set to obtain their respective figures of merit. The figures of merit are naturally divided into 4 groups by the size of the simulated subsample used in the calculation. For each group, the median and median absolute deviation (MAD) statistics are calculated. The median and the MAD are used to represent the central value and spread of the FoM data respectively, and they are used in preference to the customary pair of the mean and the standard deviation because they are less affected by egregious outliers (see NIST 2003, Chapter 1.3.5.6). The outliers mainly arise from "worst case" realizations of the simulated H(z) data, for example one with a large number of datapoints deviating too much (or too little) from the fiducial model. To compare the parameter constraining abilities of our simulated H(z) datasets with those of SNIa data, we have also fitted our evaluation model (eq.
[2]) to the ConstitutionT dataset, which is a subset of the Constitution redshift-distance dataset (Hicken et al. 2009b) deprived of outliers that account for internal tensions (Wei 2010). It is worthwhile to point out that the prior on H_0 used in the SNIa fitting procedure is fundamentally different from either one discussed in Section 3.2. Namely, when SNIa data are used, the parameter H_0 and the intrinsic absolute magnitude of SNIa, M_0, are combined into one "nuisance parameter" M = M_0 − 5 lg H_0, which is marginalized over under the assumption of a flat prior over (−∞, +∞) (Perlmutter et al. 1999). This discrepancy should be kept in mind when comparing or combining SNIa and H(z) datasets, and we hope it can be closed in the future by better constraints, either theoretical or observational, on M_0.

Our main results are shown in Figure 4. As one may intuitively assume, the median FoM increases with the size of the dataset. We find that the data subset with 64 simulated datapoints leads to a FoM of 8.6 ± 0.7 under the Gaussian prior on H_0. This median FoM already surpasses that of ConstitutionT. However, the top-hat prior leads to significantly lower FoM: under the top-hat prior we used, as many as 128 simulated datapoints are needed to reach the FoM level of ConstitutionT.

Figure 5. Confidence regions in the (Ω_m, Ω_Λ) parameter subspace calculated using the snapshot realization shown in Figure 3. The Gaussian prior H_0 = 74.2 ± 3.6 km s⁻¹ Mpc⁻¹ is assumed. The shaded regions, from lighter to darker, correspond to the 2σ constraints obtained from the data subsets containing 16, 32, 64, and 128 simulated H(z) datapoints respectively. For comparison, the 2σ region from pure OHD alone, with the same Gaussian prior on H_0, is plotted as the dashed contour. The three dotted contours are the 1, 2, and 3σ constraints from the ConstitutionT SNIa dataset. The solid straight line signifies the boundary of Ω_k = 0.
The degeneracy of the confidence regions obtained from H(z) data in the (Ω_m, Ω_Λ) parameter subspace is shown in Figure 5. Because of this degeneracy, H(z) datasets cannot be used alone to constrain Ω_k well. This degenerate behavior is similar to that of SNIa data.

5. FISHER MATRIX ANALYSIS OF FUTURE DATA

The simulated data used in Section 3 are based on the quality of currently available measurements, and we have tried not to be too optimistic about their uncertainties. However, we have reasons to expect an increase in the quality of future H(z) data. First, Crawford et al. (2010) analysed the observational requirements of measuring H(z) to 3% at intermediate redshifts with age-dating. Second, the Baryon Oscillation Spectroscopic Survey (BOSS⁴) is designed to constrain H(z) with 2% precision at redshifts z ≈ 0.3 and 0.6 by measuring BAO imprints in the galaxy field, and at z ≈ 2.5 using the Lyman-α absorption spectra of quasars. By incorporating these specifications of future data, we can estimate their expected figures of merit using the Fisher matrix forecast technique (Dodelson et al. 1997). The 3×3 Fisher matrix F is calculated from equation (5) with the σ_i determined by the future data specifications, and the evaluation of the matrix elements is made at the fiducial parameter values (see Section 3.1). The matrix elements are the second partial derivatives of χ² with respect to the parameters:

F_ij = (1/2) ∂²χ²/∂θ_i ∂θ_j,   (9)

where the θ_i are the parameters, namely (Ω_m, Ω_Λ, H_0). Notice that we use the term "Fisher matrix" only loosely: the second-derivative matrix defined by equation (9) is not the Fisher matrix in the strict sense, but the ideas are intimately related (see Dodelson 2003). In order to obtain the FoM in the 2-dimensional parameter space of (Ω_m, Ω_Λ), we must marginalize over H_0. We adopt the Gaussian prior on H_0 with σ_H = 3.6 km s⁻¹ Mpc⁻¹, which is the same as the one used in Section 3.2.
Gaussian marginalization is performed using a straightforward modification of the projection technique from Dodelson (2003) and Press et al. (2007). A more general and detailed analysis of this problem was made by Taylor & Kitching (2010), but for our purpose, simply adding 1/σ_H² to F_33 and then projecting onto the first two dimensions will do the work. This is applicable when the mean of the prior on H_0 is close to the fiducial value, and we have numerically verified the validity of this approximation in our work. Denoting the marginalized Fisher matrix by F̃, the iso-Δχ² contour in the parameter space is approximated by the quadratic equation

(Δθ)ᵀ F̃ Δθ = Δχ²,   (10)

where Δθ = θ − θ_fid is the parameters' deviation from the fiducial value, and Δχ² is the constant 6.17 chosen in Section 3.3. By construction, F̃ is positive-definite⁵, therefore the above equation describes an ellipse. Its enclosed area is simply A = π/√det(F̃/Δχ²), therefore we can estimate the FoM by

FoM = 1/A = (1/π) √det(F̃/Δχ²).

In our setup, we assume that the relative error of H(z) anticipated by Crawford et al. (2010) could be globally achieved within the redshift interval 0.1 ≤ z ≤ 1.0. Under this assumption, we can estimate how many datapoints will be needed to reach the ConstitutionT FoM using the Fisher matrix method discussed above. We chose not to incorporate the available data so that we can work with the simple error model specified by the future data. Our main results are shown in Figures 6 and 7. For relative H(z) errors of 3%, 5%, and 10%, the required numbers of measurements are 21, 62, and 256 respectively. This number N increases steeply as the relative error increases, as shown in Figure 7. We can also apply the Fisher matrix analysis to future BOSS data. We find that BOSS alone gives FoM ≈ 15. This promising result is a combined consequence of its high precision and extended redshift coverage. In Figure 8 we plot the Fisher matrix forecast of the confidence regions.
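The forecast pipeline above — Fisher matrix from eq. (9), prior added to F_33, projection onto (Ω_m, Ω_Λ), and FoM = (1/π)√det(F̃/Δχ²) — reduces to a few matrix operations. The sketch below is our own illustration, taking the derivatives by finite differences rather than analytically:

```python
import numpy as np

def fisher_fom(z, sig, theta_fid=(0.27, 0.73, 70.4), sig_h=3.6, dchi2=6.17):
    """Fisher-matrix FoM in the (Omega_m, Omega_Lambda) plane, eqs. (9)-(10)."""
    def model(theta):
        om, ol, h0 = theta
        ok = 1.0 - om - ol
        return h0 * np.sqrt(om * (1 + z)**3 + ok * (1 + z)**2 + ol)

    # dH/dtheta_i at the fiducial point, by central finite differences
    eps, grads = 1e-5, []
    for i in range(3):
        tp = np.array(theta_fid, dtype=float)
        tm = tp.copy()
        tp[i] += eps
        tm[i] -= eps
        grads.append((model(tp) - model(tm)) / (2 * eps))
    # For Gaussian errors, F_ij = sum_k (dH_k/dtheta_i)(dH_k/dtheta_j)/sigma_k^2
    F = np.array([[np.sum(gi * gj / sig**2) for gj in grads] for gi in grads])
    F[2, 2] += 1.0 / sig_h**2                 # Gaussian prior on H0
    # Marginalize over H0: invert, take the 2x2 block, invert back
    Ftilde = np.linalg.inv(np.linalg.inv(F)[:2, :2])
    return np.sqrt(np.linalg.det(Ftilde / dchi2)) / np.pi   # FoM = 1/A

# demo: 21 age-dating points at 3% relative error in 0.1 <= z <= 1.0
z = np.linspace(0.1, 1.0, 21)
sig = 0.03 * 70.4 * np.sqrt(0.27 * (1 + z)**3 + 0.73)
fom_3pct = fisher_fom(z, sig)
```

Feeding in the BOSS redshifts and relative errors instead of the age-dating specification gives the corresponding BOSS forecast.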
CONCLUSION AND DISCUSSIONS

We have explored the possibility of using OHD as an alternative to SNIa redshift-distance data, in the sense of offering comparable or higher FoM. By using simulated H(z) datasets with an empirical error model similar to that of current age-dating data, we show that more than 60 future measurements of H(z) in the redshift range 0 ≤ z ≤ 2 could be needed to acquire parameter constraints comparable with those obtained from SNIa datasets like ConstitutionT. In addition, precise and accurate determination of H_0 is crucial for improving the FoM obtained from OHD, and a broad prior on H_0 leads to lower FoM. When we progressively lower the error of future measurements to 3%, as discussed by Crawford et al. (2010), we estimate that ∼60 measurements in the shorter redshift interval [0.1, 1.0] will be needed to achieve the same result. In summary, we give an affirmative answer to the first of the two questions raised in Section 1 and a semi-quantitative answer to the second. Our result furthers a conclusion of Lin et al. (2009) and Carvalho et al. (2008), namely, that OHD play almost the same role as that of SNIa for the joint constraints on the ΛCDM model. We have shown that the OHD set alone is potentially capable of being used in place of current SNIa datasets if it is large enough. We note that our forecast of the OHD data requirement is competitive in the sense of observational cost compared with supernova observations. Throughout our analysis, we used the ConstitutionT dataset as a standard of FoM.

Figure 8. Same as Figure 4, but for the forecast of future data. The light shaded region is the 2σ region from the 21 age-dating measurements with relative error 3%. The unfilled region bounded by the dashed contour is the 2σ region from BOSS H(z). The joint 2σ region of BOSS and age-dating is shown as the inner and darker shaded area. Like in Figure 4, the 1, 2, and 3σ constraints from ConstitutionT are shown as dotted contours for comparison.
This dataset is a subset of the Constitution compilation, a combination of the ground-based CfA3 supernova observations (Hicken et al. 2009a) and Union, a larger compilation of legacy supernovae and space-based observations (Kowalski et al. 2008). The CfA3 sample alone required 10 nights for each of the 185 supernovae observed. On the other hand, current OHD from age-dating do not require space-based observations. As described in detail by Stern et al. (2010b), 24 galaxy clusters containing target chronometer galaxies were observed in only two nights using the Keck I telescope. In Crawford et al. (2010), it was estimated that the South African Large Telescope (SALT) is capable of measuring H(z) to 3% at an individual redshift in ∼180 hours.

Admittedly, our approach to the problem is tentative as well as model-dependent. The conclusion is reached under the assumption of the fiducial ΛCDM model (eq. [1]), the models of uncertainties described in Sections 3.1 and 5, as well as the evaluation model of Section 3.2. Therefore, care must be taken not to extrapolate our conclusions well beyond those assumptions, especially when they are used in the planning of observations. Even if our uncertainty models fit real-world observations well, in practice the results may be less promising than what we suggested in this paper. We assumed a fairly deep redshift coverage which may be difficult to reach by current observations (see Stern et al. 2010b, for the redshift distribution of the LRGs used to deduce SJVKS10). Ultimately, the FoM from OHD may be limited by our ability to select enough samples for future observation and by the redshift distribution of these samples, rather than by the relative error of each individual measurement. Fortunately, the BOSS project could extend H(z) measurements to deeper redshift. It is also worth noting that the proposed Sandage-Loeb observational plan (Corasaniti et al. 2007; Zhang et al.
2010; Araújo & Stoeger 2010) could be used to extend our knowledge of cosmic expansion into the even deeper redshift realm of 2 ≤ z ≤ 5, by measuring the secular variation of cosmological redshifts of the Lyman-α forest with future high-resolution, space-based spectroscopic instruments (see Sandage 1962 and Loeb 1998 for the foundations of the test). Finally, we note that future CMB observation programs, such as the Atacama Cosmology Telescope⁶, may be able to identify more than 2000 passively evolving galaxies up to z ≈ 1.5 via the Sunyaev-Zel'dovich effect, and their spectra can be analyzed to yield age measurements that will provide approximately 1000 H(z) determinations with 15% error (Simon et al. 2005). This promises a future data capacity an order of magnitude larger than what we have estimated to be enough to match current SNIa datasets. Combining this prospect with future high-z, high-accuracy H(z) determinations from BAO observations, it is reasonable to expect that OHD will play an increasingly important role in the future study of the expansion history of the universe and cosmological parameters.

Figure 1. Full OHD set and best-fit ΛCDM models. The top panel shows the dataset and fit results. The bottom panel shows the residuals with respect to the best-fit model with the Gaussian prior on H_0. The H_0 priors are described in Section 3.2.

Figure 2. Uncertainties of H(z) in the SJVKS10 dataset. Solid dots and circles represent non-outliers and outliers respectively. Our heuristic bounds σ₊ and σ₋ are plotted as the two dotted lines. The dash-dotted line shows our estimated mean uncertainty σ₀.

Figure 3. Snapshot of a simulated dataset realized using our method. The uncertainties in the simulated data are modelled phenomenologically after SJVKS10, which is also shown for comparison.

Figure 4. Figures of merit from each group, found by combining all 500 realizations.
FoM medians are plotted against the sizes of the simulated subsamples, and median absolute deviations are shown as error bars. Horizontal lines across the figure mark the figures of merit of purely observational datasets.

Figure 6. Predicted FoM using the error model of Crawford et al. (2010). The dependence of the FoM on the number of measurements is shown for three cases: the solid, dashed, and dotted lines are for 3%, 5%, and 10% relative error respectively. The horizontal line across the figure shows the FoM of ConstitutionT for comparison.

Figure 7. Number of measurements required to match ConstitutionT, N, as a function of the relative error of H(z). The curve turns upward steeply when the error is large.

Figure 8. Same as Figure 4, but for the forecast of future data.

Table 1
Currently Available H(z) Datasets

z                    | SVJ05       | SJVKS10    | GCH09^a
0.09^b               | 69 ± 12     | 69 ± 12    | · · ·
0.17                 | 83 ± 8.3    | 83 ± 8     | · · ·
0.24 +0.06/−0.09     | · · ·       | · · ·      | 79.69 ± 2.65
0.27                 | 70 ± 14     | 77 ± 14    | · · ·
0.4                  | 87 ± 17.4   | 95 ± 17    | · · ·
0.43 +0.04/−0.03     | · · ·       | · · ·      | 86.45 ± 3.68
0.48                 | · · ·       | 97 ± 62    | · · ·
0.88                 | 117 ± 23.4  | 90 ± 40    | · · ·
0.9                  | · · ·       | 117 ± 23   | · · ·
1.3                  | 168 ± 13.4  | 168 ± 17   | · · ·
1.43                 | 177 ± 14.2  | 177 ± 18   | · · ·
1.53                 | 140 ± 14    | 140 ± 14   | · · ·
1.75                 | 202 ± 40.4  | 202 ± 40   | · · ·

Note. — H(z) figures quoted in this table are in units of km s⁻¹ Mpc⁻¹.
^a Uncertainties include both statistical and systematic errors: σ² = σ²_sta + σ²_sys. See Section 2.4 and

³ http://wfirst.gsfc.nasa.gov/
⁴ http://www.sdss3.org/boss.php
⁵ This can be proved using the positive-definiteness conditions of the Schur complement (Puntanen & Styan 2005).
⁶ http://www.physics.princeton.edu/act/index.html

ACKNOWLEDGMENTS

We thank the anonymous referee whose suggestions greatly helped us improve this paper. CM is grateful to Chen-Tao Yang for useful discussions. This work was supported by the National Science Foundation of China (Grants No. 10473002), the Ministry of Science and Technology National Basic Science program (project 973) under grant No.
2009CB24901, the Fundamental Research Funds for the Central Universities. . A Albrecht, arXiv:astro-ph/0609591preprintAlbrecht, A. et al. 2006, preprint, arXiv:astro-ph/0609591 . G Aldering, arXiv:astro-ph/0405232preprintAldering, G. et al. 2004, preprint, arXiv:astro-ph/0405232 . M E Araújo, W R Stoeger, arXiv:1009.2783Phys. Rev. D. 82123513Araújo, M. E., & Stoeger, W. R. 2010, Phys. Rev. D, 82, 123513, arXiv:1009.2783 . C Blake, K Glazebrook, arXiv:astro-ph/0301632ApJ. 594665Blake, C., & Glazebrook, K. 2003, ApJ, 594, 665, arXiv:astro-ph/0301632 . J C Bueno Sanchez, S Nesseris, L Perivolaropoulos, arXiv:0908.2636J. Cosmol. Astropart. Phys. 1129Bueno Sanchez, J. C., Nesseris, S., & Perivolaropoulos, L. 2009, J. Cosmol. Astropart. Phys., 11, 29, arXiv:0908.2636 . F C Carvalho, E M Santos, J S Alcaniz, J Santos, arXiv:0804.2878J. Cosmol. Astropart. Phys. 9Carvalho, F. C., Santos, E. M., Alcaniz, J. S., & Santos, J. 2008, J. Cosmol. Astropart. Phys., 9, 8, arXiv:0804.2878 . G Chen, Iii Gott, J R Ratra, B , arXiv:astro-ph/0308099PASP. 1151269Chen, G., Gott III, J. R., & Ratra, B. 2003, PASP, 115, 1269, arXiv:astro-ph/0308099 . P Corasaniti, D Huterer, A Melchiorri, arXiv:astro-ph/0701433Phys. Rev. D. 7562001Corasaniti, P., Huterer, D., & Melchiorri, A. 2007, Phys. Rev. D, 75, 062001, arXiv:astro-ph/0701433 . S M Crawford, A L Ratsimbazafy, C M Cress, E A Olivier, S Blyth, K J Van Der Heyden, arXiv:1004.2378MNRAS. 4062569Crawford, S. M., Ratsimbazafy, A. L., Cress, C. M., Olivier, E. A., Blyth, S., & van der Heyden, K. J. 2010, MNRAS, 406, 2569, arXiv:1004.2378 S Dodelson, Modern Cosmology. San Diego, CAAcademic PressDodelson, S. 2003, Modern Cosmology (San Diego, CA: Academic Press) . S Dodelson, W H Kinney, E W Kolb, arXiv:astro-ph/9702166Phys. Rev. D. 563207Dodelson, S., Kinney, W. H., & Kolb, E. W. 1997, Phys. Rev. D, 56, 3207, arXiv:astro-ph/9702166 . D J Eisenstein, arXiv:astro-ph/0501171ApJ. 633560Eisenstein, D. J. et al. 
2005, ApJ, 633, 560, arXiv:astro-ph/0501171 . W L Freedman, arXiv:astro-ph/0012376ApJ. 55347Freedman, W. L. et al. 2001, ApJ, 553, 47, arXiv:astro-ph/0012376 . E Gaztañaga, A Cabré, L Hui, arXiv:0807.3551MNRAS. 3991663Gaztañaga, E., Cabré, A., & Hui, L. 2009, MNRAS, 399, 1663, arXiv:0807.3551 . G Ghirlanda, G Ghisellini, D Lazzati, C Firmani, arXiv:astro-ph/0408350ApJ. 13Ghirlanda, G., Ghisellini, G., Lazzati, D., & Firmani, C. 2004, ApJ, 613, L13, arXiv:astro-ph/0408350 . Iii Gott, J R Vogeley, M S Podariu, S Ratra, B , arXiv:astro-ph/0006103ApJ. 5491Gott III, J. R., Vogeley, M. S., Podariu, S., & Ratra, B. 2001, ApJ, 549, 1, arXiv:astro-ph/0006103 . M Hicken, arXiv:0901.4787ApJ. 700331Hicken, M. et al. 2009a, ApJ, 700, 331, arXiv:0901.4787 . M Hicken, W M Wood-Vasey, S Blondin, P Challis, S Jha, P L Kelly, A Rest, R P Kirshner, arXiv:0901.4804ApJ. 7001097Hicken, M., Wood-Vasey, W. M., Blondin, S., Challis, P., Jha, S., Kelly, P. L., Rest, A., & Kirshner, R. P. 2009b, ApJ, 700, 1097, arXiv:0901.4804 . D Huterer, Nuclear Physics B Proceedings Supplements. 194239Huterer, D. 2009, Nuclear Physics B Proceedings Supplements, 194, 239 . R Jimenez, A Loeb, arXiv:astro-ph/0106145ApJ. 57337Jimenez, R., & Loeb, A. 2002, ApJ, 573, 37, arXiv:astro-ph/0106145 . R Jimenez, L Verde, T Treu, D Stern, arXiv:astro-ph/0302560ApJ. 593622Jimenez, R., Verde, L., Treu, T., & Stern, D. 2003, ApJ, 593, 622, arXiv:astro-ph/0302560 . A G Kim, E V Linder, R Miquel, N Mostek, arXiv:astro-ph/0304509MNRAS. 347909Kim, A. G., Linder, E. V., Miquel, R., & Mostek, N. 2004, MNRAS, 347, 909, arXiv:astro-ph/0304509 . E Komatsu, arXiv:1001.4538ApJS, submitted. Komatsu, E. et al. 2010, ApJS, submitted, arXiv:1001.4538 . M Kowalski, arXiv:0804.4142ApJ. 686749Kowalski, M. et al. 2008, ApJ, 686, 749, arXiv:0804.4142 . H Li, J.-Q Xia, J Liu, G.-B Zhao, Z.-H Fan, X Zhang, arXiv:0711.1792ApJ. 68092Li, H., Xia, J.-Q., Liu, J., Zhao, G.-B., Fan, Z.-H., & Zhang, X. 2008, ApJ, 680, 92, arXiv:0711.1792 . 
M Li, X.-D Li, S Wang, X Zhang, arXiv:0904.0928J. Cosmol. Astropart. Phys. 636Li, M., Li, X.-D., Wang, S., & Zhang, X. 2009, J. Cosmol. Astropart. Phys., 6, 36, arXiv:0904.0928 . N Liang, P Wu, S N Zhang, arXiv:0911.5644Phys. Rev. D. 8183518Liang, N., Wu, P., & Zhang, S. N. 2010, Phys. Rev. D, 81, 083518, arXiv:0911.5644 . H Lin, C Hao, X Wang, Q Yuan, Z.-L Yi, T.-J Zhang, Wang, arXiv:0804.3135Mod. Phys. Lett. A. 241699Lin, H., Hao, C., Wang, X., Yuan, Q., Yi, Z.-L., Zhang, T.-J., & Wang, B.-Q. 2009, Mod. Phys. Lett. A, 24, 1699, arXiv:0804.3135 . E V Linder, arXiv:astro-ph/0604280Astropart. Phys. 26102Linder, E. V. 2006, Astropart. Phys., 26, 102, arXiv:astro-ph/0604280 . D.-J Liu, X.-Z Li, J Hao, X.-H Jin, arXiv:0804.3829MNRAS. 388275Liu, D.-J., Li, X.-Z., Hao, J., & Jin, X.-H. 2008, MNRAS, 388, 275, arXiv:0804.3829 . A Loeb, arXiv:astro-ph/9802112ApJ. 499111Loeb, A. 1998, ApJ, 499, L111, arXiv:astro-ph/9802112 . C Mignone, M Bartelmann, arXiv:0711.0370A&A. 481295Mignone, C., & Bartelmann, M. 2008, A&A, 481, 295, arXiv:0711.0370 NIST/SEMATECH e-Handbook of Statistical Methods. Gaithersburg, MDNIST ; National Institute of Standards and TechnologyNIST. 2003, NIST/SEMATECH e-Handbook of Statistical Methods (Gaithersburg, MD: National Institute of Standards and Technology), http://www.itl.nist.gov/div898/handbook/index.htm . W J Percival, arXiv:0907.1660MNRAS. 4012148Percival, W. J. et al. 2010, MNRAS, 401, 2148, arXiv:0907.1660 . S Perlmutter, arXiv:astro-ph/9812133ApJ. 517565Perlmutter, S. et al. 1999, ApJ, 517, 565, arXiv:astro-ph/9812133 W H Press, S A Teukolsky, W T Vetterling, B P Flannery, Numerical Recipes: the Art of Scientific Computing. CambridgeCambridge Univ. Press3rd edn.Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2007, Numerical Recipes: the Art of Scientific Computing, 3rd edn. (Cambridge: Cambridge Univ. Press) S Puntanen, G P H Styan, The Schur Complement and Its Applications. C. Brezinski & F. 
ZhangNew YorkSpringer4Puntanen, S., & Styan, G. P. H. 2005, in Numerical Methods and Algorithms, Vol. 4, The Schur Complement and Its Applications, ed. C. Brezinski & F. Zhang (New York: Springer), 163-226 . A G Riess, arXiv:astro-ph/9805201arXiv:0905.0695ApJ. 116539AJRiess, A. G. et al. 1998, AJ, 116, 1009, arXiv:astro-ph/9805201 --. 2009, ApJ, 699, 539, arXiv:0905.0695 . L Samushia, B Ratra, arXiv:astro-ph/0607301ApJ. 6505Samushia, L., & Ratra, B. 2006, ApJ, 650, L5, arXiv:astro-ph/0607301 . A Sandage, ApJ. 136319Sandage, A. 1962, ApJ, 136, 319 . H.-J Seo, D J Eisenstein, arXiv:astro-ph/0507338ApJ. 633575Seo, H.-J., & Eisenstein, D. J. 2005, ApJ, 633, 575, arXiv:astro-ph/0507338 . A Shafieloo, U Alam, V Sahni, A A Starobinsky, arXiv:astro-ph/0505329MNRAS. 3661081Shafieloo, A., Alam, U., Sahni, V., & Starobinsky, A. A. 2006, MNRAS, 366, 1081, arXiv:astro-ph/0505329 . J Simon, L Verde, R Jimenez, arXiv:astro-ph/0412269Phys. Rev. D. 71123001Simon, J., Verde, L., & Jimenez, R. 2005, Phys. Rev. D, 71, 123001, arXiv:astro-ph/0412269 . D N Spergel, arXiv:astro-ph/0603449ApJS. 170377Spergel, D. N. et al. 2007, ApJS, 170, 377, arXiv:astro-ph/0603449 . D Stern, R Jimenez, L Verde, M Kamionkowski, S A Stanford, arXiv:0907.3149J. Cosmol. Astropart. Phys. 28Stern, D., Jimenez, R., Verde, L., Kamionkowski, M., & Stanford, S. A. 2010a, J. Cosmol. Astropart. Phys., 2, 8, arXiv:0907.3149 . D Stern, R Jimenez, L Verde, S A Stanford, M Kamionkowski, arXiv:0907.3152ApJS. 188280Stern, D., Jimenez, R., Verde, L., Stanford, S. A., & Kamionkowski, M. 2010b, ApJS, 188, 280, arXiv:0907.3152 . A N Taylor, T D Kitching, arXiv:1003.1136MNRAS. 408865Taylor, A. N., & Kitching, T. D. 2010, MNRAS, 408, 865, arXiv:1003.1136 . Y Wang, M Tegmark, arXiv:astro-ph/0403292Phys. Rev. Lett. 92241302Wang, Y., & Tegmark, M. 2004, Phys. Rev. Lett., 92, 241302, arXiv:astro-ph/0403292 . arXiv:astro-ph/0501351Phys. Rev. D. 71103513--. 2005, Phys. Rev. D, 71, 103513, arXiv:astro-ph/0501351 . 
H Wei, arXiv:0906.0828Phys. Lett. B. 687286Wei, H. 2010, Phys. Lett. B, 687, 286, arXiv:0906.0828 . X.-J Yang, D.-M Chen, arXiv:0812.0660MNRAS. 3941449Yang, X.-J., & Chen, D.-M. 2009, MNRAS, 394, 1449, arXiv:0812.0660 . Z.-L Yi, T.-J Zhang, arXiv:astro-ph/0605596Mod. Phys. Lett. A. 2241Yi, Z.-L., & Zhang, T.-J. 2007, Mod. Phys. Lett. A, 22, 41, arXiv:astro-ph/0605596 . J Zhang, L Zhang, X Zhang, arXiv:1006.1738Phys. Lett. B. 69111Zhang, J., Zhang, L., & Zhang, X. 2010, Phys. Lett. B, 691, 11, arXiv:1006.1738
[]
[ "Nonparametric Estimation for SDE with Sparsely Sampled Paths: an FDA Perspective", "Nonparametric Estimation for SDE with Sparsely Sampled Paths: an FDA Perspective" ]
[ "Neda Mohammadi [email protected] \nInstitut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n\n", "Leonardo V Santoro [email protected] \nInstitut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n\n", "Victor M Panaretos [email protected] \nInstitut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n\n" ]
[ "Institut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n", "Institut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n", "Institut de Mathématiqueś Ecole Polytechnique Fédérale de Lausanne\n" ]
[]
We consider the problem of nonparametric estimation of the drift and diffusion coefficients of a Stochastic Differential Equation (SDE), based on n independent replicates {X i (t) : t ∈ [0, 1]} 1≤i≤n , observed sparsely and irregularly on the unit interval, and subject to additive noise corruption. By sparse we intend to mean that the number of measurements per path can be arbitrary (as small as two), and remain constant with respect to n. We focus on time-inhomogeneous SDE of the formwhere α ∈ {0, 1} and β ∈ {0, 1/2, 1}, which includes prominent examples such as Brownian motion, Ornstein-Uhlenbeck process, geometric Brownian motion, and Brownian bridge. Our estimators are constructed by relating the local (drift/diffusion) parameters of the diffusion to their global parameters (mean/covariance, and their derivatives) by means of an apparently novel PDE. This allows us to use methods inspired by functional data analysis, and pool information across the sparsely measured paths. The methodology we develop is fully non-parametric and avoids any functional form specification on the time-dependency of either the drift function or the diffusion function. We establish almost sure uniform asymptotic convergence rates of the proposed estimators as the number of observed curves n grows to infinity. Our rates are non-asymptotic in the number of measurements per path, explicitly reflecting how different sampling frequency might affect the speed of convergence. Our framework suggests possible further fruitful interactions between FDA and SDE methods in problems with replication.
null
[ "https://arxiv.org/pdf/2110.14433v2.pdf" ]
239,998,336
2110.14433
e57d89b07d71616780255e6fe2d80da00174831d
Nonparametric Estimation for SDE with Sparsely Sampled Paths: an FDA Perspective

Neda Mohammadi [email protected], Leonardo V. Santoro [email protected], Victor M. Panaretos [email protected] -- Institut de Mathématiques, École Polytechnique Fédérale de Lausanne

AMS 2000 subject classifications: Primary 62M05; 62G08. Keywords and phrases: Drift and diffusion estimation; Itô diffusion process; Local linear smoothing; Nonparametric estimation; SDE; FDA; Brownian motion.

Introduction

Stochastic Differential Equations (SDEs) and Diffusion Processes have proven to be fundamental in the study and modelling of a wide range of phenomena coming from economics, finance, biology, medicine, engineering and physics [31]. Most generally, any diffusion process can be defined via an SDE of the form:

dX(t) = µ(t, X(t)) dt + σ(t, X(t)) dB(t), t ∈ [0, T], (1.1)

where B(t) denotes a standard Brownian motion and (1.1) is interpreted in the sense of stochastic Itô integrals. Any diffusion thus involves two components: the (infinitesimal) conditional mean, i.e. the drift µ, and the (infinitesimal) conditional variation, i.e. the diffusion σ. These functional parameters fully determine the probabilistic behaviour of its solutions. In particular, the Fokker-Planck equation, also known in the literature as the Kolmogorov forward equation, states that the marginal probability density functions of a diffusion process satisfying (1.1) can be characterized as the solution to a (deterministic) Partial Differential Equation (PDE) involving only the drift and diffusion functionals (see Särkkä and Solin [46]). Consequently, drift and diffusion estimation have long been a core statistical inference problem, widely investigated by many authors in both parametric and nonparametric frameworks. A particularly interesting aspect of this inference problem is the fact that diffusions are typically observed discretely, and different estimation regimes are possible, depending on whether one has infill asymptotics, or time asymptotics (for example).
Our focus in this article will be on the case where discrete measurements are potentially few, irregular and noise-corrupted, but where sample path replicates are available. While replication is increasingly being considered in the SDE literature, the sparse/noisy measurement setting appears to have so far remained effectively wide open. This is a setting that is routinely investigated in the context of functional data analysis (Yao et al. [51], Zhang and Wang [52], Hall et al. [22] and Li and Hsing [33]), but under the restrictive assumption of smooth sample paths, thus excluding SDE. Our approach to making valid inferences for sparsely/noisily sampled SDE will blend tools from both SDE and FDA, and for this reason we begin with a short review before presenting our contributions.

Inference for SDE

Much of the early theory and methodology for drift and diffusion estimation concerns the time-homogeneous case. Early authors considered estimation from a single sample solution, and developed an estimation framework building on kernel methods (both parametric and nonparametric). Paths were assumed to either be observed continuously (Banon [5], Geman [21], Tuan [50], Banon and Nguyen [6] among others) or discretely (Florens-Zmirou [19], Stanton [47], Aït-Sahalia and Lo [2], Jiang and Knight [29], Bandi and Phillips [4]), in the latter case investigating the asymptotic theory of the estimators as the sample frequency increases and/or the observation time span grows to infinity. Indeed, the methodology developed showed that the drift term cannot be nonparametrically identified from one sample curve on a fixed interval without further cross-restrictions (as in Jiang and Knight [29], for instance), no matter how frequently the data are sampled (see Bandi [3] and Merton [36], among others).
Together with this standard asymptotic framework, authors typically also assumed stationarity and ergodicity of the underlying diffusion process, which allowed for great simplification and mathematical convenience. More recently, the kernel-based approach for nonparametrically estimating the drift coefficient from one continuously observed sample solution has been revisited and extended to the multidimensional setting (Strauch [48], Strauch [49], Nickl and Ray [38], Aeckerle-Willems and Strauch [1]), within, however, the setting of stationarity. In these papers, the estimation of the drift is not direct: one has to estimate the product of the drift by the invariant density of the process and then divide the resulting estimator by an estimator of the invariant density. A different approach was considered by Hoffmann [24], Comte et al. [12] and Comte and Genon-Catalot [11], who developed a projection method based on least-squares and sieves: in such case an estimation set, say A, is fixed and a collection of finite-dimensional subspaces of L²(A, dx) is chosen. By minimization of a least-squares contrast on each subspace, this leads to a collection of estimators of the drift restricted to the estimation set A, indexed by the dimension of the projection space. The estimators of the drift are defined directly; however, there is a support constraint as the drift is not estimated outside the estimation set. All these approaches also assume strict stationarity (and additionally ergodicity) of the diffusion process, and are limited by the choice of the nested projection subspaces.

Bayesian approaches to the problem of statistical analysis of diffusion models, nonparametrically or semiparametrically, have also been considered: see, for example, Papaspiliopoulos et al. [41] and Jenkins et al. [28]. SDE with time-varying drift and diffusion have also been studied, although less extensively (see Särkkä and Solin [46] for an introduction to the topic).
Still, many applications feature diffusion processes whose intrinsic nature involves time evolving diffusion and drift. Time homogeneous approaches cannot simply be extended to this setting, and require a different methodological framework. Clearly, the most flexible model one ideally would consider is free of any restriction on the structure of the bivariate drift and diffusion, as in (1.1). However, such unconstrained model is known to be severely unidentifiable (in the statistical sense), i.e. not allowing estimation of the drift/diffusion (Fan et al. [18]). Hence, when considering time-varying stochastic differential equations, some semi-parametric form for the drift and/or diffusion functions should be imposed. For instance, Ho and Lee [23], Hull and White [27], Black et al. [7], Black and Karasinski [8] and later Fan et al. [18] made various semi-parametric assumptions to explicitly express the time-dependence of interest rates, bond/option pricing, securities and volatility functions. The semi-parametric forms they considered can be seen as sub-cases of the following time-dependent model:

dX(t) = (α_0(t) + α_1(t)X(t)^{α_2}) dt + β_0(t)X(t)^{β_1} dB(t). (1.2)

The cited papers developed nonparametric techniques based on kernel methods on a single sampled curve, showing convergence of the proposed estimators as the distance between successive discrete observations goes to zero. More recently, Koo and Linton [32] considered the case of (1.2) with β_1 = 1/2: under the assumption of local-stationarity, the authors established an asymptotic theory for the proposed estimators as the time span of the discrete observations on [0, T] increases.

The FDA Perspective

Functional Data Analysis (FDA, see Hsing and Eubank [25] or Ramsay and Silverman [44]) considers inferences on the law of a continuous time stochastic process {X(t) : t ∈ [0, T]} on the basis of a collection of n realisations of this stochastic process, {X_i(t)}_{i=1}^n.
These are viewed as a sample of size n of random elements in a separable Hilbert space of functions (usually L²([0, T], R)). This global view allows for nonparametric inferences (estimation, testing, regression, classification) to be made making use of the mean function and covariance kernel of X (particularly the spectral decomposition of the covariance kernel). That the inferences can be nonparametric is due to the availability of n replications, sometimes referred to as a panel or longitudinal setting, where repeated measurements on a collection of individuals are taken across time. This setting encompasses a very expansive collection of applications (see Rao and Vinod [45]). When the processes {X_i(t)}_{i=1}^n are only observed discretely, smoothing techniques are applied, and these usually come with C² assumptions on the process itself and/or its covariance, that a priori rule out sample paths of low regularity such as diffusion paths.

Some recent work on drift and diffusion estimation makes important contact with the FDA framework, even if not always presented as such explicitly: in particular, Ditlevsen and De Gaetano [16], Overgaard et al. [40], Picchini et al. [43], Picchini and Ditlevsen [42], Comte et al. [13], Delattre et al. [14] and Dion and Genon-Catalot [15] modeled diffusions as functional data with parametric approaches, more precisely as stochastic differential equations with non-linear mixed effects. These approaches are not fully functional, as they use a parametric specification of the drift and diffusion. By contrast, the main strength of FDA methods is that they can be implemented nonparametrically. In this sense, Comte and Genon-Catalot [10] and Marie and Rosier [35] seem to be the first to provide a fully nonparametric FDA analysis of replicated diffusions: the first by extending the least-squares projection method and the second by implementing a nonparametric Nadaraya-Watson estimator.
However, these techniques come at the cost of assuming continuous (perfect) observation of the underlying sample paths. And there is no clear way to adapt them to discretely/noisily observed paths, since path recovery by means of smoothing cannot work due to the roughness of the paths.

Our Contributions

In the present work we provide a way forward to the problem of carrying out nonparametric inference for replicated diffusions, whose paths are observed discretely, possibly sparsely, and with measurement error. We focus on the following class of linear, time dependent stochastic differential equations:

dX_t = µ(t)X_t^α dt + σ(t)X_t^β dB_t, (1.3)

where α ∈ {0, 1} and β ∈ {0, 1/2, 1}. To make the presentation of our results easier to follow, we first present our methodology and results in the case α = 1, β = 0. Then, we show in Section 4 how our results can be extended and adapted to include all cases in (1.3). We introduce nonparametric estimators of the drift µ(t) and diffusion σ(t) functions from n i.i.d. copies {X_i(t)}_{i=1}^n solving (1.3), each of which is observed at r ≥ 2 random locations with additive noise contamination,

Y_ij = X_i(T_ij) + U_ij, i = 1, ..., n; j = 1, ..., r,

where {Y_ij} are the observed quantities, {T_ij} are random times, and {U_ij} is random noise (see Equation (2.6) for more details). Our approach is inspired by FDA methods, but suitably adapted to the SDE setting where sample paths are rough. Specifically:

1. We first establish systems of PDE explicitly relating the drift and diffusion coefficients of X to the mean and covariance functions of X (and their first derivatives). See Propositions 1 and 2. In fact, Proposition 2 is of interest in its own right, as it intrinsically links the infinitesimal local variation of the process at time t to its global first and second order behaviour.

2.
We subsequently nonparametrically estimate mean and covariance (and their derivatives) based on a modification of standard FDA methods: pooling empirical correlations from measured locations on different paths to construct a global nonparametric estimate of the covariance and its derivative (e.g. Yao et al. [51]). The modification accounts for the fact that, contrary to the usual FDA setting, the sample paths are not smooth in the diffusion case, and the covariance possesses a singularity on the diagonal (Mohammadi and Panaretos [37]).

3. We finally plug our mean/covariance estimators (and their derivatives) into (either) PDE system, allowing us to translate the global/pooled information into local information on the drift and diffusion. In short, we obtain plug-in estimators for the drift and diffusion via the PDE system (see Equations (3.14) and (3.15)).

We establish the consistency and almost sure uniform convergence rates of our estimators as the number n of replications diverges, while the number of measurements per curve r is unrestricted, and can be constant or varying in n (our rates reflect the effect of r explicitly). See Theorem 3. To our knowledge, this is the first example in the literature to consider methods and theory applicable to sparse and noise corrupted data. Our methods do not rely on stationarity, local stationarity or assuming a Gaussian law, and do not impose range restrictions on the drift/diffusion. Another appealing feature is that they estimate drift and diffusion simultaneously, rather than serially. Our methods are computationally easy to implement, and their finite sample performance is illustrated by two small simulation studies (see Section 5).

Setting

Background

Consider the Itô diffusion equation (1.1) restricted, without loss of generality, to the time window [0, 1].
That is:

X(t) = X(0) + ∫_0^t µ(u, X(u)) du + ∫_0^t σ(u, X(u)) dB(u), t ∈ [0, 1], (2.1)

where {B(t)}_{0≤t≤1} denotes a standard Brownian motion and the integral appearing in the last term is to be interpreted in the sense of Itô integrals. Under rather weak regularity conditions, one can show that a solution to (2.1) exists and is unique (see Øksendal [39], Theorem 5.2.1):

Theorem 1 (Existence and Uniqueness). Let µ(·, ·), σ(·, ·) : [0, 1] × R → R be measurable functions satisfying a linear growth condition:

|µ(t, x)| + |σ(t, x)| ≤ L(1 + |x|), x ∈ R, t ∈ [0, 1], (2.2)

and Lipschitz continuity in the space variable:

|µ(t, x) − µ(t, y)| + |σ(t, x) − σ(t, y)| ≤ L|x − y|, x, y ∈ R, t ∈ [0, 1], (2.3)

for some constant L > 0. Let moreover X(0) be a random variable independent of the σ-algebra F_∞ generated by {B(s)} and such that E[X(0)²] < ∞. Then the stochastic differential equation (2.1) admits a unique (in the sense of indistinguishability) solution with time-continuous trajectories, adapted to the filtration {F_t^{X_0}}_{0≤t≤1} generated by X(0) and the Brownian motion {B(t)}.

Model and Measurement Scheme

In the following we mainly focus on a particular class of diffusion processes, corresponding to the solutions of the following class of linear stochastic differential equations:

dX(t) = µ(t)X(t)dt + σ(t)dB(t), t ∈ [0, 1],

or equivalently, in integral form:

X(t) = X(0) + ∫_0^t µ(u)X(u) du + ∫_0^t σ(u) dB(u), t ∈ [0, 1]. (2.4)

Nevertheless, the methodology we develop can be extended in a straightforward manner to a wider class of diffusions, of the form:

dX(t) = µ(t)X(t)^α dt + σ(t)X(t)^β dB(t), t ∈ [0, 1],

where α ∈ {0, 1} and β ∈ {0, 1/2, 1}, as we describe in Section 4. We choose to focus on the case of (2.4) for the purpose of making the presentation of our results more readable. In the following, we will assume that µ(·), σ(·) are continuously differentiable on the unit interval [0, 1].
Let {X_i(t), t ∈ [0, 1]}_{1≤i≤n} be n iid diffusions satisfying the stochastic differential equations:

dX_i(t) = µ(t)X_i(t)dt + σ(t)dB_i(t), i = 1, ..., n, t ∈ [0, 1], (2.5)

where B_1, ..., B_n and X_1(0), ..., X_n(0) are totally independent sequences of Brownian motions and random initial points. We assume to have access to the n random curves through the observations

Y_i(T_ij) = X_i(T_ij) + U_ij, i = 1, ..., n, j = 1, ..., r(n), (2.6)

where:

i. {U_ij} forms an i.i.d. array of centered measurement errors with finite variance ν².

ii. {T_ij} is the triangular array of random design points (T_ik < T_ij for k < j, and i = 1, 2, ..., n), drawn independently from a strictly positive density on [0, 1].

iii. {r(n)} is the sequence of grid sizes, determining the denseness of the sampling scheme. The sequence of grid sizes satisfies r(n) ≥ 2 (otherwise observations cannot provide information about (co)variation) but is otherwise unconstrained and need not diverge with n.

iv. {X_i}, {T_ij} and {U_ij} are totally independent across all indices i and j.

Notice that requirements (ii) and (iii) are very lax, allowing a great variety of sampling schemes: from dense to sparse. See Figure 1 for a schematic illustration of the measurement scheme. For the sake of simplicity we will reduce the notations Y_i(T_ij) and X_i(T_ij) to Y_ij and X_ij, respectively.

Methodology and Theory

Motivation and Strategy

It is well known that the drift and diffusion functional coefficients determine the local behaviour of the process. Indeed, they can be naturally interpreted as the conditional mean and variance of the process. This is particularly easy to see in the case of time-invariant stochastic differential equations, i.e. (2.5) with time-homogeneous drift and diffusion:

µ(x) = lim_{h→0} (1/h) E[X_{t+h} − x | X_t = x], σ²(x) = lim_{h→0} (1/h) E[(X_{t+h} − x)² | X_t = x].
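The model (2.5) and sampling scheme (2.6) are easy to emulate numerically. The following sketch, which is our own illustration and not the authors' code, simulates n independent paths of dX = µ(t)X dt + σ(t) dB by the Euler-Maruyama method, then generates sparse, noise-corrupted observations Y_ij = X_i(T_ij) + U_ij with r design points per path. The specific choices of µ, σ, r and the noise level are illustrative assumptions.

```python
import numpy as np

def euler_maruyama_paths(mu, sigma, x0, n_paths, n_steps, rng):
    """Simulate n_paths solutions of dX = mu(t) X dt + sigma(t) dB on [0, 1] (model (2.5))."""
    dt = 1.0 / n_steps
    t = np.linspace(0.0, 1.0, n_steps + 1)
    X = np.empty((n_paths, n_steps + 1))
    X[:, 0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        X[:, k + 1] = X[:, k] + mu(t[k]) * X[:, k] * dt + sigma(t[k]) * dB
    return t, X

def sparse_noisy_observations(t, X, r, noise_sd, rng):
    """Scheme (2.6): r sorted uniform design points per path, plus additive iid noise."""
    n_paths = X.shape[0]
    T = np.sort(rng.uniform(0.0, 1.0, size=(n_paths, r)), axis=1)  # T_{i1} < ... < T_{ir}
    idx = np.clip(np.searchsorted(t, T), 0, len(t) - 1)            # nearest grid index
    U = rng.normal(0.0, noise_sd, size=(n_paths, r))               # measurement errors
    Y = np.take_along_axis(X, idx, axis=1) + U
    return T, Y

rng = np.random.default_rng(0)
t, X = euler_maruyama_paths(lambda s: 0.5, lambda s: 0.2, x0=1.0,
                            n_paths=200, n_steps=500, rng=rng)
T, Y = sparse_noisy_observations(t, X, r=3, noise_sd=0.1, rng=rng)
```

Note that σ(t) multiplies dB without an X factor, matching the β = 0 case the paper focuses on; the sparse regime here uses only r = 3 measurements per path.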
For this reason, the standard approach to inference for (any class of) SDE stems from estimating local properties of the observed process. Clearly, this can only be done directly if one has access to continuous or dense observations. The local perspective shared by existing methods does not adapt to sparsely observed paths, especially when there is noise. Our approach is global rather than local, and unfolds over three steps. In Step (1), it accumulates information by sharing information across replications by suitably pooling observations. In Step (2), it connects the global features we can estimate from pooling to the local features we are targeting, by means of suitable PDE connecting the two. And, in Step (3), it plugs suitable estimates of global features into the PDE of Step (2) to obtain plug-in estimators of the drift and diffusion coefficients. In more detail:

Step 1: Pooling data and estimating the covariance

The idea of sharing information is a well developed and widely used approach in FDA, mostly for estimating the covariance structure from sparse and noise corrupted observations of iid data: see Yao et al. [51]. One first notices that, given observations Y_ij = X_i(T_ij) + U_ij as in (2.6), one can view the off-diagonal product observations as proxies for the second-moment function of the latent process. Indeed:

E[Y_ij Y_ik] = E[X(T_ij)X(T_ik)] + δ_{j,k} · ν².

Intuitively, one can thus recover the covariance by performing 2D scatterplot smoothing on the pooled product observations: see Figure 2. In particular, if one is interested in estimating not only the latent covariance surface, but also its (partial) derivatives, a standard approach is to apply local linear smoothing: see Fan and Gijbels [17]. Broadly speaking, the main idea of local linear smoothing is to fit local planes in two dimensions by means of weighted least squares. In particular, such an approach has the advantage of being fully nonparametric.
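The pooling in Step 1 can be sketched as follows: collect every within-path pair with j ≠ k into one 2D scatter of raw products, which is the input to the surface smoother. This is a schematic helper of our own (the function name is not the paper's), assuming T and Y arrays of shape (n, r) as in the measurement scheme; diagonal pairs are dropped because their expectation is inflated by the noise variance ν².

```python
import numpy as np

def pooled_products(T, Y):
    """Off-diagonal 'raw second moment' scatter {((T_ij, T_ik), Y_ij * Y_ik) : j != k}.

    Diagonal pairs (j == k) are excluded: E[Y_ij^2] = G(T_ij, T_ij) + nu^2 is
    biased by the measurement-error variance, while for j != k the product
    Y_ij * Y_ik is an unbiased proxy for the second moment G(T_ij, T_ik).
    """
    n, r = Y.shape
    S, U, P = [], [], []
    for i in range(n):
        for j in range(r):
            for k in range(r):
                if j != k:
                    S.append(T[i, j]); U.append(T[i, k]); P.append(Y[i, j] * Y[i, k])
    return np.array(S), np.array(U), np.array(P)

# tiny worked example with n = 2 paths and r = 3 measurements each
T = np.array([[0.1, 0.5, 0.9], [0.2, 0.4, 0.8]])
Y = np.array([[1.0, 2.0, 3.0], [1.5, 2.5, 3.5]])
S, U, P = pooled_products(T, Y)
```

Each path contributes r(r − 1) ordered pairs, so the pooled scatter has n·r(r − 1) points even when r is as small as 2.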
However, the original approach due to Yao et al. [51] requires the covariance surface to be smooth to achieve consistent estimation, and cannot be straightforwardly applied in the setting of SDEs: indeed, in such case, the covariance possesses a singularity on the diagonal. A recent modification by Mohammadi and Panaretos [37], however, considers smoothing the covariance as a function defined on the closed lower (upper) triangle:

∆ := {(s, t) : 0 ≤ s ≤ t ≤ 1}. (3.1)

We will show that covariances corresponding to SDEs of the form (2.4) are smooth on ∆, provided some regularity of the drift and diffusion functionals: see C(0). This, in turn, allows us to apply the methodology in Mohammadi and Panaretos [37] to consistently estimate the covariance function from iid, sparse and noise corrupted samples, with uniform convergence rates. In fact, we will show how to extend such estimation procedure to recover the partial derivatives of the covariance function as well: see Subsections 3.3 and 3.4.

Step 2: Linking the global (FDA) and local (SDE) parametrisations

The method described in Step 1 allows us to consistently recover, with uniform convergence rates, the mean and covariance functions of the latent process, together with their derivatives. It is however not immediately clear that such global information can be translated into information on the local features (drift and diffusion). For example, the Fokker-Planck equation (Øksendal [39]) shows that drift and diffusion fully characterize the probability law of its solution. The same is not necessarily true for the mean and covariance, except in Gaussian diffusions, and we have made no such restriction. Still, we prove two systems of PDEs explicitly relating the drift and diffusion coefficients to the mean and covariance functions (and their first derivatives), valid for any solution to (2.4): see Propositions 1 and 2 below.
Step 3: Plug-in estimation

We finally plug our mean/covariance estimators (and their derivatives) into (either) PDE system, allowing us to translate the global/pooled information into local information on the drift and diffusion. In short, we obtain plug-in estimators for the drift and diffusion via the PDE systems: see Equations (3.14) and (3.15) below.

Regularity and Two Systems of PDE

As outlined in the previous subsection, our methodology for estimating the drift µ(·) and diffusion σ(·) functions builds on two different systems of deterministic ODE/PDE, derived from Itô's formula, that relate the SDE parameters with the mean function t → m(t) := E X(t), the second moment function (s, t) → G(s, t) := E[X(s)X(t)], and their (partial) derivatives. We will require the following assumption:

C(0) The drift and diffusion functionals are d-times continuously differentiable on the unit interval, i.e. µ(·), σ(·) ∈ C^d([0, 1], R) for some d ≥ 1.

To derive the first system, we consider the second moment function on the diagonal, D(t) := G(t, t), t ∈ [0, 1], and derive a system of ODEs as a result of applying Itô's formula to the functions φ(x, t) = x and φ(x, t) = x².

Proposition 1. Assume C(0). Then m(·), D(·) ∈ C^{d+1}([0, 1], R). In particular:

∂m(t) = m(t)µ(t), σ²(t) = ∂D(t) − 2µ(t)D(t), t ∈ [0, 1]. (3.2)

In spite of its appealing simplicity, the system (3.2) only involves marginal variances, i.e. second order structure restricted to the diagonal segment {G(t, t) : 0 ≤ t ≤ 1}. Moreover, this diagonal segment is at the boundary of the domain of G, and is thus potentially subject to boundary effects when smoothing. Therefore, to circumvent this issue, we seek to replace the second equation appearing in (3.2) by some deterministic PDE involving the entire second order structure {G(s, t) : 0 ≤ s ≤ t ≤ 1}. Such a relation cannot be obtained via direct application of Itô's formula, as it requires considering the process at different points in time s < t.
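The system (3.2) can be sanity-checked in a case with closed-form moments. For constant µ and σ and a deterministic start x0, m(t) = x0 e^{µt}, and D solves D' = 2µD + σ², so D(t) = (x0² + σ²/(2µ)) e^{2µt} − σ²/(2µ). Finite differences of (m, D) should then recover µ and σ² via (3.2). The script below is our own verification under these assumptions, not part of the paper.

```python
import numpy as np

mu, sig2, x0 = 0.7, 0.09, 1.3           # constant drift, squared diffusion, X(0) = x0
t = np.linspace(0.0, 1.0, 2001)

m = x0 * np.exp(mu * t)                                     # solves m' = mu * m
D = (x0**2 + sig2 / (2 * mu)) * np.exp(2 * mu * t) - sig2 / (2 * mu)  # D' = 2 mu D + sig2

dm = np.gradient(m, t)   # numerical derivative of the mean
dD = np.gradient(D, t)   # numerical derivative of the diagonal second moment

mu_rec = dm / m                      # first equation of (3.2): mu = (dm/dt) / m
sig2_rec = dD - 2 * mu_rec * D       # second equation of (3.2)
```

Away from the interval endpoints (where `np.gradient` falls back to one-sided differences), both recovered functions are constant and match the true µ and σ² to within the finite-difference error.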
Nevertheless, we establish such a result in Proposition 2, by an application of stopping times to Itô's formula. Indeed, it would appear that Proposition 2 is original in applying Itô's formula to functions with two-dimensional time points, i.e. φ(x, t, s) with s, t ∈ [0, 1], x ∈ R.

Proposition 2. Assume C(0). Then G(·, ·) ∈ C^{d+1}(∆, R). Moreover:

G(s, t) = G(0, 0) + 2 ∫_0^s G(u, u)µ(u) du + ∫_s^t G(s, u)µ(u) du + ∫_0^s σ²(u) du, 0 ≤ s ≤ t ≤ 1. (3.3)

In particular:

∂_s G(s, t) = µ(s)G(s, s) + ∫_s^t µ(u) ∂_s G(s, u) du + σ²(s), 0 ≤ s ≤ t ≤ 1, (3.4)

and the following system of PDEs holds:

∂m(s) = m(s)µ(s), σ²(s) = (1/(1 − s)) ∫_s^1 [ ∂_s G(s, t) − µ(s)G(s, s) − ∫_s^t µ(u) ∂_s G(s, u) du ] dt, s ∈ [0, 1]. (3.5)

We remark on a distinguishing feature of (3.5), namely that it relates an intrinsically local quantity, the pointwise diffusion, interpretable as the infinitesimal local variation of the process at time t, to the global behaviour of the process, by appropriately integrating the covariance kernel of the process over its full domain. Note that (3.5) is obtained as a consequence of (3.4) by an averaging argument; however, we could have considered an arbitrary choice of weighting kernel.

Estimators

We now wish to estimate m, G and their derivatives, by means of local polynomial smoothing (Fan and Gijbels [17]). The mean function can be easily estimated by pooling all the observations. As for the covariance, we pool the empirical second moments from all individuals on the closed lower (upper) triangle:

∆ := {(s, t) : 0 ≤ s ≤ t ≤ 1}. (3.6)

This method is due to Mohammadi and Panaretos [37], and modifies the classical approach of Yao et al. [51] to allow for processes with paths of low regularity, with a covariance function that is non-differentiable on the diagonal {(t, t) : t ∈ [0, 1]}, but rather is of the form:

E(X(t)X(s)) = g(min(s, t), max(s, t)), (3.7)

for some smooth function g that is at least of class C^{d+1} for some d ≥ 1.
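The integral identity (3.3) can also be checked numerically in the constant-coefficient case, where the second moment has the closed form G(s, t) = x0² e^{µ(s+t)} + (σ²/(2µ))(e^{µ(s+t)} − e^{µ(t−s)}) for s ≤ t. The script below (our own illustration, under these assumptions) evaluates the right-hand side of (3.3) by trapezoidal quadrature and compares it with G(s, t) directly.

```python
import numpy as np

mu, sig2, x0 = 0.7, 0.09, 1.3
A = sig2 / (2 * mu)

def G(s, t):
    """Closed-form E[X(s)X(t)] for s <= t, constant mu, sigma, and X(0) = x0."""
    return x0**2 * np.exp(mu * (s + t)) + A * (np.exp(mu * (s + t)) - np.exp(mu * (t - s)))

def trapz(y, x):
    """Trapezoidal quadrature (avoids NumPy version differences for np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s, t = 0.4, 0.9
u1 = np.linspace(0.0, s, 4001)   # grid for integrals over [0, s]
u2 = np.linspace(s, t, 4001)     # grid for the integral over [s, t]

# right-hand side of (3.3): G(0,0) + 2 int_0^s G(u,u) mu du + int_s^t G(s,u) mu du + int_0^s sig^2 du
rhs = (G(0.0, 0.0)
       + 2 * trapz(mu * G(u1, u1), u1)
       + trapz(mu * G(s, u2), u2)
       + sig2 * s)
lhs = G(s, t)
```

Up to quadrature error, both sides agree, which is a quick consistency check of the identity for this special case (it does not, of course, prove the general proposition).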
Fortunately, under assumption C(0), this is satisfied for the SDE (2.4) under consideration: see Propositions 1 and 2. Consequently, one can employ local polynomial smoothing to yield consistent estimators of the partial derivatives, as required.

In detail, given any 0 ≤ t ≤ 1, the pointwise local polynomial smoothing estimators of order d for m(t) and its derivative ∂m(t) are obtained as

m̂(t) = (1, 0, ..., 0)^T (β̂_p)_{0≤p≤d}, h_m ∂m̂(t) = (0, 1, 0, ..., 0)^T (β̂_p)_{0≤p≤d}, (3.8)

where the vector (β̂_p)_{0≤p≤d} is the solution of the following optimization problem:

argmin_{(β_p)_{0≤p≤d}} Σ_{i=1}^n Σ_{j=1}^{r(n)} [ Y_ij − Σ_{p=0}^d β_p (T_ij − t)^p ]² K_{h_m}(T_ij − t), (3.9)

with h_m a one-dimensional bandwidth parameter depending on n and K_{h_m}(·) = h_m^{−1} K_m(h_m^{−1} ·), for some univariate integrable kernel K_m.

Similarly, to estimate G(s, t) restricted to the lower triangle ∆ = {(s, t) : 0 ≤ s ≤ t ≤ 1}, we apply a local surface regression method on the following 2D scatter plot:

{((T_ik, T_ij), Y_ij Y_ik) : i = 1, ..., n, k < j}. (3.10)

Note that we excluded diagonal points (squared observations with indices j = k) from the regression, as the measurement error in the observations causes such diagonal observations to be biased:

E[Y_ij Y_ik] = G(T_ij, T_ik) + δ_{j,k} · ν². (3.11)

In detail, for s ≤ t, the local surface smoothing estimators of order d for the function G(s, t) and its partial derivative ∂_s G(s, t) are proposed to be

Ĝ(s, t) = (1, 0, ..., 0)^T (γ̂_{p,q})_{0≤p+q≤d}, h_G ∂_sĜ(s, t) = (0, 1, 0, ..., 0)^T (γ̂_{p,q})_{0≤p+q≤d}, (3.12)
where the vector (γ̂_{0,0}, γ̂_{1,0}, γ̂_{0,1}, ..., γ̂_{d,0}, γ̂_{0,d})^T = (γ̂_{p,q})^T_{0≤p+q≤d} is the minimizer of the following problem:

argmin_{(γ_{p,q})} Σ_{i≤n} Σ_{k<j} [ Y_ij Y_ik − Σ_{0≤p+q≤d} γ_{p,q} (T_ij − s)^p (T_ik − t)^q ]² K_{H_G}(T_ij − s, T_ik − t) (3.13)
= argmin_{(γ_{p,q})} Σ_{i≤n} Σ_{k<j} [ Y_ik Y_ij − Σ_{0≤p+q≤d} γ_{p,q} h_G^{p+q} ((T_ij − s)/h_G)^p ((T_ik − t)/h_G)^q ]² K_{H_G}((T_ij − s), (T_ik − t)).

Here H_G^{1/2} is a 2 × 2 symmetric positive definite bandwidth matrix and K_{H_G}(·) = |H_G|^{−1/2} K_G(H_G^{−1/2} ·) for some bi-variate integrable kernel K_G. A two-dimensional Taylor expansion at (s, t) motivates estimating G and its partial derivatives as the vector minimizing (3.13).

Having defined our estimators for m, G and their (partial) derivatives, we can combine them with either Proposition 1 or Proposition 2 to obtain the following two different pairs of simultaneous estimators (µ̂, σ̂²_D) and (µ̂, σ̂²_T) for the drift and diffusion functions (the subscripts D and T allude to the use of the diagonal or triangular domain, respectively):

µ̂(t) = (m̂(t))^{−1} ∂m̂(t) I(m̂(t) ≠ 0), σ̂²_D(t) = ∂D̂(t) − 2µ̂(t)D̂(t), t ∈ [0, 1], (3.14)

µ̂(s) = (m̂(s))^{−1} ∂m̂(s) I(m̂(s) ≠ 0), σ̂²_T(s) = (1/(1 − s)) ∫_s^1 [ ∂_sĜ(s, t) − µ̂(s)Ĝ(s, s) − ∫_s^t µ̂(u) ∂_sĜ(s, u) du ] dt, s ∈ [0, 1], (3.15)

for t ∈ A, where A is […].

Asymptotic Theory

In the present subsection we establish the consistency and convergence rates of the nonparametric estimators defined through (3.14) and (3.15). We will consider the limit as the number of replications diverges, n → ∞. Our rates are non-asymptotic in terms of r(n), reflecting how different sampling frequencies might affect the speed of convergence. We will assume H_G to be a multiple of the identity matrix, H_G = diag(h_G², h_G²). We set K_{H_G}(·, ·) = W(·)W(·) and K_m(·) = W(·) for some appropriately chosen symmetric univariate probability density kernel W(·), with finite dth moment. The kernel function W(·) depends in general on n. For the sake of simplicity we choose W(·) to be in one of the forms {W̄_n} or {W_n} below:

W̄_n(u) = exp(−u/a_n), W_n(u) = W_1(u) if |u| < 1, W̄_n(u) if |u| ≥ 1,

where {a_n}_n is a sequence of positive numbers tending to zero sufficiently fast that W̄_n(1 + ε) = O(h_G^{d+3}). See Subsection 5.2 of Mohammadi and Panaretos [37] for a discussion of the choice of kernel function W(·).
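With d = 1, the optimization problems (3.9) and (3.13) reduce to weighted least squares for a local line and a local plane, and can be sketched in a few lines. The code below is a schematic implementation of ours, not the authors': the Gaussian kernel and function names are illustrative, and the read-offs m̂(t) = β̂_0, ∂m̂(t) ≈ β̂_1 and Ĝ(s, t) = γ̂_00, ∂_sĜ(s, t) ≈ γ̂_10 mirror (3.8) and (3.12).

```python
import numpy as np

def local_linear_1d(t0, T, Y, h):
    """Local linear fit at t0 (problem (3.9) with d = 1, Gaussian kernel).

    Minimizes sum_l w_l (Y_l - b0 - b1 (T_l - t0))^2; returns (b0_hat, b1_hat),
    i.e. the estimates of (m(t0), dm(t0)).
    """
    d = T - t0
    w = np.exp(-0.5 * (d / h) ** 2)              # kernel weights
    X = np.column_stack([np.ones_like(d), d])    # local design [1, T_l - t0]
    WX = X * w[:, None]
    b = np.linalg.solve(X.T @ WX, WX.T @ Y)      # weighted normal equations
    return b[0], b[1]

def local_plane_2d(s0, t0, S, U, P, h):
    """Local plane fit at (s0, t0) to the pooled products (problem (3.13), d = 1).

    Fits g00 + g10 (S_l - s0) + g01 (U_l - t0); returns (g00_hat, g10_hat),
    i.e. the estimates of (G(s0, t0), d_s G(s0, t0)).
    """
    ds, du = S - s0, U - t0
    w = np.exp(-0.5 * ((ds / h) ** 2 + (du / h) ** 2))  # product kernel weights
    X = np.column_stack([np.ones_like(ds), ds, du])
    WX = X * w[:, None]
    g = np.linalg.solve(X.T @ WX, WX.T @ P)
    return g[0], g[1]

# local linear/plane fits reproduce affine data exactly, a handy unit test
rng = np.random.default_rng(1)
T = rng.uniform(0, 1, 400)
m_hat, dm_hat = local_linear_1d(0.5, T, 2.0 + 3.0 * T, h=0.1)     # m(t) = 2 + 3t

S = rng.uniform(0, 1, 500)
U = S + rng.uniform(0, 1, 500) * (1 - S)     # scatter inside the triangle s <= t
G_hat, dGs_hat = local_plane_2d(0.3, 0.6, S, U, 1.0 + 2.0 * S + 3.0 * U, h=0.2)
```

Because weighted least squares interpolates any function in its local basis, both fits are exact on affine inputs, which makes the read-off conventions easy to verify before applying the smoothers to noisy pooled data.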
We additionally make the following assumptions: C(1) There exists some positive number M for which we have 0 < P (T ij ∈ [a, b]) ≤ M (b − a), for all i, j and 0 ≤ a < b ≤ 1. C(2) E|U ij | ρ < ∞ and E|X(0)| ρ < ∞. The first condition concerns the sampling design, and asks that the domain is sampled relatively evenly. The second condition bounds the ρ-moment (for a ρ to be specified as needed later) of the noise contaminant, as well as of the starting point (initial distribution) of the diffusion. The latter clearly holds whenever the initial point X(0) follows a degenerate distribution i.e. X(0) = x a.s. for some x in R. Remarkably, these two conditions alone are sufficient to prove consistency; any other regularity property needed to ensure convergence of the local polynomial regression method can be shown to be satisfied in the present setting as a consequence to the stochastic behaviour constrained by (2.4). Indeed, in light of Proposition 2 we conclude smoothness of function G in accordance with condition C(2) of Mohammadi and Panaretos [37]. Moreover, Lemma 1 below justifies condition C(1) of Mohammadi and Panaretos [37] We are now ready to present the limit behaviour of the estimates proposed above. Theorem 2. Assume conditions C(0), C(1) and C(2) hold for ρ > 2, and let m(·) and ∂m(·) be the estimators defined in (3.8). Then with probability 1: sup 0≤t≤1 | m(t) − m(t)| = O (R(n)) (3.16) sup 0≤t≤1 | ∂m(t) − ∂m(t)| = h −1 m O (R(n)) , (3.17) where R(n) = h −2 m logn n h 2 m + hm r 1/2 + h d+1 m . If additionally C(2) holds for ρ > 4, then G(·) and ∂ s G(·) defined in (3.12) satisfy with probability 1: sup 0≤s≤t≤1 | G(s, t) − G(s, t)| = O (Q(n)) (3.18) sup 0≤s≤t≤1 | ∂ s G(s, t) − ∂ s G(s, t)| = h −1 G O (Q(n)) , (3.19) where Q(n) = h −4 G logn n h 4 G + h 3 G r + h 2 G r 2 1/2 + h d+1 G . Theorem 3. Assume conditions C(0), C(1) and C(2) hold for ρ > 2 and m(0) = 0. Let µ(·) be the estimator defined in (3.14) (or (3.15)). 
Then, with probability 1: sup 0≤t≤1 | µ(t) − µ(t)| = h −1 m O (R(n)) , (3.20) where R(n) = h −2 m logn n h 2 m + hm r 1/2 + h d+1 m . If additionally C(2) holds for ρ > 4, then σ 2 (·) defined in either (3.14) or (3.15) satisfies with probability 1: sup 0≤t≤1 | σ 2 (t) − σ 2 (t)| = h −1 m O (R(n)) + h −1 G O (Q(n)) , (3.21) where Q(n) = h −4 G logn n h 4 G + h 3 G r + h 2 G r 2 1/2 + h d+1 G . Corollary 1. If we assume a dense sampling regime, i.e. r(n) ≥ M n for some increasing sequence satisfying M n ↑ ∞ and M −1 n h m , h G logn n 1/(2(1+d)) , then with probability 1: sup 0≤t≤1 | m(t) − m(t)| = O logn n 1/2 , sup 0≤t≤1 | ∂m(t) − ∂m(t)| = O logn n d/(2(1+d)) , sup 0≤s≤t≤1 | G(s, t) − G(s, t)| = O logn n 1/2 , sup 0≤s≤t≤1 | ∂ s G(s, t) − ∂ s G(s, t)| = O logn n d/(2(1+d)) , sup 0≤t≤1 | µ(t) − µ(t)| = O logn n d/(2(1+d)) , sup 0≤t≤1 | σ 2 (t) − σ 2 (t)| = O logn n d/(2(1+d)) . Remark 1. When the drift and diffusion functionals are (only) continuously differentiable ( i.e. d = 1 in condition C(0)), in the case of dense observations as in Corollary 1, we obtain: sup 0≤t≤1 | µ(t) − µ(t)| = O logn n 1/4 , sup 0≤t≤1 | σ 2 (t) − σ 2 (t)| = O logn n 1/4 . (3.22) On the other hand, when the drift and diffusion functions are infinitely many times continuously differentiable( i.e. d = ∞ in condition C(0)), in the case of dense observations as in Corollary 1, we obtain classical nonparametric rates. That is, for arbitrarily small positive ε we can perform d (for any d ≥ d(ε) = 1 2 ) local polynomial smoothing, which entails: sup 0≤t≤1 | µ(t) − µ(t)| = O logn n 1/2−ε , sup 0≤t≤1 | σ 2 (t) − σ 2 (t)| = O logn n 1/2− . (3.23) The proofs of the results presented in this subsection are established in the Appendix. Extensions The methodology developed for the specific case of time-inhomogeneous Ornstein-Uhlenbeck process (2.4) can be extended to the following wider class of models: dX t = µ(t)X α t dt + σ(t)X β t dB t with α ∈ {0, 1} and β ∈ {0, 1/2, 1}. 
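As a quick sanity check for Case I below, the first-moment identity ∂m(t) = µ(t)m(t) in (4.2) can be verified by Monte Carlo. The sketch below assumes constant coefficients µ and σ (a simplification of the time-varying setting) and draws from the exact lognormal solution of geometric Brownian motion; all names are illustrative.

```python
import numpy as np

# For constant mu and sigma, the ODE dm/dt = mu * m gives m(t) = m(0) * exp(mu * t).
# We simulate GBM at time t via its exact solution
#     X(t) = X(0) * exp((mu - sigma^2 / 2) * t + sigma * B(t)),
# and compare the empirical mean with m(0) * exp(mu * t).
rng = np.random.default_rng(1)
mu, sigma, x0, t = 0.5, 0.2, 1.0, 1.0
n_paths = 200_000
Z = rng.standard_normal(n_paths)
X_t = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)
m_hat = X_t.mean()                 # Monte Carlo estimate of m(t)
m_true = x0 * np.exp(mu * t)       # solution of dm/dt = mu * m
```

The agreement of `m_hat` and `m_true` is the scalar analogue of the moment equations that drive the plug-in estimators in each of the cases treated in this section.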
Case I : Time-inhomogeneous Geometric Brownian Motion (α = 1 and β = 1): We first provide counterparts of the results above to the case α = 1 and β = 1 i.e. dX(t) = µ(t)X(t)dt + σ(t)X(t)dB(t), t ∈ [0, 1], (4.1) for µ(·), σ(·) ∈ C d ([0, 1]). Proposition 3. Let the stochastic process {X(t)} t≥0 satisfy (4.1) and assume C(0) holds. Then, m(·), D(·) ∈ C d+1 ([0, 1], R). In particular: ∂m(t) = m(t)µ(t), ∂D(t) = 2µ(t) + σ 2 (t) D(t), t ∈ [0, 1],(4. 2) Proposition 4. Let the stochastic process {X(t)} t≥0 satisfy (4.1) and assume C(0) holds. Then, G(·, ·) ∈ C d+1 ( , R). Moreover: G(s, t) = G (0, 0) + 2 s 0 D(u)µ (u) du + t s G (s, u) µ(u)du + s 0 σ 2 (u) D(u)du (4.3) In particular: By Lemma 2 and Remark 2, we can obtain analogue results for the case of (4.1) as obtained in Theorems 2 and 3. Indeed, local polynomial smoothing entails the same rates of convergence for the mean (and its derivative) and for the second moment function (and its partial derivatives) as stated in Theorems 2, provided C(0), C(1) and C(2) hold with ρ > 2 and 4, respectively. Moreover, defining µ, σ 2 D and σ 2 T by plug-in estimation in terms of (4.2) and (4.5), we can show that the same rates of convergence hold as stated in Theorem 3, provided m(0) = 0, C(0), C(1) and C(2) hold with ρ > 2 and 4 for the drift and diffusion estimators, respectively. Case II : Time-inhomogeneous Cox-Ingersoll-Ross model (α = 1 and β = 1/2): Next we focus on the case α = 1 and β = 1/2, i.e.: dX(t) = µ(t)X(t)dt + σ(t)X 1/2 (t)dB(t), t ∈ [0, 1]. (4.6) Proposition 5. Let the stochastic process {X(t)} t≥0 satisfies (4.6) and assume C(0) holds. Then, m(·), D(·) ∈ C d+1 ([0, 1], R). In particular: ∂m(t) = m(t)µ(t), ∂D(t) = 2µ(t)D(t) + σ 2 (t)m(t), t ∈ [0, 1]. (4.7) Proposition 6. Let the stochastic process {X(t)} t≥0 satisfies (4.6) and assume C(0) holds. Then, G(·, ·) ∈ C d+1 ( , R). Moreover: G(s, t) = G (0, 0) + 2 s 0 D(u)µ (u) du + t s G (s, u) µ(u)du + s 0 σ 2 (u) m(u)du,(4. 
8) In particular: 9) and the following system of PDEs holds: ∂ s G(s, t) = µ(s)G(s, s) + t s µ (u) ∂ s G(s, u)du + σ 2 (s)m(s),(4.∂m(t) = m(t)µ(t), ∂ s G(s, t) = µ(s)D(s) + t s µ (u) ∂ s G(s, u)du + σ 2 (s)m(s), (s, t) ∈ . (4.10) To deduce analogue results for the case of (4.6) as obtained for the cases (2.4) and (4.1), we require the following stronger assumption C(3): C(3) E|U ij | ρ < ∞ and sup 0≤t≤1 E|X(t)| ρ < ∞. See Lemma 3. Indeed, local polynomial smoothing entails the same rates of convergence for the mean (and its derivative) and for the second moment function (and its partial derivatives) as stated in Theorems 2, provided C(0), C(1) and C(3) hold with ρ > 2 and 4, respectively. Moreover, defining µ and σ 2 D (or σ 2 T ) by plug-in estimation in terms of (4.7) and (4.10), we can show that the same rates of convergence hold as stated in Theorem 3, provided m(0) = 0 (see (A.51)), C(0), C(1) and C(3) hold with ρ > 2 and 4 for the drift and diffusion estimators, respectively. The remaining cases corresponding to α = 0 and β ∈ {0, 1/2, 1} can be handle altogether: dX(t) =µ(t)dt + σ(t)X β (t)dB(t), 0 ≤ t ≤ 1, β ∈ {0, 1/2, 1}. (4.11) Proposition 7. Let the stochastic process {X(t)} t≥0 satisfy (4.11) and assume C(0) holds. Then, m(·), D(·) ∈ C d+1 ([0, 1], R). In particular: ∂m(t) = µ(t), ∂D(t) = 2[m(t) + m(0)]∂m(t) + σ 2 (t)E X 2β (t) , t ∈ [0, 1], β ∈ {0, 1/2, 1}. (4.12) Proposition 8. Let the stochastic process {X(t)} t≥0 satisfy (4.11) and assume C(0) holds. Then, G(·, ·) ∈ C d+1 ( , R). Moreover: G(s, t) = G (0, 0) + 2 s 0 m (u) µ (u) du + m (s) t s µ(u)du + s 0 σ 2 (u) E(X 2β (u))du (4.13) In particular: ∂ s G(s, t) = m(s)µ(s) + ∂m(s) t s µ(u)du + σ 2 (s)E(X 2β (s)) (4.14) and the following system of PDEs holds: ∂m(t) = µ(t), ∂ s G(s, t) = m(s)µ(s) + ∂m(s) t s µ(u)du + σ 2 (s)E(X 2β (s)) (s, t) ∈ . 
(4.15) Notice that the expression E X 2β (t) appearing in (4.12) and (4.15) equals to 1, m(t) and D(t) corresponding to the choices β = 0, β = 1/2 and β = 1, respectively. Again, we get the same rates of convergence for the mean (and its derivative) and for the second moment function (and its partial derivatives) as stated in Theorems 2, provided C(0), C(1) and C(3) hold with ρ > 2 and 4, respectively. Moreover, defining µ and σ 2 D (or σ 2 T ) by plug-in estimation in terms of (4.12) and (4.15), we can show that the same rates of convergence hold as stated in Theorem 3, provided C(0), C(1) and C(3) hold with ρ > 2 and 4 for the drift and diffusion estimators, respectively. Note that in the case β = 1/2 the plug in estimates σ 2 D and σ 2 T are defined only for t ∈ A, with A is defined above as above: inf t∈A |m(t)| > 0. Moreover, we remark that such set A admits positive Lebesgue measure unless in the trivial case of constant null drift µ(t) = 0 for all t. Simulation Studies In this section we illustrate how our methodology can be used to estimate the drift and diffusion functions from multiple i.i.d. diffusions observed sparsely, on an random grid, with stochastic noise corruption. We simulated the n iid trajectories of the diffusions {X i (t)} i=1,...,n by the Euler-Maruyama approximation scheme (see e.g. Lord et al. [34]) with step-size dt = 10 −3 . For each curve X i , we selected r random locations {T ij } j=1,...,r (uniformly) in the unit interval and ordered them increasingly. Finally, we considered the value of each trajectory X i at the sampled times T ij , with a random additive noise error. That is, we consider: Y ij = X i (T ij ) + U ij , i = 1, Example 1: Brownian-Bridge The first case we consider is that of a Brownian-Bridge started at 2:    dX(t) = − 1 1 − t X(t)dt + dB(t), t ∈ [0, 1]. X(0) = 2. Note that the process is defined in 1 by its left-hand limit (well known to be a.s. 0). 
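The sampling scheme just described (Euler-Maruyama paths of the diffusion, each observed at r ordered uniform times with additive Gaussian noise) can be sketched as follows. The function name `simulate_sparse` and the particular sinusoidal drift / negative-exponential diffusion are illustrative choices, not the paper's code.

```python
import numpy as np

def simulate_sparse(mu, sigma, x0, n, r, nu, dt=1e-3, rng=None):
    """Simulate n iid paths of dX = mu(t) X dt + sigma(t) dB on [0, 1] by
    Euler-Maruyama, then observe each path at r ordered uniform random times
    with iid N(0, nu^2) measurement error: Y_ij = X_i(T_ij) + U_ij."""
    rng = rng or np.random.default_rng(0)
    n_steps = int(1 / dt)
    grid = np.linspace(0, 1, n_steps + 1)
    X = np.full(n, float(x0))
    paths = np.empty((n, n_steps + 1))
    paths[:, 0] = X
    for k in range(n_steps):                         # Euler-Maruyama recursion
        t = grid[k]
        dB = rng.standard_normal(n) * np.sqrt(dt)
        X = X + mu(t) * X * dt + sigma(t) * dB
        paths[:, k + 1] = X
    T = np.sort(rng.uniform(0, 1, size=(n, r)), axis=1)  # ordered sampling times
    idx = np.rint(T / dt).astype(int)                    # nearest grid index
    Y = paths[np.arange(n)[:, None], idx] + nu * rng.standard_normal((n, r))
    return T, Y

# Illustrative time-inhomogeneous example with sinusoidal drift coefficient.
T, Y = simulate_sparse(mu=lambda t: np.sin(2 * np.pi * t),
                       sigma=lambda t: 0.1 * np.exp(-t),
                       x0=1.0, n=50, r=5, nu=0.05)
```

The output pair (T, Y) has one row per curve, mirroring the design matrix fed to the pooled smoothers of Section 3.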
The fact that the drift term is not well-defined at t = 1 implies that (5.1) does not satisfy our assumptions. However, a quick look at the closed form expressions for the first and second moment of the Brownian Bridge will show that our methodology is successful in dealing with such cases as well. Indeed, the key feature needed for local linear (surface) regression estimates to satisfy the convergence rates in Theorem 2 is the smoothness of m and G. Although the average RISE for the estimators is presented on a common scale, one should refrain from comparing their performance. Indeed, both the nature of the functional object (a conditional average vs. a conditional variance) and the estimation methodology (diffusion estimation requires knowledge of the covariance structure) are intrinsically different. We also investigated the effect of noise contamination on our estimation procedure, and, in particular, we conducted a comparison between estimation assuming observations with no noise error (including diagonal observations in the local surface regression for G) and assuming some unknown random measurement error (excluding diagonal observations): the plots in Figure 7 show the average RISE for fixed r = 5 as n grows to infinity, for 3 different values of the standard deviation of the error, ν ∈ {0, 0.05, 0.5}. Considering that the estimator σ 2 T is linked to the entire covariance surface, whereas σ 2 D only to its diagonal values, we expected the estimator σ 2 T to achieve a lower average RISE, although the rates of convergence proven are the same. And indeed, our analysis shows evidence confirming our hypothesis: the estimator σ 2 T seems to suffer less from boundary issues, and appears to have lower variance, compared to σ 2 D .
Indeed, comparing the 95% confidence bands of the estimated trajectories for the diffusion in Figure 8 suggests that σ 2 T is qualitatively preferable to σ 2 D : it is less subject to the boundary effects plaguing local-linear based estimators, and appears to provide smoother estimates.

Example 2: time-inhomogeneous Ornstein-Uhlenbeck

The second case we consider is that of a time-inhomogeneous Ornstein-Uhlenbeck process, with a sinusoidal drift function and a negative-exponential diffusion function. This particular choice was made to highlight the ability of the described methodology to recover complex time-varying drifts and diffusions. A similar discussion to that of the previous case can be made here, and we therefore restrict ourselves to presenting the corresponding plots.

[Figure caption: average RISE of σ 2 D and σ 2 T over 100 independent experiments, for different values of (n, r). Observational error variance was set at ν = 0.05. Dark blue shades indicate a low average RISE, while dark red shades indicate a high average RISE.] The heat-maps show the convergence of the estimators in the sparse regime, as n increases for different values of r: that is, for every fixed value of r (row) we see how increasing n (column) leads to progressively more accurate estimates, and that convergence is achieved faster with greater values of r.

Proof of the Results of Section 3

In the sequel we use the fact that the stochastic process {X(t)} satisfying equation (2.4) admits the following closed form representation:

X(t) = X(0) exp(∫_0^t µ(v) dv) + ∫_0^t exp(∫_v^t µ(u) du) σ(v) dB(v) (A.1)
     = X(s) exp(∫_s^t µ(v) dv) + ∫_s^t exp(∫_v^t µ(u) du) σ(v) dB(v), 0 ≤ s ≤ t ≤ 1, (A.2)

see Øksendal [39], p. 98.

proof of Proposition 1.
In light of the closed form solution (A.1), the functions m(·) and D(·) admit the following forms: m(t) = m(0) · exp t 0 µ(v)dv , (A.3) and D(t) = D(0) · exp 2 t 0 µ(v)dv + t 0 exp t v µ(u)du σ(v) 2 dv, and hence the desired smoothness property m(·), D(·) ∈ C d+1 ([0, 1], R) is deduced via the assumption that µ(·), σ(·) ∈ C d ([0, 1]). The remaining part of Proposition 1 is a direct consequences of applying Itô's formula to the smooth functions φ(x, t) = x and φ(x, t) = x 2 , see for example Särkkä and Solin [46]. This entails the desired smoothness of function G(·, ·) on . For s ≥ 0, define the stopped process: Z(t) = X(t)I{t ≤ s} + X(s)I{s < t}, t ≥ 0. Consequently: X(t) Z(t) = X(0) X(0) + t 0 µ(u)X(u) µ(u)X(u)I{u ≤ s} du + t 0 σ(u) σ(u)I{u ≤ s} dB(u), (A.5) which gives a well-defined and unique (indistinguishable) Itô diffusion process. Applying Itô's integration by parts formula (see Øksendal [39]) to (A.5), we obtain the following representation for E (Z(t)X(t)): E (Z(t)X(t)) = E (X(0)X(0)) + E t 0 X(u)dZ(u) + E t 0 Z(u)dX(u) + E t 0 dX(u)dZ(u). In particular, by the definition of the stopped process {Z(t)}, we conclude: E (X(s)X(t)) = E (X(0)X(0)) + 2E s 0 X 2 (u)µ (u) du + E t s X(s)X(u)µ (u) du + E s 0 σ 2 (u) du. By the linear growth condition, the continuity of mean and mean squared functions, and the Fubini-Tonelli Theorem, we can interchange integral and expectation to obtain: E (X(s)X(t)) = E (X(0)X(0)) + 2 E|X(s)| ρ , we investigate the summands appearing in the right hand side of (A.1) separately. We first observe that s 0 E X 2 (u) µ (u) du + t s E (X(s)X(u)) µ(u)du + s 0 σ 2 (u) du,|X(0)| ρ · exp ρ t 0 µ(v)dv ≤ |X(0)| ρ · exp ρ t 0 |µ(v)| dv ≤ |X(0)| ρ · exp ρ 1 0 |µ(v)| dv . Hence sup 0≤t≤1 E |X(0)| ρ · exp ρ t 0 µ(v)dv is dominated by E |X(0)| ρ · exp ρ 1 0 |µ(v) |dv which in turn is finite as long as E |X(0)| ρ is finite. 
For the second term appearing in (A.1), note that: sup 0≤t≤1 E t 0 exp t v µ(u)du σ(v)dB(v) ρ = sup 0≤t≤1 E exp t 0 µ(u)du t 0 exp − v 0 µ(u)du σ(v)dB(v) ρ = sup 0≤t≤1 exp ρ t 0 µ(u)du E t 0 exp − v 0 µ(u)du σ(v)dB(v) ρ ≤ exp ρ 1 0 |µ(u)|du sup 0≤t≤1 E t 0 exp − v 0 µ(u)du σ(v)dB(v) ρ ≤ exp ρ 1 0 |µ(u)|du E sup 0≤t≤1 t 0 exp − v 0 µ(u)du σ(v)dB(v) ρ ≤ CE · 0 exp − v 0 µ(u)du σ(v)dB(v) ρ/2 1 (A.6) = CE 1 0 exp −2 v 0 µ(u)du σ 2 (v)dv ρ/2 (A.7) < ∞. Inequality (A.6) is a consequence of Burkholder-Davis-Gundy inequality, see Theorem 5.1 in Gall [20]. The integrand appearing in the stochastic integral t 0 exp − v 0 µ(u)du σ(v)dB(v) is deterministic, and hence adapted and measurable. By the continuity of the drift and diffusion functions µ(·) and σ(·), the integrand is also bounded. Hence, one may apply Proposition 3.2.17 of Karatzas and Shreve [30] to conclude well-definiteness of the quadratic variation process Equality (A.7) is a consequence of the same result. · 0 exp − v 0 µ(u)du σ(v)dB(v) . proof of Theorem 2. The proof of Theorem 2 mimics the lines of the proof of Theorem 1 in Mohammadi and Panaretos [37] which we reproduce here in its entirety for the sake of completeness. We rigorously justify the convergence rate for the function G and its first order partial derivative ∂G ((3.18) and (3.19)). Relations (3.16) and (3.17) can be drawn by an easy modification. First, for each subject i, 1 ≤ i ≤ n, and pair (s, t) ∈ we define d * = 1 2 (d + 1)(d + 2) vectors T i,(p,q) (s, t) corresponding to all possible pairs (p, q) such that 0 ≤ p + q ≤ d in the following form T i,(p,q) (s, t) = T ij − s h G p T ik − t h G q 1≤k<j≤r We then bind the d * vectors T i,(p,q) (s, t) column wise to obtain T i (s, t). And finally we bind the matrices T i (s, t) row wise to obtain the design matrix T(s, t). 
We also define the diagonal weight matrix W(s, t) in the form W(s, t) = diag K H G T (1,1) (s, t) , where T (1,1) (s, t) is the row of the design matrix T(s, t) corresponding to the pair (p, q) = (1, 1). It's now easy to conclude that the local least square problem (3.13) admits the following closed-form solution ( γ p,q ) T 0≤p+q≤d = h p+q G ∂ p+q s p t q G(s, t) T 0≤p+q≤d = T T (s,t) W (s,t) T (s,t) −1 T T (s,t) W (s,t) Y. (A.8) The equation (A.8) in turn can be reduced to           G(s, t) h G ∂ s G(s, t) h G ∂ t G(s, t) . . . h d G ∂ s d G(s, t) h d G ∂ t d G(s, t)           d * ×1 =           A 0,0 A 1,0 A 0,1 . . . A d,0 A 0,d A 1,0 A 2,0 A 1,1 . . . A d+1,0 A 1,d A 0,1 A 1,1 A 0,2 . . . A 0,d+1 A d,1 . . . . . . . . . . . . . . . . . . A d,0 A d+1,0 A d,1 . . . A 2d * −2,0 . . . A 0,d A 1,d A 0,d+1 . . . . . . A 0,2d * −2           −1 d * ×d *          S 0,0 S 1,0 S 0,1 . . . S d,0 S 0,d          d * ×1 . (A.9) This implies G(s, t) = 1 D(s, t) A 0,0 A 1,0 A 0,1 . . . A d,0 A 0,d          S 0,0 S 1,0 S 0,1 . . . S d,0 S 0,d          , (A.10) and h G ∂ s G(s, t) = 1 D(s, t) A 1,0 A 2,0 A 1,1 . . . A d+1,0 A 1,d          S 0,0 S 1,0 S 0,1 . . . S d,0 S 0,d          , (A.11) where A p,q are the arrays of the inverse matrix appearing in (A.9), A p,q = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G T ij − s h G p T ik − t h G q , 0 ≤ p + q ≤ d, S p,q = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ij − s h G T ij − s h G p T ik − t h G q Y ij Y ik , 0 ≤ p + q ≤ d, and D(s, t) = det                     A 0,0 A 1,0 A 0,1 . . . A d,0 A 0,d A 1,0 A 2,0 A 1,1 . . . A d+1,0 A 1,d A 0,1 A 1,1 A 0,2 . . . A 0,d+1 A d,A d,0 A d+1,0 A d,1 . . . A 2d * −2,0 . . . A 0,d A 1,d A 0,d+1 . . . . . . A 0,2d * −2                     . 
(A.12) 22 This yields G(s, t) = 0≤p+q≤d A p,q (s, t)S p,q (s, t) D(s, t) (A.13) = A 0,0 (s, t)S 0,0 (s, t) + A 1,0 (s, t)S 1,0 (s, t) + A 0,1 (s, t)S 0,1 (s, t) + · · · + A d,0 (s, t)S d,0 (s, t) + A 0,d (s, t)S 0,d (s, t) D(s, t) , (A.14) likewise, ∂ s G(s, t) = 0≤p+q≤d A 1+p,q (s, t)S p,q (s, t) D(s, t) (A.15) = A 1,0 (s, t)S 0,0 (s, t) + A 2,0 (s, t)S 1,0 (s, t) + A 1,1 (s, t)S 0,1 (s, t) + · · · + A d+1,0 (s, t)S d,0 (s, t) + A 1,d (s, t)S 0,d (s, t) D(s, t) , (A.16) We can drop the index n for sake of simplicity. According to (A.8) and observing that          G(s, t) h G ∂ s G(s, t) h G ∂ t G(s, t) . . . h d G ∂ s d G(s, t) h d G ∂ t d (s, t)          = T T (s,t) W (s,t) T (s,t) −1 T T (s,t) W (s,t) T (s,t)          G(s, t) h G ∂ s G(s, t) h G ∂ t G(s, t) . . . h d G ∂ s d G(s, t) h d G ∂ t d (s, t)          , we obtain sup 0≤s≤t≤1 G(s, t) − G(s, t) = sup 0≤s≤t≤1 0≤p+q≤d A p,q (s, t) S p,q (s, t) D(s, t) (A.17) likewise, sup 0≤s≤t≤1 h G ∂ s G(s, t) − h G ∂ s G(s, t) = sup 0≤s≤t≤1 0≤p+q≤d A p+1,q (s, t) S p,q (s, t) D(s, t) , (A.18) where S p,q (s, t) = S p,q (s, t) − 0≤p +q ≤d A p +p,q +q h p +q G ∂G s p t q (s, t) (A.19) =: S 0,0 (s, t) − Λ p,q (s, t), 0 ≤ p + q ≤ d. We now investigate the different terms appearing in (A.17) and (A.18) separately. First, we observe that A p,q , for 0 ≤ p + q ≤ d, and hence the determinant (A.12) are convergent. Moreover, the determinant (A.12) is bounded away from zero. See equation (3.13) of Fan and Gijbels [17] and discussion around. So, the rate of convergence of vector G(s, t) h G ∂ s G(s, t) h G ∂ t G(s, t) . . . h G ∂ s d G(s, t) h d G ∂ t d G(s, t) T is determined by the rest of the terms i.e. S p,q , 0 ≤ p + q ≤ d. 
We now study S 0,0 : S 0,0 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G Y ij Y ik − Λ 0,0 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G (X ij + U ij ) (X ik + U ik ) −Λ 0,0 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G U ij U ik + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G X ij U ik + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G U ij X ik + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G (X ij X ik − G (T ij , T ik )) + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G G (T ij , T ik ) − Λ 0,0 (s, t) =: A 1 + A 2 + A 3 + A 4 + A 5 . The expressions A 1 − A 4 , representing the variance term, can be written in the general form 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G Z ijk = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G Z ijk × [I ((T ij , T ik ) ∈ [s − h G , s + h G ] c × [t − h G , t + h G ] c ) (A.20) + I ((T ij , T ik ) ∈ [s − h G , s + h G ] c × [t − h G , t + h G ]) (A.21) + I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ] c ) (A.22) + I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ])] (A.23) =:Z 1,1 (s, t) +Z 1,0 (s, t) +Z 0,1 (s, t) +Z 0,0 (s, t), where each Z ijk has mean zero. The term A 5 corresponds to the bias term for which we use a Taylor series expansion to obtain the desired almost sure uniform bound. For (A.20), observe that Z 1,1 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G Z ijk (A.24) × I ((T ij , T ik ) ∈ [s − h G , s + h G ] c × [t − h G , t + h G ] c ) ≤ W 2 1 + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | = O h −2 G W 2 1 + , a.s uniformly on 0 ≤ t ≤ s ≤ 1 = O h d+1 G , a.s uniformly on 0 ≤ t ≤ s ≤ 1. 
(A.25) For (A.21) (similarly (A.22)) we havē Z 1,0 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G Z ijk (A.26) × I ((T ij , T ik ) ∈ [s − h G , s + h G ] c × [t − h G , t + h G ]) ≤ W † (u) du W 1 + 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | 24 = O h −2 G W † (u) du W 1 + , a.s uniformly on 0 ≤ t ≤ s ≤ 1 = O h d+1 G , a.s uniformly on 0 ≤ t ≤ s ≤ 1, (A. 27) where W † indicates Fourier transform of kernel function W . For (A.23) we havē Z 0,0 (s, t) = 1 n n i=1 2 r(r − 1) 1≤k<j≤r Z ijk e −ius−ivt+iuTij +ivT ik W † (h G u)W † (h G v)dudv (A.28) ×I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) . Regarding EZ 0,0 (s, t) = 0, for all 0 ≤ t ≤ s ≤ 1, we have |Z 0,0 (s, t) − EZ 0,0 (s, t)| ≤ W † (h G u) du 2 (A.29) × 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) = O h −2 G × 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) . (A.30) The remaining argument for (A.23) is similar to the proof of Lemma (8.2.5) of Hsing and Eubank [26]. In more detail, for the summation term appearing in (A.30) we have 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) = 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | I (|Z ijk | ≥ Q n ∪ |Z ijk | < Q n ) ×I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) =: B 1 + B 2 . According to condition C(2) and Lemma 1 and choosing Q n in such a way that logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) −1/2 Q 1−α n = O(1), we conclude B 1 = O logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) 1/2 , almost surely uniformly. In more detail B 1 = 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | 1−α+α I (|Z ijk | ≥ Q n ) I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) ≤ 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | α Q 1−α n = O(1)Q 1−α n = O logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) 1/2 , a.s uniformly on 0 < t < s < 1. 
For B 2 , first, define B 2 = 1 n n i=1 2 r(r − 1) 1≤k<j≤r |Z ijk | I (|Z ijk | ≤ Q n ) × I ((T ij , T ik ) ∈ [s − h G , s + h G ] × [t − h G , t + h G ]) 25 =: 1 n n i=1 2 r(r − 1) 1≤k<j≤r Z ijk (s, t). We then apply Bennett's concentration inequality (see [9]) to complete the proof of this part. We obtain a uniform upper bound for Var 2 r(r−1) 1≤k<j≤r Z ijk (s, t) , i = 1, 2, . . . , n, in the following way Var   2 r(r − 1) 1≤k<j≤r Z ijk (s, t)   = 2 r(r − 1) 2 1≤k1<j1≤r 1≤k2<j2≤r Cov (Z ij1k1 (s, t), Z ij2k2 (s, t)) ≤ c h 4 G + h 3 G r(n) + h 2 G r 2 (n) , (A.31) for some positive constant c which depends neither on (s, t) nor on i. Note that inequality (A.31) is a direct consequence of conditions C(1) and C(2) and Lemma 1. Finally, applying Bennett's inequality and choosing Q n = logn n −1/2 h 4 G + h 3 G r(n) + h 2 G r 2 (n) 1/2 we have, for any positive number η, P   1 n n i=1 2 r(r − 1) 1≤k<j≤r Z ijk (s, t) ≥ η logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) 1/2   ≤ exp    − η 2 n 2 logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) 2nc h 4 G + h 3 G r(n) + h 2 G r 2 (n) + 2 3 ηn h 4 G + h 3 G r(n) + h 2 G r 2 (n)    = exp − η 2 logn 2c + 2 3 η = n − η 2 2c+ 2 3 η , ∀0 ≤ t ≤ s ≤ 1. (A.32) Choosing η large enough we conclude summability of (A.32). This result combined with the Borel-Cantelli lemma completes the proof of this part. In other words we conclude there exists a subset Ω 0 ⊂ Ω of full probability measure such that for each ω ∈ Ω 0 there exists n 0 = n 0 (ω) with 1 n n i=1 2 r(r − 1) 1≤k<j≤r Z ijk (s, t) ≤ η logn n h 4 G + h 3 G r(n) + h 2 G r 2 (n) 1/2 , n ≥ n 0 . (A.33) We now turn to the bias term A 5 and investigate its convergence. 
Observe that A 5 = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G G (T ij , T ik ) − Λ 0,0 (s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G [G (T ij , T ik ) − G(s, t) − (T ij − s) ∂ s G(s, t)(s, t) − (T ik − t) ∂ t G(s, t)(s, t) − · · · − (T ik − t) d ∂ t d G(s, t)(s, t) − (T ij − s) d ∂ s d G(s, t)(s, t) = 1 nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G m+m =d+1 1 m!m ! ∂ s m t m G(s, t)(s, t) (T ij − s) m (T ik − t) m = h d+1 G nh 2 G n i=1 2 r(r − 1) 1≤k<j≤r W T ij − s h G W T ik − t h G m+m =d+1 1 m!m ! ∂ s m t m G(s, t)(s, t) T ij − s h G m T ik − t h G m 26 = O h d+1 G , a.s uniformly on 0 ≤ t ≤ s ≤ 1. (A.34) The last expression is a consequence of the smoothness condition C(0). Relations (A.33) and (A.34) complete the proof for term S 0,0 appearing in decomposition (A.17). The proofs for the other terms S p,q are similar. This together with decompositions (A.17) and (A.18) entail the rate Q(n) for G(s, t) and h G ∂ s G(s, t) as desired. The following Lemma investigates the rate preserving property of the reciprocal function and paves the way of proving the results in Theorem 3. . Proof. Let (Ω, F, P) be the underlying probability space and Ω 0 be a subset of Ω with full measure, P(Ω 0 ) = 1, on which we have (3.16). For ω in Ω 0 , choose n 0 and M 0 such that: sup 0≤t≤1 | m(t, ω) − m(t)| R(n) < M 0 , ∀n 0 < n,|| m(t, ω)| − |m(t)|| ≤ | m(t, ω) − m(t)| < 1 2 M * < 1 2 |m(t)|, n 1 < n, ∀t ∈ [0, 1], as a result of uniform convergence (3.16). Consequently, 0 < 1 2 M * ≤ 1 2 |m(t)| < | m(t, ω)| , ∀n 1 < n, ∀t ∈ [0, 1], and hence 0 < 1 2 M * 2 ≤ 1 2 |m(t)| 2 ≤ | m(t, ω)m(t)| , ∀n 1 < n, ∀t ∈ [0, 1]. (A.37) Combining (A.36), (A.37) and m(t, ω) = m * (t, ω) for ω ∈ Ω 0 and sufficiently large n (n 1 < n) we observe that: We are now ready to establish Theorem 3. 1 m * (t,ω) − 1 m(t) R(n) = 1 | m * (t, ω)m(t)| | m * (t, ω) − m(t)| R(n) ≤ 2 M * 2 M 0 , max(n 0 , n 1 ) < n, ∀t ∈ A. proof of Theorem 3. 
Regarding Lemma 4 and Theorem 2, for any positive number M 2 there exists n 2 such that with probability 1: m −1 * (t, ω) − m −1 (t) R(n) < M 2 < ∞, ∀n 2 < n, ∀t ∈ [0, 1], (A.38) and ∂m(t, ω) − ∂m(t) h −1 m R(n) < M 2 < ∞, ∀n 2 < n, ∀t ∈ [0, 1]. (A.39) We now have: Proof of the Results of Section 4 proof of Propositions 3 and 4. We first claim that the unique solution of equation (4.1) admits the following form | m −1 * (t, ω) ∂m(t, ω) − m −1 (t)∂m(t)| h −1 m R(n) ≤ | ∂m(t, ω)|| m −1 * (t, ω) − m −1 (t)| h −1 m R(n) + |m −1 (t)|| ∂m(t, ω) − ∂m(t)| h −1 m R(n) ≤ | ∂m(t, ω) − ∂m(t)| + |∂m(t)| | m −1 * (t, ω) − m −1 (t)| h −1 m R(n) + |m −1 (t)| | ∂m(t, ω) − ∂m(t)| h −1 m R(n) ,X(t) = X(0) exp t 0 µ(v) − 1 2 σ 2 (v) dv exp t 0 σ(v)dB(v) (A.40) = X(s) exp t s µ(v) − 1 2 σ 2 (v) dv + t s σ(v)dB(v) , 0 ≤ s ≤ t ≤ 1. Indeed, the (stochastic) differential of the process {X(t)} t≥0 appearing in (A.40) reads dX(t) = X(0) exp t 0 µ(v) − 1 2 σ 2 (v) dv µ(t)dt − 1 2 σ 2 (t)dt exp t 0 σ(v)dB(v) + X(0) exp t 0 µ(v) − 1 2 σ 2 (v) dv exp t 0 σ(v)dB(v) σ(t)dB(t) + 1 2 σ 2 (t)dt = X(0) exp t 0 µ(v) − 1 2 σ 2 (v) dv + t 0 σ(v)dB(v) (µ(t)dt + σ(t)dB(t)) = X(t) (µ(t)dt + σ(t)dB(t)) . G(s, t) = G(s, s) exp t s µ(v) − 1 2 σ 2 (v) dv E exp t s σ(v)dB(v) (A.44) = G(s, s) exp t s µ(v) − 1 2 σ 2 (v) dv exp t s 1 2 σ 2 (v)dv = D(s) exp t s (µ(v)) dv . This together with regularity of drift function µ(·) (as assumed) and second order function D(·) (as asserted in Proposition 3) lead to G(·, ·) ∈ C d+1 ( , R). The rest of the argument is similar to the proof of Proposition 2. For s ≥ 0, define the stopped process: Z(t) = X(t)I{t ≤ s} + X(s)I{s < t}, t ≥ 0. and the corresponding coupled process X(t) Z(t) = X(0) X(0) + t 0 µ(u)X(u) µ(u)X(u)I{u ≤ s} du + t 0 σ(u)X(u) σ(u)X(u)I{u ≤ s} dB(u), (A.45) which gives a well-defined and unique (indistinguishable) Itô diffusion process. 
Applying Itô's integration by parts formula (see Øksendal [39]) delivers the following representation for E (Z(t)X(t)) (= E (X(s)X(t))) with s < t: E (Z(t)X(t)) = E (X(0)X(0)) + E t 0 X(u)dZ(u) + E t 0 Z(u)dX(u) + E t 0 dX(u)dZ(u). By definition of the stopped process {Z(t)} this implies: E (X(s)X(t)) = E (X(0)X(0)) + 2E s 0 X 2 (u)µ (u) du + E t s X(s)X(u)µ (u) du + E s 0 σ 2 (u) X 2 (u)du. By the linear growth condition, the continuity of mean and mean squared functions, and the Fubini-Tonelli Theorem, we can interchange integral and expectation to obtain: E (X(s)X(t)) = E (X(0)X(0)) + 2 s 0 E X 2 (u) µ (u) du + t s E (X(s)X(u)) µ(u)du + s 0 σ 2 (u) E(X 2 (u))du, that is G (s, t) = G (0, 0) + 2 s 0 G (u, u) µ (u) du + t s G (s, u) µ(u)du +E (X ρ (t)) = exp ρ t 0 µ(v) − 1 2 σ 2 (v) dv E (X ρ (0)) E exp ρ t 0 σ(v)dB(v) , (A.46) where the product of the expectations on the right hand side comes from the fact that the initial distribution is independent of the filtration generated by the Brownian motion {B(t)} t≥0 . Furthermore, the random variable t 0 σ(v)dB(v) is identical to the random variable B( t 0 σ 2 (v)dv) i.e Brownian motion at time t 0 σ 2 (v)dv. This leads to E (X ρ (t)) = exp ρ t 0 µ(v) − 1 2 σ 2 (v) dv E (X ρ (0)) exp 1 2 ρ 2 t 0 σ 2 (v)dv , (A.47) and hence sup 0≤t≤1 E (X ρ (t)) ≤ exp ρ 1 0 |µ(v)| + 1 2 σ 2 (v) dv + 1 2 ρ 2 1 0 σ 2 (v)dv E (X ρ (0)) . (A.48) This completes the proof of the claim: finiteness of sup 0≤t≤1 E|X(t)| ρ is a result of finiteness of E|X(0)| ρ . proof of Proposition 5. Multiplying each side of the time-inhomogeneous Cox-Ingersoll-Ross model dX(t) = µ(t)X(t)dt + σ(t)X 1/2 (t)dB(t), t ∈ [0, 1], (A.49) with exp{Λ(t)} := exp{− t 0 µ(s)ds} we obtain exp{Λ(t)} (dX(t) − µ(t)X(t)dt) = exp{Λ(t)}σ(t)X 1/2 (t)dB(t), t ∈ [0, 1]. Consequently This confirms the desired smoothness condition D(·) ∈ C d+1 ([0, 1], R). 
Next, we apply Itô formula to the equation (A.49) with φ(x, t) = x 2 and then we take expectation to conclude which gives a well-defined and unique (indistinguishable) Itô diffusion process. Here again, we apply Itô's integration by parts formula (see Øksendal [39]) to obtain the following representation for E (Z(t)X(t)) (= E (X(s)X(t))) with s < t: D(t) = EX 2 (t) = EX 2 (0) + E t 0 2X 2 (s)µ(s)ds + E E (Z(t)X(t)) = E (X(0)X(0)) + E The definition of the stopped process {Z(t)} implies that: E (X(s)X(t)) = E (X(0)X(0)) + 2E By the linear growth condition, the continuity of mean and mean squared functions, and the Fubini-Tonelli Theorem, we can interchange integral and expectation to obtain: E (X(s)X(t)) = E (X(0)X(0)) + 2 s 0 E X 2 (u) µ (u) du + t s E (X(s)X(u)) µ(u)du + as desired in equation (4.9) and the second equality in (4.10). The first equation appearing in (4.10) follows from Proposition 5. proof of Lemma 3. We prove the assertion by conducting an induction argument on ρ. Step 1 below justifies the result for ρ = 2 i.e. sup and smoothness of the functions appearing on the right hand side delivers the result for ρ = 2. Step 2: Assume that sup This confirms regularity property m(·) ∈ C d+1 ([0, 1], R) as well as the first equation appearing in (4.12). For the rest of the claim observe that This confirms D(·) ∈ C d+1 ([0, 1], R), even in the case for β = 1, as in that case D appears on the right hand side after convolution with the smooth function σ 2 . In particular, this shows the second equation appearing in (4.12). proof of Proposition 8. Using (4.11) we first conclude G(·, ·) ∈ C d+1 ( , R). Indeed, for s < t G(s, t) =E(X(t)X(s)) = E(X 2 (s)) + E(X(s)) Smoothness oft the drift function and the conclusion of Proposition 7 imply the desired smoothness property G(·, ·). The rest of the argument is similar to the proof of Proposition 2. For s ≥ 0, define the stopped process: Z(t) = X(t)I{t ≤ s} + X(s)I{s < t}, t ≥ 0. 
and the corresponding coupled process

$$\begin{pmatrix} X(t) \\ Z(t) \end{pmatrix} = \begin{pmatrix} X(0) \\ X(0) \end{pmatrix} + \int_0^t \begin{pmatrix} \mu(u) \\ \mu(u)I\{u \le s\} \end{pmatrix} du + \int_0^t \begin{pmatrix} \sigma(u)X^\beta(u) \\ \sigma(u)X^\beta(u)I\{u \le s\} \end{pmatrix} dB(u). \quad (A.62)$$

This defines a well-defined and unique (indistinguishable) Itô diffusion process. Itô's integration by parts formula (see Øksendal [39]) for $E(Z(t)X(t))$ ($= E(X(s)X(t))$) with $s < t$ gives

$$E(Z(t)X(t)) = E(X(0)X(0)) + E\int_0^t X(u)\,dZ(u) + E\int_0^t Z(u)\,dX(u) + E\int_0^t dX(u)\,dZ(u).$$

By the linear growth condition, the continuity of the mean and mean-squared functions, and the Fubini-Tonelli Theorem, we can interchange integral and expectation to obtain

$$E(X(s)X(t)) = E(X(0)X(0)) + 2\int_0^s E(X(u))\,\mu(u)\,du + E(X(s))\int_s^t \mu(u)\,du + \int_0^s \sigma^2(u)\,E(X^{2\beta}(u))\,du,$$

showing (4.14) and the second equality in (4.15). The first equation appearing in (4.15) follows from Proposition 7.

Fig 1: Illustration of the measurement scheme on the i.i.d. diffusions $X_1, \dots, X_n$, with $n = 4$ and $r = 10$. The red crosses illustrate what is observable, whereas elements in blue are latent.

Fig 2: Estimated and true covariance surface (in red and blue, respectively) with a 2D scatter plot of the off-diagonal product observations.

… some subset of $[0,1]$ on which the mean function is bounded away from zero: $\inf_{t\in A} |m(t)| > 0$. However, it is easy to see that the maximal choice of $A$ corresponds to the full unit interval $[0,1]$, provided $m(0) \ne 0$. For further details we refer to the Appendix, and in particular to Lemma 4.

… as a result of finiteness of $E|X(0)|^\rho$.

Lemma 1. Let $\rho$ be a positive number; then $\sup_{0\le t\le 1} E|X(t)|^\rho < \infty$ holds if and only if $E|X(0)|^\rho < \infty$.

$$\partial_s G(s,t) = \mu(s)D(s) + \int_s^t \mu(u)\,\partial_s G(s,u)\,du + \sigma^2(s)D(s) \quad (4.4)$$

and the following system of PDEs holds:

$$\partial m(t) = m(t)\mu(t), \qquad \partial_s G(s,t) = \mu(s)D(s) + \int_s^t \mu(u)\,\partial_s G(s,u)\,du + \sigma^2(s)D(s), \quad (s,t) \in \Delta. \quad (4.5)$$

Lemma 2. Let the stochastic process $\{X(t)\}_{t\ge 0}$ be defined as (4.1) and let $\rho$ be a positive number; then the condition $\sup_{0\le t\le 1} E|X(t)|^\rho < \infty$ holds if and only if $E|X(0)|^\rho < \infty$.

Remark 2. In light of equation (A.42), one might choose the maximal set $A$ for which $\inf_{t\in A}|m(t)| > 0$ to be $A = [0,1]$ if and only if $m(0) \ne 0$.
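In the simplest member of this family, $\beta = 0$ (so $dX(t) = \mu(t)\,dt + \sigma(t)\,dB(t)$), the covariance relation above reduces, for a deterministic starting point, to $\mathrm{cov}(X(s), X(t)) = \int_0^{s\wedge t} \sigma^2(u)\,du$. A minimal Monte Carlo check, assuming NumPy; the coefficients $\mu(t) = 1$ and $\sigma(t) = 1 + t$, the starting point and the grid sizes are illustrative choices, not from the paper:

```python
import numpy as np

# Monte Carlo check of the covariance identity for the beta = 0 case,
#   dX(t) = mu(t) dt + sigma(t) dB(t):
# with deterministic X(0), cov(X(s), X(t)) = int_0^{min(s,t)} sigma^2(u) du.
rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps
s_idx, t_idx = 300, 700                  # s = 0.3, t = 0.7

x = np.full(n_paths, 0.5)                # X(0) = 0.5 (deterministic)
xs = None
for k in range(t_idx):
    u = k * dt
    x += 1.0 * dt + (1.0 + u) * np.sqrt(dt) * rng.standard_normal(n_paths)
    if k + 1 == s_idx:
        xs = x.copy()                    # snapshot of X(0.3)

emp_cov = np.mean((xs - xs.mean()) * (x - x.mean()))
# int_0^{0.3} (1 + u)^2 du = [(1 + u)^3 / 3] from 0 to 0.3
theory = (1.3**3 - 1.0) / 3.0
```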
Lemma 3. Let the stochastic process $\{X(t)\}_{t\ge 0}$ be defined as (4.6) and let $\rho$ be a natural power of 2, i.e. $\rho \in \{2, 4, 8, 16, \dots\}$; then the condition $\sup_{0\le t\le 1} E|X(t)|^\rho < \infty$ holds if and only if $E|X(0)|^\rho < \infty$.

Case III: $\alpha = 0$ and $\beta \in \{0, 1/2, 1\}$

… $i = 1, \dots, n$, $j = 1, \dots, r$, where $U_{ij}$ are iid mean-zero Gaussian measurement errors with finite variance $\nu^2$.

We investigated the performance of our estimation methodology across different values of $n$ (the number of sampled trajectories, ranging from 100 to 1000) and $r$ (the number of observed locations per curve, ranging from 2 to 10). In addition, we explored the performance of our methodology across different values for the standard deviation of the measurement error: $\nu \in \{0, 0.05, 0.1\}$. For both linear and surface smoothing we used the Epanechnikov kernel, with bandwidth $h = (n \cdot r)^{-1/5}$. Evaluation of the estimated functional values was done on a discretized grid over $[0,1]$, which was subdivided into 25 sub-intervals. We investigate two examples of time-varying SDEs: the Brownian bridge, which has a time-varying drift and constant diffusion (see Subsection 5.1), and a time-inhomogeneous Ornstein-Uhlenbeck process, with sinusoidal time-varying drift and negative exponential time-varying diffusion (see Subsection 5.2). For every triple $(n, r, \nu)$, 100 Monte Carlo iterations were run. The performance of the proposed estimators was evaluated by computing the average square root of the integrated squared error (RISE) obtained over 100 Monte Carlo runs, i.e. the average $L^2$-distance between each functional estimator and the corresponding true function. That is, every Monte Carlo run accounted for computing the errors $\|\widehat\mu - \mu\|_{L^2([0,1])}$, $\|\widehat\sigma^2_D - \sigma^2\|_{L^2([0,1])}$ and $\|\widehat\sigma^2_T - \sigma^2\|_{L^2([0,1])}$, and we looked into the distribution of these errors over 100 iterations.

Fig 3: Above, $n = 100$ simulated sample paths of the two time-inhomogeneous diffusion processes we consider in our analysis in Subsections 5.1 and 5.2, respectively.
Below, $r = 5$ observations for every curve sampled uniformly on $[0,1]$, with additive mean-zero Gaussian noise with standard error $\nu = 0.05$.

… i.e. (2.4) with time-varying drift and constant diffusion: $\mu(t) = -1/(1-t)$, $\sigma(t) = 1$.

Fig 4: Example 1. Drift and diffusion functions for the Brownian bridge process (5.1).

We explored the behaviour of the drift estimator $\widehat\mu$ and of the two diffusion estimators $\widehat\sigma_D$, $\widehat\sigma_T$ across different values for the number of curves $n \in \{100, 200, 500, 1000\}$ and for the number of observed points per curve $r \in \{2, 3, 5, 10\}$. The average RISE and its distribution are shown in the heatmaps and boxplots in Figures 5 and 6, respectively. The effect of measurement noise is investigated in Figure 7, where we consider Gaussian additive noise with standard deviation $\nu \in \{0, 0.05, 0.1\}$.

Fig 5: Example 1. Heatmap illustrating the average RISE of our estimators $\widehat\mu$, $\widehat\sigma^2_D$ and $\widehat\sigma^2_T$ over 100 independent experiments, for different values of $(n, r)$. Observational error variance was set at $\nu = 0.05$. Dark blue shades indicate a low average RISE, while dark red shades indicate a high average RISE. The heat-maps show the convergence of the estimators in the sparse regime, as $n$ increases for different values of $r$: that is, for every fixed value of $r$ (row) we see how increasing $n$ (column) leads to progressively more accurate estimates, and that convergence is achieved faster with greater values of $r$.

Fig 6: Example 1. Boxplots of the average RISE scores for different values of $(n, r)$.

Fig 7: Example 1. Comparing the average RISE with different values for the observational error standard deviation: $\nu = 0, 0.05, 0.5$. We consider $r = 5$ observations per curve, and $n = 100, 200, 500, 1000$ curves.

Fig 8: Example 1. 95% confidence bands of the estimated drift and diffusion functions in 100 Monte Carlo runs. We assumed no error contamination, $r = 3$ and $n = 100, 200, 500, 1000$.

Fig 9: $dX(t) = (\cdots\sin(2\pi t))\,X(t)\,dt + e^{(1-t)\cdots}\,dB(t)$, $t \in [0,1]$. Example 2.
Drift and diffusion functions for the time-inhomogeneous Ornstein-Uhlenbeck process (5.2).

Fig 10: Example 2. Heatmap illustrating the average RISE of our estimators $\widehat\mu$, $\widehat\sigma^2$ …

Fig 11: Example 2. Comparing the average RISE with different values for the observational error standard deviation: $\nu = 0, 0.05, 0.5$. We consider $r = 5$ observations per curve, and $n = 100, 200, 500, 1000$ curves.

Fig 12: Example 2. Boxplots of the average RISE scores for different values of $(n, r)$.

Fig 13: Example 2. 95% confidence bands of the estimated drift and diffusion functions in 100 Monte Carlo runs. We assumed no error contamination, $r = 3$ and $n = 100, 200, 500, 1000$.

proof of Proposition 2. First, observe that the closed form representation (A.2) implies

$$G(s,t) = D(s)\,\exp\Big\{\int_s^t \mu(v)\,dv\Big\}, \qquad 0 \le s \le t \le 1. \quad (A.4)$$

Taking the integral with respect to $t$ of the equation above leads to the second equation appearing in system (3.5). Combining this with the first equation in (3.2) completes the proof.

proof of Lemma 1. Recall that the stochastic process $\{X(t)\}$ satisfying equation (2.4) admits the representation (A.1). In order to prove finiteness of $\sup_{0\le s\le 1} E|X(s)|^\rho$ …

Lemma 4. [Consistency and rate of convergence of $\widehat m^{-1}(\cdot)$] Assume conditions C(0), C(1) and C(2) hold for $\rho > 2$ and $m(0) \ne 0$. Let $\widehat m(\cdot)$ be the estimator defined in (3.8) and set $\widehat m_*(t) = \widehat m(t)\,I(\widehat m(t) \ne 0) + I(\widehat m(t) = 0)$. Then, with probability 1, … $r(n) \ge M_n$ for some increasing sequence $M_n \uparrow \infty$, … $\forall n_0 < n$, $\forall t \in [0,1]$. (A.36)

Recall that $A$ is a subset of $[0,1]$ on which the mean function is bounded away from zero, i.e. $M_* := \inf_{t\in A} |m(t)| > 0$. So, there exists $n_1$ such that … That establishes the result for all values of $t$ in $A$. Moreover, by (A.3), we deduce that $A = [0,1]$ if and only if $m(0) \ne 0$, and $A = \emptyset$ otherwise. This completes the proof.
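Example 1 above, the Brownian bridge $dX(t) = -X(t)/(1-t)\,dt + dB(t)$, is easy to simulate with an Euler-Maruyama scheme. For a bridge started at 0 and pinned at 0 at time 1, $\mathrm{Var}\,X(t) = t(1-t)$; the sketch below (assuming NumPy, with illustrative path and step counts) checks this at $t = 1/2$:

```python
import numpy as np

# Euler-Maruyama simulation of the Brownian bridge SDE of Example 1,
#   dX(t) = -X(t)/(1 - t) dt + dB(t),  X(0) = 0,
# i.e. mu(t) = -1/(1 - t) acting linearly on X and sigma(t) = 1.
# The drift is singular at t = 1, so we only integrate up to t = 1/2,
# where the bridge variance should equal 0.5 * (1 - 0.5) = 0.25.
rng = np.random.default_rng(1)
n_paths = 5_000
dt = 1.0 / 1_000

x = np.zeros(n_paths)
for k in range(500):                     # from t = 0 to t = 0.5
    t = k * dt
    x += -x / (1.0 - t) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

var_at_half = x.var()
```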
Uniform consistency of $\partial\widehat m(t,\omega)$, uniform boundedness of $m^{-1}(t)$ and $\partial m(t)$ on $[0,1]$, together with (A.38) and (A.39), entail the desired rate for $\sup_{0\le t\le 1} |\widehat m_*^{-1}(t,\omega)\,\partial\widehat m(t,\omega) - m^{-1}(t)\,\partial m(t)|$. Comparing the definitions of $\widehat m_*^{-1}(\cdot)\,\partial\widehat m(\cdot)$ and $\widehat\mu(\cdot)$ justifies (3.20). A similar argument leads to the uniform rate $h_m^{-1}\,O(R(n)) + h_G^{-1}\,O(Q(n))$ for each of the products $\widehat\mu(t)\widehat D(t)$, $\widehat\mu(s)\widehat G(s,s)$ and $\widehat\mu(s)\,\partial_s\widehat G(s,u)$. This entails (3.21). … (A.41) confirms the claim.

Taking expectations on both sides of (A.40) gives

$$m(t) = m(0)\exp\Big\{\int_0^t \mu(v)\,dv\Big\}. \quad (A.42)$$

(A.42) and (A.43) confirm the smoothness conditions $m(\cdot), D(\cdot) \in C^{d+1}([0,1],\mathbb{R})$ and also justify (4.2). Equation (A.40), in particular, gives the expression … in (4.3). Taking partial derivatives with respect to $s$ of the equation above we obtain

$$\partial_s G(s,t) = 2\mu(s)G(s,s) - \mu(s)G(s,s) + \int_s^t \mu(u)\,\partial_s G(s,u)\,du + \sigma^2(s)G(s,s),$$

which implies (4.4) and the second equality in (4.5). The first equation appearing in (4.5) follows from Proposition 3.

proof of Lemma 2. In order to prove finiteness of $\sup_{0\le t\le 1} E|X(t)|^\rho$ as a result of finiteness of $E|X(0)|^\rho$, we take advantage of representation (A.40) to obtain …

$$d\big(\exp\{\Lambda(t)\}X(t)\big) = \exp\{\Lambda(t)\}\,\sigma(t)\,X^{1/2}(t)\,dB(t).$$

This confirms $m(\cdot) \in C^{d+1}([0,1],\mathbb{R})$ as well as the first equation appearing in (4.7) (or (4.10)). Likewise, representation (A.50) and an application of the Itô isometry entail

$$D(t) = \exp\{-2\Lambda(t)\}D(0) + \exp\{-2\Lambda(t)\}\int_0^t \exp\{2\Lambda(s)\}\,\sigma^2(s)\,m(s)\,ds \quad (A.52)$$

$$= \exp\{-2\Lambda(t)\}D(0) + \int_0^t \exp\Big\{2\int_s^t \mu(v)\,dv\Big\}\,\sigma^2(s)\,m(s)\,ds. \quad (A.53)$$

(A.53) is a consequence of the smoothness of $\mu(\cdot)$, $\sigma(\cdot)$, $m(\cdot)$ and $D(\cdot)$ and an application of the Fubini-Tonelli Theorem. Equation (A.53) validates the second equality appearing in (4.7).

proof of Proposition 6. By the closed form equation (A.50) we see that $G(\cdot,\cdot) \in C^{d+1}(\Delta, \mathbb{R})$. Indeed, for $s < t$,

$$G(s,t) = E(X(t)X(s)) = \exp\{-\Lambda(t) - \Lambda(s)\}\,E X^2(0) + \exp\{-\Lambda(t)\}\cdots,$$

where the last equation follows from the Itô isometry. Smoothness of the drift and diffusion functions as well as the conclusion of Proposition 5 imply the desired regularity of $G(\cdot,\cdot)$.
The rest of the argument is similar to the proof of Proposition 2. For $s \ge 0$, define the stopped process

$$Z(t) = X(t)I\{t \le s\} + X(s)I\{s < t\}, \qquad t \ge 0,$$

which leads to

$$G(s,t) = G(0,0) + 2\int_0^s D(u)\mu(u)\,du + \int_s^t G(s,u)\mu(u)\,du + \int_0^s \sigma^2(u)\,m(u)\,du,$$

as claimed in (4.8). Taking partial derivatives with respect to $s$ of the equation above we obtain …

… $\sup_{0\le t\le 1} D(t)$ is finite if and only if $D(0)$ is finite. Step 2 assumes the conclusion of Lemma 3 to be true for $\rho$ and proves the conclusion for $2\rho$. In other words, Step 2 assumes that $\sup_{0\le t\le 1} E|X(t)|^\rho$ is finite if and only if $E|X(0)|^\rho$ is finite, and then obtains the equivalence of finiteness of $\sup_{0\le t\le 1} E|X(t)|^{2\rho}$ and $E|X(0)|^{2\rho}$.

Step 1: Set $\rho = 2$; equation (A.…) shows that $\sup_{0\le t\le 1} E|X(t)|^2$ is finite if and only if $E|X(0)|^2$ is finite.

Step 2: … $\sup_{0\le t\le 1} E|X(t)|^\rho$ is finite if and only if $E|X(0)|^\rho$ is finite. The closed form equation (A.50) together with the Minkowski inequality leads to

$$|X(t)|^{2\rho} \le 2^{2\rho-1}\exp\{-2\rho\Lambda(t)\}\,|X(0)|^{2\rho} + 2^{2\rho-1}\exp\{-2\rho\Lambda(t)\}\Big|\int_0^t \exp\{\Lambda(u)\}\,\sigma(u)\,X^{1/2}(u)\,dB(u)\Big|^{2\rho}. \quad (A.57)$$

The second summand on the right-hand side of (A.57) … ; (A.58) is driven by the Jensen inequality. If we assume $E|X(0)|^{2\rho} < \infty$, then clearly $E|X(0)|^\rho < \infty$. This in turn, by the inductive assumption, implies that $\sup_{0\le t\le 1} E|X(t)|^\rho < \infty$, and hence finiteness of (A.59). So, finiteness of the left-hand side of inequality (A.57) is guaranteed by $E|X(0)|^{2\rho} < \infty$. The reverse conclusion is clear: $\sup_{0\le t\le 1} E|X(t)|^{2\rho} < \infty$ implies $E|X(0)|^{2\rho} < \infty$.

… $du$, $0 \le t \le 1$, $\beta \in \{0, 1/2, 1\}$.

… $+ \int_0^t \sigma^2(u)\,E(X^{2\beta}(u))\,du + 2m(0)m(t)$, $0 \le t \le 1$, $\beta \in \{0, 1/2, 1\}$.

The definition of the stopped process $\{Z(t)\}$ implies:

$$E(X(s)X(t)) = E(X(0)X(0)) + 2E\int_0^s X(u)\mu(u)\,du + E\int_s^t X(s)\mu(u)\,du + E\int_0^s \sigma^2(u)\,X^{2\beta}(u)\,du.$$

Interchanging integral and expectation as before gives … $+ \int_0^s \sigma^2(u)\,E(X^{2\beta}(u))\,du$, as claimed in (4.13).
Taking partial derivatives with respect to $s$ of the equation above we obtain

$$\partial_s G(s,t) = m(s)\mu(s) + \partial m(s)\int_s^t \mu(u)\,du + \sigma^2(s)\,E(X^{2\beta}(s)).$$

Note that the probabilistic framework introduced in this model is well defined, as (2.5) fully determines the probabilistic behaviour of its solutions: under minimal assumptions, all finite-dimensional distributions of the SDE in Equation (2.5) are uniquely determined by the drift and diffusion coefficients; see Theorem 1.

References

Aeckerle-Willems, C. and C. Strauch (2018). Sup-norm adaptive simultaneous drift estimation for ergodic diffusions. arXiv preprint arXiv:1808.10660.

Aït-Sahalia, Y. and A. W. Lo (1998). Nonparametric estimation of state-price densities implicit in financial asset prices. The Journal of Finance 53 (2), 499-547.

Bandi, F. M. (2002). Short-term interest rate dynamics: A spatial approach. Journal of Financial Economics 65 (1), 73-110.

Bandi, F. M. and P. C. B. Phillips (2003). Fully nonparametric estimation of scalar diffusion models. Econometrica 71 (1), 241-283.

Banon, G. (1978). Nonparametric identification for diffusion processes. SIAM Journal on Control and Optimization 16 (3), 380-395.

Banon, G. and H. T. Nguyen (1981). Recursive estimation in diffusion model. SIAM Journal on Control and Optimization 19 (5), 676-685.
Black, F., E. Derman, and W. Toy (1990). A one-factor model of interest rates and its application to treasury bond options. Financial Analysts Journal 46 (1), 33-39.

Black, F. and P. Karasinski (1991). Bond and option pricing when short rates are lognormal. Financial Analysts Journal 47 (4), 52-59.

Boucheron, S., G. Lugosi, and P. Massart (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford.

Comte, F. and V. Genon-Catalot (2020). Nonparametric drift estimation for iid paths of stochastic differential equations. Annals of Statistics 48 (6), 3336-3365.

Comte, F. and V. Genon-Catalot (2021). Drift estimation on non compact support for diffusion models. Stochastic Processes and their Applications 134, 174-207.

Comte, F., V. Genon-Catalot, and Y. Rozenholc (2007). Penalized nonparametric mean square estimation of the coefficients of diffusion processes. Bernoulli 13 (2), 514-543.

Comte, F., V. Genon-Catalot, and A. Samson (2013).
Nonparametric estimation for stochastic differential equations with random effects. Stochastic Processes and their Applications 123 (7), 2522-2551.

Delattre, M., V. Genon-Catalot, and C. Larédo (2018). Parametric inference for discrete observations of diffusion processes with mixed effects. Stochastic Processes and their Applications 128 (6), 1929-1957.

Dion, C. and V. Genon-Catalot (2016). Bidimensional random effect estimation in mixed stochastic differential model. Statistical Inference for Stochastic Processes 19 (2), 131-158.

Ditlevsen, S. and A. De Gaetano (2005). Mixed effects in stochastic differential equation models. REVSTAT-Statistical Journal 3 (2), 137-153.

Fan, J. and I. Gijbels (1996). Local Polynomial Modelling and Its Applications. London.

Fan, J., J. Jiang, C. Zhang, and Z. Zhou (2003). Time-dependent diffusion models for term structure dynamics. Statistica Sinica, 965-992.

Florens-Zmirou, D. (1993). On estimating the diffusion coefficient from discrete observations. Journal of Applied Probability, 790-804.

Le Gall, J.-F. Brownian Motion, Martingales, and Stochastic Calculus.
Graduate Texts in Mathematics. Springer International Publishing.

Geman, S. A. (1979). On a common sense estimator for the drift of a diffusion. Division of Applied Mathematics, Brown University.

Hall, P., H. G. Müller, and J. L. Wang (2006). Properties of principal component methods for functional and longitudinal data analysis. Annals of Statistics 34 (3), 1493-1517.

Ho, T. S. Y. and S.-B. Lee (1986). Term structure movements and pricing interest rate contingent claims. The Journal of Finance 41 (5), 1011-1029.

Hoffmann, M. (1999). Adaptive estimation in diffusion processes. Stochastic Processes and their Applications 79 (1), 135-163.

Hsing, T. and R. Eubank (2015a). Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley Series in Probability and Statistics. Chichester, UK: John Wiley & Sons, Ltd.

Hsing, T. and R. Eubank (2015b). Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley Series in Probability and Statistics. Wiley.

Hull, J. and A. White (1990).
Pricing interest-rate-derivative securities. The Review of Financial Studies 3 (4), 573-592.

Jenkins, P. A., M. Pollock, and G. O. Roberts (2021). Bayesian semi-parametric inference for diffusion processes using splines. arXiv:2106.05820.

Jiang, G. J. and J. L. Knight (1997). A nonparametric approach to the estimation of diffusion processes, with an application to a short-term interest rate model. Econometric Theory, 615-645.

Karatzas, I. and S. E. Shreve (1988). Brownian Motion and Stochastic Calculus, Volume 113 of Graduate Texts in Mathematics. Springer New York.

Kloeden, P. E. (1992). Numerical Solution of Stochastic Differential Equations. Berlin, Heidelberg.

Koo, B. and O. B. Linton (2010). Semiparametric estimation of locally stationary diffusion models. LSE STICERD Research Paper No. EM551.

Li, Y. and T. Hsing (2010). Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. The Annals of Statistics 38 (6), 3321-3351.

Lord, G. J., C. E. Powell, and T. Shardlow (2014). An introduction to computational stochastic PDEs.
Marie, N. and A. Rosier (2021). Nadaraya-Watson Estimator for IID Paths of Diffusion Processes. arXiv preprint arXiv:2105.06884.

Merton, R. C. (1980). On estimating the expected return on the market: An exploratory investigation. Journal of Financial Economics 8 (4), 323-361.

Mohammadi, N. and V. M. Panaretos (2021). Functional data analysis with rough sample paths? https://arxiv.org/pdf/2105.12035.pdf

Nickl, R. and K. Ray (2020). Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions. Annals of Statistics 48 (3), 1383-1408.

Øksendal, B. (2003). Stochastic Differential Equations. Universitext. Berlin, Heidelberg: Springer Berlin Heidelberg.

Overgaard, R. V., N. Jonsson, C. W. Tornøe, and H. Madsen (2005). Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm. Journal of Pharmacokinetics and Pharmacodynamics 32 (1), 85-107.

Papaspiliopoulos, O., Y. Pokern, G. O. Roberts, and A. M. Stuart (2012).
Nonparametric estimation of diffusions: a differential equations approach. Biometrika 99 (3), 511-531.

Picchini, U. and S. Ditlevsen (2011). Practical estimation of high dimensional stochastic differential mixed-effects models. Computational Statistics & Data Analysis 55 (3), 1426-1444.

Picchini, U., A. De Gaetano, and S. Ditlevsen (2010). Stochastic differential mixed-effects models. Scandinavian Journal of Statistics 37 (1), 67-90.

Ramsay, J. O. and B. W. Silverman (2008). Functional data analysis.

Rao, C. R. and H. D. Vinod (2019). Conceptual Econometrics Using R. Elsevier.

Särkkä, S. and A. Solin (2019). Applied stochastic differential equations.

Stanton, R. (1997). A nonparametric model of term structure dynamics and the market price of interest rate risk. The Journal of Finance 52 (5), 1973-2002.

Strauch, C. (2015). Sharp adaptive drift estimation for ergodic diffusions: the multivariate case. Stochastic Processes and their Applications 125 (7), 2562-2602.

Strauch, C. (2016). Exact adaptive pointwise drift estimation for multidimensional ergodic diffusions.
Probability Theory and Related Fields 164 (1-2), 361-400.

Tuan, P. D. (1981). Nonparametric estimation of the drift coefficient in the diffusion equation. Series Statistics 12 (1), 61-73.

Yao, F., H. G. Müller, and J. L. Wang (2005). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association 100 (470), 577-590.

Zhang, X. and J.-L. Wang (2016). From sparse to dense functional data and beyond. The Annals of Statistics 44 (5), 2281-2321.
ON MIXED DIRICHLET-NEUMANN EIGENVALUES OF TRIANGLES
Bartłomiej Siudeja
Abstract. We order the lowest mixed Dirichlet-Neumann eigenvalues of right triangles according to which sides we apply the Dirichlet conditions to. It is generally true that a Dirichlet condition on a superset leads to larger eigenvalues, but it is nontrivial to compare e.g. the mixed cases on triangles with just one Dirichlet side. As a consequence of that order we also classify the lowest Neumann and Dirichlet eigenvalues of rhombi according to their symmetry/antisymmetry with respect to the diagonal. We also give an order for the mixed Dirichlet-Neumann eigenvalues on an arbitrary triangle, assuming two Dirichlet sides. The single Dirichlet side case is conjectured to also have the appropriate order, following the right triangular case.
DOI: 10.1090/proc/12888
https://arxiv.org/pdf/1501.07618v1.pdf
Corpus ID: 119596508
arXiv: 1501.07618
29 Jan 2015

INTRODUCTION

Laplace eigenvalues are often interpreted as frequencies of vibrating membranes. In this context, the natural (Neumann) boundary condition corresponds to a free membrane, while the Dirichlet condition indicates a membrane fixed in place on the boundary. Intuitively, mixed Dirichlet-Neumann conditions should mean that the membrane is partially attached, and the larger the attached portion, the higher the frequencies. Using the variational characterization of the frequencies (see Section 2) one can easily conclude that increasing the attached portion leads to increased frequencies. In this paper we investigate a harder, yet still intuitively clear, case of imposing Dirichlet conditions on various sides of triangles. Imposing the Dirichlet condition on one side gives smaller eigenvalues than imposing it on that side and one more. However, is it true that imposing the Dirichlet condition on a shorter side leads to a smaller eigenvalue than the Dirichlet condition on a longer side?
Note that one can also think of the eigenvalues as related to the survival probability of a Brownian motion on the triangle, reflecting on the Neumann boundary and dying on the Dirichlet part. In this context, it is clear that enlarging the Dirichlet part leads to a shorter survival time. It is also reasonable that having the Dirichlet condition on one long side gives a larger chance of dying than having a shorter Dirichlet side. However, this is far from obvious to prove, especially since the difference might be very small for nearly equilateral triangles.

Let $L$, $M$ and $S$ denote the lengths of the sides of a triangle $T$, so that $L \ge M \ge S$. Let the smallest eigenvalue corresponding to the Dirichlet condition applied to a chosen set of sides be denoted by $\lambda^{\mathrm{set}}_1$. E.g. $\lambda^{LS}_1$ corresponds to the Dirichlet condition imposed on the longest and shortest sides. Let also $\mu_2$ and $\lambda_1$ denote the smallest nonzero pure Neumann and pure Dirichlet eigenvalues of the same triangle.

Theorem 1.1. For any right triangle with smallest angle satisfying $\pi/6 < \alpha < \pi/4$,

$$0 = \mu_1 < \lambda^S_1 < \lambda^M_1 < \mu_2 < \lambda^L_1 < \lambda^{MS}_1 < \lambda^{LS}_1 < \lambda^{LM}_1 < \lambda_1.$$

When $\alpha = \pi/6$ (half-of-equilateral triangle) we have $\lambda^M_1 = \mu_2$, and for $\alpha = \pi/4$ (right isosceles triangle) we have $S = M$ and $\lambda^L_1 = \mu_2$. All other inequalities stay sharp in these cases. Furthermore, for an arbitrary triangle

$$\min\{\lambda^S_1, \lambda^M_1, \lambda^L_1\} < \mu_2 \le \lambda^{MS}_1 < \lambda^{LS}_1 < \lambda^{LM}_1,$$

as long as the appropriate sides have different lengths. However, it is possible that $\mu_2 > \lambda^L_1$ (for any small perturbation of the equilateral triangle) or $\mu_2 < \lambda^M_1$ (for right triangles with $\alpha < \pi/6$).

Note that, for arbitrary polygonal domains, it is not always the case that a longer restriction leads to a higher eigenvalue (see Remark 3.3). The theorem also asserts that a precise position of the smallest nonzero Neumann eigenvalue in the ordered sequence is an exception rather than a rule (even among triangles).
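The ordering in Theorem 1.1 can be illustrated numerically. The sketch below is our own construction, not taken from the paper: it assembles a standard P1 finite-element generalized eigenproblem $Ku = \lambda Mu$ on a right triangle with smallest angle $35°$ (so $\pi/6 < \alpha < \pi/4$), imposes Neumann conditions naturally and Dirichlet conditions by deleting the nodes on the chosen sides. It assumes NumPy and SciPy are available; the mesh resolution and eigensolver settings are illustrative choices.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import eigsh

# Right triangle with smallest angle alpha = 35 deg: legs of length 1
# (side M, on the x-axis) and tan(alpha) ~ 0.70 (side S, on the y-axis);
# the hypotenuse is the longest side L.
alpha = np.deg2rad(35.0)
a, b = 1.0, np.tan(alpha)
n = 40                                   # subdivisions per leg

# structured mesh: nodes p_{i,j} = (i*a/n, j*b/n) for i + j <= n
index, pts = {}, []
for j in range(n + 1):
    for i in range(n + 1 - j):
        index[(i, j)] = len(pts)
        pts.append((i * a / n, j * b / n))
pts = np.array(pts)
tris = []
for j in range(n):
    for i in range(n - j):
        tris.append((index[(i, j)], index[(i + 1, j)], index[(i, j + 1)]))
        if i + j < n - 1:
            tris.append((index[(i + 1, j)], index[(i + 1, j + 1)], index[(i, j + 1)]))

N = len(pts)
K, Mm = lil_matrix((N, N)), lil_matrix((N, N))
for (p, q, r) in tris:
    xy = pts[[p, q, r]]
    B = np.array([xy[1] - xy[0], xy[2] - xy[0]]).T       # 2x2 Jacobian
    area = abs(np.linalg.det(B)) / 2.0
    # physical gradients of the three P1 basis functions
    G = np.linalg.inv(B).T @ np.array([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
    Ke = area * (G.T @ G)                                # element stiffness
    Me = area / 12.0 * (np.ones((3, 3)) + np.eye(3))     # consistent mass
    for ii, gi in enumerate((p, q, r)):
        for jj, gj in enumerate((p, q, r)):
            K[gi, gj] += Ke[ii, jj]
            Mm[gi, gj] += Me[ii, jj]
K, Mm = csr_matrix(K), csr_matrix(Mm)

side_nodes = {
    'M': [index[(i, 0)] for i in range(n + 1)],          # leg of length 1
    'S': [index[(0, j)] for j in range(n + 1)],          # leg of length tan(alpha)
    'L': [index[(i, n - i)] for i in range(n + 1)],      # hypotenuse
}

def lowest_eig(dirichlet_sides, k=1):
    """Smallest eigenvalue(s) with Dirichlet data on the given sides."""
    drop = set()
    for s in dirichlet_sides:
        drop.update(side_nodes[s])
    keep = np.array([v for v in range(N) if v not in drop])
    Kk, Mk = K[keep][:, keep], Mm[keep][:, keep]
    vals = eigsh(Kk, k=k, M=Mk, sigma=-1e-8, which='LM')[0]
    return np.sort(vals)

lamS  = lowest_eig(['S'])[0]
lamM  = lowest_eig(['M'])[0]
lamL  = lowest_eig(['L'])[0]
lamMS = lowest_eig(['M', 'S'])[0]
lamLS = lowest_eig(['L', 'S'])[0]
lamLM = lowest_eig(['L', 'M'])[0]
lamD  = lowest_eig(['L', 'M', 'S'])[0]
mu2   = lowest_eig([], k=2)[1]           # second Neumann eigenvalue (first is 0)
```

At this resolution the discrete eigenvalues reproduce the strict chain $\lambda^S_1 < \lambda^M_1 < \mu_2 < \lambda^L_1 < \lambda^{MS}_1 < \lambda^{LS}_1 < \lambda^{LM}_1 < \lambda_1$; the $O(h^2)$ discretization error affects all eight problems similarly, so the ordering is robust.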
Nevertheless, we conjecture that the mixed eigenvalues of triangles can be fully ordered. More precisely, we conjecture that all cases missing in the above theorem are still true:

Conjecture 1.2. For an arbitrary triangle

$$\lambda^S_1 < \lambda^M_1 < \lambda^L_1 < \lambda^{MS}_1,$$

as long as the appropriate sides have different lengths.

Even though right triangles are a rather special case, they are of interest in studying other polygonal domains. In particular, a recent paper by Nitsch [23] studies regular polygons via eigenvalue perturbations on right triangles. A similar approach is taken in the author's upcoming collaboration [22]. Finally, right triangles play the main role in the recent progress on the celebrated hot-spots conjecture. A newly discovered approach due to Miyamoto [21, 20] led to new partial results for acute triangles [28] (see also the Polymath 7 project, polymathprojects.org/tag/polymath7/). The acute cases rely on eigenvalue comparisons of triangles, which were first considered by Miyamoto on right triangles. Eigenvalue problems on right triangles were also used to establish symmetry (or antisymmetry) of the eigenfunction for the smallest nonzero Neumann eigenvalue of kites (Miyamoto [20], the author of the present paper [28]) and isosceles triangles [17] (in collaboration with Richard Laugesen). It is almost trivial to conclude that the eigenfunction can be assumed symmetric or antisymmetric with respect to a line of symmetry of a domain. It is however very hard to establish which case actually happens. This problem is also strongly connected to the hot-spots conjecture, given that many known results assume enough symmetry to get a symmetric eigenfunction, e.g. Jerison-Nadirashvili [15] or Bañuelos-Burdzy [3]. As a particular case, the latter paper implies that the eigenfunction for the smallest nonzero Neumann eigenvalue of a narrow rhombus is antisymmetric with respect to the short diagonal.
In order to claim the same for all rhombi one needs to look at the very important hot-spots result due to Atar and Burdzy [1]. Their Corollary 1, part (ii), can be applied to arbitrary rhombi, but it requires a very sophisticated stochastic analysis argument and a solution of a more complicated hot-spots conjecture to achieve the goal.

As a consequence of the ordering of the mixed eigenvalues of right triangles we order the first four Neumann (and two Dirichlet) eigenvalues of rhombi, depending on their symmetry/antisymmetry. We achieve more than the above mentioned papers, using elementary techniques. Our result applies to all rhombi not narrower than the "equilateral rhombus" composed of two equilateral triangles. This particular case, as well as the square, are interesting boundary cases due to the presence of multiple eigenvalues.

Corollary 1.3. For rhombi with the smallest angle 2α > π/3 we have:
• $\mu_2$, $\mu_3$, $\mu_4$ and $\lambda_2$ are simple;
• $\mu_4 < \lambda_1$;
• the eigenfunction for $\mu_2$ is antisymmetric with respect to the short diagonal;
• the eigenfunction for $\mu_3$ is antisymmetric with respect to the long diagonal;
• the eigenfunction for $\mu_4$ is doubly symmetric;
• the eigenfunction for $\lambda_2$ is antisymmetric with respect to the short diagonal.
Furthermore, if 2α < π/3 then the doubly symmetric mode belongs to $\mu_3$, and the mode antisymmetric with respect to the long diagonal can have arbitrarily high index (as α → 0).

Perhaps the most interesting case of our result about rhombi is that the fourth Neumann eigenvalue of a nearly square rhombus is smaller than its smallest Dirichlet eigenvalue (and is doubly symmetric). This strengthens classical eigenvalue comparison results: Payne [24], Levine-Weinberger [19], Friedlander [12] and Filonov [11] (on smooth enough domains the third Neumann eigenvalue is smaller than the first Dirichlet eigenvalue, while on convex polygons only the second eigenvalue is guaranteed to be below the Dirichlet case, and the third is not larger than it).
This type of eigenvalue comparison is traditionally used to derive some conclusions about the nodal set of the Neumann eigenfunction, e.g. an eigenfunction for $\mu_2$ cannot have a nodal line that forms a loop. Recent progress on the hot-spots conjecture due to Miyamoto [20] and the author [28] relies on such eigenvalue comparisons and similar nodal line considerations. Furthermore, the author's forthcoming collaboration [22] leverages the improved fourth eigenvalue comparison on rhombi in studying regular polygons.

Our proofs for mixed eigenvalues on triangles are short and elementary, yet a very broad spectrum of techniques is actually needed. Even though the comparisons look mostly the same, their proofs are strikingly different. Depending on the case, we use: variational techniques with explicitly or implicitly defined test functions, polarization (a type of symmetrization) applied to mixed boundary conditions, nodal domain considerations, or an unknown trial function method (see [17, 18]).

2. VARIATIONAL APPROACH AND AUXILIARY RESULTS

The mixed Dirichlet-Neumann eigenvalues of the Laplacian on a right triangle T with sides of length L ≥ M ≥ S can be obtained by solving
$$-\Delta u = \lambda^D u \ \text{ on } T, \qquad u = 0 \ \text{ on } D \subset \{L, M, S\}, \qquad \partial_\nu u = 0 \ \text{ on } \partial T \setminus D.$$
The Dirichlet condition imposed on D can be any combination of the triangle's sides, as mentioned in the introduction. For simplicity we denote $\lambda = \lambda^{LMS}$ (the purely Dirichlet eigenvalue) and $\mu = \lambda^{\emptyset}$ (the purely Neumann eigenvalue). The same eigenvalues can also be obtained by minimizing the Rayleigh quotient
$$R[u] = \frac{\int_T |\nabla u|^2}{\int_T u^2}.$$
In particular,
$$\lambda_1^D = \inf_{u \in H^1(T),\ u = 0 \text{ on } D} R[u], \qquad (1)$$
$$\mu_2 = \inf_{u \in H^1(T),\ \int_T u = 0} R[u]. \qquad (2)$$
For an overview of the variational approach we refer the reader to Bandle [2] or Blanchard-Brüning [4]. For each kind of mixed boundary conditions we have an orthonormal sequence of eigenfunctions and
$$0 < \lambda_1^D < \lambda_2^D \le \lambda_3^D \le \cdots \to \infty,$$
as long as D is not empty.
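A one-dimensional toy version of the mixed problem makes the variational setup concrete: on [0, 1] with Dirichlet condition at 0 and Neumann condition at 1 the eigenpairs are $\sin((k - 1/2)\pi x)$ with $\lambda_k = ((k - 1/2)\pi)^2$, so $\lambda_1 = \pi^2/4$ is simple with a positive eigenfunction, the pattern Lemma 2.1 below establishes in general. The finite-difference discretization here is my own illustration, not from the paper.

```python
import numpy as np

# Mixed problem on [0,1]: u(0) = 0 (Dirichlet), u'(1) = 0 (Neumann).
# Exact eigenpairs: u_k = sin((k - 1/2) pi x), lam_k = ((k - 1/2) pi)^2,
# so lam_1 = pi^2/4 is simple with a positive eigenfunction.
# Cell-centered finite differences; the grid size n is an assumed choice.
n = 400
h = 1.0 / n
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = A[i - 1, i] = -1.0
A[0, 0] = 3.0        # ghost value u_{-1} = -u_0 enforces u(0) = 0
A[-1, -1] = 1.0      # ghost value u_n = u_{n-1} enforces u'(1) = 0
A /= h * h

lam, vec = np.linalg.eigh(A)
v1 = vec[:, 0]
v1 = v1 if v1.sum() > 0 else -v1     # normalize the arbitrary sign
```

With the cell-centered grid both ghost-point reflections are exact for the trigonometric eigenfunctions, so the discrete eigenvalues converge at second order and the ground state is strictly positive, as in Lemma 2.1.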
When D is empty (the purely Neumann case) we have
$$0 = \mu_1 < \mu_2 \le \mu_3 \le \mu_4 \le \cdots \to \infty.$$
The sharp inequality $\mu_2 < \mu_3$ for all nonequilateral triangles was recently proved by the author [28]. A similar result, $\lambda_2 < \lambda_3$, should hold for purely Dirichlet eigenvalues, but this remains an open problem. The fact that $\lambda_1^D < \lambda_2^D$ is a consequence of the general simplicity of the smallest eigenvalue:

Lemma 2.1. Let Ω be a domain with Dirichlet condition on D ≠ ∅ and Neumann condition on ∂Ω \ D. Then $0 < \lambda_1^D < \lambda_2^D$ and the eigenfunction $u_1$ belonging to $\lambda_1^D$ can be taken nonnegative.

Proof. Suppose $u_1$ changes sign. Then $|u_1|$ is a different minimizer of the Rayleigh quotient. Any minimizer of the Rayleigh quotient is an eigenfunction (see [2] or a more recent exposition [16, Chapter 9]). But $\Delta |u_1| = -\lambda_1 |u_1| \le 0$, hence the minimum principle ensures that $u_1$ cannot equal zero at any interior point of the domain, giving a contradiction. Hence $u_1$ has a fixed sign. If there were two eigenfunctions for $\lambda_1^D$, we could make a linear combination that changes sign, which is not possible. Hence the smallest eigenvalue is simple. Finally, $\lambda_1^D = 0$ would imply that $|\nabla u_1| = 0$ a.e., hence the eigenfunction is constant. But it equals 0 on D, hence $u_1 \equiv 0$.

If $D_1 \subset D_2$ then (1) implies that $\lambda_1^{D_1} \le \lambda_1^{D_2}$. Indeed, any test function u that satisfies u = 0 on $D_2$ can be used in the minimization of $\lambda_1^{D_1}$. However, the relation between e.g. $\lambda_1^L$ and $\lambda_1^M$ is not clear.

In the second part of the paper we will consider rhombi R created by reflecting a right triangle T four times.

Lemma 2.2. Let u belong to $\lambda_1^D(T)$ or $\mu_2(T)$. Let $\bar u$ be the extension of u to R that is symmetric with respect to the sides of T with Neumann condition and antisymmetric with respect to the Dirichlet sides. Then $\bar u$ is an eigenfunction of R. Furthermore, if v is another eigenfunction of R with the same symmetries as $\bar u$, then v belongs to a higher eigenvalue than u, or $v = C\bar u$ for some constant C.

Proof.
Suppose v is an eigenfunction of R with the same symmetries as u. Its restriction to T satisfies Dirichlet and Neumann conditions on the same sides as u. It also satisfies the eigenvalue equation pointwise on T. Hence v is an eigenfunction on T. However, $\lambda_1^D$ and $\mu_2$ are simple, hence v = Cu, or v belongs to a higher eigenvalue on T.

The extension $\bar u$ has the same Rayleigh quotient on R as on T (due to the symmetries). Hence $\bar u$ can be used as a test function for the lowest eigenvalue on R with the symmetries of $\bar u$. Hence that eigenvalue of R must be smaller than or equal to the eigenvalue of u on T. However, it cannot be smaller, by the argument from the previous paragraph.

In particular, this lemma implies that $\lambda_1(R) = \lambda_1^L(T)$. We can also claim that $\mu_2(T)$ equals the smallest Neumann eigenvalue of the rhombus with a doubly symmetric eigenfunction. However, this eigenvalue will not be second on R, due to the presence of possibly lower antisymmetric modes (corresponding to $\lambda_1^M(T)$ and $\lambda_1^S(T)$).

3. INEQUALITIES BETWEEN MIXED EIGENVALUES OF RIGHT TRIANGLES

In this section we prove Theorem 1.1. We split the proof into several subsections, each treating one or two inequalities. Each subsection introduces a different technique for proving eigenvalue bounds. Before we proceed we wish to make a few remarks. For a thorough overview of the explicitly computable cases and the geometric properties of eigenfunctions we refer the reader to [13] and references therein.

We need the following three lemmas:

Lemma 3.4. For β > π/4,
$$\mu_2(O(\beta)) < \frac{\pi^2}{4h^2(1-h^2)},$$
and the bound saturates for the right isosceles triangle ($h^2 = 1/2$, or β = π/4).

Proof. Note that $h^2 < 1/2$. Take the second eigenfunction of the right isosceles triangle with vertices (0, 1), (±1, 0) and deform it linearly to fit O(β). That is, take $\varphi = \sin(\pi x/2)\cos(\pi y/2)$ and compose it with the linear transformation $L(x, y) = (x/\sqrt{1-h^2},\, y/h)$.
The resulting function can be used as a test function for $\mu_2(O(\beta))$:
$$\mu_2(O(\beta)) \le \frac{\int_{O(\beta)} |\nabla(\varphi \circ L)|^2}{\int_{O(\beta)} |\varphi \circ L|^2} = \frac{\pi^2 + 16h^2 - 8}{4h^2(1-h^2)} < \frac{\pi^2}{4h^2(1-h^2)}.$$

Lemma 3.5. Let u be any antisymmetric function on A(α) (so that u(x, −y) = −u(x, y)). Then
$$\int_{A(\alpha)} u_y^2 > \frac{\pi^2}{4h^2} \int_{A(\alpha)} u^2.$$

Proof. Note that for fixed x the function u(x, ·) is odd, hence it can be used as a test function for the second Neumann eigenvalue on any vertical interval contained in the triangle A(α). We get the largest interval, [−h, h], when x = 0. Hence
$$\int_{[-c_x, c_x]} u_y^2(x, y)\,dy \ge \mu_2([-c_x, c_x]) \int_{[-c_x, c_x]} u^2(x, y)\,dy \ge \frac{\pi^2}{4h^2} \int_{[-c_x, c_x]} u^2(x, y)\,dy.$$
Integrate over x to get the result.

A special case of [17, Corollary 5.5], noting that α < β, can be stated as:

Lemma 3.6. Let u be the eigenfunction belonging to $\mu_2(A(\alpha))$. Then $\mu_2(O(\beta)) < \mu_2(A(\alpha))$ if
$$\frac{\int_{A(\alpha)} u_y^2}{\int_{A(\alpha)} u_x^2} > \tan^2(\beta). \qquad (3)$$

Suppose the condition (3) is false (hence we cannot conclude from Lemma 3.6 that $\mu_2(O(\beta)) < \mu_2(A(\alpha))$). That is,
$$\int_{A(\alpha)} u_y^2 \le \tan^2(\beta) \int_{A(\alpha)} u_x^2.$$
Then
$$\mu_2(A(\alpha)) = \frac{\int_{A(\alpha)} u_x^2 + u_y^2}{\int_{A(\alpha)} u^2} \ge \left(1 + \frac{1}{\tan^2(\beta)}\right) \frac{\int_{A(\alpha)} u_y^2}{\int_{A(\alpha)} u^2} > \frac{1}{\sin^2(\beta)} \cdot \frac{\pi^2}{4h^2} = \frac{\pi^2}{4h^2(1-h^2)} > \mu_2(O(\beta)),$$
where the strict inequality in the middle follows from Lemma 3.5, while the final inequality follows from Lemma 3.4. Therefore, regardless of whether we can apply Lemma 3.6 (condition (3) true or false), we get $\mu_2(O(\beta)) < \mu_2(A(\alpha))$.

Since O(β) is obtuse and isosceles, [17, Theorem 3.2] implies that $\lambda_1^S(T) = \mu_2(O(\beta))$. The eigenfunction for $\lambda_1^M(T)$ extends to an antisymmetric eigenfunction on A(α), hence $\mu_2(A(\alpha)) \le \lambda_1^M(T)$. Therefore we have proved that $\lambda_1^S < \lambda_1^M$ for any nonisosceles right triangle.

3.2. For right triangles: $\lambda_1^M < \mu_2$ if and only if α > π/6. Comparison of Neumann eigenfunctions of an isosceles triangle. In this subsection we will use the notation introduced in [17, Section 3].
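Stepping back to Lemma 3.4: the closed-form Rayleigh quotient value computed in its proof can be cross-checked by brute-force midpoint quadrature. The sample value h = 0.6 and the grid resolution below are assumptions of this sketch, not part of the paper.

```python
import numpy as np

# Cross-check of the Rayleigh quotient in the proof of Lemma 3.4:
# for u = (phi o L)(x, y) = sin(a x) cos(c y) on the obtuse isosceles
# triangle O(beta) with vertices (0, h), (+-w, 0), the claimed value is
# (pi^2 + 16 h^2 - 8) / (4 h^2 (1 - h^2)).
h = 0.6                               # sample h = cos(beta), h^2 < 1/2
w = np.sqrt(1 - h * h)
a, c = np.pi / (2 * w), np.pi / (2 * h)

m = 1500                              # midpoint-rule resolution (assumed)
x = (np.arange(m) + 0.5) / m * (2 * w) - w
y = (np.arange(m) + 0.5) / m * h
X, Y = np.meshgrid(x, y)
inside = Y < h * (1 - np.abs(X) / w)  # interior of O(beta)

u2 = (np.sin(a * X) * np.cos(c * Y)) ** 2
grad2 = ((a * np.cos(a * X) * np.cos(c * Y)) ** 2
         + (c * np.sin(a * X) * np.sin(c * Y)) ** 2)
rayleigh = grad2[inside].sum() / u2[inside].sum()
claimed = (np.pi ** 2 + 16 * h * h - 8) / (4 * h * h * (1 - h * h))
```

Since both integrals use the same cell classification, boundary errors largely cancel in the ratio, and the computed quotient should agree with the closed form to within a fraction of a percent; the comparison with π²/(4h²(1−h²)) then exhibits the strict inequality of the lemma.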
All isosceles triangles can be split into equilateral triangles, subequilateral triangles (with the angle between the equal sides less than π/3), and superequilateral triangles (with that angle above π/3). Note that α = π/6 means that we are working with a half of an equilateral triangle. The eigenvalues are explicit and $\lambda_1^M = \mu_2$.

Mirroring a right triangle along the middle side M gives a superequilateral triangle if and only if α > π/6. Any superequilateral triangle has an antisymmetric second Neumann eigenfunction [17, Theorem 3.2] and a simple second eigenvalue [20], equal to $\lambda_1^M(T)$. This proves that $\lambda_1^M(T) < \mu_2(T)$. At the same time, any subequilateral triangle has a symmetric second eigenfunction [17, Theorem 3.1], with a simple eigenvalue [20] equal to $\mu_2(T)$. Hence $\lambda_1^M > \mu_2(T)$ if α < π/6.

3.3. For right triangles: $\mu_2 < \lambda_1^L < \lambda_1^{MS}$. Variational approach and domain monotonicity. Assume that the right triangle T has vertices (0, 0), (1, 0) and (0, b). We can use four such right triangles to build a rhombus R. Then $\lambda_1^L = \lambda_1(R)$, since the reflected eigenfunction for $\lambda_1^L$ is nonnegative and satisfies the Dirichlet boundary condition on R (see Lemma 2.2). Hooker and Protter [14] proved the following lower bound for the ground state of rhombi:
$$\lambda_1^L = \lambda_1(R) \ge \frac{\pi^2 (1+b)^2}{4b^2}. \qquad (4)$$
We need to prove an upper bound for $\mu_2$ that is smaller than this lower bound. Consider two eigenfunctions of the right isosceles triangle with vertices (0, 0), (1, 0) and (0, 1):
$$\varphi_1(x, y) = \cos(\pi y) - \cos(\pi x), \qquad \varphi_2(x, y) = \cos(\pi y)\cos(\pi x).$$
The first one belongs to $\mu_2$ and is antisymmetric, the second belongs to $\mu_3$ and is symmetric. In fact, all we need is that these integrate to 0 over the right isosceles triangle. Consider a linear combination of the linearly deformed eigenfunctions
$$f(x, y) = \varphi_1(x, y/b) - (1-b)\varphi_2(x, y/b),$$
where 0 < b < 1. This function integrates to 0 over the right triangle T, hence it can be used as a test function for $\mu_2$ in (2).
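The zero-mean property of φ₁, φ₂ (and hence of f), which is the only property of these functions the argument uses, is easy to confirm numerically. The grid size and the sample value b = 0.7 below are my own choices, not the paper's.

```python
import numpy as np

# phi_1 and phi_2 integrate to 0 over the right isosceles triangle
# (0,0), (1,0), (0,1); hence the deformed combination f has zero mean over
# the right triangle (0,0), (1,0), (0,b) and is admissible in (2).
m = 1200                              # assumed quadrature resolution
t = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(t, t)
inside = X + Y < 1                    # reference right isosceles triangle

phi1 = np.cos(np.pi * Y) - np.cos(np.pi * X)
phi2 = np.cos(np.pi * Y) * np.cos(np.pi * X)
mean1, mean2 = phi1[inside].mean(), phi2[inside].mean()

b = 0.7                               # sample aspect ratio
Xb, Yb = np.meshgrid(t, b * t)        # midpoints for the squeezed triangle
inside_b = Xb + Yb / b < 1
f = ((np.cos(np.pi * Yb / b) - np.cos(np.pi * Xb))
     - (1 - b) * np.cos(np.pi * Yb / b) * np.cos(np.pi * Xb))
mean_f = f[inside_b].mean()
```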
As a result we get the following upper bound:
$$\mu_2 \le \frac{3\pi^2((b-1)^2+2)(b^2+1) - 64(b-1)^2(b+1)}{3b^2((b-1)^2+4)}. \qquad (5)$$
Note that when b = 1 the bounds (4) and (5) reduce to the same value; in fact $\lambda_1^L = \mu_2$ in this case (the right isosceles triangle). Moreover,
$$\frac{3\pi^2((b-1)^2+2)(b^2+1) - 64(b-1)^2(b+1)}{3b^2((b-1)^2+4)} - \frac{\pi^2(1+b)^2}{4b^2} = \frac{(b-1)^2}{12b^2((b-1)^2+4)}\left(9\pi^2 b^2 - (256 + 6\pi^2)b + 21\pi^2 - 256\right),$$
and the quadratic expression in b is negative for b ∈ (0, 1). Therefore $\mu_2 < \lambda_1^L$.

For right triangles, $\lambda_1^L$ is the same as $\lambda_1$ for a rhombus built from four copies of the triangle, while $\lambda_1^{MS}$ is the same as $\lambda_1$ of a kite built from two right triangles. The sharpest angle of the kite is the same as the acute angle of the rhombus. We can put the kite inside the rhombus by placing the vertex of its sharpest angle at a vertex of the rhombus. Therefore $\lambda_1^L < \lambda_1^{MS}$ by domain monotonicity (take the eigenfunction of the kite, extend it with 0, and use it as a trial function on the rhombus).

3.4. For arbitrary triangles: $\min\{\lambda_1^S, \lambda_1^M, \lambda_1^L\} < \mu_2 \le \lambda_1^{MS}$. Nodal line considerations and eigenvalue comparisons. To get a lower bound for $\mu_2$ we will define a trial function based on the Neumann eigenfunction, without knowing its exact form, and use it as a trial function for a mixed eigenvalue problem. Note that the eigenfunction of $\mu_2$ has exactly two nodal domains, by Courant's nodal domain theorem ([8, Sec. V.5, VI.6]) and orthogonality to the first, constant eigenfunction. Hence the closure of at least one of these nodal domains must have empty intersection with the interior of one of the sides (a nodal line might end in a vertex, but the eigenfunction must have fixed sign on at least one side). Let us call this side D and consider $\lambda_1^D$. Let u be the eigenfunction of $\mu_2$ restricted to the nodal domain not intersecting D. Extend u with 0 to the whole triangle T. We get a valid trial function for $\lambda_1^D$. Hence
$$\min\{\lambda_1^S, \lambda_1^M, \lambda_1^L\} \le \lambda_1^D < \mu_2.$$
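Returning briefly to the computation in Section 3.3: the algebra behind the comparison of (4) and (5) is mechanical and can be confirmed numerically. The difference of the two bounds should match the factored expression displayed above, whose quadratic factor is negative on (0, 1). The check below is mine, not the paper's.

```python
import numpy as np

# Verify the algebra behind mu_2 < lam_1^L: bound (5) minus bound (4)
# equals the factored expression, whose quadratic factor is negative on (0,1).
pi2 = np.pi ** 2

def upper(b):       # bound (5) on mu_2
    return ((3 * pi2 * ((b - 1) ** 2 + 2) * (b ** 2 + 1)
             - 64 * (b - 1) ** 2 * (b + 1))
            / (3 * b ** 2 * ((b - 1) ** 2 + 4)))

def lower(b):       # Hooker-Protter bound (4) on lam_1^L
    return pi2 * (1 + b) ** 2 / (4 * b ** 2)

def quad(b):        # the quadratic factor
    return 9 * pi2 * b ** 2 - (256 + 6 * pi2) * b + 21 * pi2 - 256

def factored(b):
    return (b - 1) ** 2 / (12 * b ** 2 * ((b - 1) ** 2 + 4)) * quad(b)

bs = np.linspace(0.05, 0.999, 200)
identity_err = np.max(np.abs(upper(bs) - lower(bs) - factored(bs)))
```

Since the quadratic is convex with positive leading coefficient and negative at both endpoints of (0, 1), negativity on a fine sample grid is convincing evidence for negativity on the whole interval.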
Note that we already proved that for right triangles $\lambda_1^S < \lambda_1^M$, $\mu_2 < \lambda_1^L$, and $\lambda_1^M < \mu_2$ if and only if the smallest angle α > π/6. Hence for right triangles the minimum can be replaced by $\lambda_1^S$, or even by $\lambda_1^M$ if α > π/6. Note also that the result of this section generalizes to arbitrary polygons.

Lemma 3.7. The smallest nonzero Neumann eigenvalue of a polygon with 2n+1 or 2n+2 sides is bounded below by the minimum of all mixed Dirichlet-Neumann eigenvalues with the Dirichlet condition applied to at least n consecutive sides. Furthermore, for an arbitrary domain, the Neumann eigenvalue is bounded below by the infimum over all mixed eigenvalue problems with half of the boundary length having the Dirichlet condition applied to it.

As in the previous section, $\lambda_1^{MS}$ equals $\lambda_1$ of a kite built from two triangles. The Neumann eigenfunction for $\mu_2$ extended to the kite gives a symmetric eigenfunction of the kite. Given that $\mu_2$ and $\mu_3$ of the kite together can have at most one antisymmetric mode (see [28, Lemma 2.1]), we conclude that $\mu_2$ of the triangle is no larger than $\mu_3$ of the kite. Levine and Weinberger [19] proved that $\lambda_1 \ge \mu_3$ for any convex polygon, including kites, giving us the required inequality.

3.5. Proof of $\lambda_1^{LS} < \lambda_1^{LM}$. Symmetrization of isosceles triangles. To prove this inequality we will use a symmetrization technique called continuous Steiner symmetrization, introduced by Pólya and Szegö [25, Note B], and studied by Solynin [29, 31] and Brock [5]. The author already used this technique for bounding Dirichlet eigenvalues of triangles in [27]; see Section 3.2 of the last reference for a detailed explanation.

FIGURE 2. Acute isosceles triangle (thick red line) and obtuse isosceles triangle (thin black line) generated by the same right triangle (their intersection). Two cases of continuous Steiner symmetrization based on the shape of the acute isosceles triangle: subequilateral on the left, superequilateral on the right.
The most important feature of the transformation is that if one can map one domain to another using it, then the latter domain has a smaller Dirichlet eigenvalue. Note that mirroring a right triangle along the middle side shows that $\lambda_1^{LS}$ of the triangle equals $\lambda_1$ of an acute isosceles triangle. Similarly, $\lambda_1^{LM}$ equals $\lambda_1$ of an obtuse isosceles triangle. We need to show that the acute isosceles triangle has the smaller Dirichlet eigenvalue. Figure 2 shows both isosceles triangles.

Position the isosceles triangles as on the figure, and perform the continuous Steiner symmetrization with respect to the line perpendicular to the common side. If the acute isosceles triangle is subequilateral (the vertical side is the shortest; left picture on Figure 2), then before we fully symmetrize the obtuse triangle we will find the acute one. The arrow on the figure shows how far we should continuously symmetrize. Therefore the acute isosceles triangle has the smaller eigenvalue. If the acute isosceles triangle is superequilateral (the vertical side is the longest; right picture on Figure 2), then we first reflect the acute triangle across the symmetrization line, then perform the continuous Steiner symmetrization. Again, we get that the acute isosceles triangle has the smaller eigenvalue.

Note that this case seems similar to Section 3.1. However, here we get to use a symmetrization technique due to the Dirichlet boundary, while in the other section we had to use a less powerful, but more broadly applicable, unknown trial function method.

3.6. For arbitrary triangles: $\lambda_1^{MS} < \lambda_1^{LS} < \lambda_1^{LM}$. Polarization with mixed boundary conditions. For this inequality we use another symmetrization technique called polarization. It was used by Dubinin [10], Brock and Solynin [6, 30, 7, 31], Draghici [9], and the author [27] to study various aspects of spectral and potential theory of the Laplacian.
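The analytic heart of polarization, that it preserves the L² norm while not increasing the Dirichlet energy, already appears in a discrete one-dimensional model. The toy construction below is a standard analogue for illustration, not the paper's two-dimensional argument; it relies on the pointwise inequality |max(a,b) − max(c,d)|² + |min(a,b) − min(c,d)|² ≤ |a−c|² + |b−d|².

```python
import numpy as np

# Discrete 1D polarization: on a grid symmetric about 0, put max(f(x), f(-x))
# on the right half and min(f(x), f(-x)) on the left half.  The multiset of
# values is unchanged (so the l2 norm is preserved), and the pointwise bound
# |max(a,b)-max(c,d)|^2 + |min(a,b)-min(c,d)|^2 <= |a-c|^2 + |b-d|^2
# makes the discrete Dirichlet energy non-increasing.
rng = np.random.default_rng(0)
n = 201                               # odd length, center at index n // 2
f = rng.standard_normal(n)
g = f[::-1]                           # samples of f(-x)

pol = np.where(np.arange(n) >= n // 2, np.maximum(f, g), np.minimum(f, g))

norm_before, norm_after = np.sum(f ** 2), np.sum(pol ** 2)
energy_before = np.sum(np.diff(f) ** 2)
energy_after = np.sum(np.diff(pol) ** 2)
```

Running this with any sample confirms both properties; in the continuous setting the same two facts are what make polarization a valid comparison tool for the lowest eigenvalue.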
As with other kinds of symmetrization, if one can map a domain to some other domain, then the latter has a smaller eigenvalue. Polarization involves a construction of a test function for the lowest Dirichlet eigenvalue of the transformed domain from the nonnegative eigenfunction of the original domain. We choose to deemphasize the geometric transformation involved and focus on the transplanted eigenfunction. In fact, we transform a triangle into itself, so that the boundary conditions change in the way we need for the proof. Note that, unlike in the other applications of polarization mentioned above, we apply it to mixed boundary conditions. We showed in Lemma 2.1 that the lowest mixed eigenvalue is simple and has a nonnegative eigenfunction. We will use this eigenfunction to create an eigenfunction on the transformed domain.

Let T be a triangle. We apply the Dirichlet condition on two sides. Without loss of generality let us assume we do this on L and S (see the left picture on Figure 3). Let the eigenfunction for $\lambda_1^{LS}$ equal u, v and w on the parts of the domain shown on the figure. Let $\bar u$ and $\bar v$ be the symmetric extensions of u and v along the bisector of their common angle (dotted line). We rearrange the parts to fit the dashed triangle, as on the right picture of the same figure.

We need to check that the rearranged trial function is continuous on the blue triangle, and that it satisfies the Dirichlet conditions on the correct sides. It is crucial in this step that u and v are nonnegative. On the dotted line $\bar u = u = v = \bar v$, due to the continuity of the original eigenfunction. On the dashed line $\max(\bar u, \bar v) = \bar v$, since u satisfies the Dirichlet condition there. Hence the test function is continuous on the dashed line due to the continuity of the original eigenfunction on the interface of v and w. On the long sloped blue side of the triangle $\min(\bar u, \bar v) = \bar v = 0$. On the part of the short sloped side to the right of the dashed line we have w = 0. Finally, the part to the right satisfies $\min(\bar u, \bar v) = \bar u = 0$.
Therefore the trial function satisfies the Dirichlet boundary condition on the middle and short sides of the blue right triangle. We polarized the red right triangle into the blue right triangle (same shape), but the Dirichlet conditions moved from LS to MS, as we needed. In fact, there is an additional part of the third side with the Dirichlet condition applied, ensuring strict inequality in our result.

The only assumptions we needed in the construction are that |OB| > |OA| and the Dirichlet condition on AB. The same conditions can be enforced in the comparison of $\lambda_1^{LM}$ and $\lambda_1^{LS}$.

4. PROOF OF COROLLARY 1.3

Four copies of the same right triangle can be used to build a rhombus. Let R denote the rhombus, and T the right triangle that can be used to build R (see Figure 4). We will show that
$$\mu_2(R) = \lambda_1^S(T), \qquad \mu_3(R) = \lambda_1^M(T), \qquad \mu_4(R) = \mu_2(T), \qquad \lambda_2(R) = \lambda_1^{LS}(T).$$
The order of the eigenfunctions claimed in the corollary follows from the order of the eigenvalues for the triangle, Theorem 1.1. We need to show that there are no other eigenfunctions intertwined with the ones listed.

All eigenfunctions can be taken symmetric or antisymmetric with respect to each diagonal. By [28, Lemma 2.1], the eigenspace S of $\mu_2(R)$ and $\mu_3(R)$ can contain at most one eigenfunction antisymmetric with respect to a given diagonal. Therefore, if there are more than 2 eigenfunctions in S, the extra ones must be doubly symmetric. But the lowest doubly symmetric mode equals $\mu_2(T)$ and it is larger than $\lambda_1^M$ and $\lambda_1^S$ (these two eigenvalues generate antisymmetric eigenfunctions on R). Therefore $\mu_2(R)$ and $\mu_3(R)$ are simple.

Suppose an antisymmetric mode belongs to $\mu_4(R)$. Then it belongs to one of the following eigenvalues on T: $\lambda_1^{MS}$, $\lambda_k^S$ or $\lambda_k^M$ with k ≥ 2. But these are larger than $\mu_2(T)$. Hence $\mu_4(R)$ consists of only doubly symmetric modes. On the other hand, $\mu_2(T)$ is simple, hence $\mu_4(R)$ is also simple. Finally, $\lambda_1(R) = \lambda_1^L > \mu_2(T) = \mu_4(R)$.

Remark 4.1.
If 2α = π/3 then the rhombus has an antisymmetric $\mu_2$, but then there is a double eigenvalue $\mu_3$ which equals $\mu_2$ of the equilateral triangle. Hence the above result fails for α ≤ π/6. When α < π/6, an argument involving subequilateral triangles shows that $\mu_3$ is doubly symmetric. When α is very small, the mode antisymmetric with respect to the long diagonal may have arbitrarily high index.

Remark 4.2. Numerical results suggest that the eigenfunction for $\mu_5$ is either doubly antisymmetric for nearly square rhombi (same as $\lambda_1^{MS}$), or antisymmetric along the short diagonal with one more nodal line in each half (same as $\lambda_2^S$). The second doubly symmetric mode is always larger than the latter, but can be smaller than the former. The eigenfunction for $\lambda_3$ is either antisymmetric with respect to the long diagonal, or doubly symmetric. The numerical experiments suggest that the first case holds.

Remark 4.3. Pütter [26] showed that the nodal line of the second Neumann eigenfunction for certain doubly symmetric domains lies on the shorter axis of symmetry. However, rhombi do not satisfy the conditions required for these domains.

FIGURE 1. Obtuse isosceles triangle O(β) = ABC and acute isosceles triangle A(α) = ADC.

FIGURE 3. A triangle with Dirichlet condition on two sides (OA and AB), and the same triangle reflected along the bisector of the angle AOB (with Dirichlet condition on the blue lines). The eigenfunction from the left picture can be rearranged into a test function on the right picture, preserving boundary conditions, as long as |OB| < |OA|.

FIGURE 4. Neumann eigenfunctions for nearly square rhombi (red/solid lines - antisymmetry/nodal line, blue/dashed lines - symmetry). Note that $\mu_2$, $\mu_3$ and $\lambda_2$ correspond to mixed eigenvalues on a right triangle, while $\mu_4$ corresponds to the Neumann mode on the same triangle (the position of the nodal arcs for $\mu_4$ is based on numerical computations).

Remark 3.1. All eigenvalues of the right isosceles triangle can be explicitly calculated using eigenfunctions of the square. Obviously S = M in this case, hence some eigenvalue inequalities from Theorem 1.1 become obvious equalities. Furthermore $\mu_2 = \lambda_1^L$, as can be seen by taking two orthogonal second Neumann eigenfunctions of the unit square with diagonal nodal lines: one corresponds to $\mu_2$ on the right triangle, the other to $\lambda_1^L$.

Remark 3.2. Similarly, some of the mixed eigenvalues of the half-of-equilateral triangle can be calculated explicitly using eigenfunctions of the equilateral triangle. In particular $\lambda_1^M = \mu_2$, since the corresponding equilateral triangle has a double second Neumann eigenvalue. On the other hand, any mixed case that leads to a mixed case on the equilateral triangle cannot be explicitly calculated. In particular, the value of $\lambda_1^{LS}$ on the half-of-equilateral triangle corresponds to the equilateral triangle with Dirichlet condition on two sides. The eigenfunction is not trigonometric (unlike all the other known cases), and to the best of our knowledge there is no closed formula for the eigenvalue.

Remark 3.3. Note that for the trapezium with vertices (−3, 0), (3, 0), (3, 2) and (0, 2), imposing the Dirichlet condition on the sloped side leads to a smaller eigenvalue than imposing it on the top (numerically).

3.1. For nonisosceles right triangles: $\lambda_1^S < \lambda_1^M$. Unknown trial function method for isosceles triangles. In this subsection O(β) is an obtuse isosceles triangle with equal sides of length 1 and aperture angle 2β, with vertices A, B, C equal to (0, h), $(\pm\sqrt{1-h^2}, 0)$, respectively. Let A(α) be an acute isosceles triangle with vertices A, D, C equal to (0, ±h), $(\sqrt{1-h^2}, 0)$ (aperture angle 2α). See Figure 1 for both triangles. Finally, their intersection is the right triangle T = AEC. Note that h and both angles are related by h = sin α = cos β, and β > π/4.
REFERENCES

[1] R. Atar and K. Burdzy, On Neumann eigenfunctions in lip domains, J. Amer. Math. Soc. 17 (2004), no. 2, 243-265 (electronic). MR2051611
[2] C. Bandle, Isoperimetric inequalities and applications, Monographs and Studies in Mathematics, vol. 7, Pitman (Advanced Publishing Program), Boston, Mass.-London, 1980. MR572958
[3] R. Bañuelos and K. Burdzy, On the "hot spots" conjecture of J. Rauch, J. Funct. Anal. 164 (1999), no. 1, 1-33. MR1694534
[4] P. Blanchard and E. Brüning, Variational methods in mathematical physics. A unified approach, Texts and Monographs in Physics, Springer-Verlag, Berlin, 1992. Translated from the German by Gillian M. Hayes. MR1230382
[5] F. Brock, Continuous Steiner-symmetrization, Math. Nachr. 172 (1995), 25-48. MR1330619
[6] F. Brock, Continuous polarization and symmetry of solutions of variational problems with potentials, Calculus of variations, applications and computations (Pont-à-Mousson, 1994), Pitman Res. Notes Math. Ser., vol. 326, Longman Sci. Tech., Harlow, 1995, pp. 25-34. MR1419331
[7] F. Brock and A. Y. Solynin, An approach to symmetrization via polarization, Trans. Amer. Math. Soc. 352 (2000), no. 4, 1759-1796. MR1695019
[8] R. Courant and D. Hilbert, Methods of mathematical physics. Vol. I, Interscience Publishers, Inc., New York, N.Y., 1953. MR0065391
[9] C. Draghici, Polarization and rearrangement inequalities for multiple integrals, Thesis (Ph.D.), Washington University in St. Louis, ProQuest LLC, Ann Arbor, MI, 2003. MR2705198
[10] V. N. Dubinin, Capacities and geometric transformations of subsets in n-space, Geom. Funct. Anal. 3 (1993), no. 4, 342-369. MR1223435
[11] N. Filonov, On an inequality for the eigenvalues of the Dirichlet and Neumann problems for the Laplace operator, Algebra i Analiz 16 (2004), no. 2, 172-176. MR2068346
[12] L. Friedlander, Some inequalities between Dirichlet and Neumann eigenvalues, Arch. Rational Mech. Anal. 116 (1991), no. 2, 153-160. MR1143438
[13] D. S. Grebenkov and B.-T. Nguyen, Geometrical structure of Laplacian eigenfunctions, SIAM Rev. 55 (2013), no. 4, 601-667. MR3124880
[14] W. Hooker and M. H. Protter, Bounds for the first eigenvalue of a rhombic membrane, J. Math. and Phys. 39 (1960/1961), 18-34. MR0127610
[15] D. Jerison and N. Nadirashvili, The "hot spots" conjecture for domains with two axes of symmetry, J. Amer. Math. Soc. 13 (2000), no. 4, 741-772. MR1775736
[16] R. S. Laugesen, Spectral theory of partial differential equations - lecture notes, ArXiv:1203.2344.
[17] R. S. Laugesen and B. A. Siudeja, Minimizing Neumann fundamental tones of triangles: an optimal Poincaré inequality, J. Differential Equations 249 (2010), no. 1, 118-135. MR2644129
[18] R. S. Laugesen and B. A. Siudeja, Dirichlet eigenvalue sums on triangles are minimal for equilaterals, Comm. Anal. Geom. 19 (2011), no. 5, 855-885. MR2886710
[19] H. A. Levine and H. F. Weinberger, Inequalities between Dirichlet and Neumann eigenvalues, Arch. Rational Mech. Anal. 94 (1986), no. 3, 193-208. MR846060
[20] Y. Miyamoto, A planar convex domain with many isolated "hot spots" on the boundary, Jpn. J. Ind. Appl. Math. 30 (2013), no. 1, 145-164. MR3022811
[21] Y. Miyamoto, The "hot spots" conjecture for a certain class of planar convex domains, J. Math. Phys. 50 (2009), no. 10, 103530, 7. MR2572703
[22] N. Nigam, B. Siudeja and B. Young, Nearly radial Neumann eigenfunctions on symmetric domains, preprint.
[23] C. Nitsch, On the first Dirichlet Laplacian eigenvalue of regular polygons, Kodai Math. J. 37 (2014), no. 3, 595-607. MR3273886
[24] L. E. Payne, Inequalities for eigenvalues of membranes and plates, J. Rational Mech. Anal. 4 (1955), 517-529. MR0070834
[25] G. Pólya and G. Szegö, Isoperimetric Inequalities in Mathematical Physics, Annals of Mathematics Studies, no. 27, Princeton University Press, Princeton, N. J., 1951. MR0043486
[26] R. Pütter, On the nodal lines of second eigenfunctions of the free membrane problem, Appl. Anal. 42 (1991), no. 3-4, 199-207. MR1124959
[27] B. Siudeja, Isoperimetric inequalities for eigenvalues of triangles, Indiana Univ. Math. J. 59 (2010), no. 3, 1097-1120. MR2779073
[28] B. Siudeja, On the hot spots conjecture for acute triangles, ArXiv:1308.3005.
[29] A. Y. Solynin, Continuous symmetrization of sets, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 185 (1990), Anal. Teor. Chisel i Teor. Funktsii. 10, 125-139, 186. MR1097593
[30] A. Y. Solynin, Polarization and functional inequalities, Algebra i Analiz 8 (1996), no. 6, 148-185. MR1458141
[31] A. Y. Solynin, Continuous symmetrization via polarization, Algebra i Analiz 24 (2012), no. 1, 157-222. MR3013297
[]
[ "Reconstructing Nonlinear Stochastic Bias from Velocity Space Distortions", "Reconstructing Nonlinear Stochastic Bias from Velocity Space Distortions" ]
[ "Ue-Li Pen \nHarvard Society of Fellows\nHarvard-Smithsonian Center for Astrophysics\n\n" ]
[ "Harvard Society of Fellows\nHarvard-Smithsonian Center for Astrophysics\n" ]
[]
We propose a strategy to measure the dark matter power spectrum using minimal assumptions about the galaxy distribution and the galaxy-dark matter cross-correlations. We argue that on large scales the central limit theorem generically assures Gaussianity of each smoothed density field, but not coherence. Asymptotically, the only surviving parameters on a given scale are galaxy variance σ, bias b = Ω .6 /β and the galaxy-dark matter correlation coefficient r. These can all be determined by measuring the quadrupole and octupole velocity distortions in the power spectrum. Measuring them simultaneously may restore consistency between all β determinations independent of galaxy type.The leading deviations from Gaussianity are conveniently parameterized by an Edgeworth expansion. In the mildly non-linear regime, two additional parameters describe the full picture: the skewness parameter s and non-linear bias b 2 . They can both be determined from the measured skewness combined with second order perturbation theory or from an N-body simulation. By measuring the redshift distortion of the skewness, one can measure the density parameter Ω with minimal assumptions about the galaxy formation process. This formalism also provides a convenient parametrization to quantify statistical galaxy formation properties.
10.1086/306098
[ "https://arxiv.org/pdf/astro-ph/9711180v1.pdf" ]
15,507,566
astro-ph/9711180
4fac370a49fdebbdc4e287342ec01d653828355a
Reconstructing Nonlinear Stochastic Bias from Velocity Space Distortions

arXiv:astro-ph/9711180v1 17 Nov 1997

Ue-Li Pen
Harvard Society of Fellows
Harvard-Smithsonian Center for Astrophysics

Introduction

The measurement of the distribution of matter in the universe has been one of the frontier goals of modern cosmology. The correlation of galaxies has been measured in several surveys and is known to significant accuracy. The abundance of data has led to many theoretical challenges, especially for flat Cold Dark Matter (CDM) cosmologies.
In the simplest models, galaxies are considered an unbiased tracer of the mass. Several different measurements of velocities then allow us to measure the density of the total matter, using either cluster mass-to-light ratios or pairwise velocities (Peebles 1993). Under these assumptions, one obtains values of the density parameter Ω₀ ∼ 0.2. It is known, however, that galaxies of different types correlate in different ways (Strauss and Willick 1995). Thus, not all galaxies can simultaneously trace the mass, and it appears plausible that no galaxy type is a perfect tracer of mass.

At this point, galaxy formation is not completely understood. Simulations of galaxy formation suggest that the galaxy formation process is stochastic and non-linear (Cen and Ostriker 1992). This complicates the derivation of Ω₀ from dynamical measurements. Peculiar velocity fields in principle allow one to measure the mass fluctuation spectrum (Strauss and Willick 1995), but even here one must assume that statistical properties of galaxies, such as the Tully-Fisher relation, do not depend on the local density of matter.

Assumptions need to be applied to relate the distribution of visible galaxies to that of the total matter. The most popular of relations has been to assume linear biasing, where the power spectrum of galaxies P_g(k) is related to the matter power spectrum P(k) by P_g(k) = b²P(k), motivated by the peak biasing paradigm (Kaiser 1984). The gravitational effects of dark matter lead to velocity fields which amplify the redshift space power spectrum in the linear regime. By measuring the distortions and imposing a stronger assumption δ_g = bδ, a parameter β = Ω^0.6/b can be constrained (Kaiser 1987). Recent attempts to measure these have resulted in a confusing picture, wherein the results depend strongly on the galaxy types (Hamilton 1997), scale (Dekel 1997) and surveys.
A potential explanation for this state of confusion is that the linear galaxy biasing model is too simplistic. It has been proposed that galaxy formation may be non-local (Heyl et al. 1995). A general model assuming only that galaxy formation is a stochastic process has been proposed by Dekel and Lahav (1997). Unfortunately, the most general stochastic distribution introduces several new free functions, which make an unambiguous measurement of the matter distribution seemingly impossible. In principle, higher order statistics (Fry 1994) can be used to break some of the degeneracies, but the application of the full theory is still in its infancy. Nevertheless, the fact that the correlation properties are distorted in redshift space tells us that gravitational interactions are at work. The challenge is to translate these measured distortions into conventional cosmological parameters. An added complication arises from the fact that the redshift distortions are more accurately measured in the quasi-linear regime, where non-linear corrections are believed to be important (Hamilton 1997).

If we are given complete freedom to create galaxies with any distribution we please, and to allow these galaxies to have any form of correlation or anti-correlation with the dark matter density, can we still say something about the dark matter distribution? Can we invoke more observables to constrain the exhaustive class of galaxy distribution models? In this paper we present the framework for a formalism which allows us to parameterize all galaxy distribution freedom into a simple unified picture.

In section 2 we begin by constructing a hierarchy of approximations. The central limit theorem argues that the density field of both the galaxy distribution and of the matter distribution will tend to be Gaussian when smoothed on sufficiently large scales. Their correlation coefficient, however, remains a free parameter.
For general random processes, a bivariate Gaussian distribution will have an arbitrary correlation coefficient −1 ≤ r ≤ 1. The distribution has three free parameters: two variances and a cross-correlation coefficient. We show in section 3 how to determine all three using linear power spectrum distortions.

The first deviation from Gaussianity is characterized by skewness. For a bivariate distribution, one has in general four skewness parameters. We show in section 2 how to reduce this number systematically. Once the power spectrum of the matter has been determined, the skewness of the matter can be computed using either second order perturbation theory (Hivon et al. 1995) or using N-body simulations. Skewness is also subject to redshift distortions, which can be measured. We show how these two additional observables allow one to constrain Ω in section 3.

Individual galaxies may be a biased tracer of the local dark matter velocity field (Carlberg et al. 1990). But it is reasonable to expect that when smoothed on sufficiently large scales, the averaged velocity field will be unbiased. A geometric interpretation of the joint distribution function is given in section 4.

To assure accuracy, any parameters derived using this formalism must be checked against an N-body simulation. Such simulations also allow one to probe further into the non-linear regime. We describe in section 5 how to build mock galaxy catalogs which obey the formal expansion presented in this paper.

Galaxy Distribution

We will analyze the density field smoothed by a tophat window W_R(r) on some scale R, which is zero for r > R and 3/(4πR³) elsewhere. The perturbation variable δ ≡ (ρ − ρ̄)/ρ̄ is convolved to obtain the smoothed density field

δ^R(x) = ∫ d³x′ δ(x′) W_R(|x − x′|).

The galaxy density field will be described with a subscript g, and we similarly derive a smoothed galaxy density field δ_g^R. Only the galaxy field is directly observable.
We do not have an exact theory about how and where the galaxies formed, nor how they relate to the distribution of the dark matter. The smoothed galaxy density field δ_g^R(x) could be a function of not only δ^R(x), but also of its derivative, and possibly long range influences. Even non-gravitational effects may play a role. Quasars exhibit a proximity effect, which may influence galaxy formation. If we consider all these variables hidden and unknown, we must prescribe the density of the galaxies δ_g^R as an effectively stochastic distribution relative to the dark matter δ^R.

In the most general galaxy distribution, we need to specify the joint probability distribution function (PDF) P(δ_g^R, δ^R). At lowest order, we will approximate these to be Gaussians with some finite covariance

P(δ_g^R, δ^R) = [√(ac − b²)/(2π)] exp[−a(δ_g^R)²/2 + b δ_g^R δ^R − c(δ^R)²/2].  (1)

The parameters a, b, c are related to the traditional quantities by a change of variables: the variance of the matter σ² = ⟨(δ^R)²⟩ = ∫(δ^R)² P dδ^R dδ_g^R = a/(ac − b²), and that of the galaxies σ_g² = c/(ac − b²). The third free parameter is the covariance, given by the correlation coefficient r = ⟨δ_g^R δ^R⟩/(σ_g σ) = b/√(ac). The traditional bias is defined as b₁ = σ_g/σ = √(c/a). The power spectrum of the galaxies is then related to the power spectrum of the dark matter by P_g(k) = b₁² P(k).

To simplify the notation, we will from now on implicitly assume that all fields are smoothed at scale R unless otherwise specified, and drop the superscript R on all variables. The analysis of this paper relies on Equation (1) being a good approximation to the distribution of galaxies and dark matter when smoothed on large scales. A sufficient condition for this to hold is if the central limit theorem applies.
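As a concrete check, the parameter relations below Equation (1) follow from inverting the precision matrix of the bivariate Gaussian. This is a minimal sketch with made-up values of a, b, c (not taken from the paper):

```python
import numpy as np

# The exponent of Eq. (1) is -(a*dg^2 - 2*b*dg*d + c*d^2)/2, so the precision
# matrix in (delta_g, delta) order is [[a, -b], [-b, c]].
a, b, c = 1.0, 0.6, 1.2          # illustrative values only (need ac > b^2)
prec = np.array([[a, -b], [-b, c]])
cov = np.linalg.inv(prec)        # covariance matrix of (delta_g, delta)

sigma_g2 = cov[0, 0]             # galaxy variance
sigma2 = cov[1, 1]               # matter variance
r = cov[0, 1] / np.sqrt(sigma_g2 * sigma2)

# The closed forms quoted in the text:
assert np.isclose(sigma_g2, c / (a*c - b*b))
assert np.isclose(sigma2, a / (a*c - b*b))
assert np.isclose(r, b / np.sqrt(a*c))
# Traditional bias b1 = sigma_g/sigma = sqrt(c/a)
assert np.isclose(np.sqrt(sigma_g2 / sigma2), np.sqrt(c / a))
```

The normalization √(ac − b²)/(2π) is then just (2π)⁻¹√det of the same precision matrix.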
Since we are considering the statistics of density fields averaged over some region, the central limit theorem will generally apply if the smoothing region is larger than the scale of non-locality of galaxy formation or non-linear gravitational clustering. There are some notable exceptions, however.

1. The dark matter distribution may be non-Gaussian even when the fluctuations are linear. This might be true, for example, in topological defect theories of structure formation (Gooding et al. 1992), where the smoothed density field on any scale may be non-Gaussian. Such theories present great challenges to many attempts to measure cosmological parameters, including for example cosmic microwave background measurements (Pen, Seljak and Turok 1997).

2. The galaxy fluctuations could depend non-linearly on very large scale effects, for example external gravitational shear (van de Weygaert and Babul 1994). This could avoid the central limit theorem due to non-locality.

Even if mild non-Gaussianity is present, we will argue below that the problem remains tractable. Conversely, there is every reason to believe that on small scales, the distributions of both the galaxy and the dark matter fields are significantly non-Gaussian. We will show in section 3 that mild non-Gaussianity actually allows us to break the degeneracy between the bias factor b₁ and Ω.

We now turn to the next moment of the distribution, the skewness ⟨δ³⟩/σ³. In principle one needs to specify four such independent moments ⟨δ^{3−i} δ_g^i⟩ for 0 ≤ i ≤ 3. A coordinate transformation simplifies Equation (1) if we define

√(2a) δ_g ≡ (u + v)/√(1 − r),  √(2c) δ ≡ (u − v)/√(1 − r),  and  w² ≡ (1 − r)/(1 + r).

u is the variable with unit variance along the joint distribution of the galaxies and dark matter, while v has variance w² and measures their mutual deviation. In this rotated frame, u and v are uncorrelated. In order to model all relevant terms, we apply a general Edgeworth expansion about the Gaussian (1).
We recall (Kim and Strauss 1997) that the coefficients of the two first order terms u, v are zero since the mean is by definition 0. The three second moments are absorbed into the definitions of u, v. In principle one needs four third order Hermite polynomials to describe the joint distribution self-consistently at the next order. But we can reasonably assume that the galaxies are positively correlated with the dark matter distribution, so w ≪ 1. In this case, the third order terms can be rank ordered in powers of w as u³, u²v, uv², v³. As we will see below, it is necessary to retain the first two terms to model second order perturbation theory. We then neglect the last two terms because they depend on higher powers of the small parameter w, and disappear completely in the limit that biasing is deterministic, r = 1. The Edgeworth expansion then gives us the truncated skew distribution

P(u, v) = [1 + (u³ − 3u)s + (u² − 1)v b_s/w²]/(2πw) × exp[−u²/2 − v²/(2w²)].  (2)

The two new coefficients are the joint skewness parameter s, and the second order bias b_s which allows us to adjust the skewness of each distribution independently. The joint PDF will be discussed in more detail in section 4 below.

The Taylor expansion for general biasing introduces a quadratic bias at the same order as second order perturbation theory (Fry 1994):

δ_g = f(δ) = b₁δ + b₂(δ² − σ²) + O(δ³).  (3)

We will show in section 4 that in the stochastic notation, b₂ = 2b_s b₁/σ. We can now compute the basic relations needed for further calculations.
All third order moments are uniquely defined:

⟨δ³⟩ = σ³ [(1 + r)/2]^{3/2} (6s − 6b_s)
⟨δ²δ_g⟩ = σ²σ_g [(1 + r)/2]^{3/2} (6s − 2b_s)
⟨δδ_g²⟩ = σσ_g² [(1 + r)/2]^{3/2} (6s + 2b_s)
⟨δ_g³⟩ = σ_g³ [(1 + r)/2]^{3/2} (6s + 6b_s)  (4)

For an initially Gaussian matter distribution with a power-law power spectrum P(k) = kⁿ, the skewness factor of the evolved matter distribution is given by S₃ ≡ ⟨δ³⟩/σ⁴ = 34/7 − (3 + n) from second order perturbation theory (Juskiewicz et al. 1993), which is exact for Ω = 1 and depends only very weakly on Ω (Bouchet et al. 1992). We obtain one relation

[(1 + r)/2]^{3/2} (6s − 6b_s) = σ[34/7 − (3 + n)]  (5)

for the 5 unknowns σ, σ_g, r, s, b_s. We will use five observational values to determine the remaining four galaxy distribution parameters, as well as the cosmological parameter Ω. These determinations will rely on the comparison between redshift space and real space correlations. The real space correlation contains valuable information about the galaxy distribution itself, while the redshift space distortions are a consequence of the dynamics of the dark matter.

Redshift space distortions

We first consider the measurement of redshift space distortions of the variance or power spectrum in the presence of stochastic biasing. At first order, the two second order terms s, b_s can be neglected. The redshift space density of galaxies is affected by their velocities through the Jacobian ρ_g(x)dx = ρ_g(x(z))(dx/dz)dz, and linear perturbation theory gives us

δ_g(z) = δ_g(x) + δ(x) Ω^{0.6} µ²,  (6)

where µ = cos(θ) determines the angle between the wave vector k_z and the line-of-sight (Kaiser 1987) and we have made the distant observer approximation. The power spectrum is the expectation value of the square of the Fourier transform of (6) and results in

P_g(k_z, µ) = P_g(k)(1 + β²µ⁴ + 2rβµ²),  (7)

where β = Ω^{0.6}/b₁ and P_g(k) is the undistorted power spectrum.
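The moment relations of Equation (4) can be verified numerically by integrating the truncated Edgeworth PDF of Equation (2) with Gauss-Hermite quadrature. A minimal sketch with arbitrary illustrative values of s, b_s, r:

```python
import numpy as np

# Gauss-Hermite nodes for the weight exp(-x^2); for a Gaussian of variance V,
# substitute x -> sqrt(2V)*t and divide the weights by sqrt(pi).
t, wt = np.polynomial.hermite.hermgauss(40)

s, bs, r = 0.05, 0.1, 0.8            # illustrative parameter values
w2 = (1.0 - r) / (1.0 + r)

u = np.sqrt(2.0) * t                  # nodes for u ~ N(0, 1)
v = np.sqrt(2.0 * w2) * t             # nodes for v ~ N(0, w^2)
U, V = np.meshgrid(u, v, indexing="ij")
W = np.outer(wt, wt) / np.pi          # product quadrature weights

# Truncated Edgeworth correction factor from Eq. (2)
edge = 1.0 + (U**3 - 3.0*U)*s + (U**2 - 1.0)*V*bs/w2
mom = lambda f: np.sum(f * edge * W)

# delta_g is proportional to (u+v) and delta to (u-v); the positive
# normalizations cancel in the ratios <X^3>/<X^2>^(3/2), which Eq. (4)
# predicts to be [(1+r)/2]^(3/2) times (6s -/+ 6bs), etc.
pref = ((1.0 + r) / 2.0)**1.5
assert abs(mom((U-V)**3)/mom((U-V)**2)**1.5 - pref*(6*s - 6*bs)) < 1e-9
assert abs(mom((U+V)**3)/mom((U+V)**2)**1.5 - pref*(6*s + 6*bs)) < 1e-9
assert abs(mom((U-V)**2*(U+V)) / (mom((U-V)**2)*mom((U+V)**2)**0.5)
           - pref*(6*s - 2*bs)) < 1e-9
```

Since all integrands are low-order polynomials times Gaussians, the 40-point quadrature is exact to machine precision here.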
The Legendre relation

P_l ≡ [(2l + 1)/2] ∫_{−1}^{1} P_g(k_z, µ) P_l(µ) dµ,  (8)

where P_l(µ) is the l-th Legendre polynomial (Hamilton 1997), allows us to obtain moments of the angular dependence of (7). One can in principle measure both β and r by measuring the quadrupole distortion P₂ and the next order distortion P₄ separately. We can then solve for r and β using the following relations:

P₂/P₀ = (4rβ/3 + 4β²/7)/(1 + 2rβ/3 + β²/5)
P₄/P₀ = (8β²/35)/(1 + 2rβ/3 + β²/5).  (9)

With sufficiently large data sets, one can measure r and β as a function of wavelength k. An alternative approach would be to measure P_g from the angular correlation function, after which determination of the monopole P₀/P_g and quadrupole P₂/P_g terms would be sufficient. Peacock (1997) compared P_g derived from the APM angular power spectrum to P₀ to determine β₀ = 0.4 ± 0.12 by setting r = 1. Allowing r to vary will increase the inferred value of β for all such measurements. In this case, the relation between the actual value of β for a given inferred value of β₀ is β = √(β₀² + 2β₀ + r²) − r. A similar increase in β for a given inferred β₀ using r = 1 holds for quadrupole measurements using equation (9). If stochasticity has been neglected, all inferred values of β are only lower bounds.

The skewness can be obtained from the bispectrum B_g(k₁, k₂, k₃) ≡ ⟨δ_g(k₁)δ_g(k₂)δ_g(k₃)⟩. Isotropy requires k₁ + k₂ + k₃ = 0, using which we can compute the third moment

⟨δ_g³⟩ = ∫ [d³k₁ d³k₂/(2π)³] B_g(k₁, k₂, −k₁ − k₂) W_R(k₁) W_R(k₂) W_R(|k₁ + k₂|).  (10)

One can then measure the net skewness by inverting the angular bispectrum to obtain the three dimensional bispectrum, in analogy to the power spectrum from APM (Baugh and Efstathiou 1994). The skewness of the smoothed galaxy field determines the last equation in (4). The skewness of the dark matter field is determined by the variance (5), allowing us to solve for both s and b_s.
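The pair of ratios in Equation (9) can in fact be inverted in closed form for β and r. The inversion algebra below is ours (it simply eliminates the common denominator), not spelled out in the text; the input values are illustrative:

```python
import math

def multipole_ratios(beta, r):
    """Forward model of Eq. (9): l=2 and l=4 to monopole ratios."""
    denom = 1.0 + 2.0*r*beta/3.0 + beta*beta/5.0
    p2 = (4.0*r*beta/3.0 + 4.0*beta*beta/7.0) / denom
    p4 = (8.0*beta*beta/35.0) / denom
    return p2, p4

def invert_ratios(p2, p4):
    """Closed-form inversion of Eq. (9) for (beta, r)."""
    beta2 = 1.0 / ((8.0 - 4.0*p2) / (35.0*p4) + 3.0/35.0)
    beta = math.sqrt(beta2)
    r = beta * (6.0*p2/(35.0*p4) - 3.0/7.0)
    return beta, r

beta, r = 0.5, 0.7                    # illustrative "true" values
b_rec, r_rec = invert_ratios(*multipole_ratios(beta, r))
assert abs(b_rec - beta) < 1e-12 and abs(r_rec - r) < 1e-12
```

In practice one would apply the inversion to measured P₂/P₀ and P₄/P₀ in bins of k, propagating their (correlated) errors.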
Equations (5), (9) and (10) allow us to solve for all parameters of the stochastic non-linear biased galaxy distribution model. The final goal is to break the degeneracy of β = Ω^{0.6}/b to determine Ω and b independently. In redshift surveys, the measured skewness is already distorted by velocity distortions, but is nevertheless readily measurable (Kim and Strauss 1997). The second order perturbation theory calculation of the skewness has recently been completed for both redshift space and real space (Juskiewicz et al. 1993; Hivon et al. 1995). Their basic result was that in the absence of biasing, the skewness factor S₃ (see above) is weakly dependent on Ω, and is very similar in real and redshift space in second order theory. In these calculations, S₃,z and S₃ typically differ by a few percent depending on Ω₀. A similar sized change occurs when the bias parameter is changed. Fortunately, the observations already allow very accurate determinations; for example, Kim and Strauss found S₃ = 2.93 ± 0.09, where the errors are comparable to the expected effect.

The actual redshift space skewness distortions quickly grow significantly larger than the second order predictions. For a self-similar power spectrum with n = −1 and Gaussian filter σ = 0.5, Hivon et al. found S₃ = 3.5 and S₃,z = 2.9 using N-body simulations, which is a very significant effect and many times larger than current observational errors. Second order perturbation theory appears to systematically underestimate the redshift space skewness distortions. The real space skewness in simulations tends to be higher than perturbation theory, while redshift space skewness tends to be lower in simulations. This trend suggests that direct N-body simulations are necessary to quantify the effect of redshift space distortions of skewness. We will examine this strategy in section 5 below.
By measuring the skewness from solely angular correlation information (Gaztañaga and Bernardeau 1997) as well as from the redshift space distribution, we obtain two measures of skewness which can be compared to each other, from which one can solve for Ω. One could also smooth the density field with an anisotropic window function, and compute the dependence of the skewness of the smoothed density field on the alignment of the window, as was done for variance measurements by Bromley (1994). The formalism of Hivon et al. (1995) can then be modified using these anisotropic window functions. We will explore this approach further in section 5 below. Unfortunately, the second order perturbation theory redshift distorted skewness does not have a simple closed form expression, and a numerical triple integral must be evaluated for each specific set of choices of n, Ω, W(k, µ). Details of this procedure are given in Hivon et al. (1995).

Coherent Limit

A finite truncation of the Edgeworth expansion may result in a PDF which is not positive everywhere. For small corrections s ≪ 1, b_s ≪ w², the PDF in Equation (2) remains positive in all regions where the amplitude is still large, and for practical purposes the negative probabilities do not have a significant effect. But it is possible to leave the regime of small corrections. When the corrections become large, the PDF becomes negative where it still has a significant amplitude. We can, however, absorb the coefficient of v into the exponent for small b_s:

P(u, v) = {[1 + (u³ − 3u)s]/√(2π)} e^{−u²/2} × exp{−[v − (u² − 1)b_s]²/(2w²)}/(w√(2π)).  (11)

Equation (11) remains positive for all values of b_s. The right term becomes a Dirac delta function in the limit w → 0:

P(u, v) = {[1 + (u³ − 3u)s]/√(2π)} e^{−u²/2} × δ_D[v − (u² − 1)b_s].  (12)

We then reproduce Equation (3) to leading order for the choices b₁ = rσ_g/σ and b₂ = 2b_s b₁/σ. It is instructive to understand the nature of the two skewness parameters s and b_s graphically.
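The delta-function limit of Equation (12) can be checked directly: pinning v = (u² − 1)b_s and expanding reproduces the quadratic bias of Equation (3), with b₁ = σ_g/σ in this r = 1 limit and b₂ = 2b_s b₁/σ, up to corrections of order b_s². A sketch with arbitrary values:

```python
import numpy as np

# In the coherent limit w -> 0 (r -> 1), delta = sigma*(u - v) and
# delta_g = sigma_g*(u + v), with v pinned to (u^2 - 1)*bs by Eq. (12).
sigma, sigma_g, bs = 1.0, 2.0, 0.01    # illustrative values
u = np.linspace(-2.0, 2.0, 101)
v = (u**2 - 1.0) * bs

delta = sigma * (u - v)
delta_g = sigma_g * (u + v)

# Quadratic bias of Eq. (3): b1 = sigma_g/sigma (r = 1), b2 = 2*bs*b1/sigma
b1 = sigma_g / sigma
b2 = 2.0 * bs * b1 / sigma
pred = b1*delta + b2*(delta**2 - sigma**2)

# Agreement holds to O(bs^2); the residual is ~ 4*sigma_g*bs^2*u*(u^2-1)
assert np.max(np.abs(delta_g - pred)) < 0.01
```

The factor of 2 in b₂ arises because v enters δ and δ_g with opposite signs, so eliminating u from δ doubles the quadratic correction in δ_g.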
In Figure 1 we show the joint PDF. The respective projections onto the galaxy and dark matter PDFs are shown in Figure 2. The projected PDF for the galaxies is

P(δ_g) = {[1 + (δ_g³ − 3δ_g)s_g]/(σ_g√(2π))} exp[−δ_g²/(2σ_g²)],  (13)

where s_g = [(1 + r)/2]^{3/2}(s + b_s), and that for the dark matter is the same with a change in sign of b_s. For real surveys, one can measure the residuals of the galaxy PDF after fitting a third order Edgeworth expansion to determine the accuracy of the fit, with proper account for noise (Kim and Strauss 1997). The same can be done for the dark matter by utilizing the N-body simulations described in the next section.

Constructing Mock Catalogues

Let us now examine how one can build galaxy catalogues from an N-body simulation consistent with this Edgeworth expansion. The purpose of this exercise will be to test specific models against catalogs in the non-linear regimes. We will show a sample construction of a galaxy density field smoothed by a window function which is consistent with the stochastic non-linear biasing described above. We can then compute the likelihood function for the cosmological parameters for any set of observations. The ultimate test would be to recover the same cosmological parameters and dark matter power spectrum using galaxy types which are known to be biased relative to each other.

The first step is to calculate the bias function b₁(k) in Fourier space by comparing the angular correlation function of the survey to that of the simulation. Statistical homogeneity and isotropy require that the bias is a function of the magnitude of the wave-number only. We will take the density field of the simulation in Fourier space and produce the galaxy field by scaling with the bias

δ_g(k) = b₁(|k|) δ_DM(k).  (14)

The mock density field is then convolved with the survey geometry and projected onto an angular power spectrum w(k).
We solve for the bias function b₁(k) by requiring the mock galaxy angular power spectrum to agree with the observed angular power spectrum. This procedure has used no velocity information. By repeating the simulation many times with different random seeds, we can obtain the full distribution of b₁(k).

We have three remaining parameters r, s, b_s which must be solved for using velocity and skewness information. Since the skewness of the dark matter in the simulations is known, we have one constraint from Equation (4), reducing them to two remaining degrees of freedom. We will use four observational quantities to constrain them: two moments of the redshift space variance distortion, and the skewness as well as its distortion. While second order perturbation theory in principle allows us to solve for r, s, b_s and Ω, its validity quickly breaks down as one enters the non-linear regime. We must use N-body simulations at this point to make quantitative comparisons. The problem is now doubly overconstrained, allowing us to solve for two free simulation parameters, for example Ω and the power spectrum shape parameter Γ, by performing a sufficiently large number of N-body simulations (Hatton and Cole 1997).

Velocity space distortions can be measured using anisotropic smoothing windows W_R(µ) (Bromley 1994). The window function proposed by Bromley symmetrizes the distribution and thus destroys skewness information. Consider instead an elliptical top-hat with a major-minor axis ratio of 2:1. We pick a characteristic smoothing scale R and smooth the observation on that scale. The trade-off occurs between picking large R, which smoothes over large volumes and results in distributions which are closer to Gaussian, or small R, which results in a smaller cosmic variance and a stronger non-linear signal, but for which the first three orders of our Edgeworth expansion may be a poor approximation to the true dark matter-galaxy joint distribution function.
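The mode-scaling step of Equation (14) is a one-line operation in Fourier space. In this minimal sketch the density field is a Gaussian stand-in for a simulation output, and the bias function b₁(|k|) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
delta_dm = rng.standard_normal((n, n, n))   # stand-in for a simulation density field

# Fourier transform and build |k| on the rfft grid (grid-frequency units)
dk = np.fft.rfftn(delta_dm)
kx, ky, kz = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n),
                         np.fft.rfftfreq(n), indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

b1 = 2.0 / (1.0 + 5.0*kmag)      # hypothetical smooth bias, function of |k| only
delta_g = np.fft.irfftn(b1 * dk, s=delta_dm.shape)   # Eq. (14)

# Sanity check: the k = 0 mode is amplified by exactly b1(0) = 2
assert abs(delta_g.mean() - 2.0 * delta_dm.mean()) < 1e-10
```

Because b₁ depends only on |k|, the scaled field remains statistically isotropic, as the text requires.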
We first decohere the galaxy density field ρ_g by adding an independent random Gaussian galaxy field ρ_g^N with identical power spectrum, weighted by the correlation coefficient r:

ρ′_g = rρ_g + (1 − r)ρ_g^N.  (15)

We have averaged the result of an N-body simulation with a random field with identical (non-linear) power spectrum. This maintains the shape of the power spectrum, but weakens the degree of correlation between the galaxy and dark matter fields. It is no longer true that ⟨δ_g|δ⟩ = b₁δ, but instead ⟨δ_g|δ⟩ = b₁rδ. Second order bias is added by feeding the field through a quadratic function:

ρ″_g = ρ′_g + (2b_s/σ_g)[(ρ′_g − ρ̄_g)² − b₁²σ_g²].  (16)

Next we distort into velocity space as follows: each N-body particle mass is multiplied by ρ″_g/ρ_DM and projected with its velocity into a redshift coordinate system. The window function is applied in redshift space:

ρ_z(z) = (ρ̄/n̄) Σ_i m_i C[cz/H₀ − (x_i + v_i^z)]
ρ_R(z) = ∫ ρ_z(z′) W_R(|z − z′|, µ) d³z′.  (17)

C(z) is the particle shape, which for Cloud-in-Cell mappings (Hockney and Eastwood 1980) is the same shape as the grid cell. n̄ is the ratio of the number of particles to the number of grid cells, and m_i is the scaled particle mass. x_i is the particle position, and v_i^z is the line-of-sight component of the particle velocity, which affects the radial redshift position.

We now compare the statistics with the observed sample. One computes the variance σ²(µ) = ∫(ρ_R − ρ̄_R)² d³z and decomposes it into multipoles σ²(µ) ∼ σ₀ + σ₂P₂(µ) + σ₄P₄(µ) as in Equation (8), and does the same with the skewness s₃(µ) = ∫(ρ_R − ρ̄_R)³ d³z, where now s₃ ∼ s₀ + s₂P₂(µ). These σ₂, σ₄, s₀, s₂ are then compared with the values obtained from the surveys. A Monte-Carlo array of simulations provides the full likelihood distribution of these variables, allowing us to test the consistency of each model with observations.
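A Monte-Carlo sketch of the decoherence step, Equation (15), checking the quoted conditional-mean scaling ⟨δ_g|δ⟩ = b₁rδ. The fields here are Gaussian stand-ins rather than N-body output, and the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, b1, r = 200_000, 2.0, 0.8
delta = rng.standard_normal(n)             # smoothed dark-matter field
rho_g = b1 * delta                         # fully coherent mock galaxy field
rho_g_rand = b1 * rng.standard_normal(n)   # independent field, identical power
rho_g_mix = r * rho_g + (1.0 - r) * rho_g_rand   # Eq. (15)

# Regression slope of the mixed galaxy field on the matter field
slope = np.cov(rho_g_mix, delta)[0, 1] / np.var(delta)
assert abs(slope - b1 * r) < 0.05          # <delta_g|delta> = b1*r*delta
```

Only the conditional mean is checked here; the weights r and (1 − r) also rescale the overall galaxy variance, which in the full pipeline is refit through b₁(k).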
This model of skew biasing allows us to discuss the systematic errors in the measurement of pairwise velocity dispersions (Guzzo et al. 1997). Since pairwise galaxy velocities are measured in the non-linear regime, the inferred mean galaxy velocities cannot be directly translated into a mean dark matter pairwise velocity. Decoherence and non-linear bias both introduce complex dependences in the conversion from galaxy velocity to dark matter velocity. Guzzo et al. (1997) showed that the one dimensional pairwise velocity varies by galaxy type from 345 to 865 km/sec. We must keep in mind that each galaxy type surely has different biasing and coherence properties. The pairwise velocities are typically measured at a separation of 1/h Mpc, where the density field is strongly non-linear, and the Edgeworth expansion may be a poor approximation to the actual joint galaxy-dark matter distribution. The mock catalog from N-body simulations described above effectively provides a handle to probe the dynamical properties of galaxies at larger separation, allowing us to separate the distribution properties of galaxies from the dynamical aspects of the dark matter.

Conclusions

The general stochastic galaxy biasing problem contains more free parameters than can easily be measured in any galaxy redshift survey. We have shown that using only linear perturbation theory we can determine two parameters, the correlation coefficient r and bias parameter β, using the quadrupole and octupole distortions. This allows a reconstruction of the power spectrum P(k)Ω^{0.6} as well as determination of two galaxy formation parameters. In the plausible scenario that galaxies correlate strongly with the matter distribution, only two additional free parameters s, b_s need to be introduced to quantify the skewness. Second order perturbation theory provides one linear constraint, and observations of the skewness of galaxies determine the second.
By measuring the redshift distortions of skewness we can in principle determine both the true underlying dark matter power spectrum and the density parameter Ω independently. This picture has incorporated both stochastic correlation and second order non-linear bias. We have shown how to extend the Ansatz of the Edgeworth expansion to general problems without relying on linear or second order theory. In an N-body simulation the same approach can be applied to directly compare specific models to observations. This also allows us to probe deeper into non-linear scales.

Fig. 1. — Joint Probability Distribution Function Contours. The parameters for this plot are σ = 1, bias b = 2, correlation coefficient r = 0.8, skewness s = 0.05 and non-linear bias b_s = 0.1. The solid line is the contour at half central probability, while the dotted lines are at 1/4 and 3/4. The axes are in units of the dark matter standard deviation.

Fig. 2. — The projection of Figure 1 onto the galaxies (solid) and dark matter (dashed). s parametrizes their common skewness, while each distribution's skewness is proportional to s ± b_s. The units are in standard deviations of the dark matter. For our choice b = 2 the galaxies have twice the standard deviation of the dark matter.

e-mail: [email protected]

Acknowledgements. I thank Ben Bromley, Uros Seljak and Michael Strauss for helpful discussions. This work was supported by the Harvard Society of Fellows and the Harvard-Smithsonian Center for Astrophysics.

References

Baugh, C.M. and Efstathiou, G. 1994, MNRAS, 267, 323.
Bouchet, F.R., Juszkiewicz, R., Colombi, S. and Pellat, R. 1992, ApJ, 394, L5.
Bromley, B. 1994, ApJ, 423, L81.
Carlberg, R.G., Couchman, H.M.P. and Thomas, P.A. 1990, ApJ, 352, L29.
Cen, R. and Ostriker, J.P. 1992, ApJ, 399, L113.
Dekel, A. 1997, talk presented at the Potsdam meeting "Large Scale Structure: Tracks and Traces".
Dekel, A. and Lahav, O. 1997, in preparation.
Fry, J.N. 1994, Phys. Rev. Lett., 73, 215.
Gaztañaga, E. and Bernardeau, F. 1997, astro-ph/9707095.
Gooding, A.K., Park, C., Spergel, D.N., Turok, N. and Gott, R. III 1992, ApJ, 393, 42.
Guzzo, L., Strauss, M.A., Fisher, K.B., Giovanelli, R. and Haynes, M.P. 1997, astro-ph/9706150.
Hamilton, A.J.S. 1997, in Proceedings of the Ringberg Workshop on Large-Scale Structure, ed. D. Hamilton, astro-ph/9708102.
Hatton, S. and Cole, S. 1997, astro-ph/9707186.
Heyl, J.S., Cole, S., Frenk, C.S. and Navarro, J.F. 1995, MNRAS, 274, 755.
Hivon, E., Bouchet, F.R., Colombi, S. and Juszkiewicz, R. 1995, A&A, 298, 643.
Juszkiewicz, R., Bouchet, F.R. and Colombi, S. 1993, ApJ, 412, L9.
Kaiser, N. 1984, ApJ, 284, L9.
Kaiser, N. 1987, MNRAS, 227, 1.
Kim, R. and Strauss, M.A. 1997, astro-ph/9702144.
Peacock, J.A. 1997, MNRAS, 284, 885.
Peebles, P.J.E. 1993, Principles of Physical Cosmology, Princeton University Press.
Pen, U., Seljak, U. and Turok, N. 1997, Phys. Rev. Lett., 79, 1611.
Strauss, M.A. and Willick, J.A. 1995, Physics Reports, 261, 271.
van de Weygaert, R. and Babul, A. 1994, ApJ, 425, 59.
15 Nov 2010

PERIODICITIES OF T AND Y-SYSTEMS, DILOGARITHM IDENTITIES, AND CLUSTER ALGEBRAS II: TYPES C_r, F_4, AND G_2

REI INOUE, OSAMU IYAMA, BERNHARD KELLER, ATSUO KUNIBA, AND TOMOKI NAKANISHI

R. Inoue: Faculty of Pharmaceutical Sciences, Suzuka University of Medical Science, Suzuka 513-8670, Japan.
O. Iyama, T. Nakanishi: Graduate School of Mathematics, Nagoya University, Nagoya 464-8604, Japan.
B. Keller: UFR de Mathématiques, Institut de Mathématiques de Jussieu, UMR 7586 du CNRS, Université Paris Diderot - Paris 7, Case 7012, 2 place Jussieu, 75251 Paris Cedex 05, France.
A. Kuniba: Institute of Physics, University of Tokyo, Tokyo 153-8902, Japan.

2010 Mathematics Subject Classification: Primary 13F60; Secondary 17B37.

Abstract. We prove the periodicities of the restricted T and Y-systems associated with the quantum affine algebra of type C_r, F_4, and G_2 at any level. We also prove the dilogarithm identities for these Y-systems at any level. Our proof is based on the tropical Y-systems and the categorification of the cluster algebra associated with any skew-symmetric matrix by Plamondon.

1. Introduction

This is the continuation of the paper [IIKKN]. In [IIKKN], we proved the periodicities of the restricted T and Y-systems associated with the quantum affine algebra of type B_r at any level. We also proved the dilogarithm identities for these Y-systems at any level. Our proof was based on the tropical Y-systems and the categorification of the cluster algebra associated with any skew-symmetric matrix by Plamondon [P1, P2]. In this paper, using the same method, we prove the corresponding statements for types C_r, F_4, and G_2, thereby completing all the non-simply laced types. The results are basically parallel to type B_r.
Since the common method and the proofs of the statements for type B_r were described in detail in [IIKKN], in this paper we skip the proofs of most statements and concentrate on presenting the results, with emphasis on the special features of each case. Notably, the tropical Y-system at level 2, which is the core part of the entire method, is quite specific to each case. While we try to make the paper as self-contained as possible, we also try to minimize duplication with [IIKKN]. We therefore ask the reader's patience in referring to the companion paper [IIKKN] for the things which are omitted. In particular, basic definitions for cluster algebras are summarized in [IIKKN, Section 2.1].

The organization of the paper is as follows. In Section 2 we present the main results as well as the T and Y-systems for each type. In Section 3 the results for type C_r are presented; the tropical Y-system at level 2 is the key and is described in detail in Section 3.6. In Section 4 the results for type F_4 are presented. In Section 5 the results for type G_2 are presented. In Section 6 we list the known mutation equivalences of quivers corresponding to the T and Y-systems.

2. Main results

2.1. Restricted T and Y-systems of types C_r, F_4, and G_2. Let X_r be the Dynkin diagram of type C_r, F_4, or G_2 with rank r, and let I = {1, ..., r} be the enumeration of the vertices of X_r. Let h and h^∨ be the Coxeter number and the dual Coxeter number of X_r, respectively:

  X_r :  C_r    F_4   G_2
  h   :  2r     12    6
  h^∨ :  r + 1  9     4          (2.1)

We set numbers t and t_a (a ∈ I) by

  t = 2 (X_r = C_r, F_4),  t = 3 (X_r = G_2);
  t_a = 1 if α_a is a long root,  t_a = t if α_a is a short root.   (2.2)

For a given integer ℓ ≥ 2, we introduce a set of triplets

  I_ℓ = I_ℓ(X_r) := {(a, m, u) | a ∈ I; m = 1, ..., t_aℓ − 1; u ∈ (1/t)Z}.   (2.3)

Definition 2.1 ([KNS]). Fix an integer ℓ ≥ 2.
The level ℓ restricted T-system T_ℓ(X_r) of type X_r (with the unit boundary condition) is the following system of relations for a family of variables T_ℓ = {T^{(a)}_m(u) | (a, m, u) ∈ I_ℓ}, where T^{(0)}_m(u) = T^{(a)}_0(u) = 1, and T^{(a)}_{t_aℓ}(u) = 1 (the unit boundary condition) if they occur in the right-hand sides of the relations. (Here and throughout the paper, 2m (resp. 2m + 1) in the left-hand sides, for example, represents the elements 2, 4, ... (resp. 1, 3, ...).)

For X_r = C_r,

  T^{(a)}_m(u − 1/2) T^{(a)}_m(u + 1/2) = T^{(a)}_{m−1}(u) T^{(a)}_{m+1}(u) + T^{(a−1)}_m(u) T^{(a+1)}_m(u)   (1 ≤ a ≤ r − 2),
  T^{(r−1)}_{2m}(u − 1/2) T^{(r−1)}_{2m}(u + 1/2) = T^{(r−1)}_{2m−1}(u) T^{(r−1)}_{2m+1}(u) + T^{(r−2)}_{2m}(u) T^{(r)}_m(u − 1/2) T^{(r)}_m(u + 1/2),
  T^{(r−1)}_{2m+1}(u − 1/2) T^{(r−1)}_{2m+1}(u + 1/2) = T^{(r−1)}_{2m}(u) T^{(r−1)}_{2m+2}(u) + T^{(r−2)}_{2m+1}(u) T^{(r)}_m(u) T^{(r)}_{m+1}(u),
  T^{(r)}_m(u − 1) T^{(r)}_m(u + 1) = T^{(r)}_{m−1}(u) T^{(r)}_{m+1}(u) + T^{(r−1)}_{2m}(u).   (2.4)

For X_r = F_4,

  T^{(1)}_m(u − 1) T^{(1)}_m(u + 1) = T^{(1)}_{m−1}(u) T^{(1)}_{m+1}(u) + T^{(2)}_m(u),
  T^{(2)}_m(u − 1) T^{(2)}_m(u + 1) = T^{(2)}_{m−1}(u) T^{(2)}_{m+1}(u) + T^{(1)}_m(u) T^{(3)}_{2m}(u),
  T^{(3)}_{2m}(u − 1/2) T^{(3)}_{2m}(u + 1/2) = T^{(3)}_{2m−1}(u) T^{(3)}_{2m+1}(u) + T^{(4)}_{2m}(u) T^{(2)}_m(u − 1/2) T^{(2)}_m(u + 1/2),
  T^{(3)}_{2m+1}(u − 1/2) T^{(3)}_{2m+1}(u + 1/2) = T^{(3)}_{2m}(u) T^{(3)}_{2m+2}(u) + T^{(4)}_{2m+1}(u) T^{(2)}_m(u) T^{(2)}_{m+1}(u),
  T^{(4)}_m(u − 1/2) T^{(4)}_m(u + 1/2) = T^{(4)}_{m−1}(u) T^{(4)}_{m+1}(u) + T^{(3)}_m(u).   (2.5)

For X_r = G_2,

  T^{(1)}_m(u − 1) T^{(1)}_m(u + 1) = T^{(1)}_{m−1}(u) T^{(1)}_{m+1}(u) + T^{(2)}_{3m}(u),
  T^{(2)}_{3m}(u − 1/3) T^{(2)}_{3m}(u + 1/3) = T^{(2)}_{3m−1}(u) T^{(2)}_{3m+1}(u) + T^{(1)}_m(u − 2/3) T^{(1)}_m(u) T^{(1)}_m(u + 2/3),
  T^{(2)}_{3m+1}(u − 1/3) T^{(2)}_{3m+1}(u + 1/3) = T^{(2)}_{3m}(u) T^{(2)}_{3m+2}(u) + T^{(1)}_m(u − 1/3) T^{(1)}_m(u + 1/3) T^{(1)}_{m+1}(u),
  T^{(2)}_{3m+2}(u − 1/3) T^{(2)}_{3m+2}(u + 1/3) = T^{(2)}_{3m+1}(u) T^{(2)}_{3m+3}(u) + T^{(1)}_m(u) T^{(1)}_{m+1}(u − 1/3) T^{(1)}_{m+1}(u + 1/3).   (2.6)

Definition 2.2 ([KN]). Fix an integer ℓ ≥ 2. The level ℓ restricted Y-system Y_ℓ(X_r) of type X_r is the following system of relations for a family of variables Y_ℓ = {Y^{(a)}_m(u) | (a, m, u) ∈ I_ℓ}, where Y^{(0)}_m(u) = Y^{(a)}_0(u)^{−1} = Y^{(a)}_{t_aℓ}(u)^{−1} = 0 if they occur in the right-hand sides of the relations.

For X_r = C_r,

  Y^{(a)}_m(u − 1/2) Y^{(a)}_m(u + 1/2) = (1 + Y^{(a−1)}_m(u))(1 + Y^{(a+1)}_m(u)) / [(1 + Y^{(a)}_{m−1}(u)^{−1})(1 + Y^{(a)}_{m+1}(u)^{−1})]   (1 ≤ a ≤ r − 2),
  Y^{(r−1)}_{2m}(u − 1/2) Y^{(r−1)}_{2m}(u + 1/2) = (1 + Y^{(r−2)}_{2m}(u))(1 + Y^{(r)}_m(u)) / [(1 + Y^{(r−1)}_{2m−1}(u)^{−1})(1 + Y^{(r−1)}_{2m+1}(u)^{−1})],
  Y^{(r−1)}_{2m+1}(u − 1/2) Y^{(r−1)}_{2m+1}(u + 1/2) = (1 + Y^{(r−2)}_{2m+1}(u)) / [(1 + Y^{(r−1)}_{2m}(u)^{−1})(1 + Y^{(r−1)}_{2m+2}(u)^{−1})],
  Y^{(r)}_m(u − 1) Y^{(r)}_m(u + 1) = (1 + Y^{(r−1)}_{2m+1}(u))(1 + Y^{(r−1)}_{2m−1}(u))(1 + Y^{(r−1)}_{2m}(u − 1/2))(1 + Y^{(r−1)}_{2m}(u + 1/2)) / [(1 + Y^{(r)}_{m−1}(u)^{−1})(1 + Y^{(r)}_{m+1}(u)^{−1})].   (2.7)

For X_r = F_4,

  Y^{(1)}_m(u − 1) Y^{(1)}_m(u + 1) = (1 + Y^{(2)}_m(u)) / [(1 + Y^{(1)}_{m−1}(u)^{−1})(1 + Y^{(1)}_{m+1}(u)^{−1})],
  Y^{(2)}_m(u − 1) Y^{(2)}_m(u + 1) = (1 + Y^{(1)}_m(u))(1 + Y^{(3)}_{2m−1}(u))(1 + Y^{(3)}_{2m+1}(u))(1 + Y^{(3)}_{2m}(u − 1/2))(1 + Y^{(3)}_{2m}(u + 1/2)) / [(1 + Y^{(2)}_{m−1}(u)^{−1})(1 + Y^{(2)}_{m+1}(u)^{−1})],
  Y^{(3)}_{2m}(u − 1/2) Y^{(3)}_{2m}(u + 1/2) = (1 + Y^{(2)}_m(u))(1 + Y^{(4)}_{2m}(u)) / [(1 + Y^{(3)}_{2m−1}(u)^{−1})(1 + Y^{(3)}_{2m+1}(u)^{−1})],
  Y^{(3)}_{2m+1}(u − 1/2) Y^{(3)}_{2m+1}(u + 1/2) = (1 + Y^{(4)}_{2m+1}(u)) / [(1 + Y^{(3)}_{2m}(u)^{−1})(1 + Y^{(3)}_{2m+2}(u)^{−1})],
  Y^{(4)}_m(u − 1/2) Y^{(4)}_m(u + 1/2) = (1 + Y^{(3)}_m(u)) / [(1 + Y^{(4)}_{m−1}(u)^{−1})(1 + Y^{(4)}_{m+1}(u)^{−1})].   (2.8)

For X_r = G_2,

  Y^{(1)}_m(u − 1) Y^{(1)}_m(u + 1) = (1 + Y^{(2)}_{3m−2}(u))(1 + Y^{(2)}_{3m+2}(u))(1 + Y^{(2)}_{3m−1}(u − 1/3))(1 + Y^{(2)}_{3m−1}(u + 1/3))(1 + Y^{(2)}_{3m+1}(u − 1/3))(1 + Y^{(2)}_{3m+1}(u + 1/3))(1 + Y^{(2)}_{3m}(u − 2/3))(1 + Y^{(2)}_{3m}(u + 2/3))(1 + Y^{(2)}_{3m}(u)) / [(1 + Y^{(1)}_{m−1}(u)^{−1})(1 + Y^{(1)}_{m+1}(u)^{−1})],
  Y^{(2)}_{3m}(u − 1/3) Y^{(2)}_{3m}(u + 1/3) = (1 + Y^{(1)}_m(u)) / [(1 + Y^{(2)}_{3m−1}(u)^{−1})(1 + Y^{(2)}_{3m+1}(u)^{−1})],
  Y^{(2)}_{3m+1}(u − 1/3) Y^{(2)}_{3m+1}(u + 1/3) = 1 / [(1 + Y^{(2)}_{3m}(u)^{−1})(1 + Y^{(2)}_{3m+2}(u)^{−1})],
  Y^{(2)}_{3m+2}(u − 1/3) Y^{(2)}_{3m+2}(u + 1/3) = 1 / [(1 + Y^{(2)}_{3m+1}(u)^{−1})(1 + Y^{(2)}_{3m+3}(u)^{−1})].   (2.9)

Let us write (2.4)-(2.6) in a unified manner:

  T^{(a)}_m(u − 1/t_a) T^{(a)}_m(u + 1/t_a) = T^{(a)}_{m−1}(u) T^{(a)}_{m+1}(u) + ∏_{(b,k,v)∈I_ℓ} T^{(b)}_k(v)^{G(b,k,v;a,m,u)}.   (2.10)

Define the transposition G^t(b, k, v; a, m, u) = G(a, m, u; b, k, v). Then (2.7)-(2.9) are written as

  Y^{(a)}_m(u − 1/t_a) Y^{(a)}_m(u + 1/t_a) = ∏_{(b,k,v)∈I_ℓ} (1 + Y^{(b)}_k(v))^{G^t(b,k,v;a,m,u)} / [(1 + Y^{(a)}_{m−1}(u)^{−1})(1 + Y^{(a)}_{m+1}(u)^{−1})].   (2.11)

2.2. Periodicities.

Definition 2.3. Let T_ℓ(X_r) be the commutative ring over Z with identity element, with generators T^{(a)}_m(u)^{±1} ((a, m, u) ∈ I_ℓ) and the relations T_ℓ(X_r) together with T^{(a)}_m(u) T^{(a)}_m(u)^{−1} = 1. Let T^•_ℓ(X_r) be the subring of T_ℓ(X_r) generated by T^{(a)}_m(u) ((a, m, u) ∈ I_ℓ).

Definition 2.4. Let Y_ℓ(X_r) be the semifield with generators Y^{(a)}_m(u) ((a, m, u) ∈ I_ℓ) and the relations Y_ℓ(X_r). Let Y^•_ℓ(X_r) be the multiplicative subgroup of Y_ℓ(X_r) generated by Y^{(a)}_m(u), 1 + Y^{(a)}_m(u) ((a, m, u) ∈ I_ℓ). (Here we use the symbol + instead of ⊕ for simplicity.)

The first main result of the paper is the periodicities of the T and Y-systems.

Theorem 2.5 (Conjectured by [IIKNS]). The following relations hold in T^•_ℓ(X_r).
(i) Half periodicity: T^{(a)}_m(u + h^∨ + ℓ) = T^{(a)}_{t_aℓ−m}(u).
(ii) Full periodicity: T^{(a)}_m(u + 2(h^∨ + ℓ)) = T^{(a)}_m(u).

Theorem 2.6 (Conjectured by [KNS]). The following relations hold in Y^•_ℓ(X_r).
(i) Half periodicity: Y^{(a)}_m(u + h^∨ + ℓ) = Y^{(a)}_{t_aℓ−m}(u).
(ii) Full periodicity: Y^{(a)}_m(u + 2(h^∨ + ℓ)) = Y^{(a)}_m(u).

2.3. Dilogarithm identities. Let L(x) be the Rogers dilogarithm function

  L(x) = −(1/2) ∫_0^x [ log(1 − y)/y + log y/(1 − y) ] dy   (0 ≤ x ≤ 1).   (2.12)

We introduce the constant version of the Y-system.

Definition 2.7. Fix an integer ℓ ≥ 2. The level ℓ restricted constant Y-system Y^c_ℓ(X_r) of type X_r is the following system of relations for a family of variables Y^c_ℓ = {Y^{(a)}_m | a ∈ I; m = 1, ..., t_aℓ − 1}, where Y^{(0)}_m = (Y^{(a)}_0)^{−1} = (Y^{(a)}_{t_aℓ})^{−1} = 0 if they occur in the right-hand sides of the relations.

For X_r = C_r,

  (Y^{(a)}_m)^2 = (1 + Y^{(a−1)}_m)(1 + Y^{(a+1)}_m) / [(1 + (Y^{(a)}_{m−1})^{−1})(1 + (Y^{(a)}_{m+1})^{−1})]   (1 ≤ a ≤ r − 2),
  (Y^{(r−1)}_{2m})^2 = (1 + Y^{(r−2)}_{2m})(1 + Y^{(r)}_m) / [(1 + (Y^{(r−1)}_{2m−1})^{−1})(1 + (Y^{(r−1)}_{2m+1})^{−1})],
  (Y^{(r−1)}_{2m+1})^2 = (1 + Y^{(r−2)}_{2m+1}) / [(1 + (Y^{(r−1)}_{2m})^{−1})(1 + (Y^{(r−1)}_{2m+2})^{−1})],
  (Y^{(r)}_m)^2 = (1 + Y^{(r−1)}_{2m−1})(1 + Y^{(r−1)}_{2m})^2 (1 + Y^{(r−1)}_{2m+1}) / [(1 + (Y^{(r)}_{m−1})^{−1})(1 + (Y^{(r)}_{m+1})^{−1})].   (2.13)

For X_r = F_4,

  (Y^{(1)}_m)^2 = (1 + Y^{(2)}_m) / [(1 + (Y^{(1)}_{m−1})^{−1})(1 + (Y^{(1)}_{m+1})^{−1})],
  (Y^{(2)}_m)^2 = (1 + Y^{(1)}_m)(1 + Y^{(3)}_{2m−1})(1 + Y^{(3)}_{2m})^2 (1 + Y^{(3)}_{2m+1}) / [(1 + (Y^{(2)}_{m−1})^{−1})(1 + (Y^{(2)}_{m+1})^{−1})],
  (Y^{(3)}_{2m})^2 = (1 + Y^{(2)}_m)(1 + Y^{(4)}_{2m}) / [(1 + (Y^{(3)}_{2m−1})^{−1})(1 + (Y^{(3)}_{2m+1})^{−1})],
  (Y^{(3)}_{2m+1})^2 = (1 + Y^{(4)}_{2m+1}) / [(1 + (Y^{(3)}_{2m})^{−1})(1 + (Y^{(3)}_{2m+2})^{−1})],
  (Y^{(4)}_m)^2 = (1 + Y^{(3)}_m) / [(1 + (Y^{(4)}_{m−1})^{−1})(1 + (Y^{(4)}_{m+1})^{−1})].   (2.14)

For X_r = G_2,

  (Y^{(1)}_m)^2 = (1 + Y^{(2)}_{3m−2})(1 + Y^{(2)}_{3m−1})^2 (1 + Y^{(2)}_{3m})^3 (1 + Y^{(2)}_{3m+1})^2 (1 + Y^{(2)}_{3m+2}) / [(1 + (Y^{(1)}_{m−1})^{−1})(1 + (Y^{(1)}_{m+1})^{−1})],
  (Y^{(2)}_{3m})^2 = (1 + Y^{(1)}_m) / [(1 + (Y^{(2)}_{3m−1})^{−1})(1 + (Y^{(2)}_{3m+1})^{−1})],
  (Y^{(2)}_{3m+1})^2 = 1 / [(1 + (Y^{(2)}_{3m})^{−1})(1 + (Y^{(2)}_{3m+2})^{−1})],
  (Y^{(2)}_{3m+2})^2 = 1 / [(1 + (Y^{(2)}_{3m+1})^{−1})(1 + (Y^{(2)}_{3m+3})^{−1})].   (2.15)

Proposition 2.8. There exists a unique positive real solution of Y^c_ℓ(X_r).

Proof. The same proof as for [IIKKN, Proposition 1.8] is applicable.

The second main result of the paper is the dilogarithm identities conjectured by Kirillov [Ki, Eq. (7)] and properly corrected by Kuniba [Ku, Eqs. (A.1a), (A.1c)].

Theorem 2.9 (Dilogarithm identities). Suppose that a family of positive real numbers {Y^{(a)}_m | a ∈ I; m = 1, ..., t_aℓ − 1} satisfies Y^c_ℓ(X_r). Then we have the identity

  (6/π^2) ∑_{a∈I} ∑_{m=1}^{t_aℓ−1} L( Y^{(a)}_m / (1 + Y^{(a)}_m) ) = ℓ dim g / (h^∨ + ℓ) − r,   (2.16)

where g is the simple Lie algebra of type X_r. The right-hand side of (2.16) is equal to the number

  r(ℓh − h^∨) / (h^∨ + ℓ).   (2.17)

In fact, we prove a functional generalization of Theorem 2.9.

Theorem 2.10 (Functional dilogarithm identities). Suppose that a family of positive real numbers {Y^{(a)}_m(u) | (a, m, u) ∈ I_ℓ} satisfies Y_ℓ(X_r). Then we have the identities

  (6/π^2) ∑_{(a,m,u)∈I_ℓ, 0≤u<2(h^∨+ℓ)} L( Y^{(a)}_m(u) / (1 + Y^{(a)}_m(u)) ) = 2tr(ℓh − h^∨) = 4r(2rℓ − r − 1) (C_r), 48(4ℓ − 3) (F_4), 24(3ℓ − 2) (G_2),   (2.18)

  (6/π^2) ∑_{(a,m,u)∈I_ℓ, 0≤u<2(h^∨+ℓ)} L( 1 / (1 + Y^{(a)}_m(u)) ) = 4ℓ(2rℓ − ℓ − 1) (C_r), 8ℓ(3ℓ + 1) (F_4), 12ℓ(2ℓ + 1) (G_2).   (2.19)

Figure 1. The quiver Q_ℓ(C_r) for even ℓ (upper) and for odd ℓ (lower), where we identify the rightmost column in the left quiver with the middle column in the right quiver.

The two identities (2.18) and (2.19) are equivalent to each other, since the sum of their right-hand sides equals 2t(h^∨ + ℓ)((∑_{a∈I} t_a)ℓ − r), which is the total number of (a, m, u) ∈ I_ℓ in the region 0 ≤ u < 2(h^∨ + ℓ). It is clear that Theorem 2.9 follows from Theorem 2.10.

3. Type C_r

The C_r case is quite parallel to the B_r case. For the reader's convenience, we repeat most of the basic definitions and results of [IIKKN]. Most propositions are proved in a manner parallel to the B_r case, so their proofs are omitted. The properties of the tropical Y-system at level 2 (Proposition 3.10) are crucial and specific to C_r. Since its derivation is a little more complicated than in the B_r case, an outline of the proof is provided.

3.1. Parity decompositions of T and Y-systems. For a triplet (a, m, u) ∈ I_ℓ, we set the 'parity conditions' P_+ and P_− by

  P_+ : r + a + m + 2u is odd if a ≠ r; 2u is even if a = r,   (3.1)
  P_− : r + a + m + 2u is even if a ≠ r; 2u is odd if a = r.   (3.2)

We write, for example, (a, m, u) : P_+ if (a, m, u) satisfies P_+.
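As a small combinatorial sanity check of the parity conditions (3.1)-(3.2) and of the counting remark below (2.19), one can enumerate I_ℓ(C_r) over one full period of u for a small case. This is only a sketch, assuming the C_r data of Section 2 (t = 2, h^∨ = r + 1, t_a = 2 for a < r, and t_r = 1 since α_r is the long root):

```python
from fractions import Fraction

def triplets_one_period(r, ell):
    """All (a, m, u) of I_ell(C_r) with u in one full period 0 <= u < 2(h_dual + ell).
    Assumes the C_r data: t = 2, h_dual = r + 1, t_a = 2 (a < r), t_r = 1."""
    t, h_dual = 2, r + 1
    t_a = lambda a: 2 if a < r else 1          # alpha_r is the long root of C_r
    us = [Fraction(k, t) for k in range(2 * t * (h_dual + ell))]
    return [(a, m, u) for a in range(1, r + 1)
                      for m in range(1, t_a(a) * ell)
                      for u in us]

def is_P_plus(r, a, m, u):
    """Parity condition P_+ of (3.1); P_- is its complement."""
    return (r + a + m + 2 * u) % 2 == 1 if a != r else (2 * u) % 2 == 0

r, ell = 3, 2
trips = triplets_one_period(r, ell)
total = len(trips)
n_plus = sum(1 for (a, m, u) in trips if is_P_plus(r, a, m, u))
```

For r = 3, ℓ = 2 one period contains 168 triplets, matching 2t(h^∨ + ℓ)((∑_a t_a)ℓ − r) as well as the sum of the C_r right-hand sides of (2.18) and (2.19), and P_+ and P_− each hold on exactly half of them.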
We have I_ℓ = I_{ℓ+} ⊔ I_{ℓ−}, where I_{ℓε} is the set of all (a, m, u) : P_ε. Define T^•_ℓ(C_r)_ε (ε = ±) to be the subring of T^•_ℓ(C_r) generated by T^{(a)}_m(u) ((a, m, u) ∈ I_{ℓε}). Then we have T^•_ℓ(C_r)_+ ≃ T^•_ℓ(C_r)_− by T^{(a)}_m(u) ↦ T^{(a)}_m(u + 1/2), and

  T^•_ℓ(C_r) ≃ T^•_ℓ(C_r)_+ ⊗_Z T^•_ℓ(C_r)_−.   (3.3)

For a triplet (a, m, u) ∈ I_ℓ, we set another pair of 'parity conditions' P′_+ and P′_− by

  P′_+ : r + a + m + 2u is even if a ≠ r; 2u is even if a = r,   (3.4)
  P′_− : r + a + m + 2u is odd if a ≠ r; 2u is odd if a = r.   (3.5)

We have I_ℓ = I′_{ℓ+} ⊔ I′_{ℓ−}, where I′_{ℓε} is the set of all (a, m, u) : P′_ε. We also have

  (a, m, u) : P′_+ ⟺ (a, m, u ± 1/t_a) : P_+.   (3.6)

Define Y^•_ℓ(C_r)_ε (ε = ±) to be the subgroup of Y^•_ℓ(C_r) generated by Y^{(a)}_m(u), 1 + Y^{(a)}_m(u) ((a, m, u) ∈ I′_{ℓε}). Then we have Y^•_ℓ(C_r)_+ ≃ Y^•_ℓ(C_r)_− by Y^{(a)}_m(u) ↦ Y^{(a)}_m(u + 1/2), 1 + Y^{(a)}_m(u) ↦ 1 + Y^{(a)}_m(u + 1/2), and

  Y^•_ℓ(C_r) ≃ Y^•_ℓ(C_r)_+ × Y^•_ℓ(C_r)_−.   (3.7)

3.2. Quiver Q_ℓ(C_r). With type C_r and ℓ ≥ 2 we associate the quiver Q_ℓ(C_r) of Figure 1, where the rightmost column in the left quiver and the middle column in the right quiver are identified. We also assign the empty or filled circle ○/● and the sign +/− to each vertex. Let us choose the index set I of the vertices of Q_ℓ(C_r) so that i = (i, i′) ∈ I represents the vertex at the i′th row (from the bottom) of the ith column (from the left) in the left quiver for i = 1, ..., r − 1, the one of the right column in the right quiver for i = r, and the one of the left column in the right quiver for i = r + 1. Thus i = 1, ..., r + 1, with i′ = 1, ..., ℓ − 1 if i = r, r + 1 and i′ = 1, ..., 2ℓ − 1 if i ≠ r, r + 1. We use the natural notation I^○ (resp. I^○_+) for the set of vertices i with property ○ (resp. ○ and +), and so on. We have I = I^○ ⊔ I^● = I^○_+ ⊔ I^○_− ⊔ I^●_+ ⊔ I^●_−.

We define composite mutations

  μ^○_+ = ∏_{i∈I^○_+} μ_i,  μ^○_− = ∏_{i∈I^○_−} μ_i,  μ^●_+ = ∏_{i∈I^●_+} μ_i,  μ^●_− = ∏_{i∈I^●_−} μ_i.   (3.8)

Note that they do not depend on the order of the product. Let r be the involution acting on I by the left-right reflection of the right quiver. Let ω be the involution acting on I defined by, for even r, the up-down reflection of the left quiver and the 180° rotation of the right quiver; and, for odd r, the up-down reflection of the left and right quivers. Let r(Q_ℓ(C_r)) and ω(Q_ℓ(C_r)) denote the quivers induced from Q_ℓ(C_r) by r and ω, respectively. For example, if there is an arrow i → j in Q_ℓ(C_r), then there is an arrow r(i) → r(j) in r(Q_ℓ(C_r)). For a quiver Q, Q^op denotes the opposite quiver.

Lemma 3.1. Let Q = Q_ℓ(C_r).
(i) We have a periodic sequence of mutations of quivers

  Q ←μ^○_+μ^●_+→ Q^op ←μ^●_−→ r(Q) ←μ^●_+μ^○_−→ r(Q)^op ←μ^●_−→ Q.   (3.9)

(ii) ω(Q) = Q if h^∨ + ℓ is even, and ω(Q) = r(Q) if h^∨ + ℓ is odd.

Figure 2. (Continues to Figure 3.) Label of the cluster variables x_i(u) by I_{ℓ+} for C_r, ℓ = 4. The variables framed by solid/dashed lines satisfy the condition p_+/p_−, respectively. The middle column in the right quiver (marked by ⋄) is identified with the rightmost column in the left quiver.

3.3. Cluster algebra and alternative labels. It is standard to identify a quiver Q with no loops and no 2-cycles with a skew-symmetric matrix B. We use the convention for the direction of arrows

  i → j ⟺ B_{ij} = 1.   (3.10)

(In this paper we only encounter the situation where B_{ij} = −1, 0, 1.) Let B_ℓ(C_r) be the skew-symmetric matrix corresponding to the quiver Q_ℓ(C_r). In the rest of the section we set B = (B_{ij})_{i,j∈I} = B_ℓ(C_r) unless otherwise mentioned. Let A(B, x, y) be the cluster algebra with coefficients in the universal semifield Q_sf(y), where (B, x, y) is the initial seed [FZ2]. See also [IIKKN, Section 2.1] for the conventions and notations on cluster algebras that we employ. (Here we use the symbol + instead of ⊕ in Q_sf(y), since it is the ordinary addition of subtraction-free expressions of rational functions of y.)

Definition 3.2. The coefficient group G(B, y) associated with A(B, x, y) is the multiplicative subgroup of the semifield Q_sf(y) generated by all the coefficients y′_i of A(B, x, y) together with 1 + y′_i.

In view of Lemma 3.1 we set x(0) = x, y(0) = y and define clusters x(u) = (x_i(u))_{i∈I} (u ∈ (1/2)Z) and coefficient tuples y(u) = (y_i(u))_{i∈I} (u ∈ (1/2)Z) by the sequence of mutations

  ··· ←μ^●_−→ (B, x(0), y(0)) ←μ^○_+μ^●_+→ (−B, x(1/2), y(1/2)) ←μ^●_−→ (r(B), x(1), y(1)) ←μ^●_+μ^○_−→ (−r(B), x(3/2), y(3/2)) ←μ^●_−→ ···,   (3.11)

where r(B) = B′ is defined by B′_{ij} = B_{r(i)r(j)}.
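The sequence (3.11) mutates seeds (B, x, y); its mechanics can be sketched on a toy example. The quiver below is the rank-2 quiver of type A_2, not one of the quivers Q_ℓ(C_r) (our assumed toy case), and the coefficients are evaluated in the tropical semifield used in Section 3.6, each y_j stored as its exponent vector. Alternating mutations μ_1, μ_2 return the tropical y-seed after 10 steps, the pentagon periodicity:

```python
def mutate_B(B, k):
    """Fomin-Zelevinsky matrix mutation of a skew-symmetric matrix at vertex k."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

def mutate_y_trop(B, y, k):
    """Coefficient mutation evaluated in Trop(y); each y_j is an exponent vector.
    Tropically, 1 (+) y_k has exponents min(0, c) taken componentwise."""
    new = []
    for j in range(len(y)):
        if j == k:
            new.append([-c for c in y[k]])
        else:
            bkj = B[k][j]
            new.append([y[j][i] + max(bkj, 0) * y[k][i] - bkj * min(0, y[k][i])
                        for i in range(len(y[j]))])
    return new

B = [[0, 1], [-1, 0]]      # type A_2 quiver, convention (3.10): arrow 1 -> 2
y = [[1, 0], [0, 1]]       # initial tropical coefficients y_1, y_2
snapshots = []
for step in range(10):     # alternate mu_1, mu_2
    k = step % 2
    y = mutate_y_trop(B, y, k)
    B = mutate_B(B, k)
    snapshots.append(([row[:] for row in B], [v[:] for v in y]))
```

Along the way every tropical coefficient stays either positive or negative (all exponents of one sign); this sign-coherence is what makes dichotomies like Proposition 3.10 a matter of pure bookkeeping.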
For a pair (i, u) ∈ I × (1/2)Z, we set the parity conditions p_+ and p_− by

  p_+ :  i ∈ I^○_+ ⊔ I^●_+, u ≡ 0;  i ∈ I^●_−, u ≡ 1/2, 3/2;  i ∈ I^○_− ⊔ I^●_+, u ≡ 1,
  p_− :  i ∈ I^○_+ ⊔ I^●_+, u ≡ 1/2;  i ∈ I^●_−, u ≡ 0, 1;  i ∈ I^○_− ⊔ I^●_+, u ≡ 3/2,   (3.12)

where ≡ is modulo 2Z. We have

  (i, u) : p_+ ⟺ (i, u + 1/2) : p_−.   (3.13)

Each (i, u) : p_+ is a mutation point of (3.11) in the forward direction of u, and each (i, u) : p_− is so in the backward direction of u. Notice that there are also some (i, u) which satisfy neither p_+ nor p_− and are not mutation points of (3.11); explicitly, they are the (i, u) with i ∈ I^○_+, u ≡ 1, 3/2 mod 2Z, or with i ∈ I^○_−, u ≡ 0, 1/2 mod 2Z. There is a correspondence between the parity condition p_± here and the conditions P_±, P′_± of (3.1) and (3.4).

Lemma 3.3. Below ≡ means equivalence modulo 2Z.
(i) The map

  g : I_{ℓ+} → {(i, u) : p_+},
  (a, m, u − 1/t_a) ↦ ((a, m), u) if a ≠ r; ((r + 1, m), u) if a = r, m + u ≡ 0; ((r, m), u) if a = r, m + u ≡ 1,   (3.14)

is a bijection.
(ii) The map

  g′ : I′_{ℓ+} → {(i, u) : p_+ or p_−},
  (a, m, u) ↦ ((a, m), u) if a ≠ r; ((r + 1, m), u) if a = r, m + u ≡ 0; ((r, m), u) if a = r, m + u ≡ 1,   (3.15)

is a bijection.

We introduce the alternative labels x_i(u) = x^{(a)}_m(u − 1/t_a) ((a, m, u − 1/t_a) ∈ I_{ℓ+}) for (i, u) = g((a, m, u − 1/t_a)), and y_i(u) = y^{(a)}_m(u) ((a, m, u) ∈ I′_{ℓ+}) for (i, u) = g′((a, m, u)). See Figures 2-3.

3.4. T-system and cluster algebra. The result in this subsection is completely parallel to the B_r case [IIKKN]. Let A(B, x) be the cluster algebra with trivial coefficients, where (B, x) is the initial seed [FZ2]. Let 1 = {1} be the trivial semifield and π_1 : Q_sf(y) → 1, y_i ↦ 1, the projection. Let [x_i(u)]_1 denote the image of x_i(u) under the algebra homomorphism A(B, x, y) → A(B, x) induced from π_1; it is called the trivial evaluation. Recall that G(b, k, v; a, m, u) is defined in (2.10).

Lemma 3.4. The family {x^{(a)}_m(u) | (a, m, u) ∈ I_{ℓ+}} satisfies the system of relations

  x^{(a)}_m(u − 1/t_a) x^{(a)}_m(u + 1/t_a) = [y^{(a)}_m(u)/(1 + y^{(a)}_m(u))] ∏_{(b,k,v)∈I_{ℓ+}} x^{(b)}_k(v)^{G(b,k,v;a,m,u)} + [1/(1 + y^{(a)}_m(u))] x^{(a)}_{m−1}(u) x^{(a)}_{m+1}(u),   (3.16)

where (a, m, u) ∈ I′_{ℓ+}. In particular, the family {[x^{(a)}_m(u)]_1 | (a, m, u) ∈ I_{ℓ+}} satisfies the T-system T_ℓ(C_r) in A(B, x) upon replacing T^{(a)}_m(u) with [x^{(a)}_m(u)]_1.

Definition 3.5. The T-subalgebra A_T(B, x) of A(B, x, y) associated with the sequence (3.11) is the subalgebra of A(B, x) generated by [x_i(u)]_1 ((i, u) ∈ I × (1/2)Z).

Theorem 3.6. The ring T^•_ℓ(C_r)_+ is isomorphic to A_T(B, x) by the correspondence T^{(a)}_m(u) ↦ [x^{(a)}_m(u)]_1.

3.5. Y-system and cluster algebra. The result in this subsection is completely parallel to the B_r case [IIKKN].

Lemma 3.7. The family {y^{(a)}_m(u) | (a, m, u) ∈ I′_{ℓ+}} satisfies the Y-system Y_ℓ(C_r) upon replacing Y^{(a)}_m(u) with y^{(a)}_m(u).

Definition 3.8. The Y-subgroup G_Y(B, y) of G(B, y) associated with the sequence (3.11) is the subgroup of G(B, y) generated by y_i(u) ((i, u) ∈ I × (1/2)Z) and 1 + y_i(u) ((i, u) : p_+ or p_−).

Theorem 3.9. The group Y^•_ℓ(C_r)_+ is isomorphic to G_Y(B, y) by the correspondence Y^{(a)}_m(u) ↦ y^{(a)}_m(u) and 1 + Y^{(a)}_m(u) ↦ 1 + y^{(a)}_m(u).

3.6. Tropical Y-system at level 2. The tropical semifield Trop(y) is the abelian multiplicative group freely generated by the elements y_i (i ∈ I), with the addition

  ∏_{i∈I} y_i^{a_i} ⊕ ∏_{i∈I} y_i^{b_i} = ∏_{i∈I} y_i^{min(a_i, b_i)}.   (3.17)

Let π_T : Q_sf(y) → Trop(y), y_i ↦ y_i, be the projection. Let [y_i(u)]_T and [G_Y(B, y)]_T denote the images of y_i(u) and G_Y(B, y) under the multiplicative group homomorphism induced from π_T. They are called the tropical evaluations, and the resulting relations in the group [G_Y(B, y)]_T are called the tropical Y-system. We say a (Laurent) monomial m = ∏_{i∈I} y_i^{k_i} is positive (resp. negative) if m ≠ 1 and k_i ≥ 0 (resp. k_i ≤ 0) for every i.

The following properties of the tropical Y-system at level 2 are the key to the entire method.

Proposition 3.10. For [G_Y(B, y)]_T with B = B_2(C_r), the following facts hold.
(i) Let u be in the region 0 ≤ u < 2. For any (i, u) : p_+, the monomial [y_i(u)]_T is positive.
(ii) Let u be in the region −h^∨ ≤ u < 0.
 (a) Let i = (i, 2) (i ≤ r − 1), (r, 1), or (r + 1, 1). For any (i, u) : p_+, the monomial [y_i(u)]_T is negative.
 (b) Let i = (i, 1), (i, 3) (i ≤ r − 1). For any (i, u) : p_+, the monomial [y_i(u)]_T is positive for u = −(1/2)h^∨, −(1/2)h^∨ − 1/2 and negative otherwise.
(iii) y_{ii′}(2) = y_{i,4−i′}^{−1} if i ≤ r − 1, and y_{ii′}(2) = y_{ii′}^{−1} if i = r, r + 1.
(iv) For even r, y_{ii′}(−h^∨) = y_{ii′}^{−1} if i ≤ r − 1, and y_{ii′}(−h^∨) = y_{2r+1−i,i′}^{−1} if i = r, r + 1. For odd r, y_{ii′}(−h^∨) = y_{ii′}^{−1}.

One can directly verify (i) and (iii) in the same way as in the B_r case [IIKKN, Proposition 3.2]. In the rest of this subsection we give an outline of the proof of (ii) and (iv). Note that (ii) and (iv) can be proved independently for each variable y_i. (To be precise, we also need to ensure that each monomial is not 1 in total; this can easily be checked, so we do not describe the details here.) Below we separate the variables into two parts. Here is a brief summary of the results.
(1) The D part. The powers of [y_i(u)]_T in the variables y_{i,2} (i ≤ r − 1) and y_{r,1}, y_{r+1,1} are described by the root system of type D_{r+1} with a Coxeter-like transformation. It turns out that they are further described by (a subset of) the root system of type A_{2r+1} with the Coxeter transformation.
(2) The A part. The powers of [y_i(u)]_T in the variables y_{i,1} and y_{i,3} (i ≤ r − 1) are mainly described by the root system of type A_{r−1} with the Coxeter transformation.

3.6.1. D part. Let us consider the D part first. Let D_{r+1} be the Dynkin diagram of type D with index set J = {1, ..., r + 1}.
We assign the sign +/− to vertices of D r+1 (no sign for r and r + 1) as inherited from Q 2 (C r ). 1 2 r − 1 r r + 1 − + − + − r: even 1 2 r − 1 r r + 1 + − + − + − r: odd Let Π = {α 1 , . . . , α r+1 }, −Π, Φ + be the set of the simple roots, the negative simple roots, the positive roots, respectively, of type D r+1 . Following [FZ1], we introduce the piecewise-linear analogue σ i of the simple reflection s i , acting on the set of the almost positive roots Φ ≥−1 = Φ + ⊔ (−Π), by σ i (α) = s i (α), α ∈ Φ + , σ i (−α j ) = α j j = i, −α j otherwise. (3.18) Let σ + = i∈J+ σ i , σ − = i∈J− σ i , (3.19) where J ± is the set of the vertices of D r+1 with property ±. We define σ as the composition σ = σ − σ + σ r+1 σ − σ + σ r . (3.20) Lemma 3.11. The following facts hold. (I) Let r be even. ( i) For i ≤ r − 1, σ k (−α i ) ∈ Φ + , (1 ≤ k ≤ r/2), σ r/2+1 (−α i ) = −α i . (ii) For i ≤ r − 1, σ k (α i ) ∈ Φ + , (0 ≤ k ≤ r/2), σ r/2+1 (α i ) = α i . (iii) σ k (−α r ) ∈ Φ + , (1 ≤ k ≤ r/2), σ r/2+1 (−α r ) = −α r+1 . (iv) σ k (−α r+1 ) ∈ Φ + , (1 ≤ k ≤ r/2 + 1), σ r/2+2 (−α r+1 ) = −α r . (v) The elements in Φ + in (i)-(iv) exhaust the set Φ + , thereby providing the orbit decomposition of Φ + by σ. (II) Let r be odd. ( i) For i ∈ J + , σ k (−α i ) ∈ Φ + , (1 ≤ k ≤ r+1), σ r+2 (−α i ) = −α i , σ (r+1)/2 (−α i ) = α i . (ii) For i ∈ J − , σ k (−α i ) ∈ Φ + , (1 ≤ k ≤ r+1), σ r+2 (−α i ) = −α i , σ (r+3)/2 (−α i ) = α i . (iii) σ k (−α r ) ∈ Φ + , (1 ≤ k ≤ (r + 1)/2), σ (r+3)/2 (−α r ) = −α r . (iv) σ k (−α r+1 ) ∈ Φ + , (1 ≤ k ≤ (r + 1)/2), σ (r+3)/2 (−α r+1 ) = −α r+1 . (v) The elements in Φ + in (i)-(iv) exhaust the set Φ + , thereby providing the orbit decomposition of Φ + by σ. Proof. They are verified by explicitly calculating σ k (−α i ) and σ k (α i ). 
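The paper applies the piecewise-linear operators σ_i of (3.18) to the almost positive roots of D_{r+1}; as a minimal self-contained illustration (our choice of toy example, not taken from the paper), here is the same construction in type A_2, where the composite τ = σ_1σ_2 permutes the five almost positive roots in a single 5-cycle:

```python
def sigma(i, alpha):
    """Piecewise-linear analogue (3.18) of the simple reflection s_i, acting on the
    almost positive roots of A_2 (coordinates over the simple roots alpha_1, alpha_2)."""
    if max(alpha) <= 0:                       # alpha is a negative simple root
        return tuple(-a for a in alpha) if alpha[i] == -1 else alpha
    pairing = 2 * alpha[i] - alpha[1 - i]     # <alpha, alpha_i^vee>, Cartan matrix [[2,-1],[-1,2]]
    new = list(alpha)
    new[i] -= pairing                         # s_i(alpha) = alpha - <alpha, alpha_i^vee> alpha_i
    return tuple(new)

tau = lambda a: sigma(0, sigma(1, a))         # tau = sigma_1 o sigma_2
almost_positive = [(-1, 0), (0, -1), (1, 0), (0, 1), (1, 1)]
orbit = [(-1, 0)]                             # start at -alpha_1
for _ in range(5):
    orbit.append(tau(orbit[-1]))
```

The orbit is −α_1 → α_1 → α_2 → −α_2 → α_1+α_2 → −α_1, so τ^5 = id; the orbit computations of Lemma 3.11 are the D_{r+1} analogues of this calculation, with the composite σ of (3.20) in place of τ.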
The examples for r = 10 (for even r) and 9 (for odd r) are given in Tables 1 and 2, respectively, where we use the notations [i, j] = α i + · · · + α j (1 ≤ i < j ≤ r), [i] = α i (1 ≤ i ≤ r), {i, j} = (α i + · · · + α r−1 ) + (α j + · · · + α r+1 ) (1 ≤ i < j ≤ r + 1, i ≤ r − 1), (3.21) and {r + 1} = α r+1 . In fact, it is not difficult to read off the general rule from these examples. The orbits σ(−α i ) and σ(α i ) are further described by (a subset of) the root system of type A 2r+1 . Let Π ′ = {α ′ 1 , . . . , α ′ 2r+1 } and Φ ′ + be the sets of the simple roots and the positive roots of type A 2r+1 , respectively, with standard index set J ′ = {1, . . . , 2r + 1}. Define, J ′ + = {i ∈ J ′ | i − r is even}, J ′ − = {i ∈ J ′ | i − r is odd}. We introduce the notations [i, j] ′ = α ′ i + · · · + α ′ j (1 ≤ i < j ≤ 2r + 1) and [i] ′ = α ′ i as parallel to (3.21). Let O ′ i = {(σ ′ ) k (−α ′ i ) | 1 ≤ k ≤ r + 1} be the orbit of −α ′ i in Φ ′ + by σ ′ = σ ′ − σ ′ + , σ ′ ± = i∈J ′ ± σ ′ i , where σ ′ i is the piecewise-linear analogue of the simple reflection s ′ i as (3.18). Lemma 3.12. Let ρ : Φ + → r i=1 O ′ i (3.22) be the map defined by Table 1. The orbits σ k (−α i ) and σ k (α i ) in Φ + by σ of (3.20) for r = 10. The orbits of −α i and α i (i ≤ 8), for example, −α 1 → [2, 3] → [6, 7] → · · · → −α 1 and α 1 → [4, 5] → [8, 9] → · · · → α 1 , are aligned alternatively. The orbits of −α 10 and −α 11 , namely, −α 10 → {7, 9} → {3, 5} → · · · → −α 11 , and −α 11 → {9, 11} → {5, 7} → · · · → −α 10 , are aligned alternatively. The numbers −1, −2, · · · in the head line will be identified with the parameter u in (3.24 Table 2. The orbit of σ k (−α i ) in Φ + by σ of (3.20) for r = 9. The orbit of −α i (i ≤ 8), for example, −α 1 → [3, 4] → [7, 8] → · · · → α 1 → [1, 2] → [5, 6] → · · · → −α 1 , is aligned in a cyclic and alternative way. 
The orbits of −α 9 and −α 10 , namely, −α 9 → {6, 8} → {2, 4} → · · · → −α 9 , and −α 10 → {8, 10} → {4, 6} → · · · → −α 10 , are aligned alternatively. The numbers −1, −2, · · · in the head line will be identified with the parameter u in (3.24).

[i, j] → [i, j] ′ (j − r: odd), [2r + 2 − j, 2r + 2 − i] ′ (j − r: even);
{i, j} → [i, 2r + 2 − j] ′ (j − r: odd), [j, 2r + 2 − i] ′ (j − r: even);
{r + 1} → [r, r + 1] ′ . (3.23)

For −h ∨ ≤ u < 0, define

α i (u) = σ −u/2 (−α i ) (i ∈ J + , u ≡ 0), σ −(u−1)/2 (α i ) (i ∈ J + , u ≡ −1), σ −(2u−1)/4 (−α i ) (i ∈ J − , u ≡ −3/2), σ −(2u+1)/4 (α i ) (i ∈ J − , u ≡ −1/2), σ −u/2 (−α r ) (i = r, u ≡ 0), σ −(u−1)/2 (−α r+1 ) (i = r + 1, u ≡ −1), (3.24)

where ≡ is modulo 2Z. Note that they correspond to the positive roots in Tables 1 and 2 with u being the parameter in the head lines. By Lemma 3.11 they are all the positive roots of D r+1 .

Lemma 3.13. The family in (3.24) satisfies the recurrence relations

α i (u − 1/2) + α i (u + 1/2) = α i−1 (u) + α i+1 (u) (1 ≤ i ≤ r − 2),
α r−1 (u − 1/2) + α r−1 (u + 1/2) = α r−2 (u) + α r (u) (u: even), α r−2 (u) + α r+1 (u) (u: odd),
α r (u − 1) + α r (u + 1) = α r−1 (u − 1/2) + α r−1 (u + 1/2) (u: odd),
α r+1 (u − 1) + α r+1 (u + 1) = α r−1 (u − 1/2) + α r−1 (u + 1/2) (u: even), (3.25)

where α 0 (u) = 0.

Proof. These relations are easily verified by the explicit expressions of α i (u); see Tables 1 and 2. The first two relations are also obtained from Lemma 3.12 and [FZ2, Eq. (10.9)].

Let us return to prove (ii) of Proposition 3.10 for the D part. For a monomial m in y = (y i ) i∈I , let π D (m) denote the specialization with y i,1 = y i,3 = 1 (i ≤ r − 1). For simplicity, we set y i2 = y i (i ≤ r − 1), y r1 = y r , y r+1,1 = y r+1 , and also y i2 (u) = y i (u) (i ≤ r − 1), y r1 (u) = y r (u), y r+1,1 (u) = y r+1 (u). We define the vectors t i (u) = (t i (u) k ) r+1 k=1 by π D ([y i (u)] T ) = ∏ r+1 k=1 y k ^{t i (u) k } .
(3.26) We also identify each vector t i (u) with α = r+1 k=1 t i (u) k α k ∈ ZΠ. Proposition 3.14. Let −h ∨ ≤ u < 0. Then, we have t i (u) = −α i (u) (3.27) for (i, u) in (3.24), and π D ([y i1 (u)] T ) = π D ([y i3 (u)] T ) = 1 (i ≤ r − 1, r + i + 2u: even). (3.28) Note that these formulas determine π D ([y i (u)] T ) for any (i, u) : p + . Proof. We can verify the claim for −2 ≤ u ≤ − 1 2 by direct computation. Then, by induction on u in the backward direction, one can establish the claim, together with the recurrence relations among t i (u)'s with (i, u) in (3.24), t i (u − 1 2 ) + t i (u + 1 2 ) = t i−1 (u) + t i+1 (u) (1 ≤ i ≤ r − 2), t r−1 (u − 1 2 ) + t r−1 (u + 1 2 ) = t r−2 (u) + t r (u) (u: even) t r−2 (u) + t r+1 (u) (u: odd), t r (u − 1) + t r (u + 1) = t r−1 (u − 1 2 ) + t r−1 (u + 1 2 ) (u: odd), t r+1 (u − 1) + t r+1 (u + 1) = t r−1 (u − 1 2 ) + t r−1 (u + 1 2 ) (u: even). (3.29) Note that (3.29) coincides with (3.25) under (3.27). To derive (3.29), one uses the mutations as in [IIKKN, Figure 6] (or the tropical version of the Y-system Y 2 (C r ) directly) and the positivity/negativity of π D ([y i (u)] T ) resulting from (3.27) and (3.28) by induction hypothesis. Now (ii) and (iv) in Proposition 3.10 for the D part follow from Proposition 3.14. 3.6.2. A part. The A part can be studied in a similar way to the D part. Thus, we present only the result. First we note that the quiver Q 2 (C r ) is symmetric under the exchange y i,1 ↔ y i,3 (i ≤ r − 1). Thus, one can concentrate on the powers of [y i (u)] T in the variables y i,1 (i ≤ r − 1). Let A r−1 be the Dynkin diagram of type A with index set J = {1, . . . , r − 1}. We assign the sign +/− to vertices (except for r) of A r−1 as inherited from Q 2 (C r ). 1 2 r − 2 r − 1 + − + − + r: even 1 2 r − 2 r − 1 − + + − + r: odd Let Π = {α 1 , . . . , α r−1 }, −Π, Φ + be the set of the simple roots, the negative simple roots, the positive roots, respectively, of type A r−1 . 
Again, we introduce the piecewise-linear analogue σ i of the simple reflection s i , acting on Φ ≥−1 = Φ + ⊔(−Π) as (3.18). Let σ + = i∈J+ σ i , σ − = i∈J− σ i , (3.30) where J ± is the set of the vertices of A r−1 with property ±. We define σ as the composition σ = σ − σ + . (3.31) For a monomial m in y = (y i ) i∈I , let π A (m) denote the specialization with y i,2 = y i,3 = 1 (i ≤ r − 1) and y r1 = y r+1,1 = 1. We set y i1 = y i (i ≤ r − 1). We define the vectors t i (u) = (t i (u) k ) r−1 k=1 by π A ([y i (u)] T ) = r−1 k=1 y t i (u) k k . (3.32) We also identify each vector t i (u) with α = r−1 k=1 t i (u) k α k ∈ ZΠ. With these notations, the result for the A part is summarized as follows. Proposition 3.15. Let −h ∨ ≤ u < 0. For (i, u) : p + , t i (u) is given by, for i ≤ r − 1, t i1 (u) = −σ −u (−α i ) i ∈ J + −σ −(2u−1)/2 (−α i ) i ∈ J − , t i2 (u) = −[2r + 2 − i + 2u, r − 1] − 1 2 h ∨ ≤ u < 0 −[−1 − i − 2u, r − 1] −h ∨ ≤ u < − 1 2 h ∨ , t i3 (u) = 0, (3.33) and t r1 (u) = −[r + 2 + 2u, r − 1] − 1 2 h ∨ ≤ u < 0 −[−1 − r − 2u, r − 1] −h ∨ ≤ u < − 1 2 h ∨ , t r+1,1 (u) = −[r + 2 + 2u, r − 1] − 1 2 h ∨ ≤ u < 0 −[−1 − r − 2u, r − 1] −h ∨ ≤ u < − 1 2 h ∨ , (3.34) where [i, j] = α i + · · · + α j if i ≤ j and 0 if i > j. Note that t i1 (− h ∨ 2 ) = α r−i (i ∈ J − for even r and i ∈ J + for odd r) and t i1 (− h ∨ 2 − 1 2 ) = α r−i (i ∈ J + for even r and i ∈ J − for odd r), and that they are the only positive monomials in (3.33) and (3.34). Now (ii) and (iv) in Proposition 3.10 for the A part follow from Proposition 3.15. This completes the proof of Proposition 3.10. 3.7. Tropical Y-systems at higher levels. By the same method for the B r case [IIKKN,Proposition 4.1], one can establish the 'factorization property' of the tropical Y-system at higher levels. As a result, we obtain a generalization of Proposition 3.10. (ii) Let u be in the region −h ∨ ≤ u < 0. (a) Let i ∈ I • or i = (i, i ′ ) (i ≤ r − 1, i ′ ∈ 2N). 
For any (i, u) : p + , the monomial [y i (u)] T is negative. (b) Let i = (i, i ′ ) (i ≤ r −1, i ′ ∈ 2N). For any (i, u) : p + , the monomial [y i (u)] T is positive for u = − 1 2 h ∨ , − 1 2 h ∨ − 1 2 and negative otherwise. (iii) y ii ′ (ℓ) = y −1 i,2ℓ−i ′ if i ≤ r − 1 and y −1 i,ℓ−i ′ if i = r, r + 1. (iv) For even r, y ii ′ (−h ∨ ) = y −1 ii ′ if i ≤ r − 1 and y −1 2r+1−i,i ′ if i = r, r + 1. For odd r, y ii ′ (−h ∨ ) = y −1 ii ′ . We obtain two important corollaries of Propositions 3.10 and 3.16. Figure 4. The quiver Q ℓ (F 4 ) for even ℓ (upper) and for odd ℓ (lower), where we identify the right column in the left quiver with the middle column in the right quiver. + − + ℓ−1                      2ℓ−1                                region 0 ≤ u < 2(h ∨ + ℓ). Then, we have N + = 2ℓ(2rℓ − ℓ − 1), N − = 2r(2ℓr − r − 1). (3.35) We observe the symmetry (the level-rank duality) for the numbers N + and N − under the exchange of r and ℓ. 3.8. Periodicities and dilogarithm identities. Applying [IIKKN,Theorem 5.1] to Theorem 3.17, we obtain the periodicities: Theorem 3.19. For A(B, x, y), the following relations hold. (i) Half periodicity: x i (u + h ∨ + ℓ) = x ω(i) (u). (ii) Full periodicity: x i (u + 2(h ∨ + ℓ)) = x i (u). Theorem 3.20. For G(B, y), the following relations hold. (i) Half periodicity: y i (u + h ∨ + ℓ) = y ω(i) (u). (ii) Full periodicity: y i (u + 2(h ∨ + ℓ)) = y i (u). Then, Theorems 2.5 and 2.6 for C r follow from Theorems 3.6, 3.9, 3.19, and 3.20. Furthermore, Theorem 2.10 for C r is obtained from the above periodicities and Theorem 3.18 as in the B r case [IIKKN, Section 6]. Type F 4 The F 4 case is quite parallel to the B r and C r case. We do not repeat the same definitions unless otherwise mentioned. Again, the properties of the tropical Y-system at level 2 (Proposition 4.7) are crucial and specific to F 4 . Parity decompositions of T and Y-systems. 
For a triplet (a, m, u) ∈ I ℓ , we reset the 'parity conditions' P + and P − by P + : 2u is even if a = 1, 2; a + m + 2u is odd if a = 3, 4, (4.1) P − : 2u is odd if a = 1, 2; a + m + 2u is even if a = 3, 4. (4.2) Then, we have T • ℓ (F 4 ) + ≃ T • ℓ (F 4 ) − by T (a) m (u) → T (a) m (u + 1 2 ) and T • ℓ (F 4 ) ≃ T • ℓ (F 4 ) + ⊗ Z T • ℓ (F 4 ) − . (4.3) For a triplet (a, m, u) ∈ I ℓ , we reset the parity condition P ′ + and P ′ − by P ′ + : 2u is even if a = 1, 2; a + m + 2u is even if a = 3, 4, (4.4) P ′ − : 2u is odd if a = 1, 2; a + m + 2u is odd if a = 3, 4. (4.5) We have (a, m, u) : P ′ + ⇐⇒ (a, m, u ± 1 ta ) : P + . (4.6) Also, we have Y • ℓ (F 4 ) + ≃ Y • ℓ (F 4 ) − by Y (a) m (u) → Y (a) m (u + 1 2 ), 1 + Y (a) m (u) → 1 + Y (a) m (u + 1 2 ), and Y • ℓ (F 4 ) ≃ Y • ℓ (F 4 ) + × Y • ℓ (F 4 ) − . (4.7) 4.2. Quiver Q ℓ (F 4 ). With type F 4 and ℓ ≥ 2 we associate the quiver Q ℓ (F 4 ) by Figure 4, where the right column in the left quiver and the middle column in the right quiver are identified. Also, we assign the empty or filled circle •/• and the sign +/− to each vertex. Let us choose the index set I of the vertices of Q ℓ (F 4 ) so that i = (i, i ′ ) ∈ I represents the vertex at the i ′ th row (from the bottom) of the ith column (from the left) in the right quiver for i = 1, 2, 3, the one of the i − 1th column in the right quiver for i = 5, 6, and the one of the left column in the left quiver for i = 4. Thus, i = 1, . . . , 6, and i ′ = 1, . . . , ℓ − 1 if i = 1, 2, 5, 6 and i ′ = 1, . . . , 2ℓ − 1 if i = 3, 4. Let r be the involution acting on I by the left-right reflection of the right quiver. Let ω be the involution acting on I by the up-down reflection of the left quiver and the 180 • rotation of the right quiver. Lemma 4.1. Let Q = Q ℓ (F 4 ). (i) We have the same periodic sequence of mutations of quivers as (3.9). (ii) ω(Q) = Q if h ∨ + ℓ is even, and ω(Q) = r(Q) if h ∨ + ℓ is odd. 4.3. Cluster algebra and alternative labels. 
Let B ℓ (F 4 ) be the corresponding skew-symmetric matrix to the quiver Q ℓ (F 4 ). In the rest of the section, we set the matrix B = (B ij ) i,j∈I = B ℓ (F 4 ) unless otherwise mentioned. Let A(B, x, y) be the cluster algebra with coefficients in the universal semifield Q sf (y), and the coefficient group G(B, y) associated with A(B, x, y). In view of Lemma 4.1 we set x(0) = x, y(0) = y and define clusters x(u) = (x i (u)) i∈I (u ∈ 1 2 Z) and coefficient tuples y(u) = (y i (u)) i∈I (u ∈ 1 2 Z) by the sequence of mutations (3.11). For a pair (i, u) ∈ I × 1 2 Z, we set the same parity condition p + and p − as (3.12). We have (3.13), and each (i, u) : p + is a mutation point of (3.11) in the forward direction of u, and each (i, u) : p − is so in the backward direction of u as before. Figure 6) Label of cluster variables x i (u) by I ℓ+ for F 4 , ℓ = 4. The middle column in the right quiver (marked by ⋄) is identified with the right column in the left quiver. x(0) x (1) 1 (−1) * x (2) 1 (−1) * * x (2) 2 (−1) * x (1) 2 (−1) x (1) 3 (−1) * x (2) 3 (−1) * ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ' ' E ' E ' E E ' ' E ' T c c T T c T c T c T c c T j B j B % * x (4) 2 (− 1 2 ) * x (4) 4 (− 1 2 ) * x (4) 6 (− 1 2 ) * x (3) 1 (− 1 2 ) * x (3) 3 (− 1 2 ) * x (3) 5 (− 1 2 ) * x (3) 7 (− 1 2 ) * x (3) 4 (0) * x(3) Lemma 4.2. Below ≡ means the equivalence modulo 2Z. (i) The map g : , m), u) a = 1, 2; a + m + u ≡ 0 ((7 − a, m), u) a = 1, 2; a + m + u ≡ 1 ((a, m), u) a = 3, 4 (4.8) I ℓ+ → {(i, u) : p + } (a, m, u − 1 ta ) →      ((a is a bijection. (ii) The map , m), u) a = 1, 2; a + m + u ≡ 0 ((7 − a, m), u) a = 1, 2; a + m + u ≡ 1 ((a, m), u) a = 3, 4 (4.9) g ′ : I ′ ℓ+ → {(i, u) : p + } (a, m, u) →      ((a x(1) * x is a bijection. 
1 (0) * x(2)1 (0) x (1) 2 (0) * x (2) 2 (0) * * x (2) 3 (0) * x(1)3 (0) ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ ⋄ E ' E E ' ' E ' E ' E E c T T c T c T c T c c T T c % % j B * x (4) 2 ( 1 2 ) * x (4) 4 ( 1 2 ) * x (4) 6 ( 1 2 ) * x (3) 1 ( 1 2 ) * x (3) 3 ( 1 2 ) * x (3) 5 ( 1 2 ) * x (3) 7 ( 1 2 ) * x (3) 4 (1) * x(1) We introduce alternative labels x i (u) = x (a) m (u − 1/t a ) ((a, m, u − 1/t a ) ∈ I ℓ+ ) for (i, u) = g((a, m, u − 1/t a )) and y i (u) = y (a) m (u) ((a, m, u) ∈ I ′ ℓ+ ) for (i, u) = g ′ ((a, m, u)), respectively. See Figures 5-6. 4.4. T-system and cluster algebra. The result in this subsection is completely parallel to the B r and C r cases. 4.5. Y-system and cluster algebra. The result in this subsection is completely parallel to the B r and C r cases. Lemma 4.5. The family {y (a) m (u) | (a, m, u) ∈ I ′ ℓ+ } satisfies the Y-system Y ℓ (F 4 ) by replacing Y (a) m (u) with y (a) m (u). The Y-subgroup G Y (B, y) is defined as Definition 3.8. (iii) y ii ′ (2) = y −1 ii ′ if i = 1, 2, 5, 6 and y −1 Theorem 4.6. The group Y • ℓ (F 4 ) + is isomorphic to G Y (B, y) by the correspondence Y (a) m (u) → yi,4−i ′ if i = 3, 4. (iv) y ii ′ (−h ∨ ) = y −1 7−i,i ′ if i = 1, 2, 5, 6 and y −1 ii ′ if i = 3, 4. Also we have a description of the 'core part' of [y i (u)] T in the region −h ∨ ≤ u < 0, corresponding to the D part for C r , in terms of the root system of E 6 . We use the following index of the Dynkin diagram E 6 . Let Π = {α 1 , . . . , α 6 }, −Π, Φ + be the set of the simple roots, the negative simple roots, the positive roots, respectively, of type E 6 . Let σ i be the piecewise-linear analogue of the simple reflection s i , acting on the set of the almost positive roots Φ ≥−1 = Φ + ⊔ (−Π). We write i m i α i ∈ Φ + as [1 m1 , 2 m2 , . . . , 6 m6 ]; furthermore, [1 0 , 2 1 , 3 1 , 4 1 , 5 0 , 6 0 ], for example, is abbreviated as [2,3,4]. We define σ as the composition σ = σ 3 (σ 4 σ 2 σ 6 )σ 3 (σ 4 σ 1 σ 5 ). 
In particular, these elements in Φ + exhaust the set Φ + , thereby providing the orbit decomposition of Φ + by σ. For −h ∨ ≤ u < 0, define α i (u) =                σ −u/2 (−α i ) i = 1, 4, 5; u ≡ 0, σ −(u−1)/2 (−α i ) i = 2, 6; u ≡ −1, σ −(u−1)/2 (α 4 ) i = 4; u ≡ −1, σ −(2u−1)/4 (−α 3 ) i = 3; u ≡ − 3 2 , σ −(2u+1)/4 (α 3 ) i = 3; u ≡ − 1 2 , (4.12) where ≡ is mod 2Z. By Lemma 4.8 they are (all the) positive roots of E 6 . For a monomial m in y = (y i ) i∈I , let π A (m) denote the specialization with y 31 = y 33 = y 41 = y 43 = 1. For simplicity, we set y i1 = y i (i = 1, 2, 5, 6), y i2 = y i (i = 3, 4), and also, y i1 (u) = y i (u) (i = 1, 2, 5, 6), y i2 (u) = y i (u) (i = 3, 4). We define the vectors t i (u) = (t i (u) k ) 6 k=1 by π A ([y i (u)] T ) = 6 k=1 y ti(u) k k . (4.13) We also identify each vector t i (u) with α = 6 k=1 t i (u) k α k ∈ ZΠ. Proposition 4.9. Let −h ∨ ≤ u < 0. Then, we have t i (u) = −α i (u) (4.14) for (i, u) in (4.12). 4.7. Tropical Y-systems at higher levels. Due to the factorization property, we obtain the following. (i) Let u be in the region 0 ≤ u < ℓ. For any (i, u) : p + , the monomial [y i (u)] T is positive. (ii) Let u be in the region −h ∨ ≤ u < 0. (a) Let i ∈ I • or i = (3, i ′ ), (4, i ′ ) (i ′ ∈ 2N). For any (i, u) : p + , the monomial [y i (u)] T is negative. (b) Let i = (3, i ′ ), (4, i ′ ) (i ′ ∈ 2N). For any (i, u) : p + , the monomial [y i (u)] T is negative for u = − 1 2 , −1, − 3 2 , −3, − 7 2 , −4, − 11 2 , −6, − 13 2 , −8, − 17 2 , −9 and positive for u = −2, − 5 2 , − 9 2 , −5, −7, − 15 2 . 4.8. Periodicities and dilogarithm identities. Applying [IIKKN,Theorem 5.1] to Theorem 4.11, we obtain the periodicities: (iii) y ii ′ (ℓ) = y −1 i,ℓ−i ′ if i = 1, 2, 5, 6 and y −1 i,2ℓ−i ′ if i = 3, 4. (iv) y ii ′ (−h ∨ ) = y −1 7−i,i ′ if i = 1, Theorem 4.13. For A(B, x, y), the following relations hold. (i) Half periodicity: x i (u + h ∨ + ℓ) = x ω(i) (u). (ii) Full periodicity: x i (u + 2(h ∨ + ℓ)) = x i (u). 
Theorem 4.14. For G(B, y), the following relations hold. (i) Half periodicity: y i (u + h ∨ + ℓ) = y ω(i) (u). (ii) Full periodicity: y i (u + 2(h ∨ + ℓ)) = y i (u). Then, Theorems 2.5 and 2.6 for F 4 follow from Theorems 4.4, 4.6, 4.13, and 4.14. Furthermore, Theorem 2.10 for F 4 is obtained from the above periodicities and Theorem 4.12 as in the B r case [IIKKN, Section 6]. Type G 2 The G 2 case is mostly parallel to the former cases, but slightly different because the number t in (2.2) is three. Again, the properties of the tropical Y-system at level 2 (Proposition 5.9) are crucial and specific to G 2 . 5.1. Parity decompositions of T and Y-systems. For a triplet (a, m, u) ∈ I ℓ , we reset the parity conditions P + and P − by P + : a + m + 3u is even, (5.1) P − : a + m + 3u is odd. (5.2) Then, we have T • ℓ (G 2 ) + ≃ T • ℓ (G 2 ) − by T (a) m (u) → T (a) m (u + 1 3 ) and T • ℓ (G 2 ) ≃ T • ℓ (G 2 ) + ⊗ Z T • ℓ (G 2 ) − . (5.3) For a triplet (a, m, u) ∈ I ℓ , we reset the parity conditions P ′ + and P ′ − by P ′ + : a + m + 3u is odd, (5.4) P ′ − : a + m + 3u is even. Figure 7. The quiver Q ℓ (G 2 ) for even ℓ (upper) and for odd ℓ (lower), where we identify the right columns in the three quivers. (5.5) B © VI III VI III + − + − + − + − + − + − + − ℓ−1                                                                                               3ℓ−1 We have (a, m, u) : P ′ + ⇐⇒ (a, m, u ± 1 ta ) : P + . (5.6) Also, we have Y • ℓ (G 2 ) + ≃ Y • ℓ (G 2 ) − by Y (a) m (u) → Y (a) m (u + 1 3 ), 1 + Y (a) m (u) → 1 + Y (a) m (u + 1 3 ), and Y • ℓ (G 2 ) ≃ Y • ℓ (G 2 ) + × Y • ℓ (G 2 ) − . (5.7) 5.2. Quiver Q ℓ (G 2 ). With type G 2 and ℓ ≥ 2 we associate the quiver Q ℓ (G 2 ) by Figure 7, where the right columns in the three quivers are identified. 
Also we assign the empty or filled circle •/• to each vertex; furthermore, we assign the sign +/− to each vertex of property •, and one of the numbers I,. . . , VI to each vertex of property •. Let us choose the index set I of the vertices of Q ℓ (G 2 ) so that i = (i, i ′ ) ∈ I represents the vertex at the i ′ th row (from the bottom) of the left column in the ith quiver (from the left) for i = 1, 2, 3, and the one of the right column in any quiver for i = 4. Thus, i = 1, . . . , 4, and i ′ = 1, . . . , ℓ−1 if i = 4 and i ′ = 1, . . . , 3ℓ−1 if i = 4. For a permutation s of {1, 2, 3}, let ν s be the permutation of I such that ν s (i, i ′ ) = (s(i), i ′ ) for i = 4 and (4, i ′ ) for i = 4. Let ω be the involution acting on I by the up-down reflection. Let ν s (Q ℓ (G 2 )) and ω(Q ℓ (G 2 )) denote the quivers induced from Q ℓ (G 2 ) by ν s and ω, respectively. Lemma 5.1. Let Q = Q ℓ (G 2 ). (i) We have a periodic sequence of mutations of quivers Q µ • + µ • I ←→ ν (23) (Q) op µ • − µ • II ←→ ν (312) (Q) µ • + µ • III ←→ ν (13) (Q) op µ • − µ • IV ←→ ν (231) (Q) µ • + µ • V ←→ ν (12) (Q) op µ • − µ • VI ←→ Q. (5.8) (ii) ω(Q) = Q if h ∨ + ℓ is even, and ω(Q) = ν (13) (Q) op if h ∨ + ℓ is odd. See Figures 8-10 for an example. 5.3. Cluster algebra and alternative labels. Let B ℓ (G 2 ) be the corresponding skew-symmetric matrix to the quiver Q ℓ (G 2 ). In the rest of the section, we set the matrix B = (B ij ) i,j∈I = B ℓ (G 2 ) unless otherwise mentioned. Let A(B, x, y) be the cluster algebra with coefficients in the universal semifield Q sf (y), and G(B, y) be the coefficient group associated with A(B, x, y). 
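The mutation sequences such as (5.9) are built from the standard Fomin-Zelevinsky mutation µ k acting on the skew-symmetric matrix B. As a self-contained sketch (ours, not from the paper; vertices 0-indexed), the matrix mutation rule reads:

```python
# Sketch of Fomin-Zelevinsky matrix mutation mu_k, the elementary operation
# underlying mutation sequences such as (5.9). Vertices are 0-indexed.

def mutate(B, k):
    """Return mu_k(B) for a skew-symmetric integer matrix B."""
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2  (numerator is even)
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp
```

Mutating twice at the same vertex returns B, which is what makes the two-sided arrows in (5.8) and (5.9) consistent.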
In view of Lemma 5.1 we set x(0) = x, y(0) = y and define clusters x(u) = (x i (u)) i∈I (u ∈ 1 3 Z) and coefficient tuples y(u) = (y i (u)) i∈I (u ∈ 1 3 Z) by the sequence of mutations · · · µ • − µ • VI ←→ (B, x(0), y(0)) µ • + µ • I ←→ (−ν (23) (B), x( 1 3 ), y( 1 3 )) µ • − µ • II ←→ (ν (312) (B), x( 2 3 ), y( 2 3 )) µ • + µ • III ←→ (−ν (13) (B), x(1), y(1)) µ • − µ • IV ←→ (ν (231) (B), x( 4 3 ), y( 4 3 )) µ • + µ • V ←→ (−ν (12) (B), x( 5 3 ), y( 5 3 )) µ • − µ • VI ←→ · · · . (5.9) where ν s (B) = B ′ is defined by B ′ νs(i)νs(j) = B ij . For a pair (i, u) ∈ I × 1 3 Z, we set the parity condition p + and p − by p + :                    i ∈ I • I ⊔ I • + u ≡ 0 i ∈ I • II ⊔ I • − u ≡ 1 3 i ∈ I • III ⊔ I • + u ≡ 2 3 i ∈ I • IV ⊔ I • − u ≡ 1 i ∈ I • V ⊔ I • + u ≡ 4 3 i ∈ I • VI ⊔ I • − u ≡ 5 3 , p − :                    i ∈ I • VI ⊔ I • − u ≡ 0 i ∈ I • I ⊔ I • + u ≡ 1 3 i ∈ I • II ⊔ I • − u ≡ 2 3 i ∈ I • III ⊔ I • + u ≡ 1 i ∈ I • IV ⊔ I • − u ≡ 4 3 i ∈ I • V ⊔ I • + u ≡ 5 3 ,(5.10) where ≡ is modulo 2Z. We have (i, u) : p + ⇐⇒ (i, u + 1 3 ) : p − . (5.11) Each (i, u) : p + is a mutation point of (5.9) in the forward direction of u, and each (i, u) : p − is so in the backward direction of u. is a bijection. x(0) * x (1) 2 (−1) * x (2) 1 (− 1 3 ) * x (2) 3 (− 1 3 ) * x (2) 5 (− 1 3 ) * x (2) 7 (− 1 3 ) * x(2) We introduce alternative labels x i (u) = x (a) m (u − 1/t a ) ((a, m, u − 1/t a ) ∈ I ℓ+ ) for (i, u) = g((a, m, u − 1/t a )) and y i (u) = y Definition 5.4. The T-subalgebra A T (B, x) of A(B, x) associated with the sequence (5.9) is the subring of A(B, x) generated by [x i (u)] 1 ((i, u) ∈ I × 1 3 Z). Theorem 5.5. The ring T • ℓ (G 2 ) + is isomorphic to A T (B, x) by the correspondence T Definition 5.7. The Y-subgroup G Y (B, y) of G(B, y) associated with the sequence (5.9) is the subgroup of G(B, y) generated by y i (u) ((i, u) ∈ I × 1 3 Z) and 1 + y i (u) ((i, u) : p + or p − ). Theorem 5.8. 
The group Y • ℓ (G 2 ) + is isomorphic to G Y (B, y) by the correspondence Y For a monomial m in y = (y i ) i∈I , let π D (m) denote the specialization with y 41 = y 42 = y 44 = y 45 = 1. For simplicity, we set y i1 = y i (i = 4), y 43 = y 4 , and also, y i1 (u) = y i (u) (i = 4), y 43 (u) = y 4 (u). We define the vectors t i (u) = (t i (u) k ) 4 We also identify each vector t i (u) with α = 4 k=1 t i (u) k α k ∈ ZΠ. Proposition 5.11. Let −h ∨ ≤ u < 0. Then, we have t i (u) = −α i (u), (5.18) for (i, u) in (5.16). 5.7. Tropical Y-systems of higher levels. Proposition 5.12. Let ℓ > 2 be an integer. For [G Y (B, y)] T with B = B ℓ (G 2 ), the following facts hold. (i) Let u be in the region 0 ≤ u < ℓ. For any (i, u) : p + , the monomial [y i (u)] T is positive. (ii) Let u be in the region −h ∨ ≤ u < 0. Theorem 5.14. For [G Y (B, y)] T , let N + and N − denote the total numbers of the positive and negative monomials, respectively, among [y i (u)] T for (i, u) : p + in the region 0 ≤ u < 2(h ∨ + ℓ). Then, we have N + = 6ℓ(2ℓ + 1), N − = 12(3ℓ − 2). (5.19) 5.8. Periodicities and dilogarithm identities. Applying [IIKKN,Theorem 5.1] to Theorem 5.13, we obtain the periodicities: Theorem 5.15. For A(B, x, y), the following relations hold. (i) Half periodicity: x i (u + h ∨ + ℓ) = x ω(i) (u). (ii) Full periodicity: x i (u + 2(h ∨ + ℓ)) = x i (u). Theorem 5.16. For G(B, y), the following relations hold. (i) Half periodicity: y i (u + h ∨ + ℓ) = y ω(i) (u). (ii) Full periodicity: y i (u + 2(h ∨ + ℓ)) = y i (u). Then, Theorems 2.5 and 2.6 for G 2 follow from Theorems 5.5, 5.8, 5.15, and 5.16. Furthermore, Theorem 2.10 for G 2 is obtained from the above periodicities and Theorem 5.14 as in the B r case [IIKKN, Section 6]. Mutation equivalence of quivers Recall that two quivers Q and Q ′ are said to be mutation equivalent, and denoted by Q ∼ Q ′ here, if there is a quiver isomorphism from Q to some quiver obtained from Q ′ by successive mutations. 
Below we present several mutation equivalent pairs of the quivers Q ℓ (X r ), though the list is not complete at all. For simply laced X r , Q ℓ (X r ) is the quiver defined as the square product X r A ℓ−1 in [Ke,Section 8]. Proposition 6.1. We have the following mutation equivalences of quivers. Q 2 (B r ) ∼ Q 2 (D 2r+1 ), Q 2 (C 3 ) ∼ Q 3 (D 4 ), Q 2 (F 4 ) ∼ Q 3 (D 5 ), Q 3 (C 2 ) ∼ Q 4 (A 3 ), Q ℓ (G 2 ) ∼ Q ℓ (C 3 ). (6.1) m (u) | (a, m, u) ∈ I ℓ }, where T the unit boundary condition) if they occur in the right hand sides in the relations: (resp. 1, 3, . . . ).)For X r = C r , m (u) | (a, m, u) ∈ I ℓ }, where Y (0) m (u) = Y (a) 0 (u) −1 = Y (a)taℓ (u) −1 = 0 if they occur in the right hand sides in the relations: Figure 3 . 3(Continues fromFigure 2). Proposition 3 . 16 . 316For [G Y (B, y)] T with B = B ℓ (C r ), the following facts hold. (i) Let u be in the region 0 ≤ u < ℓ. For any (i, u) : p + , the monomial [y i (u)] T is positive. Theorem 3 . 17 . 317For [G Y (B, y)] T the following relations hold. (i) Half periodicity: [yi (u + h ∨ + ℓ)] T = [y ω(i) (u)] T . (ii) Full periodicity: [y i (u + 2(h ∨ + ℓ))] T = [y i (u)] T .Theorem 3.18. For [G Y (B, y)] T , let N + and N − denote the total numbers of the positive and negative monomials, respectively, among [y i (u)] T for (i, u) : p + in the Figure 5 . 5(Continues to Figure 6 . 6(Continues fromFigure 5). Lemma 4 . 3 . 43The family {x m (u) | (a, m, u) ∈ I ℓ+ } satisfies the system of relations (3.16) with G(b, k, v; a, m, u) for T ℓ (F 4 ). In particular, the family {[x m, u) ∈ I ℓ+ } satisfies the T-system T ℓ (F 4 ) in A(B, x) by replacing T The T-subalgebra A T (B, x) is defined as Definition 3.5.Theorem 4.4. The ring T • ℓ (F 4 ) + is isomorphic to A T (B, x) by the correspondence T (a) m (u) → [x (a) m (u)] 1 . . Tropical Y-system at level 2. By direct computations, the following properties are verified. Proposition 4 . 7 . 2 , 472For [G Y (B, y)] T with B = B 2 (F 4 ), the following facts hold. 
(i) Let u be in the region 0 ≤ u < 2. For any(i, u) : p + , the monomial [y i (u)] T is positive. (ii) Let u be in the region −h ∨ ≤ u < 0.(a) Let i = (1, 1), (2, 1), (5, 1), (6, 1), (3, 2), or (4, 2). For any (i, u) : p + , the monomial [y i (u)] T is negative. (b) Let i = (3, 1), (3, 3), (4, 1), or (4, 3). For any (i, u) : p + , the monomial [y i (u)] T is negative for u = − 1 − Proposition 4 . 10 . 410Let ℓ > 2 be an integer. For [G Y (B, y)] T with B = B ℓ (F 4 ), the following facts hold. 2, 5, 6 and y −1 ii ′ if i = 3, 4. We obtain corollaries of Propositions 4.7 and 4.10.Theorem 4.11. For [G Y (B, y)] T the following relations hold.(i) Half periodicity:[y i (u + h ∨ + ℓ)] T = [y ω(i) (u)] T . (ii) Full periodicity: [y i (u + 2(h ∨ + ℓ))] T = [y i (u)] T .Theorem 4.12. For [G Y (B, y)] T , let N + and N − denote the total numbers of the positive and negative monomials, respectively, among [y i (u)] T for (i, u) : p + in the region 0 ≤ u < 2(h ∨ + ℓ). Then, we have N + = 4ℓ(3ℓ + 1), N − = 24(4ℓ − 3). (4.15) Figure 8 . 8(Continues to Figures 9 and 10) Label of cluster variables x i (u) by I ℓ+ for G 2 , ℓ = 4. The right columns in the middle and right quivers (marked by ⋄) are identified with the right column in the left quiver. m (u) ((a, m, u) ∈ I ′ ℓ+ ) for (i, u) = g ′ ((a,m, u)), respectively. See Figures 8-10 for an example. 5.4. T-system and cluster algebra. Lemma 5.3. The family {x m (u) | (a, m, u) ∈ I ℓ+ } satisfies the system of relations (3.16) with G(b, k, v; a, m, u) for T ℓ (G 2 ). In particular, the family {[x (a) m (u)] 1 | (a, m, u) ∈ I ℓ+ } satisfies the T-system T ℓ (G 2 ) in A(B, x) by replacing T m (u) | (a, m, u) ∈ I ′ ℓ+ } satisfies the Y-system Y ℓ (G 2 ) by replacing Y m (u) → y (a) m (u) and 1 + Y (a) m (u) → 1 + y (a) m (u). (a) Let i ∈ I • or (4, i ′ ) (i ′ ∈ 3N). For any (i, u) : p + , the monomial [y i (u)] T is negative. (b) Let i = (4, i ′ ) (i ′ ∈ 3N). 
For any (i, u) : p + , the monomial [y i (u)] T is negative for u = ) y ii ′ (ℓ) = y −1 i,ℓ−i ′ if i = 4 and y −1 4,3ℓ−i ′ if i = 4. (iv) y ii ′ (−h ∨ ) = y −1ii ′ . We obtain corollaries of Propositions 5.9 and 5.12.Theorem 5.13. For [G Y (B, y)] T , the following relations hold. (i) Half periodicity: [y i (u + h ∨ + ℓ)] T = [y ω(i) (u)] T . (ii) Full periodicity: [y i (u + 2(h ∨ + ℓ))] T = [y i (u)] T . 23 ) 23where [i] =[i, i]. Then, ρ is a bijection. Furthermore, under the bijection ρ, the action of σ is translated into the one of the square of the Coxeter element s ′ = s ′ − s ′+ of type A 2r+1 acting on Φ ′ + , where s ′ ± = i∈J ′ ± s ′ i . −1 −2 −3 −4 −5 −6 −7 −8 −9 −10 −11 1 − −α 1 [1] [2,3] [4,5] [6,7] [8,9] {11} [9, 10] [7,8] [5, 6] [3, 4] [1, 2] −α 1 α 1 2 + α 2 −α 2 [1,3] [2,5] [4,7] [6,9] {8, 11} {9, 10} [7, 10] [5,8] [3,6] [1,4] [2] −α 2 3 − −α 3 [3] [1,5] [2,7] [4,9] {6,11} {8, 9} {7, 10} [5, 10] [3,8] [1, 6] [2, 4] −α 3 α 3 4 + α 4 −α 4 [3,5] [1,7] [2,9] {4,11} {6, 9} {7, 8} {5, 10} [3, 10] [1,8] [2,6] [4] −α 4 5 − −α 5 [5] [3,7] [1,9] {2,11} {4, 9} {6, 7} {5, 8} {3, 10} [1,10] [2,8] [4, 6] −α 5 α 5 6 + α 6 −α 6 [5,7] [3,9] {1,11} {2, 9} {4, 7} {5, 6} {3, 8} {1, 10} [2,10] [4,8] [6] −α 6 7 − −α 7 [7] [5,9] {3,11} {1, 9} {2, 7} {4, 5} {3, 6} {1, 8} {2, 10} [4,10] [6, 8] −α 7 α 7 8 + α 8 −α 8 [7,9] {5,11} {3, 9} {1, 7} {2, 5} {3, 4} {1, 6} {2, 8} {4, 10} [6,10] [8] −α 8 9 − −α 9 [9] {7,11} {5, 9} {3, 7} {1, 5} {2, 3} {1, 4} {2, 6} {4, 8} {6, 10} [8,10] −α 9 α 9 −α 11 −α 10 {9,11} {7, 9} {5, 7} {3, 5} {1, 3} {1, 2} {2, 4} {4, 6} {6, 8} {8, 10} [10] −α 11 −α 10 , the following facts hold.(i) Let u be in the region 0 ≤ u < 2. For any (i, u) : p + , the monomial [y i (u)] T is positive.(ii) Let u be in the region −h ∨ ≤ u < 0.ii ′ . Also we have a description of the core part of [y i (u)] T in the region −h ∨ ≤ u < 0 in terms of the root system of D 4 . We use the following index of the Dynkin diagram D 4 . Let Π = {α 1 , . . . 
, α 4 }, −Π, Φ + be the set of the simple roots, the negative simple roots, the positive roots, respectively, of type D 4 . Let σ i be the piecewise-linear analogue of the simple reflection s i , acting on the set of the almost positive roots Φ ≥−1 = Φ + ⊔ (−Π). We define σ as the composition σ = σ 3 σ 4 σ 1 σ 4 σ 2 σ 4 . (5.14)Lemma 5.10. We have the orbits by σ − α 1 → α 1 + α 3 + α 4 → α 1 + α 2 + α 4 → −α 1 ,(5.15)In particular, these elements in Φ + exhaust the set Φ + , thereby providing the orbit decomposition of Φ + by σ.i = 2; u = − 5 3 , − 11 3 , σ −(3u−5)/6 (−α 3 ) i = 3; u = − 1 3 , − 7 3 , σ −(3u+2)/6 (α 3 + α 4 ) i = 4; u = − 2 3 , − 8 3 , σ −(3u+4)/6 (α 1 ) i = 4; u = − 4 3 , − 10 3 , σ −u/2 (−α 4 ) i = 4; u = −2, −4. Tropical Y-system at level 2. By direct computations, the following properties are verified. Tropical Y-system at level 2. By direct computations, the following prop- erties are verified. Y-systems and generalized associahedra. S Fomin, A Zelevinsky, Ann. of Math. 158S. Fomin, A. Zelevinsky, Y-systems and generalized associahedra, Ann. of Math. 158 (2003), 977-1018. S Fomin, A Zelevinsky, Cluster algebras IV. Coefficients. 143S. Fomin, A. Zelevinsky, Cluster algebras IV. Coefficients, Compositio Mathematica 143 (2007), 112-164. R Inoue, O Iyama, B Keller, A Kuniba, T Nakanishi, arXiv:1001.1880Periodicities of T and Y-systems, dilogarithm identities, and cluster algebras I: Type Br. R. Inoue, O. Iyama, B. Keller, A. Kuniba, T. Nakanishi, Periodicities of T and Y-systems, dilogarithm identities, and cluster algebras I: Type Br, arXiv:1001.1880. Periodicities of T-systems and Y-systems. R Inoue, O Iyama, A Kuniba, T Nakanishi, J Suzuki, Nagoya Math. J. 197R. Inoue, O. Iyama, A. Kuniba, T. Nakanishi, J. Suzuki, Periodicities of T-systems and Y-systems, Nagoya Math. J. 197 (2010), 59-174. B Keller, Cluster algebras, quiver representations and triangulated categories. T. Holm, P. Jørgensen, and R. 
Rouquier, eds., London Mathematical Society Lecture Note Series vol. 375, Cambridge University Press, 2010, pp. 76-160.
A. N. Kirillov, Identities for the Rogers dilogarithm function connected with simple Lie algebras, J. Sov. Math. 47 (1989), 2450-2458.
A. Kuniba, Thermodynamics of the U_q(X_r^(1)) Bethe ansatz system with q a root of unity, Nucl. Phys. B389 (1993), 209-244.
A. Kuniba, T. Nakanishi, Spectra in conformal field theories from the Rogers dilogarithm, Mod. Phys. Lett. A7 (1992), 3487-3494.
A. Kuniba, T. Nakanishi, J. Suzuki, Functional relations in solvable lattice models: I. Functional relations and representation theory, Int. J. Mod. Phys. A 9 (1994), 5215-5266.
P. Plamondon, Cluster characters for cluster categories with infinite-dimensional morphism spaces, arXiv:1002.4956.
P. Plamondon, Cluster algebras via cluster categories with infinite-dimensional morphism spaces, arXiv:1004.0830.
[]
[ "Anomalous radiative transitions", "Anomalous radiative transitions" ]
[ "Kenzo Ishikawa \nDepartment of Physics\nFaculty of Science\nHokkaido University\n060-0810SapporoJapan\n", "Toshiki Tajima \nDepartment of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA\n\nKEK High Energy Accelerator Research Organization\n305-0801TsukubaJapan\n", "Yutaka Tobita \nDepartment of Physics\nFaculty of Science\nHokkaido University\n060-0810SapporoJapan\n" ]
[ "Department of Physics\nFaculty of Science\nHokkaido University\n060-0810SapporoJapan", "Department of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA", "KEK High Energy Accelerator Research Organization\n305-0801TsukubaJapan", "Department of Physics\nFaculty of Science\nHokkaido University\n060-0810SapporoJapan" ]
[]
Anomalous transitions involving photons, derived from many-body interactions of the form ∂_μ G^μ in the standard model, are studied. Such an interaction does not affect the equation of motion in the bulk, but it modifies the wave functions and causes an unusual transition characterized by a time-independent probability. The transition probability over a time interval T is expressed generally in the form P = T Γ_0 + P^(d), now with P^(d) ≠ 0. The diffractive term P^(d) originates in the overlap of the waves of the initial and final states, and reveals the characteristics of waves. In particular, processes of the neutrino-photon interaction ordinarily forbidden by Landau-Yang's theorem (Γ_0 = 0) manifest themselves through the boundary interaction. The new term gives physical processes over a wide energy range finite probabilities. New methods of detecting neutrinos using lasers, based on this diffractive term, are proposed; they enhance the detectability of neutrinos by many orders of magnitude.
10.1093/ptep/ptu168
[ "https://arxiv.org/pdf/1409.4339v3.pdf" ]
85,452,255
1409.4339
6d8171e12693dda80a614796be1c30e0a0c64ed6
Anomalous radiative transitions

30 Sep 2014

Kenzo Ishikawa, Department of Physics, Faculty of Science, Hokkaido University, 060-0810 Sapporo, Japan
Toshiki Tajima, Department of Physics and Astronomy, University of California, 92697 Irvine, CA, USA; KEK High Energy Accelerator Research Organization, 305-0801 Tsukuba, Japan
Yutaka Tobita, Department of Physics, Faculty of Science, Hokkaido University, 060-0810 Sapporo, Japan

Matter wave and S[T]

In modern science and technology, quantum mechanics plays fundamental roles. Although the treatment of stationary phenomena is well developed, that of non-stationary phenomena is not. In the former, the de Broglie wave length, $\hbar/p$, determines a typical length, which is of microscopic size; scatterings or reactions on a macroscopic scale are considered independent, and successive reactions have been treated under the independent-scattering hypothesis.
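As a numerical illustration of this microscopic scale (a sketch; the 1 eV electron is an assumed example, not a value from the text):

```python
import math

# De Broglie wavelength lambda = h/p for a non-relativistic particle,
# with p = sqrt(2 m E).  Physical constants in SI units.
h = 6.62607015e-34      # Planck constant [J s]
m_e = 9.1093837015e-31  # electron mass [kg]
eV = 1.602176634e-19    # 1 eV in joules

def de_broglie_wavelength(mass_kg, kinetic_energy_J):
    """Non-relativistic de Broglie wavelength h / sqrt(2 m E)."""
    p = math.sqrt(2.0 * mass_kg * kinetic_energy_J)
    return h / p

# A 1 eV electron: about 1.2 nm, i.e. microscopic, as stated in the text.
lam = de_broglie_wavelength(m_e, 1.0 * eV)
```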
The probability of the event that they occur is computed as the incoherent sum of the individual values. In the latter, time and space variables vary simultaneously, and a new scale, which can be much larger than the de Broglie wave length, emerges. It appears in the overlapping region of the initial and final waves, and shows unique properties of quantum mechanical waves. A transition rate computed with a method for stationary waves, with the initial and final states defined at the infinite time-interval $T = \infty$, is independent of the details of the wave functions. Such rates retain the characteristics of particles and preserve the symmetry of the system. Transitions occurring at a finite $T$, however, reveal the characteristics of waves and a dependence on the boundary conditions [1,2]¹, and the probability is
$$P = T\Gamma_0 + P^{(d)}, \qquad (1)$$
where $P^{(d)}$ is the diffractive term, which has often escaped the attention of researchers. The rate $\Gamma_0$ is computed with Fermi's golden rule [6,7,8,9,10], and preserves the internal and space-time symmetries, including kinetic-energy conservation. $\Gamma_0$ retains the characteristic properties of particles, and for it the hypothesis of independent scatterings is valid. For a particle of small mass $m_s$, $\Gamma_0(p_i, m_s)$ behaves as
$$\Gamma_0(p_i, m_s) \approx \Gamma_0(p_i, 0), \qquad (2)$$
because the characteristic length, the de Broglie wave length, is determined by $p_i$. The region where $P^{(d)}$ is negligible is called the particle-zone. Overlapping waves in the initial and final states have a finite interaction energy, and reveal unique properties of waves [1,2]. Because the interaction energy is part of the total energy, shared with the kinetic energy, the conservation law of the kinetic energy is violated. Consequently, the state becomes non-uniform in time, and the transition probability has the new component $P^{(d)}$, showing the characteristics of waves. The term $P^{(d)}$ was shown to involve a new length scale, $\left(\frac{\hbar}{m_s c}\right)\cdot\left(\frac{E_i}{m_s c^2}\right)$.
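For orientation, the sketch below evaluates this length scale, (ħ/(m_s c))·(E_i/(m_s c²)), for an assumed light particle; the mass and energy values (m_s c² = 0.1 eV, E_i = 1 MeV, neutrino-like) are illustrative choices, not taken from the paper:

```python
# Wave-zone length scale (hbar/(m_s c)) * (E_i/(m_s c^2)) for a light particle.
hbar_c_eV_m = 197.3269804e-9   # hbar*c = 197.3 eV*nm, expressed in eV*m

def wave_zone_length(mc2_eV, energy_eV):
    """Reduced Compton wavelength hbar/(m c), boosted by E/(m c^2)."""
    compton = hbar_c_eV_m / mc2_eV        # hbar/(m c) in metres
    return compton * (energy_eV / mc2_eV)

# Assumed values: m_s c^2 = 0.1 eV, E_i = 1 MeV.
# The result is ~20 m: macroscopic, illustrating how the wave-zone
# extends over a large area for light, energetic particles.
L_wave = wave_zone_length(0.1, 1.0e6)
```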
Accordingly the correction is proportional to the ratio of two small quantities,
$$\frac{P^{(d)}}{T} = f\!\left(\frac{1/T}{m_s^2 c^3/E_i}\right), \qquad (3)$$
and does not follow Eq. (2). $P^{(d)}$ reflects the non-stationary waves and is not computed with the stationary waves. In the region where $P^{(d)}$ is important, the hypothesis of independent scatterings is invalid, and interference unique to waves manifests itself. This region is called the wave-zone, and extends over a large area for light particles. $P^{(d)}$ has been ignored, but gives important contributions to the probability in various processes. $P^{(d)}$ is especially inevitable for processes with $\Gamma_0 = 0$ and $P^{(d)} \neq 0$, which occur often. Furthermore, $P^{(d)}$ can be enhanced drastically if the overlap of the waves is constructive over a wide area. This happens for small $m_s$ and large $E_i$, even for large $T$, and reveals macroscopic quantum phenomena. Processes with large $P^{(d)}$ involving the photon and neutrino in the standard model are studied in the present paper. An example showing $\Gamma_0 = 0$, $P^{(d)} \neq 0$ is a system of fields described by a free part $L_0$ and an interaction part $L_{\rm int}$ that is a total derivative,
$$L = L_0 + L_{\rm int}, \qquad L_{\rm int} = \frac{d}{dt}G, \qquad (4)$$
where $G$ is a polynomial of the fields $\phi_l(x)$. $\phi_l(x)$ follows the free equation,
$$\frac{\partial L_0}{\partial \phi_l(x)} - \frac{\partial}{\partial t}\frac{\partial L_0}{\partial(\partial\phi_l(x)/\partial t)} = 0. \qquad (5)$$
$L_{\rm int}$ decouples from the equation and does not modify the equation of motion in classical or quantum mechanics. Nevertheless, a wave function $|\Psi(t)\rangle$ follows the Schrödinger equation in the interaction picture,
$$i\hbar\frac{\partial}{\partial t}|\Psi\rangle_{\rm int} = \left(\frac{\partial}{\partial t}G_{\rm int}(t)\right)|\Psi\rangle_{\rm int}, \qquad (6)$$
where the free part $H_0$ and the interaction part $H_{\rm int}$ are derived from the above Lagrangian, and $G_{\rm int}$ stands for $G$ in the interaction picture. The solution at time $t$,
$$|\Psi(t)\rangle_{\rm int} = e^{\frac{G_{\rm int}(t)-G_{\rm int}(0)}{i\hbar}}|\Psi(0)\rangle_{\rm int}, \qquad (7)$$
is expressed with $G(t)$, and the state at $t > 0$ is modified by the interaction. The initial state $|\Psi(0)\rangle_{\rm int}$ prepared at $t = 0$ is transformed into another state with $t$-independent weight. Hence the state of Eq. (7) is quasi-stationary, with $\Gamma_0 = 0$ and $P^{(d)} \neq 0$.
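In the simplest case of a c-number G(t), Eq. (7) can be checked directly: integrating i dψ/dt = G'(t)ψ numerically reproduces ψ(t) = e^{(G(t)−G(0))/i}ψ(0), a pure phase, so the weight |ψ|² is t-independent. A minimal sketch (G(t) = g sin(Ωt) is an assumed toy form, with ħ = 1):

```python
import cmath, math

# Toy check of Eq. (7): for L_int = (d/dt)G with a c-number G(t), the
# interaction-picture equation i dpsi/dt = G'(t) psi is solved by
# psi(t) = exp((G(t) - G(0))/i) psi(0) -- only the phase rotates.
g, Omega = 0.7, 3.0
G = lambda t: g * math.sin(Omega * t)
dG = lambda t: g * Omega * math.cos(Omega * t)

def rhs(t, psi):
    # i dpsi/dt = dG/dt * psi  =>  dpsi/dt = -i * (dG/dt) * psi
    return -1j * dG(t) * psi

# 4th-order Runge-Kutta integration from t = 0 to t = 1
n, psi = 20000, 1.0 + 0.0j
dt = 1.0 / n
for k in range(n):
    t = k * dt
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

exact = cmath.exp((G(1.0) - G(0.0)) / 1j)  # closed form of Eq. (7), hbar = 1
```

The numerically evolved ψ agrees with the closed form, and |ψ| stays equal to 1, illustrating the "t-independent weight" of the text.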
Physical observables are expressed by the probability of events, which are specified by the initial and final states. For those at finite $T$, the normal S-matrix $S[\infty]$, which satisfies the boundary condition at $T = \infty$ instead of that at finite $T$, is useless. An $S[T]$ that satisfies the boundary condition at $T$ [1,2] is necessary, and was constructed. $S[T]$ is applied to the system described by Eq. (4). A matrix element of $S[T]$ between two eigenstates of $H_0$, $|\alpha\rangle$ and $|\beta\rangle$, of eigenvalues $E_\alpha$ and $E_\beta$, is decomposed into two components,
$$\langle\beta|S[T]|\alpha\rangle = \langle\beta|S^{(n)}[T]|\alpha\rangle + \langle\beta|S^{(d)}[T]|\alpha\rangle, \qquad (9)$$
where $\langle\beta|S^{(n)}|\alpha\rangle$ and $\langle\beta|S^{(d)}|\alpha\rangle$ receive contributions from the cases $E_\beta = E_\alpha$ and $E_\beta \neq E_\alpha$, and give $T\Gamma_0$ and $P^{(d)}$, respectively. The deviation of the kinetic energies, $E_\beta - E_\alpha$, in the latter is due to the interaction energy of the overlapping waves, which depends on the coordinate system. Therefore, it is understood that $H_{\rm int}$ is not Lorentz invariant. Thus the kinetic-energy non-conserving term, which was mentioned by Peierls and Landau [4] as giving a negligibly small correction, yields $P^{(d)}$ [5]². Because $H_0$ is a generator of the Poincare group, Eq. (8) shows that $S^{(d)}[T]$ and $P^{(d)}$ violate Poincare invariance. In the system described by Eq. (4), the first term disappears but the second does not: $\Gamma_0 = 0$ and $P^{(d)} \neq 0$. $S[T]$ is expressed with the boundary conditions for the scalar field $\phi(x)$ [12,13],
$$\lim_{t\to -T/2}\langle\alpha|\phi^f|\beta\rangle = \langle\alpha|\phi^f_{\rm in}|\beta\rangle, \qquad (10)$$
$$\lim_{t\to +T/2}\langle\alpha|\phi^f|\beta\rangle = \langle\alpha|\phi^f_{\rm out}|\beta\rangle, \qquad (11)$$
where $\phi_{\rm in}(x)$ and $\phi_{\rm out}(x)$ satisfy the free wave equation, and $\phi^f$, $\phi^f_{\rm in}$, and $\phi^f_{\rm out}$ are the expansion coefficients of $\phi(x)$, $\phi_{\rm in}(x)$, and $\phi_{\rm out}(x)$ with the normalized wave function $f(x)$, of the form
$$\phi^f(t) = i\int d^3x\, f^*(\vec{x},t)\overleftrightarrow{\partial_0}\phi(\vec{x},t). \qquad (12)$$
The function $f(x)$ indicates the wave function with which the out-going wave interacts in a successive reaction of the process.
The out-going photon studied in the following sections interacts with an atom or nucleus, and their wave functions are used for $f(x)$. The wave functions [12,13,8,9,10] can be replaced with plane waves for a practical computation of $S[\infty]$ [18,19,20,21,22,23,24,25], but this cannot be done for $P^{(d)}$ [14,15,16]. $P^{(d)}$ is derived from $S^{(d)}[T; f]$, and depends on $f(x)$. The photon is massless in vacuum and has a small effective mass, determined by the plasma frequency, in matter; the neutrino is nearly massless. Thus both have large wave-zones revealing the wave phenomena caused by $P^{(d)}$. These small masses make $P^{(d)}$ appear on a macroscopic scale and significantly affect physical reactions. In this small (or zero) mass region the effects of the diffractive term $P^{(d)}$ are pronounced. This is our interest in the present paper. The produced photon interacts with matter through the electromagnetic interaction, which leads to macroscopic observables. The term $P^{(d)}$ for processes with $\Gamma_0 = 0$, such as the 2γ decay of a 1⁺ meson and γ-ν reactions, is shown to be relevant to many physical processes, including a possible experimental observation of relic neutrinos. An enhancement of the probability for light particles with intense photons based on the normal component $\Gamma_0$ was proposed in Refs. [27,28], and the collective interaction between electrons and neutrinos derived from the normal component $\Gamma_0$ was considered in Ref. [29]. Our theory is based on the probability $P^{(d)}$, and hence differs from the previous ones in many respects. This paper is organized in the following manner. In Section 2, the coupling of two photons with a 1⁺ state through the triangle diagram is obtained. In Sections 3 and 4, positronium and heavy quarkonia are studied and their $P^{(d)}$ are computed. Based on these studies, we go on to investigate the interaction of photons and neutrinos. In Section 5, the neutrino-photon interaction of order $\alpha G_F$, and various implications for high-energy neutrino phenomena, are presented.
In Section 6 we explore the implications of the photon-neutrino coupling for experimental settings. A summary is given in Section 7.

Coupling of a 1⁺ meson with two photons

The couplings of γγ with axial vector states, 1⁺ mesons composed of $e^-e^+$, $\bar{Q}Q$, and $\bar{\nu}\nu$, are studied using an effective Lagrangian expressed by local fields. From symmetry considerations, an effective interaction of the 1⁺ state $\phi_1^\mu$ with two photons has the form
$$S_{\rm int} = g\int d^4x\,\partial_\mu\big(\phi_1^\mu(x)\tilde{F}_{\alpha\beta}(x)F^{\alpha\beta}(x)\big), \qquad (13)$$
$$\partial_\mu\phi_1^\mu(x) = 0, \qquad \tilde{F}^{\alpha\beta}(x) = \epsilon^{\alpha\beta\gamma\nu}F_{\gamma\nu}(x),$$
where $F_{\alpha\beta}$ is the electromagnetic field strength, and the coupling strength $g$ is computed later. In a transition of plane waves in an infinite time-interval, the space-time boundary is at infinity, and the transition amplitude is computed with the plane waves, in the form
$$M = (p_i - p_f)_\mu\,(2\pi)^4\delta^4(p_i - p_f)\,\tilde{M}^\mu, \qquad (14)$$
and vanishes, where $p_i$ and $p_f$ are the four-momenta of the initial and final states and $\tilde{M}$ is the invariant amplitude. This shows that the amplitude proportional to $\delta^4(p_i - p_f)$, and hence the transition rate $\Gamma_0$, vanish. The rate of the 1⁺ → γγ decay vanishes in general systems, because a state of two photons of momenta $(\vec{p}, -\vec{p})$ does not couple with a massive 1⁺ particle. Hence
$$\Gamma_0^{1^+\to\gamma\gamma} = 0, \qquad (15)$$
which is known as Landau-Yang's theorem [30,31]. The term $S_{\rm int}$ is written as a surface term in four-dimensional space-time,
$$S_{\rm int} = \int_{\rm surface}dS_\mu\,g\,\phi_1^\mu(x)\tilde{F}_{\alpha\beta}(x)F^{\alpha\beta}(x), \qquad (16)$$
which is determined by the wave functions of the initial and final states. Accordingly, the transition amplitude derived from this surface action is not proportional to $T$, but has a weaker $T$ dependence. Thus $P^{(d)}$ comes from the surface term, and does not carry the delta function of kinetic-energy conservation. The kinetic energy of the final state deviates from that of the initial state due to the finite interaction energy between them. The deviation becomes larger, and $P^{(d)}$ is expected to increase, with larger overlap. We find $P^{(d)}$ in the following.
Triangle diagram

The interaction of the form Eq. (13) is generated by a one-loop effect in the standard model. With the scalar and axial vector currents in QED,
$$J(0) = \bar{l}(0)l(0), \qquad (17)$$
$$J_{5,\mu}(0) = \bar{l}(0)\gamma_5\gamma_\mu l(0), \qquad (18)$$
we have $L = L_0 + L_{\rm int}$,
$$L_0 = \bar{l}(x)(\gamma\cdot p - m_l)l(x) - \frac{1}{4}F_{\mu\nu}(x)F^{\mu\nu}(x), \qquad (19)$$
$$L_{\rm int} = eJ^\mu A_\mu(x), \qquad J^\mu(x) = \bar{l}(x)\gamma^\mu l(x),$$
where $A_\mu(x)$ is the photon field and $l(x)$ is the electron field; the currents couple to two photons in the bulk through the triangle diagram of Fig. 1. The matrix elements are
$$\Gamma_0 = \langle 0|J(0)|k_1, k_2\rangle = \frac{e^2}{4\pi^2}\epsilon^\mu(k_1)\epsilon^\nu(k_2)\,m\,[k_{2,\mu}k_{1,\nu} - g_{\mu\nu}\,k_1\cdot k_2]\,f_0, \qquad (20)$$
$$\Gamma_{5,\alpha} = \langle 0|J_{5,\alpha}(0)|k_1, k_2\rangle = -i\frac{e^2}{4\pi^2}2f_1\,\epsilon^{\mu_1}(k_1)\epsilon^{\mu_2}(k_2)\big[(k_{1,\mu_2}\epsilon_{\mu_1\nu_1\nu_2\alpha} - k_{2,\mu_1}\epsilon_{\mu_2\nu_1\nu_2\alpha})k_1^{\nu_1}k_2^{\nu_2} + (k_1\cdot k_2)\epsilon_{\mu_1\mu_2\nu\alpha}(k_1 - k_2)^\nu\big], \qquad (21)$$
where $\epsilon^\mu(k)$ is the polarization vector of the photon. The triangle diagram for the axial vector current, Eq. (21), has been studied in connection with the axial anomaly and the π⁰ → γγ decay [32,33,34,35,36], and is now applied to $P^{(d)}$ for the two-photon transitions of the axial vector meson and the neutrino. The triangle diagram of Fig. 1 shows that the interaction occurs locally in space and time, but the transition amplitude is an integral over the coordinates, and receives a large diffractive contribution if the neutrino and photon are spatially spread waves. In Fig. 1, the in-coming and out-going waves are drawn as lines, but they are in fact spread waves, as is obvious in the figures of Ref. [37] and in Fig. 2. $\Gamma_{5,\alpha}$ is expressed also with $f_1$ in the form
$$\Gamma_{5,\alpha} = \frac{\alpha_{\rm em}}{2\pi}f_1(k_1 + k_2)_\alpha\big(F_{1,\rho\lambda}\tilde{F}_2^{\rho\lambda} + F_{2,\rho\lambda}\tilde{F}_1^{\rho\lambda}\big), \qquad (22)$$
$$F_{1,\rho\lambda} = k_{1,\rho}\epsilon_{1,\lambda} - k_{1,\lambda}\epsilon_{1,\rho}, \qquad \tilde{F}^{\rho\lambda} = \frac{1}{2}\epsilon^{\rho\lambda\xi\eta}F_{\xi\eta}.$$
The coefficients $f_0$ and $f_1$ are given by integrals over the Feynman parameters,
$$f_0 = \int_0^1 dx\int_0^{1-x}dy\,\frac{1}{m_l^2 - 2xy\,k_1\cdot k_2 - i\epsilon}, \qquad (23)$$
$$f_1 = \int_0^1 dx\int_0^{1-x}dy\,\frac{xy}{m_l^2 - 2xy\,k_1\cdot k_2 - i\epsilon}. \qquad (24)$$
[Figure 2: Diagram of the neutrino-photon scattering ν + γ → ν + γ with spatially spread waves; one of the final photons is detected.]
Here
$$f_1 = -\frac{1}{4k_1\cdot k_2} + \frac{m_l^2}{4(k_1\cdot k_2)^2}I_1, \qquad (25)$$
$$I_1 = 2k_1\cdot k_2\int_0^1 dx\int_0^{1-x}dy\,\frac{1}{m_l^2 - 2xy\,k_1\cdot k_2 - i\epsilon} = \begin{cases} 2\left[\sin^{-1}\sqrt{\frac{k_1\cdot k_2}{2m_l^2}}\right]^2, & k_1\cdot k_2 < 2m_l^2,\\[4pt] \dfrac{\pi^2}{2} - \dfrac{1}{2}\log^2\dfrac{1+\sqrt{1-\frac{2m_l^2}{k_1\cdot k_2}}}{1-\sqrt{1-\frac{2m_l^2}{k_1\cdot k_2}}} + i\pi\log\dfrac{1+\sqrt{1-\frac{2m_l^2}{k_1\cdot k_2}}}{1-\sqrt{1-\frac{2m_l^2}{k_1\cdot k_2}}}, & k_1\cdot k_2 \geq 2m_l^2. \end{cases} \qquad (26)$$
$f_1$ in the various kinematical regions is
$$f_1 = \begin{cases} -\dfrac{1}{4k_1\cdot k_2} + \dfrac{m_l^2}{4(k_1\cdot k_2)^2}\,\pi\left(1 - \dfrac{2\delta E}{m_l}\right), & k_1\cdot k_2 = 2m_l^2 - m_l\,\delta E,\\[4pt] -\dfrac{1}{4k_1\cdot k_2}, & k_1\cdot k_2 \gg m_l^2,\\[4pt] \dfrac{1}{24m_l^2}, & k_1\cdot k_2 \ll m_l^2. \end{cases} \qquad (27)$$

Positronium

The bound states of positronium with orbital angular momentum L = 1 and spin S = 1 have total angular momentum J = 2, 1, 0. These states at rest, $\vec{P} = 0$, are expressed with non-relativistic wave functions, creation operators of $l^+$ and $l^-$ of momentum $\vec{p}$ and spin $\pm 1/2$, and P-wave wave functions $p_iF(p)$, as given in Appendix A, where
$$|1/2, 1/2; -1\rangle = \int d\vec{p}\;b^\dagger_{+1/2}(\vec{p}\,)d^\dagger_{+1/2}(-\vec{p}\,)(p_x - ip_y)F(p)|0\rangle. \qquad (28)$$
Their interactions with two photons are summarized in the following effective Lagrangian, L int = g 0 α π f 0 φ 0 F µν F µν + g 1 α π f 1 ∂ µ (φ µ 1 ǫ νρστ F νρ F στ ) + g 2 α π T µν F µρ F ρν .(34) Axial vector positronium Here we study the two photon decay of axial vector positronium, which is governed by the second term of the right-hand side of Eq. (34). The matrix element of the axial current between the vacuum and two photon state was computed by Refs. [32,33,34,35,36]. Because Γ 0 of two photon decay of axial vector meson vanishes due to Landau-Yang's theorem, but P (d) does not, we give the detailed derivation of P (d) . From the effective interaction, Eq. (34), the probability amplitude of the event that one of the photons of k γ from the decay of c µ of p c is detected at ( X γ , T γ ) is M = −g 1 α π d 4 x ∂ ∂x µ [f 1 0|c µ (x)| p c ( k γ , X γ , T γ ), k 1 |ǫ νρστ F νρ F στ |0 ], (35) where k γ , X γ , T γ |A µ (x)|0 = N γ d k 2 ρ γ ( k 2 )e − σγ 2 ( k 2 − kγ) 2 +i(E( k 2 )(t−Tγ )− k 2 ( x− Xγ )) ǫ µ ( k 2 ),(36)k 1 |A µ (x)|0 = ρ γ ( k 1 )ǫ µ ( k 2 )e +i(E( k 1 )t− k 1 x) , 0|c µ (x)| p c = (2π) 3/2 ρ c ( p c )ǫ µ ( p c )e −i(E( kc)t− kc x) . The initial state is normalized, and the coupling in Eq. (33) has (2π) 3 2 for the initial state, and k µ ǫ µ (k) = 0, (p c ) µ ǫ µ (p c ) = 0,(37)ρ( k) = 1 2E(k)(2π) 3 1 2 . The state k γ , X γ , T γ | is normalized and, N γ = σ γ π 3 4 .(38) Integration over k 2 is made prior to the integration over x, in order for M to satisfy the boundary condition of S[T ], and we have, k γ , X γ , T γ |A µ (x)|0 = θ(λ) (2π) 3 σ γ σ 2 T 1 2 ρ( k γ + δ k)ǫ µ ( k γ + δ k) ×e i(E( kγ )(t−Tγ )− kγ ( x− Xγ))−χ(xµ) ,(39) where λ = (t − T γ ) 2 − ( x − X γ ) 2 ,(40)χ(x) = 1 2σ γ (( x − X γ ) l − v γ (x 0 − T γ )) 2 + 1 2σ T ( x − X γ ) 2 T ,(41)(δ k(x)) i = − i σ i γ δ x, δ x = ( x − X γ − v γ (t − T γ )), σ l γ = σ γ ; i = longitudinal, σ T γ = σ T ; i = transverse. 
The wavepacket expands in the transverse direction, and $\sigma_T$ at large $x^0 - T_\gamma$ is given by
$$\sigma_T = \sigma_\gamma - \frac{i}{E}(x^0 - T_\gamma). \qquad (42)$$
We later use the spin sum
$$\sum_{\rm spin}\epsilon^\mu(\vec{k}_\gamma + \delta\vec{k})\epsilon^\nu(\vec{k}_\gamma + \delta\vec{k}) = -g^{\mu\nu}, \qquad (43)$$
$$\delta k^0(x) = \frac{i}{\sigma_\gamma}\delta x^0, \qquad \delta x^0 = \vec{v}\cdot\big(\vec{x} - \vec{X}_\gamma - \vec{v}_\gamma(t - T_\gamma)\big).$$
The stationary phase at large $x^0 - T_\gamma$ exists in the time-like region $\lambda \geq 0$ [14], and the function and its derivative are proportional to $\theta(\lambda)$. Thus the integration in Eq. (35) is made over the region $\lambda \geq 0$, which has the boundary $\lambda = 0$. Consequently, the transition amplitude does not vanish if the integrand at the boundary is finite; it is shown below that this is in fact the case. $\sigma_\gamma$ is the size of the nucleus or atomic wave function that the photon interacts with, and is estimated later. For the sake of simplicity, we use the Gaussian form for the main part in this paper. Substituting Eq. (21), we have the amplitude
$$M = -ig_1\frac{\alpha}{\pi}N\int_{\lambda\geq 0}d^4x\,\frac{\partial}{\partial x^\mu}\left[e^{i(p_c - k_1)x}\tilde{M}^\mu\right], \qquad (44)$$
$$\tilde{M}^\mu = f_1\{k_1\cdot(k_\gamma + \delta k(x))\}\,e^{i(E(k_\gamma)(t - T_\gamma) - \vec{k}_\gamma(\vec{x} - \vec{X}_\gamma)) - \chi(x)}\,T^\mu,$$
$$T^\mu = \epsilon^\mu(p_c)\,\epsilon^{\nu*}(k_1)\,\epsilon_{\rho\lambda\xi\nu}\,k_1^\xi\,(k_\gamma + \delta k_\gamma(x))^\rho\,\epsilon^\lambda(k_\gamma + \delta k_\gamma(x)),$$
$$N = (2\pi)^{3/2}\rho_c(\vec{p}_c)\rho_\gamma(\vec{k}_\gamma)\rho(\vec{k}_1)N_\gamma\left(\frac{(2\pi)^3}{\sigma_\gamma\sigma_T^2}\right)^{1/2}, \qquad (45)$$
which depends on the momenta and on the coordinates $(T_\gamma, \vec{X}_\gamma)$ of the final state and $T_m$. Although $M$ is written as an integral over the surface $\lambda = 0$, the expression Eq. (44) is useful, and is applied to computing the probability per particle,
$$P = \frac{1}{V}\int d\vec{X}_\gamma\,\frac{d\vec{k}_\gamma\,d\vec{k}_1}{(2\pi)^3}\sum_{s_1,s_2}|M|^2, \qquad (46)$$
where $V$ is a normalization volume for the initial state; the momentum of the non-observed final state is integrated over the whole positive-energy region, and the position of the observed particle is integrated inside the detector. Following the method of the previous works [1,2] and Appendix B, we write the probability with a correlation function.
In the integral
$$\int d\vec{k}_1|M|^2 = \left(\frac{g\alpha N}{\pi\rho(k_1)}\right)^2\int d\vec{k}_1\,\rho^2(\vec{k}_1)\int_{\lambda_i\geq 0}d^4x_1\,d^4x_2\,\frac{\partial^2}{\partial x_1^{\mu_1}\partial x_2^{\mu_2}}F, \qquad (47)$$
$$F = f_1\{k_1\cdot(k_\gamma + \delta k_\gamma(x_1))\}\,f_1^*\{k_1\cdot(k_\gamma + \delta k_\gamma(x_2))\}\,T^{\mu_1}(x_1)T^{\mu_2,*}(x_2)\,e^{-i(p_c - k_1 - k_\gamma)(x_1 - x_2) - \chi(x_1) - \chi^*(x_2)},$$
we have
$$\sum_{\rm spin}T^{\mu_1}(x_1)T^{\mu_2,*}(x_2) = 2\left(-g^{\mu_1\mu_2} + \frac{p_c^{\mu_1}p_c^{\mu_2}}{M^2}\right)\big(k_1\cdot(k_\gamma + \delta k_\gamma(x_2))^*\big)\big(k_1\cdot(k_\gamma + \delta k(x_1))\big). \qquad (48)$$
With the variables
$$x_+^\mu = x_1^\mu + x_2^\mu - 2X^\mu, \qquad X^0 = T_\gamma, \qquad (49)$$
$$\delta x^\mu = x_1^\mu - x_2^\mu, \qquad (50)$$
the integral is written as
$$\int_{\lambda_i\geq 0}d^4x_1\,d^4x_2\,\frac{\partial^2}{\partial x_1^\mu\partial x_{2,\mu}}F = \int_{\lambda_+\geq 0}d^4x_+\,d^4\delta x\,\frac{1}{4}\left(\frac{\partial^2}{\partial x_{+,\mu}^2} - \frac{\partial^2}{\partial\delta x_\mu^2}\right)F = \int d^4\delta x\oint_{\lambda_+ = 0}\frac{1}{4}d^3S_+^\mu\,\frac{\partial}{\partial x_+^\mu}F, \qquad (51)$$
where $\lambda_+ = x_+^2$ and $x_+$ is integrated over the region $\lambda_+ \geq 0$, so the integral is given by the value at the boundary $\lambda_+ = 0$. $\delta x$ is integrated over the whole region, and the second term in the second expression vanishes. Using the formulas [1,2],
$$\frac{1}{(2\pi)^3}\int\frac{d\vec{k}_1}{2E(k_1)}e^{-i(p_c - k_1)\cdot\delta x} = -\frac{i}{4\pi}\delta(\lambda_-)\epsilon(\delta t)\,\theta({\rm phase\mbox{-}space}) + {\rm regular}, \qquad (52)$$
$$\theta({\rm phase\mbox{-}space}) = \theta(M^2 - 2p_c\cdot p_\gamma), \qquad (53)$$
$$\int d\delta\vec{x}\;e^{i\vec{p}_\gamma\cdot\delta\vec{x} - \frac{1}{4\sigma}(\delta\vec{x} - \vec{v}\delta t)^2}\,\frac{1}{4\pi}\delta(\lambda) = \frac{\sigma_T}{2}\frac{e^{i\phi_c(\delta t)}}{\delta t}, \qquad (54)$$
$$\phi_c(\delta t) = \omega_\gamma\,\delta t, \qquad \omega_\gamma = \frac{m_\gamma^2}{2E_\gamma},$$
and the integrals given in Appendix E, we compute the probability. It is worthwhile to note that the light-cone singularity exists in the kinematical region of $\theta({\rm phase\mbox{-}space})$, and the probability becomes finite in this region [1,2]. The natural unit, $c = \hbar = 1$, is taken in the majority of places, but $c$ and $\hbar$ are written explicitly when necessary, and MKSA units are taken in later parts. After tedious calculations, we have the probability
$$P = \frac{1}{3}\int\frac{d\vec{X}_\gamma}{V}\int\frac{d\vec{p}_\gamma}{(2\pi)^3E_\gamma}N^2\,\frac{1}{4\pi}\,i\sum_i I_i\,\Delta_{1^+,\gamma}\,\theta(M^2 - 2p_c\cdot p_\gamma), \qquad (55)$$
where $I_i$ is given in Appendix E, and
$$N^2 = \left(g\frac{2}{\pi}\alpha\right)^2\frac{1}{2E_c}\left(\frac{\pi}{\sigma_\gamma}\right)^{3/2}, \qquad (56)$$
$$\Delta_{1^+,\gamma} = |f_1((p_c - k_\gamma)\cdot k_\gamma)|^2\,2\big((p_c - k_\gamma)\cdot k_\gamma\big)^2 = \frac{(\pi - 2)^2}{32}\quad (\delta E \to 0), \qquad (57)$$
and
$$\left(\frac{\pi^3}{\sigma_\gamma\sigma_T^2(1)\sigma_T^{2*}(2)}\right)^{1/2} = \left(\frac{\pi^3}{\sigma_\gamma^3}\right)^{1/2}\rho_s(x_1^0 - x_2^0), \qquad \rho_s(x_1^0 - x_2^0) = 1 + i\frac{x_1^0 - x_2^0}{E_\gamma\sigma_\gamma}, \qquad (58)$$
were substituted.
Integrating over the photon coordinate $\vec{X}_\gamma$, we obtain the total volume, which is canceled by the factor $V^{-1}$ from the normalization of the initial state. The total probability for high-energy gamma rays is then expressed as
$$P = \frac{1}{3}\left(\pi^2\sigma_l^2\log(\omega T) + \frac{\sigma_l}{m_\gamma^2}\right)\int\frac{d^3p_\gamma}{(2\pi)^3E_\gamma}\,\frac{N^2}{2}\,\theta(M^2 - 2p_c\cdot p_\gamma)\,\Delta_{1^+,\gamma}, \qquad (59)$$
where $L = cT$ is the length of the decay region. The kinematical region of the final states is expressed by the step function [2], which is different from the on-mass-shell condition. $\Gamma_0$ vanishes, and $P$ is composed of a $\log T$ term and a constant term. The constant is inversely proportional to $m_\gamma^2$, and becomes large for small $m_\gamma$.
Because quarks interact via electromagnetic and strong interactions, non-perturbative effects are not negligible, but the symmetry consideration is valid. Moreover, the nonrelativistic representations are good for these bound states, because quark masses are much greater than the confinement scale. Furthermore they have small spatial sizes. Accordingly we represent them with local fields and find their interactions using the coupling strengths of Eq. (33) and the values of the triangle diagrams, Eqs. (20), and (21). qq → γ + γ Up and down quarks have charge 2e/3 or e/3, and color triplet. Hence the probabilities of two photon decays are obtained by the expression Eq. (67) with charges of quarks 2e/3, e/3, and color factor. It is highly desired to obtain the experimental value for 1 + . qq → gluon + gluon A meson composed of heavy quarks decays to light hadrons through gluons. Color singlet two gluon states are equivalent to two photon states. Accordingly the two gluon decays are calculated in the equivalent way to that of two photon decays as far as the perturbative calculations are concerned. The transition rates for 0 + and 2 + may be calculated in this manner. The total rates for L = 0 charmonium, J/Ψ and η c agree with the values obtained by the perturbative calculations. J/Ψ is C = −1 and decays to three gluons and the latter is C = +1 and decays to two gluons. The former rate is of α 3 s and the latter rate is of α 2 s , where α s is the coupling strength of gluon. Their widths are Γ = 93 keV or Γ = 26.7 MeV, and are consistent with small coupling strength α s ≈ 0.2. Now L = 1 states have C = +1. Hence a meson of J = 2 and that of J = 0 decay to two gluons, whereas the decay rate of a J = 1 meson vanishes by Landau-Yang's theorem. A cc meson of the quantum number 1 + is slightly different from that of positronium, because a gluon hadronizes by a non-perturbative effect, which is peculiar to the gluon. 
The hadronization length is not rigorously known, but it would be reasonable to assume that the length is on the order of the size of pion. The time interval T is then a microscopic value. P (d) for this time T is estimated in the following. A gluon also hadronizes in the interval of the lightest hadron size, T = R hadron c = c m π .(68) The gluon plasma frequency is estimated from quark density, m ef f = n q e 2 strong m q ǫ 0 ,(69) and n q = 1 (fm) 3 , α s = 0.2, m q c 2 = 2 MeV,(70) Then we have ωT = m 2 ef f c 4 T E = c 4 α 2 s (fm) 3 m q m π E = (c ) 3 m q m π E(fm) 3 α 2 s = (197MeV) 3 2 × 130 × 1500(MeV) 3 × 4 × 10 −2 = 0.9.(71) At small ωT ,g(ωT ) varies with T as is shown in Fig. 2 of Ref. [2] and g(ωT )| ωT =1 = 2.5.(72) Higher order corrections also modify the rates for qq. The values to light hadrons are estimated by [41,42,43,44,45]. The total values for light hadrons are expressed with a singlet and octet components H 1 and H 8 as Γ(χ 0 → light hadrons) = 6.6α 2 s H 1 + 3.96H 8 α 2 s ,(73)Γ(χ 1 → light hadrons) = 0 × α 2 s H 1 + 3.96H 8 α 2 s ,(74)Γ(χ 2 → light hadrons) = 0.682α 2 s H 1 + 3.96H 8 α 2 s .(75) Using values for χ 0 and χ 2 from [49] Γ(χ 0 → light hadrons) = 1.8 MeV, Γ(χ 2 → light hadrons) = 0.278 MeV,(76) we have the rate for χ 1 from H 8 , Γ (8) (χ 1 → light hadrons) = 0.056 MeV.(78) The experimental value for χ 1 is Γ(χ 1 → light hadrons ) = 0.086 MeV (world average),(79) The large discrepancy among experiments for Γ χ 1 →light hadrons may suggest that the value depends on the experimental situation, which may be a feature of P (d) . We have ∆Γ = Γ(χ 1 → light hadrons ) − Γ (8) (χ 1 → light hadrons),(81)∆Γ(world average) = 0.030 MeV ,(82) E1 transition: Ψ ′ → φ 1 + γ, φ 1 → J/Ψ + γ φ 1 is produced in the radiative decays of Ψ ′ through the E 1 transition Ψ ′ → φ 1 + γ,(83) which is expressed by the effective interaction S int = e ′ d 4 xO µν F µ,ν , O µν = ǫ µνρσ Ψ ′ µ φ 1,ν(84) in the local limit. The action Eq. 
(84) does not take the form of the total derivative, but is written as S int = e ′ d 4 x∂ ν (O µν A µ ) − e ′ d 4 x(∂ ν O µν )A µ .(85) The second term shows the interaction of the local electromagnetic coupling of the current j µ = ∂ ν O µν and the first term shows the surface term. This leads to the constant probability at a finite T , P (d) . The induced P (d) for muon decay was computed in [1], and large probability from P (d) compared with T Γ 0 was found in the region cT = 1 m. The radiative transitions of heavy quarkonium are deeply connected with other radiative transitions and the detailed analysis will be presented elsewhere. Spin 0 and 2 mesons , φ 0 and φ 2 , show the same E 1 transitions and photons show the same behavior from P (d) [46,47,48,49]. A pair of photons of the continuous energy spectrum are produced in the wave zone, and are correlated. On the other hand, a pair of photons of the discrete energy spectrum are produced in the particle zones and are not correlated. Thus the photons in the continuous spectrum are different from the simple background, and it would be possible to confirm the correlation by measuring the time coincident of the two photons. Neutrino-photon interaction The neutrino photon interaction of the strength αG F induced from higher order effects vanishes due to the Landau-Yang-Gell-Mann theorem. But P (d) is free from the theorem and gives observable effects. Moreover, although the strength seems much weaker than the normal weak process, that is enhanced drastically if the photon's effective mass is extremely small. Because P (d) does not preserve Lorentz invariance, a careful treatment is required. The triangle diagram of Fig. 1 is expressed in terms of the action S νγ = 1 2π α em G F √ 2 d 4 xf 1 ∂ ∂x µ (J A µ (x)F αβ F αβ ),(86)J A µ (x) =ν(x)(1 − γ 5 )γ µ ν(x) , where the axial vector meson in Eq. (34) was replaced by the neutrino current. The mass of the neutrino is extremely small, and was neglected. 
The S νγ leads to the neutrino gamma reactions with Γ 0 = 0, P (d) = 0 on the order αG F . ν + γ → ν + γ The rates Γ 0 of the events ν +ν → γ + γ (87) ν + γ → ν + γ vanishes on the order αG F [50] due to Landau-Yang's theorem. The higher order effects were also shown to be extremely small [34] and these processes have been ignored. The theorem is derived from the rigorous conservation law of the kinetic-energy and angular momentum. However these do not hold in P (d) , due to the interaction energy caused by the overlap of the initial and final wave functions. Consequently, neither P (d) nor the transition probability vanish. These processes are reconsidered with P (d) . From Eq. (86), we have the probability amplitude of the event in which one of the photon of k γ interacts with another object or is detected at X γ as M = − G F α em 2π √ 2 d 4 x ∂ ∂x µ [ p ν,2 |J A µ (x)|p ν,1 ( k γ , X γ , T γ )|F αβ F αβ |k 1 ]. (88) The amplitude is expressed in the same manner as Eq. (44), M = −i G F √ 2 2 π αN λ≥0 d 4 x ∂ ∂x µ [e −i(pν 1 −pν 2 +k 1 )xMµ ],(89)M µ = f (k 1 · k 2 γ )e i(E(k 2 γ )(t−Tγ )− k 2 γ ( x− Xγ ))−χ(x) T µ , T µ =ν(p ν 2 )(1 − γ 5 )γ µ ν(p ν 1 )ǫ αβδζ ǫ α (k 1 )ǫ β (k 2 γ + δk γ (x))k δ 1 (k 2 + δk γ (x)) ζ , N = (2π) 3 σ γ σ 2 T 1 2 N γ (2π) 3 2 ρ γ (k 1 )ρ γ (k 2 γ + δk) 1 (2π) 3 2 m ν m ν E ν 1 E ν 2 1 2 , where the corrections due to higher order in δ k are ignored in the following calculations as in Section (3.1). 
The probability averaged over the initial spin per unit of particles of initial state, is P = 1 2 d X γ V (2π) 3 d k 2 γ d k 2 ν s 1 ,s 2 |M| 2 = d k γ,2 (2π) 3 2E γ,2 N 2 0 (i)(I 0 p 0 ν 1 (p ν 1 + p γ,1 − p γ 2 ) 0 + I i p i ν 1 (p ν 1 + p γ 1 − p γ 2 ) i ) × (4k 1 · k γ f 1 ) 2 θ(Phase-space), (90) θ(Phase-space) = θ((p ν 1 + k γ,1 ) 2 − 2(p ν 1 + k γ,1 )k γ,2 ), N 2 0 = 1 2 G F √ 2 2α π 2 1 4π |N γ | 2 1 σ γ 3 1 2E 1 ν 1 2E 1 γ , where V is the normalization volume of the initial state, and f 1 (k 1 · k γ ) = 1 4k 1 ·kγ ; high energy, 1 2m 2 e ; low energy,(91) where I 0 and I i are given in Appendix E. The probability P is not Lorentz invariant and the values in the CM frame of the initial neutrino and photon and those of the general frame do not agree generally. Center of mass frame The phase space integral over the momentum in the CM frame p ν 1 + p γ 1 = 0, p = | p ν 1 | (92) is d k γ,2 (2π) 3 2E γ,2 (p(2p − p γ 2 )) 0 (4f 1 k 1 · k γ ) 2 θ(phase-space) = 1 6π 2 p 4 ; high-energy,(93)d k γ,2 (2π) 3 2E γ,2 (p(2p − p γ 2 )) 0 (4f 1 k 1 · k γ ) 2 1 k 2 γ,2 θ(phase-space) = 1 12π 2 p m e 4 p 2 ; low-energy.(94) Thus we have the probability P = 1 2 G F √ 2 2α π 2 1 4π 5 2 √ σ γ log(ωT ) + 1 2m 2 γ √ σ γ p 2 12 ; high-energy, (95) = 1 2 G F √ 2 2α π 2 1 4π 5 2 1 4 3 6π 5 2 σ − 1 2 γ 1 ǫ p m e 4 ; low-energy, whereω in the log term is the average ω, and ǫ is the deviation of the index of refraction from unity given in Appendix C. The log term was ignored in the right-hand side of the low energy region. We found that in the majority of region, 1 ωE √ σγ becomes much larger than √ σ γ log(ωT ). Here we assume that the medium is not ionized. 
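The remark above, that the photon-mass term dominates the log term in the bracket of Eq. (95) over most of the parameter region, can be illustrated numerically. The Python sketch below compares the two bracketed terms in natural units; the input values (wavepacket size, effective photon mass, and the product ω̄T) are assumptions chosen only for illustration.

```python
import math

# Compare the two bracketed terms of Eq. (95) in natural units
# (hbar = c = 1; energies in eV, lengths in eV^-1).
# All numerical inputs below are illustrative assumptions.
def bracket_terms(sqrt_sigma, m_gamma, omegabar_T):
    """sqrt_sigma: wavepacket size sqrt(sigma_gamma) in eV^-1,
    m_gamma: photon effective mass in eV,
    omegabar_T: the dimensionless product omega-bar * T."""
    log_term = sqrt_sigma * math.log(omegabar_T)
    mass_term = 1.0 / (2.0 * m_gamma**2 * sqrt_sigma)
    return log_term, mass_term

# sqrt(sigma_gamma) = 1e-13 m ~ 5.1e-7 eV^-1, m_gamma = 1e-9 eV (dilute gas):
log_term, mass_term = bracket_terms(5.1e-7, 1e-9, 1.0e6)
# the 1/m_gamma^2 term dominates by many orders of magnitude
```

For m γ → 0 the second term grows without bound, which is the enhancement mechanism invoked repeatedly in the sections below.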
Moving frame At the frame p ν 1 = (0, 0, p ν 1 ), p γ 1 = (0, 0, −p γ 1 ), p ν 1 > p γ 1 the probability in high energy region is P =N 2 0 d k γ,2 (2π) 3 2E γ,2 θ((p 1 + k 1 ) 2 − 2(p 1 + k 1 )k γ,2 ) × (p 0 ν 1 (p ν 1 + p γ 1 − p γ 2 ) 0 + p l ν 1 (p ν 1 + p γ 1 − p γ 2 ) l ) l (σ 2 γ log ωT + σ γ 4ωE ) + p i ν 1 (p ν 1 + p γ 1 − p γ 2 ) i (σ 2 γ log ωT ) =C 0 √ σ γ log ωT + 1 √ σ γ m 2 γ p 2 ν 1 ; p ν 1 ≫ p γ 1 .(97) In the low energy region, the second term of P is given in the form, P = C ′ 0 1 √ σ γ ǫ p ν 1 m e 4 ,(98) and the first term proportional to p 6 ν 1 was ignored. Numerical constants C 0 and C ′ 0 are proportional to ( G F √ 2 2α π ) 2 . The photon effective mass at high energy region, m γ , and the deviation of the refraction constant from unity at low energy region, ǫ, are extremely small in the dilute gas, and P (d) becomes large in these situations. Neutrino interaction with uniform magnetic field ν + B → ν + γ The action Eq. (86) leads to the coherent interactions of neutrinos with the macroscopic electric or magnetic fields. These fields are expressed in MKSA unit. Accordingly we express the Lagrangian in MKSA unit, which is summarized inAppendix D, and compute the probabilities. The magnetic field B in the z-direction is expressed by the field strength F µν (x) = ǫ 3,µν B,(99) and we have the action S νγ (B) = g B d 4 x ∂ ∂x µ (J A µ (x)F 0,z ),(100) where g B is given in Appendix D. Because S νγ (B) is reduced to the surface term, the rate vanishes, Γ 0 = 0, but P (d) = 0. Furthermore, S νγ (B) is not Lorentz invariant, and P (d) for ν i → ν j + γ becomes proportional to m 2 ν of much larger magnitude than the naive expectation. 
The amplitude is M = −iNg B λ≥0 d 4 x ∂ ∂x µ [e −i(pν 1 −pν 2 )xMµ ],(101)M µ = e i(k 0 (kγ )(x 0 −X 0 γ )− kγ ( x− Xγ ))−χ(x) T µ , T µ =ν(k ν 2 )(1 − γ 5 )γ µ ν(k ν 1 )(ǫ 0 (k γ )k z γ − ǫ z (k γ )k 0 γ ), N = (2π) 3 σ γ σ 2 T 1 2 N γ ρ γ (k γ ) 1 (2π) 3 2 ω 0 ν 1ω 0 ν 2 k 0 ν 1 k 0 ν 2 1 2 , ρ γ (k γ ) = 1 (2π) 3 2k 0 γ ǫ 0 1 2 , where spin (T µ 1 T µ 2 * ) = 8 2ω 0 ν 1 2ω 0 ν 2 (k µ 1 ν 1 k µ 2 ν 2 − g µ 1 µ 2 k ν 1 k ν 2 + k µ 2 ν 1 k µ 1 ν 2 )(k 0 γ 2 − k z γ 2 ). (102) We have the probability from Eq. (46), P =Ñ 2 0 g 2 B (2π) 3 16 1 2E ν 1 1 ǫ 0 d k γ (2π) 3 2E γ (k 0 γ 2 − k 3 γ 2 ) ×(i)(I 0 k 0 ν 1 (k ν 1 − k γ ) 0 + I l k l ν 1 (k ν 1 − k γ ) l )θ(phase-space), (103) θ(phase-space) = θ(δω 2 ν 12 − 2k ν 1 k γ ),whereÑ 2 0 = (πσ γ ) − 3 2 , δω 2 12 = (ω 0 ν 1 ) 2 − (ω 0 ν 2 ) 2 , I 0,l and I T,i are given in Ap- pendix E. The convergence condition on the light-cone singularity is satisfied in the kinematical region, θ(phase-space). Thus the momentum satisfies 2(k 0 ν 1 k 0 γ − k ν 1 k γ cos θ) ≤ δω 12 2 .(104) Solving k γ , we have the condition for fraction x = kγ kν 1 , α − ≤ x ≤ α + ,(105)α ± = δω 2 12 ± δω 4 12 − 4(ω 0 ν 1 ) 2 m 2 γ 2(ω 0 ν 1 ) 2 ,(106) where α ± = O(1). The process ν i → ν j + γ occurs with the probability Eq. (103) if m 2 γ ≤ (δω 2 12 ) 2 4(ω 0 ν 1 ) 2 ,(107) which is satisfied in dilute gas. If the inequality Eq. (107) is not satisfied, this probability vanishes. The probablity P reflects the large overlap of initial and final states and is not Lorentz invariant. Consequently, although the integration region in Eq. (103) is narrow in phase space determined by θ(phase-space), which is proportional to the mass-squared difference, δω 2 12 , the integrand is as large as p 3 ν . P becomes much larger than the value obtained from Fermi's golden rule. High energy neutrino At high energy, Eq. (A.49) are substituted. 
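As a numerical cross-check (not part of the original analysis), the kinematic bounds α ± of Eqs. (105)-(106) and the threshold condition Eq. (107) can be sketched as follows; the mass-squared difference and effective masses used below are illustrative assumptions only.

```python
import math

# Bounds alpha_- <= x <= alpha_+ on the photon momentum fraction
# x = k_gamma / k_nu1, Eqs. (105)-(106).  Returns None when the threshold
# condition Eq. (107), m_gamma^2 <= (delta omega_12^2)^2 / (4 omega0_nu1^2),
# is violated and the probability vanishes.
def alpha_pm(d_omega2_12, omega0_nu1, m_gamma):
    disc = d_omega2_12**2 - 4.0 * omega0_nu1**2 * m_gamma**2
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    return ((d_omega2_12 - root) / (2.0 * omega0_nu1**2),
            (d_omega2_12 + root) / (2.0 * omega0_nu1**2))

# illustrative inputs (eV units): d_omega2 = 2.5e-3 eV^2, omega0_nu1 = 0.05 eV
bounds = alpha_pm(2.5e-3, 0.05, 1.0e-5)   # small m_gamma: process allowed
forbidden = alpha_pm(2.5e-3, 0.05, 1.0)   # large m_gamma: returns None
```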
For the case that p ν 1 is nonparallel to B, we have dP dx =Ñ 2 0 g 2 B (2π) 3 16 1 ǫ 0 (1 − cos 2 ζ)(i)I 0 k 3 ν 1 C(m, x) (108) C(m, x) = m 2 γ x 4 − (δω 2 12 + m 2 γ )x 3 + (ω ν 1 ) 2 − ω 2 12 )x 2 − (ω 0 ν 1 ) 2 x, where ζ is the angle between p ν 1 and B, B · p ν 1 = Bp ν 1 cos ζ.(109) The integral I 0 is almost independent of k γ . Ignoring the dependence, we integrate the photon's momentum, and have P =Ñ 2 0 g 2 B (2π) 3 16 1 ǫ 0 (1 − cos 2 ζ)(i)I 0 k 3 ν 1 C(m) (110) C(m) = 1 5 m 2 γ (α 5 + − α 5 − ) − 1 4 (δω 2 12 + m 2 γ )(α 4 + − α 4 − ) + 1 3 ((ω 0 ν 1 ) 2 − ω 2 12 )(α 3 + − α 3 − ) − 1 2 (ω 0 ν 1 ) 2 (α 2 + − α 2 − ). The probability P of Eq. (110) is proportional to (ω 0 ν ) 2 k 3 ν 1 , which is very different from the rate of the normal neutrino radiative decay, Γ = G 2 F m 5 ν (m ν 1 /E ν 1 )× (numerial factor), especially for high energy neutrino. Moreover, I 0 = 1 m 2 γ , can be extremely large in dilute gas, thus P is enormously enhanced. If the momentum of initial neutrino is parallel to the magnetic field, ζ = 0, we have P =Ñ 2 0 g 2 B (2π) 3 16 1 ǫ 0 (i)I 0 k ν 1 D(m),(111)D(m) = δω 4 12 α 2 + − α 2 − 2 − δω 2 12 (ω 0 ν 1 ) 2 (α + − α − − 2 α 2 + − α 2 − 2 ) + (ω 0 ν 1 ) 2 (log α + α − − α + + α − ). P in Eq. (111) is proportional to (ω 0 ν ) 4 k ν 1 , and is negligibly small compared to that of Eq. (110). Low energy neutrino At low energy, Eq. (A.52) are substituted. I 0 is inversely proportional to k 2 γ and we have dP dx =Ñ 2 0 g 2 B 4πE ν 1 1 ǫ 0 (1 − cos 2 ζ) 1 ǫ (1 − x)[δω 2 12 − xm 2 γ − 1 x m 2 γ ], and P =Ñ 2 0 g 2 B 4π 1 ǫ 0 (1 − cos 2 ζ)k ν 1 1 ǫ C low (m),(112)C low (m) = δω 2 12 (α + − α − − α 2 + − α 2 − 2 ) − m 2 γ ( α 2 + − α 2 − 2 − α 3 + − α 3 − 3 ) + (ω 0 ν 1 ) 2 (α + − α − ) − (ω 0 ν 1 ) 2 log α + α − . Using α ± , C(m) and D(m) are computed easily. 
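The closing remark can be made concrete: given α ± , the combinations C(m) of Eq. (110) and D(m) of Eq. (111) are straightforward polynomials plus a logarithm. A Python transcription is below; note that the last term of D(m) is coded with (ω 0 ν 1 ) 4 , which dimensional consistency of Eq. (111) suggests, and the sample numbers in the test are illustrative only.

```python
import math

# C(m) of Eq. (110) and D(m) of Eq. (111), written directly from the text.
# Arguments are the squared quantities appearing there.
def C_of_m(a_m, a_p, m_gamma2, d_omega2_12, omega0_nu1_2, omega_12_2):
    d = lambda n: a_p**n - a_m**n
    return (m_gamma2 * d(5) / 5.0
            - (d_omega2_12 + m_gamma2) * d(4) / 4.0
            + (omega0_nu1_2 - omega_12_2) * d(3) / 3.0
            - omega0_nu1_2 * d(2) / 2.0)

def D_of_m(a_m, a_p, d_omega2_12, omega0_nu1_2):
    d = lambda n: a_p**n - a_m**n
    return (d_omega2_12**2 * d(2) / 2.0
            - d_omega2_12 * omega0_nu1_2 * ((a_p - a_m) - d(2))
            + omega0_nu1_2**2 * (math.log(a_p / a_m) - a_p + a_m))
```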
25 Neutrino interaction with uniform electric field ν + E → ν + γ For a uniform electric field in the z-direction, F µν (x) = E c ǫ 03µν ,(113) we have T µ =ν(k ν 2 )(1 − γ 5 )γ µ ν(k ν 1 )(ǫ x (k γ )k y γ − ǫ y (k γ )k x γ ),(114)spin (T µ 1 T µ 2 * ) = 8 1 2ω 0 ν 1 2ω 0 ν 2 (k µ 1 ν 1 k µ 2 ν 2 − g µ 1 µ 2 k ν 1 k ν 2 + k µ 2 ν 1 k µ 1 ν 2 ) ×(k x γ 2 + k y γ 2 ).(115) ν i → ν j + γ in the electric field is almost the same as that in the magnetic field. High energy neutrino We have the probability in the high energy region, P =Ñ 2 0 g 2 E (2π) 3 16 1 c 2 ǫ 0 (1 − cos 2 ζ)(i)I 0 k 3 ν 1 C(m),(116)C(m) = 1 5 m 2 γ (α 5 + − α 5 − ) − 1 4 (δω 2 12 + m 2 γ )(α 4 + − α 4 − ) + 1 3 ((ω 0 ν 1 ) 2 − ω 2 12 )(α 3 + − α 3 − ) − 1 2 (ω 0 ν 1 ) 2 (α 2 + − α 2 − ), where E k ν 1 = Ek ν 1 cos ζ.(117) Low energy neutrino The probability in the low energy region is P =Ñ 2 0 g 2 E 4π 1 c 2 ǫ 0 (1 − cos 2 ζ)k ν 1 1 ǫ C low (m),(118)C low (m) = δω 2 12 α + − α − − α 2 + − α 2 − 2 − m 2 γ α 2 + − α 2 − 2 − α 3 + − α 3 − 3 + (ω 0 ν 1 ) 2 (α + − α − ) − (ω 0 ν 1 ) 2 log α + α − . Neutrino interaction with nucleus electric field ν + E nuc → ν + γ In space-time near a nucleus, there is the Coulombic electric field E nucl due to the nucleus, and one of F µν in the action is replaced with E nucl . The rate estimated in Ref. [34] was much smaller by a factor 10 −4 or more than the value of the normal process due to charged current interaction. Here we estimate P (d) for the same process. The action becomes S νγ (E nucl ) = 1 2π α em G F √ 2 d 4 xf 1 ∂ ∂x µ (J A µ (x)F µν F µν nucl ),(119) which causes the unusual radiative interaction of neutrino in matter. The probability P (d) of the high energy neutrino, where 2k 1 · k 2 ≫ m 2 e , and we substitute the value f 1 = 1 8m 2 l . Since F µν nucl due to a bound nucleus is shortrange, the probability is not enhanced. 
Neutrino interaction with laser wave ν + E laser → ν + γ In the scattering of neutrino with a classical electromagnetic wave due to laser, one of F µν in the action is replaced with the electromagnetic field E laser of laser of the form F µν laser = E i ǫ 0iµν e ik 1 x .(120) The probability P (d) of this process is computed with the action S νγ (E nucl ) = 1 2π α em G F √ 2 d 4 xf 1 ∂ ∂x µ (J A µ (x)F µν F µν laser ),(121) where 2k 1 · k 2 ≈ 0, and we substitute the value f 1 = 1 8m 2 l . The amplitude and probability are almost equivalent to those of the uniform electric field. Implications to neutrino reactions in matter and fields An initial neutrino is transformed to another neutrino and a photon following the probability P (d) . The photon in the final state interacts with a microscopic object in matter with the electromagnetic interaction, and loses the energy. Thus the size σ γ in P (d) is determined by its wave function, and the probability P (d) × σ γA , where σ γA is the cross section of the photon and is much larger than that of weak reactions, determines the effective cross section of the whole process. Hence the effective cross section can be as large as that of the normal weak process caused by the charged current interaction. Effective cross section The probability of the event that the photon reacts on another object is expressed by P (d) in Eqs. (96), (97), and (98), and that of the final photon. If a system initially has photons of density n γ (E γ ), the number of photons are multiplied, and the probability of the event that the initial neutrino is transformed is given by P (d) × n γ . In the system of electric or magnetic fields, the initial neutrino is transformed to the final neutrino and photon. Hence P (d) ×n γ for the former case, and P (d) for the latter case are important parameters to be compared with the experiments. 
The effective cross section, for the process where the photon in the final state interacts with atoms A of cross section σ γA , is σ (d) γA = P (d) (γ)n γ × σ γA ,(122) for the former case, and σ (d) γA = P (d) (γ) × σ γA ,(123) for the latter case. The cross sections of Eqs. (122) and (123) are compared with that of the charged-current weak process, σ weak νA = G 2 F 2 E ν M A .(124) Since σ γA is much larger than σ weak νA , by a factor 10 14 or more, the cross sections of Eqs. (122) and (123) can be as large as Eq. (124) if P (d) is around 10 −14 . Accordingly, 10 −14 or 10 −15 is the critical value for the photon-neutrino process to be relevant and important. If the value is larger, then the reaction that is dictated to vanish by Landau-Yang's theorem manifests itself with a sizable probability.
6.2 ν + γ → ν + γ
The probability P (d) is of the order of αG F and almost independent of time. The probability at this order has been considered to vanish, and this process has not been studied. If the magnitude is sizable, these neutrino processes should be included in astronomy and elsewhere. The process ν +ν → γ + γ is almost equivalent to ν + γ → ν + γ, and we do not study it in this paper. A system of high temperature has many photons, and a neutrino makes a transition through its collisions with the photons. The probability of the whole process is determined by the product of the number of photons n γ and the single-collision probability, P (d) (γ)n γ .(125) In a thermal equilibrium of higher temperature, the density is about n γ = (kT ) 3 ,(126) and we have the product for a head-on collision P n γ = (G F α) 2 2 8π 9/2 m 2 γ σ 1/2 p 2 ν 12 (kT ) 3 .(127)
The sun
In the core of the sun, R = 10 9 meters, kT ≈ 2 keV, and the solar neutrino has an energy around 1 − 10 MeV. The photon energy distribution is given by the Planck distribution, and the mean free path for a head-on collision is l = 1 P (d) (γ)n γ σ γA = 5 × 10 15 meters,(128) where σ γA = 10 −24 cm 2 , m γ = 1 eV and n A = 10 29 /cm 3 are used. This value is much longer than the sun's radius. For the neutrino of higher energy, we use Eq.
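The comparison between Eqs. (122)-(124) can be sketched numerically. In the snippet below the values of P (d) , σ γA , E ν and M A are illustrative assumptions; it only shows that P (d) ≈ 10 −14 brings the effective cross section to the same order as the charged-current one.

```python
# Order-of-magnitude comparison of the effective cross section Eq. (123)
# with the charged-current cross section Eq. (124).
# All numbers below are illustrative assumptions.
G_F = 1.166e-5            # Fermi constant, GeV^-2
GEV2_TO_CM2 = 3.894e-28   # conversion: 1 GeV^-2 = 3.894e-28 cm^2

def sigma_weak(E_nu_GeV, M_A_GeV):
    """Eq. (124): (G_F^2 / 2) E_nu M_A, converted to cm^2."""
    return G_F**2 / 2.0 * E_nu_GeV * M_A_GeV * GEV2_TO_CM2

def sigma_eff(P_d, sigma_gammaA_cm2):
    """Eq. (123): diffractive probability times photon-atom cross section."""
    return P_d * sigma_gammaA_cm2

# P^(d) ~ 1e-14, sigma_gammaA ~ 1e-24 cm^2, E_nu = 1 GeV, M_A = 10 GeV:
ratio = sigma_eff(1.0e-14, 1.0e-24) / sigma_weak(1.0, 10.0)
# the two cross sections come out within a few orders of each other
```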
(97), and have l = 1 P (d) (γ)n γ n A = 6.2 × 10 9 p 0 ν p ν 2 meters,(129) p 0 ν = 10 GeV; thus the length exceeds the sun's radius for p ν > 25 GeV. The high energy neutrino does not escape from the core if its energy is higher than around 25 GeV.
Supernova
In a supernova, the temperature is as high as 10 MeV and the probability becomes much higher. We have l = 1 P (d) (γ)n γ n A = 1 m,(130) for p γ m γ = 10 7 , n A = 10 25 /cm 3 , or l = 10 4 m,(131) for p γ m γ = 10 5 , n A = 10 24 /cm 3 . The mean free path becomes smaller in the lower matter-density region. Thus in the region of small photon effective mass, the neutrino does not escape but loses most of its energy. This is totally different from the standard behavior of the supernova neutrino.
Neutron star
If the magnetic field is as high as 10 9 [Tesla], the probability becomes large. The energy of the neutrino is transferred to the photon's energy.
Low energy reaction
In the low energy region, P (d) behaves as Eq. (96) and σ γA → CE −3.5 photon , E photon → 0.(132) The effective transition probability P (d) n γ and the cross section depend on the photon density.
ν + (E, B) → ν ′ + γ
The radiative transition of one neutrino to another lighter neutrino and a photon in the electromagnetic field occurs with the probability P (d) . Because P (d) is not proportional to T but almost constant, the number of parents decreases fast at small T and remains the same afterward without decreasing. The photon in the final state reacts with matter with a sizable magnitude, and the probability of the whole process is expressed with the effective cross section.
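The mean-free-path estimates of Eqs. (128)-(131) all have the form l = 1/(P n σ). A trivial helper makes the scaling explicit; the inputs in the example are assumptions for illustration and are not the exact parameter set behind the quoted 5 × 10 15 m, which requires the full P (d) of the previous sections.

```python
# Mean free path l = 1 / (P * n * sigma), the common form of Eqs. (128)-(131).
# Inputs below are illustrative assumptions, not the text's exact parameters.
def mean_free_path_cm(P_d, n_per_cm3, sigma_cm2):
    """P_d: dimensionless transition probability per collision,
    n_per_cm3: density of scatterers, sigma_cm2: cross section."""
    return 1.0 / (P_d * n_per_cm3 * sigma_cm2)

# e.g. P_d = 1e-14, n = 1e20 /cm^3, sigma = 1e-24 cm^2  ->  l = 1e18 cm
l_cm = mean_free_path_cm(1.0e-14, 1.0e20, 1.0e-24)
```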
High energy neutrinos The transition probability of the high energy neutrino in the magnetic field B [Tesla] and electric field E [V/meter] are P B = 4α ecB m e c 2 2 G 2 F 2 π −3/2 1 4m 2 γ √ σ γ p 3 ν C(m ν ) 1 m 2 e ,(133)P E = 4α eE m e c 2 2 G 2 F 2 π −3/2 1 4m 2 γ √ σ γ p 3 ν C(m ν ) 1 m 2 e .(134) For the parameters, m γ c 2 = 10 −9 eV, √ σ γ = 10 −13 meters, they become in the magnetic field P B = 6.4 × 10 −27 B B 0 2 p ν p (0) ν 3 ,(136) Low energy neutrinos The transition probability of the low energy neutrino are P B = 4α ecB m e c 2 2 G 2 F 2 π −3/2 1 ǫ √ σ γ p ν C(m ν ) 1 m 2 e(138)P E = 4α eE m e c 2 2 G 2 F 2 π −3/2 1 ǫ √ σ γ p ν C(m ν ) 1 m 2 e .(139) They become in the typical situations ǫ = 10 −20 , √ σ γ = 10 −13 m, p ν = 1 eV, E 0 = 10 3 GV/m, (140) P E = 6.4 × 10 −38 E E 0 2 . The probabilities Eqs. (136), (137), and (140) are caused by the overlap of waves on the parent and daughters. The phases of waves become cancelled at small m γ or ǫ, and more waves are added constructively then. Thus the effect become larger as they become smaller. The probability shows this and is inversely proportional to m 2 γ or ǫ. They become extremely large as m γ → 0 or ǫ → 0. Table of processes The P (d) could be tested in various neutrino processes of wide energy regions. The diffractive term P (d) becomes substantial in magnitude at high energy or fields. So this process may be relevant to the neutrino of high energy or at high fields, and may give new insights into or measurability of the following processes. 1. The accelerator neutrino has high energy and total intensity of the order 10 20 . At B = 10 Tesla and P ν = 1(10) GeV, we have P ≈ 10 −18 (10 −15 ), and using the laser of E 0 = 10 3 GV/m at P ν = 1(10) GeV, we have P ≈ 10 −16 (10 −13 ). The neutrino from the accelerator may be probed by detecting the final gamma rays. For example, at the beam dump of LHC we may set up laser to detect the neutrino interaction. 2. 
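The scaling of Eq. (136), probability ∝ B 2 p 3 ν in the high energy region, can be encoded directly. The prefactor and p (0) ν = 10 MeV follow the text's quoted numbers, but the reference field B 0 is not stated explicitly there, so B 0 = 10 Tesla (the value used in Fig. 3) is an assumption.

```python
# Scaling law of Eq. (136): P_B = 6.4e-27 (B/B0)^2 (p_nu/p0)^3,
# with p0 = 10 MeV; B0 = 10 T is an ASSUMPTION (the value used in Fig. 3).
def P_B(B_tesla, p_nu_GeV, B0=10.0, p0_GeV=0.01):
    return 6.4e-27 * (B_tesla / B0)**2 * (p_nu_GeV / p0_GeV)**3

# doubling the field quadruples P; doubling the momentum gives a factor 8
```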
For the reactor neutrino, with a flux ≈ 10 20 /s per reactor, a detector with a high magnetic field of the order of 10 Tesla, or with a high-intensity laser, may be able to detect the neutrino. 3. Direct observations of solar neutrinos, which have energies in the range 0.5 MeV to 10 MeV and a flux around 10 15 /m 2 /sec, would be possible using a detector with a strong magnetic field similar to that for the axion search [51]. 4. In a supernova or neutron star, the neutrino-photon reaction would give a new important process, because the final photon interacts strongly with matter. As to detectability, 10 GeV is the threshold for neutrinos from the sun, while those from a supernova interact with 10 MeV photons. 5. We find from Eq. (118) that P (d) becomes maximum at ζ = π/2, and we may apply this effect to enhancing the neutrino flux. The term also has a momentum dependence, which may be exploited for an "optical" effect of the neutrino through the photon interaction. 6. The photon-neutrino reaction may be useful for relic neutrino detection. The reaction rate may be enhanced with such methods as neutrino mirrors that collect them [52]. We may take advantage of the above effect. 7. The probability becomes huge at extremely high energy, so this process may be relevant to ultra-high-energy neutrino processes [53,54,55]. 8. Neutrinos may interact with electromagnetic fields in cumulonimbus clouds. 8-1. Lightning has a total energy of the order 900 MJ ≈ 10 9 CV and a current of 10 6 A in a short period. cB at a radius r = 1 cm is 1.5 × 10 9 N/C. Assuming E = cB = 1.5 × 10 9 N/C, m γ c 2 = 10 −11 eV, and P ν = 10 MeV, we have P E+B ≈ 10 −15 . The neutrino inevitably loses its energy, and photons of the continuous spectrum are emitted. This may be related to the upper-atmospheric lightning [56]. 8-2. The gamma rays observed in cumulonimbus clouds [57] may be connected with the diffractive component. 9.
Primordial magnetic fluctuations with zero frequency [58] may have interacted with neutrinos before neutrino detachment from the hot neutrino plasma (>GeV temperature) during the Big Bang. Such signatures may be carried by neutrinos (which are now the relic neutrinos).
Summary and future prospects
We have found that the photon interaction expressed by the total derivatives of Eqs. (13) or (86), which are derived from the triangle diagram in the standard model, causes unusual transitions characterized by a time-independent probability. An interaction Lagrangian of this form does not give rise to any physical effect in classical physics, because the equation of motion is not affected. In quantum mechanics, this assertion is correct for the transition rate Γ 0 . However, it does not apply to the diffractive term P (d) , which manifests the wave characteristics of the initial and final waves. Our results show that P (d) is relevant to experiments and important in understanding many phenomena in nature. Neutral particles do not interact with the photon in classical mechanics. In quantum field theory, the vacuum fluctuation expressed by the triangle diagram gives an effective interaction to neutral particles, such as 1 + meson → γγ and ν + γ(B, E) → ν + γ. However, these have vanishing rates due to the Landau-Yang-Gell-Mann theorem. Nevertheless, P (d) does not vanish, and it exhibits unusual properties such as the violation of kinetic-energy conservation and of Lorentz invariance. Furthermore, the magnitudes become comparable to or even larger than those of the normal weak processes. Accordingly, the two-photon or two-gluon decays of the neutral axial vector mesons composed of a pair of electrons or quarks come to have finite decay probabilities. They will be tested in experiments. The neutrino-photon processes, which have been ignored, also have finite probabilities from P (d) .
It will be interesting to observe the neutrino-photon processes directly using electric or magnetic fields, or laser and neutrino beams, in various energy regions. The diffractive probability P (d) would also be important for understanding a wide range of neutrino processes in the earth, stars, astronomy, and cosmology. The diffractive probability P (d) is caused by the overlap of the wave functions of the parent and the decay products, which makes the interaction energy finite and lets the kinetic energy vary. Consequently the final state has a continuous spectrum of the kinetic energy and possesses a wave nature unique to waves. The unique feature of P (d) , i.e., its independence of the time interval T , shows that the number of parents, which decreases like e −Γt in the normal decay, is now constant. The state of parent and daughter is expressed by quasi-stationary states, which are superpositions of different energies and differ from the normal stationary state of the form e −iEt ψ(x i ). The probability of the events in which the neutrinos or photons are detected is computed with S[T ] that satisfies the boundary conditions of the physical processes. Applying S[T ] we obtained results that can be compared with experiments. The pattern of the probability is determined by the difference of angular velocities, ω = ω E − ω dB , where ω E = E ν /ℏ and ω dB = c| p ν |/ℏ. The quantity ω takes the extremely small value m 2 c 4 /(2Eℏ) for light particles such as neutrinos or photons [49] in matter. Consequently, the diffraction term remains finite in the macroscopic spatial region r ≤ 2πEℏc/(m 2 c 4 ). This allows us to introduce a new class of experimental measurement possibilities: the deployment of photons to detect weakly interacting particles such as neutrinos.
Because of modern technology on electric and magnetic fields and lasers, a large number of coherent photons is available, and the effects we have derived may have important implications in detecting and enhancing the measurement of neutrinos with photons. We see a variety of detection opportunities that have eluded attention until now. These arise either with high energy neutrinos, such as those from cosmic rays and accelerators, or in high fields (such as intense lasers and strong magnetic fields). In the latter examples, neutrinos from reactors, accelerators, the sun, supernovae, thunder clouds and even polar ice may be detected with enhanced probabilities using intense lasers. We also mention the probability estimate for the primordial relic neutrinos and the information embedded in them. where the integrands satisfy g(t + , x + = ±∞) = 0, f (t − , x − = ±∞) = 0 (A.5) is made with the change of variables. Due to Eq. (A.5), I would vanish if the integration region were from −∞ to +∞. We write I with the variables x + = (x 1 + x 2 )/2 , x − = x 1 − x 2 , and have I = I 1 − I 2 , (A.6) I 1 = λ + ≥0 d 2 x − f (x − )d 2 x + ( ∂ 2 ∂t 2 + g(x + )), I 2 = λ + ≥0 d 2 x + g(x + )d 2 x − ( ∂ 2 ∂t 2 − f (x − )), where λ + = x 2 + . It is noted that x + is integrated over the restricted region but x − is integrated over the whole region. Thus I 1 = − d 2 x − f (x − )dt + v 2 ( ∂ ∂x + g(x + ))| x + =x +,min , (A.7) I 2 = 0, where the functional form g(x + ) = g(x + − vt + ) was used. I is computed with the slope of g(x + ) at the boundary x + = t + . We apply this method to computing four-dimensional integrals. Appendix C In the low energy region the photon has no effective mass, but is described by an index of refraction very close to unity in dilute gas, n = 1 + ǫ. Consequently the integrand in P (d) is proportional to 2 Pγ ǫ , and the Lagrangian density of electric and magnetic fields is L EM = − 1 4 1 µ 0 F µν F µν = ǫ 0 2 E 2 − 1 2µ 0 B 2 .
(A.17) The Lagrangian density of electronic fields is, .18) and that of QED is L QED = L e + L EM + +ecA 0ψ γ 0 ψ − ecA lψ γ l ψ. The spinor is normalized as s u( k, s)ū( k, s) = γ · k +ω 0 2ω 0 , (A. 27) and the light-cone singularity is L e =ψ(x) γ 0 ic ∂ ∂x 0 − γ l ic ∂ ∂x l − mc 2 ψ, (A∆ + (x 1 − x 2 ) = d k (2π) 3 2k 0 e −ik·(x 1 −x 2 ) = 2i 1 4π δ(λ)ǫ(δx 0 ) + · · · , (A.28) where the less-singular and regular terms are in · · · . The action S = dx(L QED +ψ(x)γ µ (1 − γ 5 )ψ(x)J µ (x)), (A.29) J µ (x) = G F √ 2ν (x)γ µ (1 − γ 5 )ν(x). governs the dynamics of electron, photon and neutrino. Integrating ψ(x) andψ(x), we have Det(D + J µ γ µ (1 − γ 5 )) and the effective action between neutrino and photon S ef f = e 2 8π 2 c 2 f (0) dx ∂ ∂x ν J ν (x)ǫ αβρσ ∂ ∂x α A β ∂ ∂x ρ A σ ,(A. S[T ] is constructed with the Møller operators at a finite T , Ω ± (T ), as S[T ] = Ω † − (T )Ω + (T ). Ω ± (T ) are expressed by a free Hamiltonian H 0 and a total Hamiltonian H by Ω ± (T ) = lim t→∓T /2 e iHt e −iH 0 t . From this expression, S[T ] is unitary and satisfies [S[T ], H 0 ] = 0, Figure 1 : 1Triangle diagrams of the electron loop which give contributions to 1 + (ll) → γγ, ν + γ → ν + γ and ν +ν → γ + γ. Γ(χ 1 → light hadrons) = 0.139 MeV (BESS II). ∆Γ(BESSII ) = 0.083 MeV, which are attributed to P (d) . Eqs. (122) and (123) can be as large Eq. (124), if P (d) is around 10 −14 . Figure 3 : 3E ν dependence of the probability Eq. (136) is shown. B = 10 Tesla is used for the calculation. Figure 4 : 4E ν dependence of the probability Eq. (137) is shown. E = 100 GV/m is used for the calculation. /m 2 s(solar ν) 10 − 5 5Tesla the angular velocity inφ c (δt) is given byω = (1 + ǫ)P γ − P γ = ǫP γ .in dilute gas becomes extremely small of the order 10 −14 − 10 −15 . 
c 2 E i , {ψ(x 1 ), ψ † (x 2 )}δ(t 1 − t 2 ) = δ( x 1 − x 2 )δ(t 1 − t 2 ), (A.21) [A i (x 1 ),Ȧ j (x 2 )]δ(t 1 − t 2 ) = i 1 ǫ 0 δ ij δ( x 1 − x 2 )δ(t 1 − t 2 ),where the gauge dependent term was ignored in the last equation. Thus the commutation relations for electron fields do not have , and those of electromagnetic fields have and the dielectric constant ǫ 0 . ǫ 0 shows the unit size in phase space. Accordingly the number of states per unit area and the strength of the light-cone singularity are proportional to 1/ǫ 0 . The fields are expanded with the wave vectors as u( k,s)b( k, s)e −ikx + v( k, s)d † ( k, s)e ikx ), ) 3 2k 0 ) 1/2 ( ǫ 0 ) 1/2 (ǫ i ( k, s)a( k, s)e −ikx + ǫ * ( k, s)a † ( k, s)e ikx ),The creation and annihilation operators satisfy{b( k 1 , s 1 ), b † ( k 2 , s 2 )} = δ( k 1 − k 2 )δ s 1 s 2 , (A.25)[a( k 1 , s 1 ), a † ( k 2 , s 2 )] = δ( k 1 − k 2 )δ s 1 s 2 .(A.26) I is composed of the log T term and constant. At ω ≈ 0, the latter is important and at a larger ω, the former is important.SimilarlyI i = I i (+)T,i = −i2π 2 (σ 2 γ log(ωT )), (A.50) I l = I 0 . (A.51) In low energy regions, ω = ǫp, and I 0 = −i2π 2 (σ 2 γ log(ωT ) + σ γ 4ǫp 2 ), (A.52) I i = −i2π 2 (σ 2 γ log(ωT )), (A.53) I l = I 0 . (A.54) ). Consequently, S (d) [T ] depends on f (x), and is appropriate to write as S (d) [T ; f ]. Accordingly the probability of the events is expressed by this normalized wave function, called wavepacket. Wavepackets that satisfy free wave equations and are localized in space are important for rigorously defining scattering amplitude[12,13]. S (d) [T ; f ] expresses the wave nature due to the states of continuous kinetic-energy. S (d) [T ; f ] does not preserve Poincaré invariance defined by L 0 . The state |β of E β is orthogonal to |α of E α = E β and P (d) approaches constant at T = ∞.The wavepackets It was pointed out by Sakurai[3], Peierls[4], and Greiner[5] that the probability at finite T would be different from that at T = ∞. 
Hereafter we compute the difference for the detected particle that has the mean free path l mf p of l mf p ≫ cT . Unusual enhancement observed in laser Compton experiment[11] may be connected. ,(137)p (0) ν = 10 MeV, E 0 = 10 3 GV/m,Neutrino energy dependences are shown in Figs. 3 (ν + B) and 4 (ν + E). AcknowledgmentsThis work was partially supported by a Grant-in-Aid for Scientific Research (Grant No. 24340043Appendix BThe integration over a semi-infinite region of satisfying the causality in two dimensional variables,Appendix DNotations :MKSA unitIt is convenient to express the Lagrangian with MKSA unit to study quantum phenomena caused by macroscopic electric and magnetic field[59]. The Maxwell equations for electric and magnetic fields are expressed with the dielectric constant and magnetic permeability of the vacuum, ǫ 0 , µ 0 which are related with the speed of light, c,The zeroth component x 0 in 4-dimensional coordinates x µ , isThe Maxwell equations in vacuum are,where the charge density and electric current, ρ(x), j(x), satisfyUsing the vector potentialThe magnetic field and electric field in the 3-rd direction, and laser field expressed by the external fields,are substituted to the action Eq .(A.30), and we have the actionswhere the coupling strengths areThe wave vectors are connected with the energy and momentumAppendix E Integration formulaeThe integrals(A.40)where the spreading in the transverse direction in Eq. (58) leads to ρ s (x 0 − ) in the right hand sides. Changing the variables to x + and x − , we havewhere the wavepacket sizes in the transverse direction areThe wavepacket expands in the transverse direction and the size σ T (+) is given byThe off-diagonal term χ(x + , x − ) gives small corrections and is ignored. We have . K Ishikawa, Y Tobita, 10.1093/ptep/ptt049Prog. Theor. Exp. Phys. 073B02. K. Ishikawa and Y. Tobita. Prog. Theor. Exp. Phys. 073B02, doi:10.1093/ptep/ptt049 (2013). Ann of Phys. 
[2] K. Ishikawa and Y. Tobita, Ann. Phys. 344, 118 (2014), doi:10.1016/j.aop.2014.02.007.
[3] J. J. Sakurai, Advanced Quantum Mechanics (Princeton University Press, New Jersey, 1979), p. 184.
[4] R. Peierls, Surprises in Theoretical Physics (Princeton University Press, New Jersey, 1979), p. 121.
[5] W. Greiner, Quantum Mechanics: An Introduction (Springer, New York, 1994), p. 282.
[6] P. A. M. Dirac, Proc. R. Soc. Lond. A 114, 243 (1927).
[7] L. I. Schiff, Quantum Mechanics (McGraw-Hill, New York, 1955).
[8] M. L. Goldberger and K. M. Watson, Collision Theory (John Wiley & Sons, New York, 1965).
[9] R. G. Newton, Scattering Theory of Waves and Particles (Springer-Verlag, New York, 1982).
[10] J. R. Taylor, Scattering Theory: The Quantum Theory of Non-relativistic Collisions (Dover Publications, New York, 2006).
[11] M. Iinuma et al., Phys. Lett. A 346, 255 (2005).
[12] H. Lehmann, K. Symanzik, and W. Zimmermann, Il Nuovo Cimento 1, 205 (1955).
[13] F. Low, Phys. Rev. 97, 1392 (1955).
[14] K. Ishikawa and T. Shimomura, Prog. Theor. Phys. 114, 1201 (2005) [hep-ph/0508303].
[15] K. Ishikawa and Y. Tobita, Prog. Theor. Phys. 122, 1111 (2009) [arXiv:0906.3938 [quant-ph]].
[16] K. Ishikawa and Y. Tobita, AIP Conf. Proc. 1016, 329 (2008) [arXiv:0801.3124 [hep-ph]].
[17] K. Ishikawa and Y. Tobita, arXiv:1106.4968 [hep-ph].
[18] B. Kayser, Phys. Rev. D 24, 110 (1981).
[19] C. Giunti, C. W. Kim, and U. W. Lee, Phys. Rev. D 44, 3635 (1991).
[20] S. Nussinov, Phys. Lett. B 63, 201 (1976).
[21] K. Kiers, S. Nussinov, and N. Weiss, Phys. Rev. D 53, 537 (1996) [hep-ph/9506271].
[22] L. Stodolsky, Phys. Rev. D 58, 036006 (1998) [hep-ph/9802387].
[23] H. J. Lipkin, Phys. Lett. B 642, 366 (2006) [hep-ph/0505141].
[24] E. K. Akhmedov, JHEP 0709, 116 (2007) [arXiv:0706.1216 [hep-ph]].
[25] A. Asahara, K. Ishikawa, T. Shimomura, and T. Yabuki, Prog. Theor. Phys. 113, 385 (2005) [hep-ph/0406141]; T. Yabuki and K. Ishikawa, Prog. Theor. Phys. 108, 347 (2002).
[26] H. L. Anderson et al., Phys. Rev. 119, 2050 (1960).
[27] K. Homma, D. Habs, and T. Tajima, Appl. Phys. B 106, 229 (2012).
[28] T. Tajima and K. Homma, Int. J. Mod. Phys. A 27, 1230027 (2012).
[29] C. H. Lai, Ph.D. dissertation (University of Texas, Austin, 1994); T. Tajima and K. Shibata, Plasma Astrophysics (Addison-Wesley, Reading, 1997), p. 451.
[30] L. Landau, Sov. Phys. Doklady 60, 207 (1948).
[31] C. N. Yang, Phys. Rev. 77, 242 (1950).
[32] H. Fukuda and Y. Miyamoto, Prog. Theor. Phys. 4, 49 (1949).
[33] J. Steinberger, Phys. Rev. 76, 1180 (1949).
[34] L. Rosenberg, Phys. Rev. 129, 2786 (1963).
[35] S. L. Adler, Phys. Rev. 177, 2426 (1969).
[36] J. Liu, Phys. Rev. D 44, 2879 (1991).
[37] R. P. Feynman, Phys. Rev. 76, 749 (1949).
[38] A. I. Alekseev, Zh. Eksp. Teor. Fiz. 34, 1195 (1958) [Sov. Phys. JETP 7, 826 (1958)].
[39] K. A. Tumanov, Zh. Eksp. Teor. Fiz. 25, 385 (1953).
[40] T. Tajima and J. M. Dawson, Phys. Rev. Lett. 43, 267 (1979).
[41] R. Barbieri, R. Gatto, and R. Kogerler, Phys. Lett. B 60, 183 (1976).
[42] W. Kwong, P. B. Mackenzie, R. Rosenfeld, and J. L. Rosner, Phys. Rev. D 37, 3210 (1988).
[43] Z. P. Li, F. E. Close, and T. Barnes, Phys. Rev. D 43, 2161 (1991).
[44] G. T. Bodwin, E. Braaten, and G. P. Lepage, Phys. Rev. D 51, 1125 (1995).
[45] H.-W. Huang and K.-T. Chao, Phys. Rev. D 54, 6850 (1996).
[46] J. E. Gaiser et al., Phys. Rev. D 34, 711 (1986).
[47] S. B. Athar et al., Phys. Rev. D 70, 112002 (2004).
[48] M. Ablikim et al., Phys. Rev. D 71, 092002 (2005).
[49] J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012).
[50] M. Gell-Mann, Phys. Rev. Lett. 6, 70 (1961).
[51] Y. Inoue et al., Phys. Lett. B 668, 93 (2008).
[52] J. Arafune and G. Takeda, "Total Reflection of Relic Neutrinos from Material Targets", University of Tokyo ICEPP Report ut-icepp 08-02, and private communication.
[53] M. G. Aartsen et al. (IceCube Collaboration), "Observation of High-Energy Astrophysical Neutrinos in Three Years of IceCube Data", arXiv:1405.5303 [astro-ph.HE] (2014).
[54] P. Abreu et al. (Pierre Auger Collaboration), Adv. High Energy Phys. 2013, 708680 (2013) [arXiv:1304.1630 [astro-ph.HE]].
[55] G. I. Rubtsov et al. (Telescope Array Collaboration), J. Phys. Conf. Series 409, 012087 (2013).
[56] R. C. Franz, R. J. Nemzek, and J. R. Winckler, Science 249, 48 (1990); G. J. Smith et al., Science 264, 1313 (1994).
[57] T. Torii, M. Takeishi, and T. Hosono, J. Geophys. Res. 107, 4324 (2002); H. Tsuchiya et al., Phys. Rev. Lett. 99, 165002 (2007).
[58] T. Tajima, S. Cable, K. Shibata, and R. M. Kulsrud, Astrophys. J. 390, 309 (1992).
[59] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Photons and Atoms: Introduction to Quantum Electrodynamics (John Wiley & Sons, New York, 1989).
[]
[ "Novel Approaches to Spectral Properties of Correlated Electron Materials: From Generalized Kohn-Sham Theory to Screened Exchange Dynamical Mean Field Theory" ]
[ "Pascal Delange \nCentre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance\n", "Steffen Backes \nCentre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance\n", "† ", "Ambroise Van Roekeghem \nCEA\nLITEN\n17 Rue des Martyrs38054GrenobleFrance\n", "‡ ", "Leonid Pourovskii \nCentre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance\n\nCollège de France\n11 place Marcelin Berthelot75005ParisFrance\n", "Hong Jiang \nCollege of Chemistry and Molecular Engineering\nPeking University\n100871BeijingChina\n", "Silke Biermann \nCentre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance\n\nCollège de France\n11 place Marcelin Berthelot75005ParisFrance\n\nEuropean Theoretical Spectroscopy Facility (ETSF)\nEurope\n" ]
[ "Centre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance", "Centre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance", "CEA\nLITEN\n17 Rue des Martyrs38054GrenobleFrance", "Centre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance", "Collège de France\n11 place Marcelin Berthelot75005ParisFrance", "College of Chemistry and Molecular Engineering\nPeking University\n100871BeijingChina", "Centre de Physique Théorique\nÉcole Polytechnique\n91128PalaiseauFrance", "Collège de France\n11 place Marcelin Berthelot75005ParisFrance", "European Theoretical Spectroscopy Facility (ETSF)\nEurope" ]
[ "Journal of the Physical Society of Japan SPECIAL TOPICS" ]
The most intriguing properties of emergent materials are typically consequences of highly correlated quantum states of their electronic degrees of freedom. Describing those materials from first principles remains a challenge for modern condensed matter theory. Here, we review, apply and discuss novel approaches to spectral properties of correlated electron materials, assessing current day predictive capabilities of electronic structure calculations. In particular, we focus on the recent Screened Exchange Dynamical Mean-Field Theory scheme and its relation to generalized Kohn-Sham Theory. These concepts are illustrated on the transition metal pnictide BaCo2As2 and elemental zinc and cadmium.
10.7566/jpsj.87.041003
[ "https://arxiv.org/pdf/1707.00456v1.pdf" ]
119,025,477
1707.00456
682e80326d5494a285b564cfc8f71200bd27c7cf
Introduction

Technological progress has been intimately related with progress in materials science since its very early days.
The last century has seen the development of refined capabilities for materials elaboration, characterisation and control of properties, culminating among others in the unprecedented possibilities of the digital age: materials properties have become an object of theoretical simulations, stimulating systematic searches for systems with desired characteristics. This branch of condensed matter physics is continuously evolving into a true new pillar of materials science, where the theoretical assessment of solid state systems becomes a fundamental tool for materials screening. Obviously, the success of this program hinges on the degree of predictive power of modern simulation techniques, which have to allow for performing calculations without introducing adjustable parameters. This requirement is even more severe since the most popular branch of ab initio calculations, the Density Functional Theory (DFT) approach, is restricted to ground state properties of the solid, while most properties of potential technological interest stem from excited states: the calculation of any type of transport phenomenon, for example electric, thermal, thermoelectric or magnetoelectric transport, of optical or spectroscopic properties, or of magnetic, charge- or orbital susceptibilities requires the accurate assessment of highly non-trivial response functions within a finite-temperature description. It is not a coincidence that the materials with the most exotic features are also most challenging to get to grips with from a theoretical point of view: both the complexity of their properties and the difficulty of their description are in fact a consequence of the very nature of the underlying electronic structure: typically, one is dealing with transition metal, lanthanide or actinide compounds, where the electrons in partially filled d- or f-shells strongly interact with each other through electron-electron Coulomb interactions, leading to highly entangled many-body quantum states.
Even when using a simplified description of the solid in terms of an effective lattice model, assessing these quantum correlated states is a tremendous challenge. If local quantum fluctuations on a given atomic site are dominant, dynamical mean-field theory (DMFT) 1,2) yields an accurate description of the system. In particular, DMFT captures both the strong coupling Mott-insulating limit and the weakly interacting band limit of the Hubbard model, and is able to describe the salient spectral features of a correlated metal with coexisting quasi-particle and Hubbard peaks even in the intermediate correlation regime. In the multi-orbital case, additional degrees of freedom can lead to even richer physics with unconventional (e.g. orbital-selective) behavior, 3) or complex ordering phenomena. [4][5][6][7][8][9] In order to recover the material-specific character of the calculations, DMFT has been combined with DFT 10,11) into the so-called "DFT+DMFT" scheme, which is nowadays one of the most popular workhorses of electronic structure theory for correlated electron materials. Successful applications include transition metals, [12][13][14] transition metal oxides, 4,[15][16][17][18][19][20][21][22][23][24][25][26] lanthanide 27,28) or actinide [29][30][31][32] systems. Comparisons between calculated and measured spectral functions have sometimes led to impressive agreement. 33,34) However, despite these successes, the construction of the Hamiltonian used in these calculations is rather ad hoc and remains a limitation to the predictive power of the approach. Therefore, the elaboration of more systematic interfaces between electronic structure and many-body theory has become an active area of research. 35) In this work, we review a recent scheme combining screened exchange and DMFT. [36][37][38] We discuss its relation to many-body perturbation theory and generalized Kohn-Sham (KS) Theory and analyse the effects included at the different levels of the theory.
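Schematically, the Hamiltonian underlying such DFT+DMFT calculations has the generic form (a sketch only, shown here in density-density form; the precise parametrisation of the local interaction and of the double-counting term varies between implementations):

```latex
H \;=\; H_{\mathrm{KS}}
\;+\; \sum_{R}\,\frac{1}{2}\sum_{\substack{mm'\sigma\sigma'\\ (m\sigma)\neq(m'\sigma')}}
U^{\sigma\sigma'}_{mm'}\, n^{R}_{m\sigma}\, n^{R}_{m'\sigma'}
\;-\; H_{\mathrm{dc}} ,
```

where $H_{\mathrm{KS}}$ is the Kohn-Sham Hamiltonian, $R$ labels the correlated atomic sites, $m$ the correlated orbitals, and $H_{\mathrm{dc}}$ is the double-counting correction subtracting the interaction effects already contained in the exchange-correlation functional.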
As an illustration we describe its application to the low-energy spectral properties of the cobalt pnictide BaCo2As2. How does a theory that has been designed for strongly correlated materials reduce to an a priori simpler version in the case of a weakly correlated system? For methods based on DFT that include only the static on-site Coulomb interaction between localized states, like DFT+U 39) or DFT+DMFT, the spectrum in the limit of vanishing Hubbard interactions reduces trivially to the Kohn-Sham spectrum of DFT. For more complex interfaces of electronic structure and many-body theory which we will discuss here, however, it is a non-trivial question and allows for interesting possibilities to check their consistency. Here, we will discuss this issue on the example of the transition metals zinc and cadmium. This also allows us to comment on the challenge of including states in a wider energy range, identifying a challenging problem on these seemingly "simple" systems. The paper is organised as follows: in section 2 we review how Screened Exchange Dynamical Mean Field Theory derives from the combined many-body perturbation theory + dynamical mean field scheme "GW+DMFT". In section 3 we analyse the relation of screened exchange schemes to generalized KS theory. Section 4 provides an example of the application of such methods to BaCo2As2, while section 5 discusses the electronic structure of elemental zinc and cadmium. Finally, we conclude in section 6.

From GW+DMFT to Screened Exchange Dynamical Mean Field Theory

Improving the predictive power of methods that combine electronic structure and many-body theory poses the challenge of properly connecting the two worlds, without double counting of interactions or screening.
At the heart of this challenge lies the mismatch between the density-based description of DFT and the Green's function formalism used at the many-body level, as well as the difficulty of incorporating the feedback of high-energy screening processes governed by the unscreened Coulomb interaction onto the low-energy electronic structure. Conceptually speaking, these difficulties can be avoided by working on a large energy scale in the continuum with the full long-range Coulomb interactions, and a Green's function-based formalism even at the level of the weakly correlated states. These features are realised within the combined many-body perturbation theory and dynamical mean field theory scheme "GW+DMFT": [40][41][42][43][44][45][46][47][48][49] screening is assessed by the random phase approximation in the continuum, augmented by a local vertex correction, while the starting electronic structure for the DMFT calculation can be roughly interpreted as a "non-local GW" calculation. 50) Screened Exchange Dynamical Mean Field Theory [36][37][38] can be understood as an approximation to this full GW+DMFT scheme. It is based on the recent observation 36,50) that within the GW approximation the correction to LDA can be split into two contributions: a local dynamical self-energy Σ_loc(ω) and a k-dependent but static self-energy Σ_nloc(k), which does not contain any local component. If such a separation were strictly valid in the full energy range, the non-local static part of the full self-energy would be given by the non-local Hartree-Fock contribution, since the dynamical part vanishes at high frequency. In many realistic systems such a decomposition holds to a good approximation in the low-energy regime that we are interested in, where the static part Σ_nloc(k) is quite different from the Fock exchange term. It is approximately given by a screened exchange self-energy, leading to a decomposition of the GW self-energy into Σ_GW = [GW(ν = 0)]_nonloc + [GW]_loc.
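The zero-frequency screened interaction entering the first term can be made concrete in reciprocal space, where in the long-wavelength (Thomas-Fermi) limit discussed below the bare Coulomb interaction 4π/q² is replaced by the Yukawa form 4π/(q² + k_TF²). A minimal numerical sketch (the value of k_TF and the atomic-units conventions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Sketch of the static (nu = 0) Thomas-Fermi-screened interaction:
#   bare Coulomb:   v(q)  = 4*pi / q^2
#   Yukawa form:    W(q)  = 4*pi / (q^2 + k_TF^2)   <->   exp(-k_TF r)/r in real space

def bare_coulomb(q):
    """Bare Coulomb interaction in reciprocal space (atomic units)."""
    return 4.0 * np.pi / q**2

def yukawa(q, k_tf):
    """Thomas-Fermi (Yukawa) screened interaction in reciprocal space."""
    return 4.0 * np.pi / (q**2 + k_tf**2)

q = np.linspace(0.1, 10.0, 200)
k_tf = 1.5  # illustrative Thomas-Fermi wave vector

w0 = yukawa(q, k_tf)
v = bare_coulomb(q)
suppression = w0 / v  # = q^2 / (q^2 + k_TF^2): screening acts mostly at small q
```

Screening suppresses the interaction most strongly at long wavelengths (small q), while the short-range (large-q) part approaches the bare Coulomb interaction; this is what renders the non-local Fock-like term finite-ranged.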
Here, the first term is a screened exchange contribution arising from the screened interaction W(ν) evaluated at zero frequency, W(ν = 0). The second term is the local projection of the GW self-energy. It is simply given by the GW self-energy evaluated using a local propagator G_loc and the local screened Coulomb interaction W_loc. Exactly as in GW+DMFT, Screened Exchange Dynamical Mean Field Theory replaces this term by a non-perturbative one: it is calculated from an effective local impurity problem with dynamical interactions. In current practical applications of Screened Exchange + DMFT the RPA-screened Coulomb potential W has been replaced by its long-wavelength limit, which reduces to a simple Yukawa form. 36) Quite generally, the dynamical character of the interactions results in an additional renormalization Z_B of the hopping amplitudes, which can be understood as an electronic polaron effect: the coupling of the electrons to plasmonic screening degrees of freedom leads to an effective mass enhancement corresponding to the hopping reduction, manifesting itself as a narrowing of the band. This effect can be estimated from the plasmon density of modes as given by the imaginary part of the frequency-dependent interaction W. An explicit expression for Z_B has been derived in Ref. 51.

Relation to Generalized Kohn-Sham Theory

Screened Exchange Dynamical Mean Field Theory can also be viewed as a specific approximation to a spectral density functional theory based on the Generalized Kohn-Sham (GKS) scheme of Seidl et al. 52) In GKS theory, alternative choices for the reference system that are different from the familiar Kohn-Sham system of DFT are explored. In particular, a generalized Kohn-Sham scheme where the reference system is a screened exchange Hamiltonian can be constructed. The main motivation for the inclusion of screened exchange in the literature has been to improve upon the band gap problem in semiconductors.
Indeed, it can be shown that the screened exchange contribution, which corresponds to a non-local potential, effectively reintroduces to some degree the derivative discontinuity that is missing in the pure DFT description based on local exchange-correlation potentials. 53) Since the derivative discontinuity corresponds to the discrepancy between the true gap and the Kohn-Sham gap in exact DFT, a substantial improvement of the theoretical estimate for band gaps can be expected on physical grounds and has indeed been found. Here, our goal is somewhat different: motivated by the analysis of the role of screened exchange in GW+DMFT described above, we would like to connect the Screened Exchange DMFT scheme to generalized KS schemes making direct use of the non-local screened exchange potential. With this aim in mind, we briefly review the generalized Kohn-Sham construction in the case of an effective Kohn-Sham system including screened exchange. Hereby, we follow closely Seidl et al., 52) both in notation and presentation. First, one defines a functional

S[\Phi] = \langle\Phi|T|\Phi\rangle + U_H[\{\phi_i\}] + E^{sx}_x[\{\phi_i\}]    (1)

that includes, in addition to the familiar kinetic energy term \langle\Phi|T|\Phi\rangle and the Hartree energy U_H[\{\phi_i\}], also the screened Fock term

E^{sx}_x[\{\phi_i\}] = -\sum_{i<j}^{N} \int dr\, dr'\; \frac{\phi^*_i(r)\,\phi^*_j(r')\; e^{-k_{TF}|r-r'|}\;\phi_j(r)\,\phi_i(r')}{|r-r'|}    (2)

Here, \Phi are Slater determinants of single-particle states \phi_i, and k_{TF} is the Thomas-Fermi wave vector. In order to derive a functional of the density, Seidl et al.
define a functional F_S via the minimisation

F_S[\rho] = \min_{\Phi\to\rho(r)} S[\Phi] = \min_{\{\phi_i\}\to\rho(r)} S[\{\phi_i\}]    (3)

Next we define the energy functional

E_S[\{\phi_i\}; v_{eff}] = S[\{\phi_i\}] + \int dr\, v_{eff}(r)\,\rho(r)    (4)

where now the potential v_{eff} does not only include the external potential v as in usual DFT, but also a contribution from the exchange-correlation part,

v_{eff} = v + v^{sx}_{xc}[\rho].    (5)

The additional contribution, the generalized (local) exchange-correlation potential

v^{sx}_{xc} = \frac{\partial E^{sx}_{xc}[\rho]}{\partial\rho},    (6)

is the functional derivative of the generalized (local) exchange-correlation functional

E^{sx}_{xc}[\rho] = E_{xc}[\rho] - E^{sx}_x[\rho] + T[\rho] - T^{sx}[\rho]    (7)

which comprises the difference between the exchange-correlation energy of standard Kohn-Sham DFT and the non-local exchange energy defined above, as well as the difference between the kinetic energies of the standard and generalized Kohn-Sham systems. The functional derivative will eventually have to be evaluated self-consistently at the converged density. This construction leads to the generalized Kohn-Sham equations

-\nabla^2 \phi_i(r) + v(r)\,\phi_i(r) + u([\rho]; r)\,\phi_i(r) - \int dr'\, v^{sx,NL}_x(r,r')\,\phi_i(r') + v^{sx}_{xc}([\rho]; r)\,\phi_i(r) = \varepsilon_i\, \phi_i(r)    (8)

with the Hartree potential u and the non-local screened Fock potential

v^{sx,NL}_x(r,r') = -\sum_{j=1}^{N} \frac{\phi_j(r)\, e^{-k_{TF}|r-r'|}\,\phi^*_j(r')}{|r-r'|},    (9)

and the effective (local) generalized Kohn-Sham potential v^{sx}_{xc} defined above. The generalized Kohn-Sham equations have the form

\hat{O}[\{\phi_i\}]\,\phi_j + v_{eff}\,\phi_j = \epsilon_j\,\phi_j    (10)

where \hat{O} is a non-local operator, generalizing the standard Kohn-Sham operator consisting solely of kinetic energy and Hartree potential.
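Two limits of the non-local potential (9) are worth noting (a side remark to clarify the construction; it does not appear explicitly in Seidl et al.): for k_TF → 0 the bare Fock exchange of Hartree-Fock theory is recovered, while for k_TF → ∞ the Yukawa kernel contracts to a contact interaction,

```latex
\frac{e^{-k_{\mathrm{TF}}|r-r'|}}{|r-r'|}
\;\xrightarrow[\;k_{\mathrm{TF}}\to\infty\;]{}\;
\frac{4\pi}{k_{\mathrm{TF}}^{2}}\,\delta(r-r') ,
\qquad\text{since}\qquad
\int d^{3}r\;\frac{e^{-k_{\mathrm{TF}} r}}{r} \;=\; \frac{4\pi}{k_{\mathrm{TF}}^{2}} ,
```

so that $v^{sx,NL}_x$ collapses (up to spin factors) to a local potential proportional to the density, and the generalized Kohn-Sham scheme interpolates continuously between Hartree-Fock and a purely local-potential theory.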
The ground state energy for a system in the external potential v is then given by the expression

E_{SEx-DFT}[v] = F_S\big[\rho^S_0[v_{eff}]\big] + E^{sx}_{xc}\big[\rho^S_0[v_{eff}]\big] + \int dr\, v(r)\,\rho^S_0[v_{eff}](r)    (11)

Now, the relation to Screened Exchange DMFT is becoming clear: one may construct a spectral density functional in the same spirit as in DFT+DMFT, 54) but starting from the generalized KS functional. In 55, the expression for the total energy within the standard DFT+DMFT case was derived to be

E = E_{DFT} - \sum_l \epsilon^{KS}_l + \langle H_{KS}\rangle + \langle H_{int} - H_{dc}\rangle    (12)

where \sum_l \epsilon^{KS}_l is the sum of the occupied Kohn-Sham eigenvalues, \langle H_{KS}\rangle = tr[H_{KS}\hat{G}], and H_{int} and H_{dc} denote the local interaction part of the Hamiltonian and the corresponding double counting term, respectively. Instead of using the usual Kohn-Sham Hamiltonian for the construction of the one-body part, Screened Exchange DMFT relies on the generalized Kohn-Sham reference system that includes the screened exchange potential. The generalization of (12) to the present case thus replaces the Kohn-Sham Hamiltonian H_{KS} in the expression for the energy by its non-local form, keeping track of the effective potential part:

E = E_{SEx-DFT} - \sum_l \epsilon^{SEx-KS}_l + \langle \hat{O} + v_{eff}\rangle + \langle H_{int}(V_{ee}, \lambda_s, \omega_s) - H_{dc}\rangle    (13)

Furthermore, the local interaction term is taken in the more general form of a dynamical interaction, thus corresponding to a local Hubbard term with unscreened interactions and local Einstein plasmons of energy ω_s coupling to the electrons via coupling strength λ_s. This concludes our description of the generalized Kohn-Sham interpretation of Screened Exchange DMFT, resulting in particular in an energy functional expression.

[Fig. 1. Band structure of BaCo2As2 along Γ-X-M-Γ (E − E_F from −6 to 3 eV), comparing DFT-LDA and screened exchange.]
However, in the practical calculations presented in the following we do not minimise the full energy expression as given above, but rather work at the converged DFT density and then investigate spectral properties using the Screened Exchange DMFT formalism. This amounts to a one-shot Screened Exchange-DFT+DMFT calculation that uses the DFT density as a starting point. The advantage of such an approach is obvious: numerically, this procedure allows us to avoid the expensive evaluation of non-local exchange terms within the self-consistency cycle of GKS theory. Moreover, as is well-known, while severe deviations of the true spectrum from the Kohn-Sham spectrum are quite common, the ground state density obtained even from approximate DFT functionals is often a good representation of the true one. In the case of the exact DFT functional, our approach would also lead to the exact ground state density and energy, with additional improvements of the spectrum over standard Kohn-Sham DFT.

Results on BaCo2As2

As a first illustration of Screened Exchange DMFT, we review calculations on BaCo2As2, which is the fully Co-substituted representative of the so-called "122" family of the iron-based superconductors, isostructural to the prototypical parent compound BaFe2As2. The Fe → Co substitution has however important consequences: the nominal filling of the 3d states changes from d⁶ to d⁷, which strongly reduces the degree of Coulomb correlations, making BaCo2As2 a moderately correlated compound. 36,56) Indeed, the power-law deviations from Fermi liquid behavior above an extremely low coherence temperature discussed in the Fe-based compounds 57) are a consequence of the d⁶ configuration and strong intra-atomic exchange interactions. This is no longer the case in the cobalt pnictides, where angle-resolved photoemission spectroscopy (ARPES) identifies well-defined and long-lived quasiparticle excitations with relatively weak mass renormalization.
Nevertheless, the DFT-LDA derived Fermi surface differs from experiment. 36,56,58,59) Therefore, BaCo2As2 provides an ideal testing ground for new approaches to spectroscopic properties. We do not discuss here the details of yet another interesting question, which is the absence of ferromagnetism despite a high value of the DFT density of states at the Fermi level, suggestive of Stoner ferromagnetism, but refer the reader to Ref. 36, where the solution to this puzzle was discussed in detail. Fig. 1 shows the DFT band structure of BaCo2As2, in comparison to a screened exchange calculation. As in the iron-based pnictides, the dominantly 3d-derived states are located around the Fermi level; in this case in a window of about -3 eV to 2 eV. As compared to the parent iron pnictides with 3d⁶ configuration of the Fe shell, the Fermi surface topology is modified due to the larger 3d⁷ filling. The hole pocket at the Γ point that is present in most Fe-based pnictide compounds is pushed below the Fermi level, as well as the band forming the electron pocket at M, which is now fully filled. In standard DFT-LDA (see Fig. 2), a characteristic flat band of dominant x²−y² character lies directly on the Fermi level around the M point, giving rise to a huge peak in the density of states. The close proximity of this band to the Fermi level will render its energy highly sensitive to the details of the calculation and thus provides a perfect benchmark for improved computational techniques. Comparison to angle-resolved photoemission experiments, 36) as reproduced in Fig. 3, reveals that the overall bandwidth of the LDA band structure is too wide by roughly a factor of 1.5. In a DFT+DMFT calculation, as shown in Fig. 4, this band renormalisation is reproduced, giving an overall occupied bandwidth of about 1.5 eV. The fine details of the Fermi surface, and in particular the position of the flat x²−y² band, are however not well described.
For a detailed comparison we refer the reader to Ref. 36. Including screened exchange in the form of a Yukawa potential on top of DFT, as shown in Fig. 1, widens the band by a considerable amount as compared to DFT-LDA. Even more striking are the modifications at the Fermi level: the x²−y² band has been shifted above the Fermi level, with an energy at the M point of about 0.15 eV above E_F. While the lowering of the filling of this band improves the agreement between theory and experiment, the shift is too large to reproduce the experimental Fermi surface. However, when applying DMFT with dynamical interactions on top of the screened exchange Hamiltonian, equivalent to the one-shot Screened Exchange DMFT procedure described above, this band is renormalized by the electronic interactions and ends up again close to (but above!) the Fermi level. Its energy is slightly higher than in DFT-LDA and is now in excellent agreement with the experimental Fermi surface. A more detailed analysis presented in Ref. 36) reveals that also the higher energy features such as the bands within the range of up to 2 eV are well reproduced in this one-shot Screened Exchange DMFT approach. The comparison of the band structures and spectral functions within the different computational schemes illustrates the effects of the different terms in an instructive way: improving on the description of exchange by replacing the local exchange as contained in DFT-LDA by a non-local screened Fock exchange widens the bandwidth and strongly "overcorrects" the Fermi surface. Improving at the same time on the correlation part by applying DMFT with frequency-dependent interactions then leads to a partial cancellation of the band widening, giving an overall bandwidth surprisingly close to the LDA+DMFT one. This suggests that the good description of the overall bandwidth in DFT+DMFT is the result of an error cancellation between the local approximation to exchange and partial neglect of correlations.
However, the Fermi surface is strongly modified by the non-local corrections, leading to a significant improvement over LDA+DMFT and resulting in good agreement with experiment. These findings might suggest that possible inconsistencies between theoretical and experimental Fermi surfaces that are observed in many pnictide materials could be corrected by adding simple non-local self-energy corrections stemming from screened exchange effects. Whether this treatment can fully account for the "red-blue shift" of Fermi surface pockets found in the experimental-theoretical comparisons 60) is a most interesting topic for future research. The current example may give rise to optimism.

How does Screened Exchange Dynamical Mean Field Theory behave for weakly correlated materials?

Hamiltonians built as combinations of a DFT part and local Hubbard-type interaction terms trivially reduce to the DFT Kohn-Sham electronic structure when assuming that in weakly correlated materials the effective local interactions become negligible. The question of the recovery of the weakly interacting limit is, however, more interesting in the case of Screened Exchange DMFT. While the static part of the effective local interaction may be assumed to lose its importance, band widening by the replacement of the DFT exchange-correlation potential by the non-local exchange-correlation GW self-energy persists. On the other hand, plasmonic effects are also present in weakly correlated materials and continue to renormalize the low-energy band structure through electron-plasmon coupling. This raises the question of what the resulting spectra for weakly correlated materials look like in screened exchange + DMFT. In Ref.
38, this question has been studied for early transition metal perovskites, where it was found that the band widening effect induced by non-local exchange and the electronic polaron effect counteract each other and tend to approximately cancel, thus resulting in a low-energy electronic structure close to the DFT Kohn-Sham band structure as long as static Hubbard interactions are disregarded. Here, we address this question in the case of the seemingly "simple" transition metals zinc and cadmium. Both elements nominally display a d¹⁰ configuration, with fully occupied 3d orbitals in the case of Zn and 4d in the case of Cd, the dominantly d-derived bands being located several eV below the Fermi level. In Fig. 7 and Fig. 8 we show the band structure calculated within DFT for both materials. Here and in the following we use the experimental crystal structure. DFT puts the occupied d states at around -8 eV in Zn and -9 eV in Cd. The conducting states of these transition metals are formed by dispersive 4s (Zn) and 5s (Cd) states around the Fermi level, that hybridize with the p-manifold. These facts raise the immediate expectation of negligibly small correlation effects on the occupied d shells. An effective Hubbard interaction calculated for the d-manifold within the constrained random phase approximation coincides with the fully screened interaction since, as a consequence of the completely filled d shell, no screening channels involving d-d transitions have to be excluded. Nevertheless, there is no reason that the DFT Kohn-Sham spectrum, being derived from an effective non-interacting system, provides an accurate description of the experimental situation. Figs. 11 and 12 compare the experimental photoemission spectra 61,62) from the literature to the density of states (DOS) derived from DFT and Hartree-Fock (HF) theory.
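The approximate cancellation between exchange-induced band widening and polaronic narrowing invoked above can be caricatured on a one-band tight-binding model. All numbers below are illustrative: the screened-exchange widening is put in by hand, and Z_B is estimated from a single-plasmon Lang-Firsov-type expression, Z_B = exp(-(λ/ω₀)²), rather than from the actual plasmon density of modes:

```python
import numpy as np

# Toy illustration: screened exchange widens a 1D tight-binding band,
# the bosonic factor Z_B from electron-plasmon coupling narrows it again.

def bandwidth(t, k):
    eps = -2.0 * t * np.cos(k)  # 1D tight-binding dispersion
    return eps.max() - eps.min()

k = np.linspace(-np.pi, np.pi, 201)
t_lda = 1.0            # LDA-like hopping
t_sex = 1.3 * t_lda    # screened exchange widens the band (factor put in by hand)

# single-plasmon polaronic narrowing: t -> Z_B * t
lam, omega0 = 1.0, 2.0                # illustrative coupling and plasmon energy
z_b = np.exp(-(lam / omega0) ** 2)    # ~0.78

w_lda = bandwidth(t_lda, k)
w_sex = bandwidth(t_sex, k)
w_both = bandwidth(z_b * t_sex, k)    # widening and narrowing nearly cancel
```

With these (arbitrary) parameters the combined bandwidth ends up much closer to the LDA one than the screened-exchange-only result, mimicking the near-cancellation found for the perovskites.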
The resulting discrepancy, an underestimation by several eV of the binding energy of the d states in DFT, had been noted in the literature before: [63][64][65] Norman et al. 63) discussed it in terms of a self-interaction error, proposing a correction in terms of an approximate subtraction of the self-interaction contributions contained in DFT. 66) Hartree-Fock calculations are, on the one hand, self-interaction free, but on the other hand, due to the absence of screening, widen all bands and place the d bands far too low in energy, as can be seen in Figs. 11 and 12. Since local dynamical correlations can be assumed to be small, as discussed before, an improved treatment of the screened interaction and of the self-interaction correction at the same time is likely to cure the shortcomings of Hartree-Fock and DFT, respectively. Here we will discuss two possible extensions: screened exchange plus a bosonic renormalization factor Z_B, and the GW approximation. The computationally cheaper option, screened exchange, includes only static exchange contributions with a Yukawa-type interaction potential, together with an effective renormalization Z_B that originates from the spectral weight transfer to plasmonic excitations. GW is computationally more demanding, but has the advantage of treating the dynamical part of the screening and correlation. Even though both methods include a self-interaction correction, the self-interaction contained in the Hartree term is not completely cancelled, since the exchange contributions are derived from a screened interaction and not from the bare one, as opposed to the Hartree term. In Figs. 13 and 14 we show comparisons of the DOS of zinc and cadmium, calculated within DFT, screened exchange (+Z_B), and GW, 67) to photoemission spectra. In the GW calculation we used 7 x 7 x 3 k-points and 5 additional high-energy local orbitals.
Interestingly, while in both systems the GW approximation provides a significant correction of the DFT Kohn-Sham spectrum in the right direction but still underestimates the binding energy, the screened exchange scheme places the d states too low in energy for Zn while providing a slightly better estimate than GW in Cd. The addition of the bosonic renormalization factor Z_B merely renormalizes the d bandwidth but keeps the average level position constant. This raises the interesting question: which effects are missing in screened exchange and GW? The incomplete cancellation of the self-interaction in both approaches is expected to lead to an overall underestimation of the binding energy, since the additional unphysical interaction increases the energy of the d states. A more accurate estimate of this term would lead to an improvement of GW in both systems, but to an even larger error of screened exchange in Zn. Another effect neglected in screened exchange is the Coulomb hole contribution: this term, discussed by Hedin as part of the "Coulomb hole plus screened exchange (COHSEX)" approximation, translates the fact that the presence of an electron at a position r pushes away charge in its vicinity (in the language of a lattice model, the charge-charge correlation function exhibits a reduction of the double occupancy), and the interaction of this effective positive charge with the electron represents an energy gain, expressed in the form of an interaction of the electron with a "Coulomb hole". This term, contained in GW but not in screened exchange, also increases the binding energy. This leads to the overall picture that screened exchange with the inclusion of the static corrections just discussed has a tendency to overestimate the binding energy in general, while GW underestimates it.
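The qualitative statement that screening tames the Hartree-Fock band widening can be made concrete in a toy setting. The sketch below is an illustration only, not the production setup used for Zn and Cd above: it evaluates the static screened-exchange self-energy of the homogeneous electron gas for a Yukawa interaction 4*pi/(q^2 + k_s^2) in Hartree atomic units (the specific parameter values are assumptions for this example). The occupied exchange bandwidth Sigma(k_F) - Sigma(0) shrinks monotonically as the screening wave vector k_s grows.

```python
import numpy as np

def sigma_sx(k, kF=1.0, ks=0.5, nq=4000):
    """Static screened-exchange self-energy of the homogeneous electron gas
    for a Yukawa interaction 4*pi/(q^2 + ks^2), in Hartree atomic units.
    The angular integral is done analytically, the radial one numerically."""
    q = np.linspace(1e-6, kF, nq)
    A = k * k + q * q + ks * ks
    B = 2.0 * k * q
    y = q * np.log((A + B) / (A - B))
    integral = np.sum((y[1:] + y[:-1]) * np.diff(q)) / 2.0  # trapezoidal rule
    return -integral / (2.0 * np.pi * k)

# Occupied exchange bandwidth Sigma(kF) - Sigma(0): screening (larger ks)
# tames the Hartree-Fock band widening, so the width shrinks monotonically.
widths = []
for ks in (0.01, 0.5, 1.0, 2.0):
    w = sigma_sx(1.0, ks=ks) - sigma_sx(1e-3, ks=ks)
    widths.append(w)
    print(f"ks = {ks:4.2f}: exchange contribution to occupied bandwidth = {w:.3f} Ha")
```

For k_s -> 0 the result approaches the textbook unscreened Fock limit, where the occupied exchange width is k_F/pi.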
The obvious difference between the two methods is the dynamical treatment of the screened interaction, which is handled more appropriately in GW, but it is not clear a priori whether it is the static approximation of the screened interaction or the approximated form of the screened interaction in terms of a Yukawa potential that gives rise to the difference between screened exchange and GW. The GW description of zinc and cadmium is close to experiment. The remaining discrepancy with experiment is likely explained by remaining self-interaction contributions and/or missing self-consistency. Self-consistency (or quasiparticle self-consistency) has been investigated in the homogeneous electron gas 68) and in various solid state systems. 69) These questions are left for future work.

Conclusions

In this work we have reviewed and applied existing as well as novel approaches to obtain spectral properties of correlated electron materials. Guided by the need for a proper treatment of the long-range Coulomb interaction and non-local exchange effects, we presented a lightweight version of the general GW+DMFT approach, the so-called Screened Exchange DMFT. It can be derived as a simplification of GW+DMFT, in terms of a generalized screened exchange DFT scheme where local interactions are treated by dynamical DMFT. The analysis and the application of a simplified form of this scheme to BaCo2As2 indeed showed that non-local exchange and electronic screening lead to significant corrections to the electronic spectrum which are necessary to obtain a proper description of the experimental observations. Furthermore, we discussed the case of the elemental transition metals Zn and Cd, where strong local correlations are unimportant but the position of the occupied d manifold is very sensitive to a proper treatment of screened exchange effects.

SB and AvR thank their collaborators of Refs. 36 and 37 for the fruitful collaborations and discussions.
This work was supported by IDRIS/GENCI Orsay under project t2017091393, the European Research Council under its Consolidator Grant scheme (project 617196), and the ECOS-Sud grant A13E04. HJ acknowledges financial support from the National Natural Science Foundation (21373017, 21621061) and the National Basic Research Program of China (2013CB933400).

Fig. 1. (Color online) Kohn-Sham band structure of BaCo2As2 within DFT-LDA (red lines) and the screened exchange approximation (black lines).
Fig. 2. (Color online) Kohn-Sham band structure of BaCo2As2 within DFT-LDA, projected on the d_{x2-y2} orbital.
Fig. 3. (Color online) Angle-resolved photoemission spectrum of BaCo2As2. Adapted from Ref. 36.
Fig. 4. (Color online) k-resolved spectral function of BaCo2As2 within LDA+DMFT. Adapted from Ref. 36.
Fig. 5. (Color online) k-resolved spectral function of BaCo2As2 within Screened Exchange DMFT (with dynamical interactions). Adapted from Ref. 36.
Fig. 6. (Color online) k-resolved spectral function of BaCo2As2 within Screened Exchange DMFT (with dynamical interactions), projected on the d_{x2-y2} orbital. Adapted from Ref. 36.
Fig. 7. (Color online) The band structure of elemental Zn calculated within DFT. The orbital character is indicated by the intensity of the different colors.
Fig. 8. (Color online) The band structure of elemental Cd calculated within DFT. The orbital character is indicated by the intensity of the different colors.
Fig. 9. (Color online) The fully screened effective local Hubbard interaction on the 3d manifold for Zn.
Fig. 10. (Color online) The fully screened effective local Hubbard interaction on the 4d manifold for Cd.

As a consequence of the complete filling of the d shell, there are no intra-d transitions to be cut out, as opposed to an open-shell system, where transitions inside the shell contribute to screening effects. Figs. 9 and 10 display the local component of this fully screened interaction projected on the d manifold. The low-frequency limit approaches a value of 4.8 eV for Zn and 3.1 eV for Cd. Even though these values are similar to those of their oxides, which are open-shell systems where correlations on the d states are significant, the high binding energy of these states, far away from the Fermi level, effectively prevents dynamical fluctuations. This suggests that screened exchange + DMFT should in fact reduce to screened exchange renormalized by the bosonic factor Z_B discussed above. Nevertheless, this does not mean that static effects of the interaction are properly treated in DFT.

Fig. 11. (Color online) The density of states of elemental Zn calculated within Density Functional Theory (DFT, black solid line) and Hartree-Fock (HF, red solid line), in comparison with photoemission experiments 61,62) (dashed line, symbols). DFT underestimates the binding energy of the Zn 3d states, while HF overestimates it significantly (see explanation in the text).
Fig. 12. (Color online) The density of states of elemental Cd calculated within Density Functional Theory (DFT, black solid line) and Hartree-Fock (HF, red solid line), in comparison with photoemission experiments 61) (dashed line, symbols). DFT underestimates the binding energy of the Cd 4d states, while HF overestimates it significantly (see explanation in the text).
Fig. 13. (Color online) The density of states of elemental Zn calculated within different theoretical methods (solid lines), in comparison with photoemission experiments 61,62) (dashed line, symbols). DFT significantly underestimates the binding energy of the Zn 3d states, while the GW approximation obtains a much better agreement. Screened exchange overestimates the binding energy significantly.
Fig. 14.
(Color online) The density of states of elemental Cd calculated within different theoretical methods (solid lines), in comparison with photoemission experiments 61) (dashed line). DFT significantly underestimates the binding energy of the Cd 4d states, while the GW approximation and also screened exchange obtain a much better agreement.

References

1) A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg: Rev. Mod. Phys. 68 (1996) 13.
2) G. Kotliar and D. Vollhardt: Physics Today, March 2004 (2004) 53.
3) S. Biermann, L. de' Medici, and A. Georges: Phys. Rev. Lett. 95 (2005) 206401.
4) E. Pavarini, S. Biermann, A. Poteryaev, A. I. Lichtenstein, A. Georges, and O. K. Andersen: Phys. Rev. Lett. 92 (2004) 176403.
5) C. Martins, M. Aichhorn, L. Vaugier, and S. Biermann: Phys. Rev. Lett. 107 (2011) 266404.
6) R. Arita, J. Kuneš, A. V. Kozhevnikov, A. G. Eguiluz, and M. Imada: Phys. Rev. Lett. 108 (2012) 086403.
7) K.-H. Ahn, K.-W. Lee, and J. Kuneš: J. Phys.: Condens. Matter 27 (2015) 085602.
8) J. Kuneš and V. Křápek: Phys. Rev. Lett. 106 (2011) 256401.
9) J. Kuneš: Phys. Rev. B 90 (2014) 235140.
10) V. Anisimov, A. Poteryaev, M. Korotin, A. Anokhin, and G. Kotliar: J. Phys.: Condens. Matter 9 (1997) 7359.
11) A. Lichtenstein and M. Katsnelson: Phys. Rev. B 57 (1998) 6884.
12) A. I. Lichtenstein, M. I. Katsnelson, and G. Kotliar: Phys. Rev. Lett. 87 (2001) 067205.
13) P. Delange, T. Ayral, S. I. Simak, M. Ferrero, O. Parcollet, S. Biermann, and L. Pourovskii: Phys. Rev. B 94 (2016) 100102.
14) J. Sánchez-Barriga, J. Minár, J. Braun, A. Varykhalov, V. Boni, I. Di Marco, O. Rader, V. Bellini, F. Manghi, H. Ebert, M. I. Katsnelson, A. I. Lichtenstein, O. Eriksson, W. Eberhardt, H. A. Dürr, and J. Fink: Phys. Rev. B 82 (2010) 104414.
15) S. Biermann, A. Poteryaev, A. I. Lichtenstein, and A. Georges: Phys. Rev. Lett. 94 (2005) 026404.
16) A. I. Poteryaev, J. M. Tomczak, S. Biermann, A. Georges, A. I. Lichtenstein, A. N. Rubtsov, T. Saha-Dasgupta, and O. K. Andersen: Phys. Rev. B 76 (2007) 085127.
17) J. M. Tomczak and S. Biermann: J. Phys.: Condens. Matter 19 (2007) 365206.
18) J. M. Tomczak and S. Biermann: EPL (Europhysics Letters) 86 (2009) 37004.
19) J. M. Tomczak and S. Biermann: J. Phys.: Condens. Matter 21 (2009) 064209.
20) J. M. Tomczak, A. I. Poteryaev, and S. Biermann: C. R. Physique 10 (2009) 537.
21) I. A. Nekrasov, N. S. Pavlov, and M. V. Sadovskii: J. Exp. Theor. Phys. 116 (2013) 620.
22) P. Thunström, I. Di Marco, and O. Eriksson: Phys. Rev. Lett. 109 (2012) 186401.
23) I. Leonov, L. Pourovskii, A. Georges, and I. A. Abrikosov: Phys. Rev. B 94 (2016) 155135.
24) V. I. Anisimov, D. E. Kondakov, A. V. Kozhevnikov, I. A. Nekrasov, Z. V. Pchelkina, J. W. Allen, S.-K. Mo, H.-D. Kim, P. Metcalf, S. Suga, A. Sekiyama, G. Keller, I. Leonov, X. Ren, and D. Vollhardt: Phys. Rev. B 71 (2005) 125119.
25) A. Sekiyama, H. Fujiwara, S. Imada, S. Suga, H. Eisaki, S. I. Uchida, K. Takegahara, H. Harima, Y. Saitoh, I. A. Nekrasov, G. Keller, D. E. Kondakov, A. V. Kozhevnikov, T. Pruschke, K. Held, D. Vollhardt, and V. I. Anisimov: Phys. Rev. Lett. 93 (2004) 156402.
26) S. K. Mo, J. D. Denlinger, H. D. Kim, J. H. Park, J. W. Allen, A. Sekiyama, A. Yamasaki, K. Kadono, S. Suga, Y. Saitoh, T. Muro, P. Metcalf, G. Keller, K. Held, V. Eyert, V. I. Anisimov, and D. Vollhardt: Phys. Rev. Lett. 90 (2003) 186403.
27) L. V. Pourovskii, B. Amadon, S. Biermann, and A. Georges: Phys. Rev. B 76 (2007) 235101.
28) I. L. M. Locht, Y. O. Kvashnin, D. C. M. Rodrigues, M. Pereiro, A. Bergman, L. Bergqvist, A. I. Lichtenstein, M. I. Katsnelson, A. Delin, A. B. Klautau, B. Johansson, I. Di Marco, and O. Eriksson: Phys. Rev. B 94 (2016) 085137.
29) S. Y. Savrasov, G. Kotliar, and E. Abrahams: Nature 410 (2001) 793.
30) L. V. Pourovskii, M. I. Katsnelson, and A. I. Lichtenstein: Phys. Rev. B 72 (2005) 115106.
31) J. Kolorenc, A. B. Shick, and A. I. Lichtenstein: Phys. Rev. B 92 (2015) 085125.
32) L. V. Pourovskii, G. Kotliar, M. I. Katsnelson, and A. I. Lichtenstein: Phys. Rev. B 75 (2007) 235107.
33) J.-Z. Ma, A. van Roekeghem, P. Richard, Z.-H. Liu, H. Miao, L.-K. Zeng, N. Xu, M. Shi, C. Cao, J.-B. He, G.-F. Chen, Y.-L. Sun, G.-H. Cao, S.-C. Wang, S. Biermann, T. Qian, and H. Ding: Phys. Rev. Lett. 113 (2014) 266407.
34) S. Backes, H. O. Jeschke, and R. Valentí: Phys. Rev. B 92 (2015) 195128.
35) M. Hirayama, T. Miyake, M. Imada, and S. Biermann: arXiv:1511.03757 (2015).
36) A. van Roekeghem, T. Ayral, J. M. Tomczak, M. Casula, N. Xu, H. Ding, M. Ferrero, O. Parcollet, H. Jiang, and S. Biermann: Phys. Rev. Lett. 113 (2014) 266403.
37) A. van Roekeghem, P. Richard, X. Shi, S. Wu, L. Zeng, B. Saparov, Y. Ohtsubo, T. Qian, A. S. Sefat, S. Biermann, and H. Ding: Phys. Rev. B 93 (2016) 245139.
38) A. van Roekeghem and S. Biermann: EPL (Europhysics Letters) 108 (2014) 57003.
39) V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein: J. Phys.: Condens. Matter 9 (1997) 767.
40) S. Biermann, F. Aryasetiawan, and A. Georges: Phys. Rev. Lett. 90 (2003) 086402.
41) P. Sun and G. Kotliar: Phys. Rev. Lett. 92 (2004) 196402.
42) J. M. Tomczak, M. Casula, T. Miyake, and S. Biermann: EPL (Europhysics Letters) 100 (2012) 67001.
43) S. Biermann: J. Phys.: Condens. Matter 26 (2014) 173202.
44) T. Ayral, P. Werner, and S. Biermann: Phys. Rev. Lett. 109 (2012) 226401.
45) T. Ayral, S. Biermann, and P. Werner: Phys. Rev. B 87 (2013) 125149.
46) P. Hansmann, T. Ayral, L. Vaugier, P. Werner, and S. Biermann: Phys. Rev. Lett. 110 (2013) 166401.
47) L. Huang, T. Ayral, S. Biermann, and P. Werner: Phys. Rev. B 90 (2014) 195114.
48) L. Boehnke, F. Nilsson, F. Aryasetiawan, and P. Werner: Phys. Rev. B 94 (2016) 201106.
49) T. Ayral, S. Biermann, P. Werner, and L. V. Boehnke: in preparation (2017).
50) J. M. Tomczak, M. Casula, T. Miyake, and S. Biermann: Phys. Rev. B 90 (2014) 165138.
51) M. Casula, P. Werner, L. Vaugier, F. Aryasetiawan, T. Miyake, A. J. Millis, and S. Biermann: Phys. Rev. Lett. 109 (2012) 126408.
52) A. Seidl, A. Görling, P. Vogl, J. Majewski, and M. Levy: Phys. Rev. B 53 (1996) 3764.
53) J. P. Perdew, R. G. Parr, M. Levy, and J. L. Balduz: Phys. Rev. Lett. 49 (1982) 1691.
54) G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti: Rev. Mod. Phys. 78 (2006) 865.
55) B. Amadon, S. Biermann, A. Georges, and F. Aryasetiawan: Phys. Rev. Lett. 96 (2006) 066402.
56) N. Xu, P. Richard, A. van Roekeghem, P. Zhang, H. Miao, W.-L. Zhang, T. Qian, M. Ferrero, A. S. Sefat, S. Biermann, and H. Ding: Phys. Rev. X 3 (2013) 011006.
57) P. Werner, M. Casula, T. Miyake, F. Aryasetiawan, A. J. Millis, and S. Biermann: Nature Physics 8 (2012) 331.
58) R. S. Dhaka, Y. Lee, V. K. Anand, D. C. Johnston, B. N. Harmon, and A. Kaminski: Phys. Rev. B 87 (2013) 214516.
59) J. Mansart, P. Le Fèvre, F. Bertran, A. Forget, D. Colson, and V. Brouet: Phys. Rev. B 94 (2016) 235147.
60) V. Brouet, F. Rullier-Albenque, M. Marsi, B. Mansart, M. Aichhorn, S. Biermann, J. Faure, L. Perfetti, A. Taleb-Ibrahimi, P. Le Fèvre, F. Bertran, A. Forget, and D. Colson: Phys. Rev. Lett. 105 (2010) 087001.
61) R. Poole, R. Leckey, J. Jenkin, and J. Liesegang: Phys. Rev. B 8 (1973) 1401.
62) F. Himpsel, D. Eastman, E. Koch, and A. Williams: Phys. Rev. B 22 (1980) 4604.
63) M. R. Norman: Phys. Rev. B 29 (1984) 2956.
64) F. Aryasetiawan and O. Gunnarsson: Phys. Rev. B 54 (1996) 17564.
65) M. Oshikiri and F. Aryasetiawan: Phys. Rev. B 60 (1999) 10754.
66) J. P. Perdew and A. Zunger: Phys. Rev. B 23 (1981) 5048.
67) H. Jiang, R. I. Gómez-Abal, X.-Z. Li, C. Meisenbichler, C. Ambrosch-Draxl, and M. Scheffler: Comput. Phys. Commun. 184 (2013) 348.
68) B. Holm and U. von Barth: Phys. Rev. B 57 (1998) 2108.
69) M. van Schilfgaarde, T. Kotani, and S. Faleev: Phys. Rev. Lett. 96 (2006) 226402.
Sublunar-Mass Primordial Black Holes from Closed Axion Domain Walls

Shuailiang Ge
Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada

Abstract: We study the formation of primordial black holes (PBHs) from the collapse of closed domain walls (DWs) which naturally arise in QCD axion models near the QCD scale together with the main string-wall network. The size distribution of the closed DWs is determined by percolation theory, from which we further obtain the PBH mass distribution and abundance. Various observational constraints on the PBH abundance in turn also constrain the QCD axion parameter space. Our model prefers an axion mass at the meV scale (f_a ∼ 10^9 GeV). The corresponding PBHs are in the sublunar-mass window 10^20-10^22 g (i.e., 10^{-13}-10^{-11} M_⊙), one of the few mass windows still available for PBHs contributing significantly to dark matter (DM). In our model, the PBH abundance could reach ∼ 1% of DM, sensitive to the formation efficiency of closed axion DWs.

DOI: 10.1016/j.dark.2019.100440
arXiv: 1905.12182
Keywords: primordial black holes, axion, domain walls, dark matter

1. Introduction

Primordial black holes (PBHs) have long been considered as viable dark matter (DM) candidates; see Refs. [1-3] for recent reviews. Despite various observational constraints, some mass windows remain valid in which PBHs could significantly contribute to DM: the sublunar-mass range O(10^20 g) and the intermediate-mass range O(10 M_⊙) [1,2,4]. In addition to the frequently studied mechanism of PBH formation from the collapse of overdense regions in the early universe [1,2], PBHs could also be formed from the collapse of topological defects [5-14]. The QCD axion was originally proposed as a solution to the strong CP problem [15-21]. As the Peccei-Quinn (PQ) symmetry gets spontaneously broken at the PQ scale T_PQ ∼ f_a in the early universe, axion strings are formed.
If the PQ symmetry is broken after inflation (f_a ≲ H_I, the post-inflationary scenario), axion domain walls (DWs) will be formed later, near the QCD scale T_1 ∼ GeV, with the pre-existing strings as boundaries, which we call the string-wall network [22,23]. Otherwise, in the pre-inflationary scenario, the pre-existing strings are 'blown away' and the axion field gets homogenized by inflation, so no DWs can be formed at T_1. Propagating axions generated from the misalignment mechanism and from topological decays are also DM candidates [24,25]. Recently, Refs. [26,27] have studied PBH formation from the collapse of closed axion DWs. The PBH mass obtained in Ref. [26] is ∼ 10^{-8} M_⊙ (10^25 g), but it is much heavier in Ref. [27], ∼ 10^4-10^7 M_⊙, since an extra bias term is considered there lifting the energy enclosed by the DWs. The closed DWs in Refs. [26,27] are related to the fragmentation of the network, which could occur much later than T_1, and PBH formation there is significantly affected by the fragmentation time, which is however very hard to determine [28-32]. In this paper, by contrast, we study the closed axion DWs initially formed at T_1 together with the main string-wall network. These closed DWs thus evolve independently of the network fragmentation. Also, we focus on the N_DW = 1 case. The size distribution of N_DW = 1 closed DWs initially formed at T_1 is well predicted by percolation theory, from which we can further calculate the PBH mass distribution and abundance. Another advantage is that the N_DW = 1 model naturally avoids the known DW problem that arises in N_DW > 1 models, leading to a DW-dominated universe [24,33]. The DW problem in the N_DW > 1 case can also be avoided if a bias term is introduced, which is adopted in Ref. [27], although there is only little room in parameter space for this term [24]. In our model, for an axion decay constant f_a ∼ 10^9 GeV, PBHs formed from the collapse of closed axion DWs are in the sublunar-mass window ∼ 10^20-10^22 g, one of the few allowed windows constrained by observations.
In addition to the propagating axions generated from the misalignment mechanism and from topological decays, which are conventional DM candidates, the PBH abundance in our model could reach ∼ 1% of DM, sensitive to the formation efficiency of closed DWs at T_1. Additionally, various observational constraints on the PBH abundance could in turn constrain the QCD axion parameter space. The paper is organized as follows. In Section 2, we briefly review the formation of axion DWs and discuss the size distribution of N_DW = 1 closed axion DWs predicted by percolation theory. In Section 3, we study the criterion for a closed DW to collapse into a black hole. In Section 4, we present the PBH mass distribution and abundance obtained in our model, in comparison with the constraints from astrophysical observations; the constraints on the PBH abundance are in turn used to constrain the QCD axion parameter space. We draw the conclusions in Section 5.

2. Size distribution of closed axion DWs

We start with a brief review of axion DW formation. Nonperturbative QCD effects induce an effective potential for the axion field φ [24,25]:

V_a = m_a^2(T) f_a^2 \left[ 1 - \cos(\phi/f_a) \right], \qquad (1)

with 0 ≤ φ/f_a ≤ 2π N_DW, where N_DW is the model-dependent chiral anomaly coefficient [34] that also gives the number of degenerate vacua, located at φ/f_a = 2kπ. The axion mass is [35,36]
The typical length of each region is the correlation length ξ (see e.g. Refs. [40,41]): ξ(T ) m −1 a (T )(4) Using Eq. (3), we further get ξ(T 1 ) t 1 , i.e. the correlation length at DW formation point t 1 is approximately the Hubble radius. If N DW = 1, the topology of vacuum manifold has two discrete values, φ/ f a = 0, 2π, corresponding to the same physical vacuum. It is known that DWs can be formed in this case as φ interpolates between the two topological branches 0 and 2π [24,42], and they could live long enough against tunnelling process to have important implications [42,43]. If we ignore the pre-existing strings at T 1 (the effects of which will be discussed later), N DW = 1 model can be treated as Z 2 model, for they have identical topology of vacuum manifold: both have two discrete values [44]. The formation of such walls in the early universe has been widely studied in the literature (see e.g. Refs. [45,46]): different 'cells' (typical length ξ) fall into one of the two values randomly with equal probability. Two or more neighbouring cells falling into the same value form a finite cluster (closed DW). A mathematical theory known as percolation theory studies the size distribution of such clusters, which gives [45]: n s ∝ s −τ exp (−λs 2/3 ). n s is the number density of finite clusters with size s (number of cells within a cluster). τ = −1/9 and λ ≈ 0.025 are two coeffi-cients from percolation theory 1 . Although Eq. (5) is originally obtained with the assumption s 1, it can be extrapolated down to the smallest clusters s = 1 with high accuracy [52]. Eq. (5) can be translated into DW language straightforwardly. Finite clusters are closed DWs with volume R 3 1 sξ 3 , where R 1 is introduced as the radius of closed DWs. We can write n s in differential form as n s = dn/ds where n denotes the number density of finite clusters with size smaller than s. Then, Eq. (5) becomes f (r 1 ) = f 0 · r 2−3τ 1 · e λ(1−r 2 1 )(6) where r 1 ≡ R 1 /ξ, f (r 1 ) ≡ dn/dr. 
f_0 ≡ f(r_1 = 1) is the distribution at the smallest size R_1 = ξ. Closed DWs are indeed observed in computer simulations. In a Z_2 system, closed DWs account for γ ∼ 13% of the total wall area [45]. We expect the proportion to be lower in N_DW = 1 models with strings present, because the presence of strings makes less space available to form closed DWs. This has also been seen in simulations [45,53]. But it is hard to determine the string effects exactly. One difficulty is that simulations are sensitive to the simulation size [45] and may not be properly applied to the universe at T_1. Another difficulty is that simulations only apply to DWs formed soon after string formation [45], which contradicts the realistic case T_1 ≪ T_PQ. Despite the simulation difficulties, we can absorb the string effects on closed DWs at T_1 into γ (defined as the proportion of closed DW area in the total wall area [40]), implying γ ≲ 13% with strings present. Additionally, in contrast with the traditional view, N_DW = 1 DWs could also be formed in the pre-inflationary scenario (f_a ≳ H_I), based on the argument that different topological branches cannot be separated by inflation [40,54]^2. In that scenario, the pre-existing strings are blown away by inflation, so they cannot affect the formation of closed DWs at T_1, implying γ ∼ 13%, the same as the Z_2 case. We can also interpret the correlation length ξ as the average distance among DWs, to get ∫_1^∞ dr_1 4π(ξ r_1)^2 f(r_1) ≃ γ · (1/ξ). (7) The best information we have about γ in the post-inflationary scenario is γ ≲ 13% (but nonzero, since closed DWs are observed with strings present [45,53]). One might worry that closed DWs could be destroyed by intercommuting with walls bounded by strings in the late-time evolution after T_1, but our analysis shows that closed DWs will survive; see Appendix A for details. ^1 λ is obtained indirectly. In percolation theory, λ^{-1} is the crossover size, where λ^{-1} ≃ |p − p_c|^{-1/σ} is valid for |p − p_c| ≪ 1 (see e.g. Refs.
[47][48][49]). p is the probability of each cell choosing one of the two topological branches, so p = 0.5 in our case; p_c = 0.31 for a cubic lattice and σ = 0.45 in 3D [50], so λ ≈ 0.025, with |p − p_c| ≪ 1 well satisfied. The other coefficient, τ = −1/9 for p > p_c, is obtained in a field-theoretical formulation of the percolation problem [50,51]. ^2 N_DW = 1 closed axion DWs formed in the pre-inflationary scenario are crucial in Refs. [40,54]. The closed walls there accumulate baryons or antibaryons inside. They finally evolve into the axion quark nuggets (AQNs), which have many intriguing astrophysical and cosmological implications. See the original paper [54] and recent developments [40,44,[55][56][57][58][59][60][61][62][63][64] for details.

Collapse into PBHs

Closed DWs with size r_1 > 1 (i.e. R_1 > ξ(T_1)) are super-Hubble structures since ξ(T_1) ≃ t_1. They do not collapse until their size is surpassed by the Hubble horizon. We emphasize that super-Hubble DWs are formed not because φ is physically correlated on super-Hubble scales, but as a natural result of random combinations of self-correlated cells, as predicted by percolation theory. Instead of contracting, super-Hubble closed DWs first expand due to the universe's expansion with the scale factor a(t) ∝ T^{-1} ∝ t^{1/2} (radiation-dominated era). However, the Hubble horizon H^{-1} ∼ t increases faster, implying that some time after t_1 (labeled t_2), H^{-1} will catch up with the closed DW size, R_2 ≃ t_2. R_1 and R_2 are connected by the universe's expansion, R_2/R_1 ≃ (t_2/t_1)^{1/2}. Recalling that r_1 ≡ R_1/ξ(T_1) ≃ R_1/t_1, we have t_2 ≃ r_1^2 t_1. (8) Closed DWs start to collapse at t_2, as the DW tension overcomes the universe's expansion. The collapse of closed DWs is governed by the axion Lagrangian L = (1/2)(∂_µ φ)^2 − V_a, with V_a from Eq. (1).
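Eq. (8) just says that the horizon (∝ t) outruns the stretched wall radius (∝ t^{1/2}); solving R_1 (t/t_1)^{1/2} = t numerically reproduces t_2 = r_1^2 t_1. A toy check in arbitrary units (the bisection routine is ours):

```python
def reentry_time(r1, t1=1.0):
    """Bisection solve of R1 * (t/t1)**0.5 = t for t > t1, with R1 = r1 * t1.
    The solution should be t2 = r1**2 * t1, Eq. (8)."""
    R1 = r1 * t1
    lo, hi = t1, 1e9 * t1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if R1 * (mid / t1) ** 0.5 > mid:   # wall still outside the horizon
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```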
The equation of motion (EoM) is [∂_t^2 + (3/(2t)) ∂_t − (1/a^2(t)) ∂_R^2 − (2/(a^2(t) R)) ∂_R] φ + m_a^2(t) sin φ = 0, (9) where we have incorporated the universe's expansion. R here denotes the co-moving distance (the physical distance divided by a(t)). Also, the axion field is redefined to be dimensionless, φ → φ/f_a. For simplicity, we treat closed DWs as nearly spherical, so the EoM is written in the spherically symmetric form. We can use a kink-antikink pair as the initial configuration of spherical DWs [26,41]: φ(t = t_2, R) = 4[tan^{-1}(e^{m_a(t_2)(R−R_2)}) + tan^{-1}(e^{m_a(t_2)(−R−R_2)})], (10) where the initial scale factor is set as a(t_2) = 1. We also assume the walls are initially at rest, ∂_t φ(t = t_2, R) = 0. Following the procedure of Ref. [26], we define E(t, R) as the energy contained within a sphere of radius R at time t during the collapse of a closed DW. If for some t and R we have R smaller than the corresponding Schwarzschild radius R_s = 2GE(t, R), a black hole will be formed. The above criterion can be expressed as [26] R_s/R = 2GE(t, R)/R ≳ 1 ⇒ S(t, R) ≳ m_P^2, (11) where S(t, R) ≡ 2E(t, R)/R and m_P is the Planck mass. By numerically solving the EoM (9) with the initial conditions above, we can obtain the evolution of S(t, R). The detailed numerical calculations are shown in Appendix B. The key result is that the maximum of S(t, R) is related to the initial collapse size R_2 by S_max = k_1 [m_a(t_2) R_2]^{k_2} · f_a^2, (12) where k_1 ≈ 3.1 × 10^3 and k_2 ≈ 2.76. This should be compared with a similar relation in Ref. [26], where k_1 ≈ 21.9 and k_2 ≈ 2.7. The crucial difference is that in our model closed DWs are originally formed at T_1 together with the main network, and the collapse point T_2 could be earlier than the QCD transition T_c (i.e. T_c < T_2 < T_1), so the full expression of the axion mass, Eq. (2), in which m_a(T) varies rapidly with T above T_c, must be included in solving the EoM (9). Additionally, our EoM includes the universe's expansion. In comparison, Ref.
[26] considered the collapse of fragments from the string-wall network. The fragmentation process could occur later than T_c, so m_a is treated as a constant there. Also, fragments in Ref. [26] inherit angular momentum from the string motion, which could significantly suppress PBH formation. However, our model does not suffer from this suppression. Closed DWs have no initial angular momentum at T_1, since they are formed independently of the main network, and the simple assumption of spherical shape guarantees no angular motion later but only radial motion. Substituting Eq. (12) into Eq. (11) and using Eq. (8), we can finally express the criterion of PBH formation in terms of r_1: r_1^2 ≳ [m_a(t_1)/m_a(t_2)] · [m_P^2/(k_1 f_a^2)]^{1/k_2}. (13) The classical window of the current axion mass is 10^{-6} eV ≲ m_a,0 ≲ 10^{-2} eV [65], implying 10^8 GeV ≲ f_a ≲ 10^{12} GeV [Eq. (2)]. r_1,min is the minimum radius satisfying the criterion Eq. (13). With f_a known, t_1 and t_2 are also known from Eqs. (2), (3) and (8), so r_1,min is determined solely by f_a. In Fig. 1, we plot the relation between r_1,min and f_a (see also Appendix B for more numerical details).

PBHs as DM

Eq. (13) roughly determines whether a closed axion DW could collapse into a PBH. To calculate the PBH mass exactly, however, we would need to answer many complicated questions, e.g. how the PBH core alters the wall dynamics, what fraction of the wall falls into the PBH, etc. For simplicity, we estimate the PBH mass as the energy initially stored in the closed wall at t_2, when it starts to collapse: M_PBH ≃ 4π R_2^2 σ(t_2) ≃ 4π r_1^4 · m_a^{-2}(t_1) · σ(r_1^2 t_1), (14) where σ = 8 f_a^2 m_a is the DW tension [42]. The PBH mass distribution is related to the size distribution of closed axion DWs, Eq. (6), via dρ_PBH(t)/dM_PBH = M_PBH(r_1) · f(r_1) · [T(t)/T_1]^3 · dr_1/dM_PBH, (15) where ρ_PBH(t) is the mass density of PBHs.
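Eqs. (13) and (14) can be evaluated numerically. The sketch below is ours and relies on stated assumptions rather than the paper's exact numerics: m_P = 1.22e19 GeV, g* ≈ 80 near T_1, the radiation-era relation t ≈ 0.3 m_P/(√g* T^2), T_1 from Eq. (A.1), m_a(t_1) ≈ 1/t_1 from Eqs. (3)-(4), and m_a(t_2) = r_1^β m_a(t_1) from Eq. (B.5):

```python
import math

M_PL = 1.22e19              # Planck mass, GeV (assumed value)
G_STAR = 80.0               # relativistic degrees of freedom near T1 (assumed)
GEV_TO_GRAM = 1.783e-24
K1, K2, BETA = 3.1e3, 2.76, 3.925   # fit coefficients and mass exponent (from the text)

def T1_GeV(fa):
    """Wall-formation temperature, Eq. (A.1)."""
    return (1e12 / fa) ** (1.0 / 6.0)

def t1_invGeV(fa):
    """Radiation-era time at T1: t ~ 0.3 * m_P / (sqrt(g*) * T^2)."""
    T = T1_GeV(fa)
    return 0.3 * M_PL / (math.sqrt(G_STAR) * T * T)

def r1_min(fa):
    """Eq. (13) with m_a(t1)/m_a(t2) = r1**(-BETA), i.e. the Eq. (B.13) form."""
    return (M_PL ** 2 / (K1 * fa ** 2)) ** (1.0 / (K2 * (BETA + 2.0)))

def M_PBH_gram(r1, fa):
    """Eq. (14): M ~ 4*pi*R2^2*sigma(t2) with sigma = 8*fa^2*m_a(t2)."""
    t1 = t1_invGeV(fa)
    sigma_t2 = 8.0 * fa ** 2 * (r1 ** BETA / t1)
    return 4.0 * math.pi * (r1 ** 2 * t1) ** 2 * sigma_t2 * GEV_TO_GRAM
```

Under these assumptions, f_a = 10^9 GeV gives r_1,min ≈ 10 and a minimum PBH mass of order 10^21 g, in the sublunar window quoted below.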
The factor [T(t)/T_1]^3 accounts for the dilution of the matter density as the universe expands. We further define Ω_PBH(t) = ρ_PBH(t)/ρ_cr(t), where ρ_cr(t) = 3H^2(t)/8πG is the critical density. Ω_PBH(t) remains constant after the epoch of matter-radiation equality, T_eq ≈ 0.8 eV, so the present mass distribution of PBHs is dΩ_PBH(t_eq)/dM_PBH = [M_PBH(r_1) · f(r_1)/ρ_cr(t_1)] · (T_1/T_eq) · dr_1/dM_PBH. (16) By integrating Eq. (16), the present PBH abundance is Ω_PBH = ∫_{r_1,min}^∞ [M_PBH(r_1) · f(r_1)/ρ_cr(t_1)] · (T_1/T_eq) dr_1. (17) The average mass of PBHs can be calculated as ⟨M_PBH⟩ = ∫_{r_1,min}^∞ dr_1 M_PBH(r_1) f(r_1) / ∫_{r_1,min}^∞ dr_1 f(r_1), (18) which does not change with the universe's expansion. There is a one-to-one correspondence between ⟨M_PBH⟩ and f_a. In Fig. 2, we plot PBH mass distributions for different f_a. We see that PBHs are generally within the mass range 10^{19}-10^{29} g, but the distribution for each f_a is quite narrow, centering at ∼ ⟨M_PBH⟩, and heavy PBHs are greatly suppressed due to Eq. (6). We also plot the f_a scale on the upper x-axis, in one-to-one correspondence with ⟨M_PBH⟩. The shaded regions are various observational constraints on the PBH abundance: femtolensing (FL) [66], the white dwarf distribution (WD) [67], Subaru/HSC microlensing (HSC) [68] and Kepler microlensing (K) [69]. The r-process nucleosynthesis line is from Ref. [70]. We emphasize that the PBH mass reaching the scale 10^{19}-10^{29} g is due to the large size of closed DWs, which is inversely proportional to the axion mass at T_1 ∼ GeV, i.e. ξ ≃ m_a^{-1}(T_1), rather than to the current axion mass m_a,0. There is a huge difference between m_a,0 and m_a(T_1). For example, for m_a,0 as large as 10^{-4} eV, we have m_a(T_1) ∼ 10^{-8} eV [Eq. (2)]. Another factor contributing to the closed DW size is r_1, predicted by percolation theory. See also Eq. (14), where m_a^{-1}(T_1) and r_1 enter the PBH mass expression. PBHs surviving today contribute to DM, with the trivial constraint Ω_PBH ≤ Ω_DM.
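The narrowness of the mass distribution in Fig. 2 can be checked directly from Eqs. (6), (14) and (18): with M_PBH ∝ r_1^{4+β}, all normalization constants cancel in the ratio ⟨M_PBH⟩/M_PBH(r_1,min). A simple trapezoidal evaluation (our own sketch, not the paper's code):

```python
import math

TAU, LAM, BETA = -1.0 / 9.0, 0.025, 3.925

def f(r1):
    """Size distribution Eq. (6), up to the constant f0."""
    return r1 ** (2 - 3 * TAU) * math.exp(LAM * (1 - r1 * r1))

def mass(r1):
    """M_PBH(r1) up to a constant: M ∝ r1**(4 + BETA), from Eq. (14)."""
    return r1 ** (4 + BETA)

def mean_mass_ratio(r1_min, r1_max=60.0, n=20000):
    """<M_PBH> / M_PBH(r1_min), Eq. (18), by trapezoidal integration."""
    h = (r1_max - r1_min) / n
    num = den = 0.0
    for i in range(n + 1):
        r = r1_min + i * h
        w = 0.5 if i in (0, n) else 1.0
        num += w * mass(r) * f(r)
        den += w * f(r)
    return (num / den) / mass(r1_min)
```

For r_1,min ≈ 10 (f_a ∼ 10^9 GeV) the mean sits only a factor of order ten above the minimum mass, i.e. the distribution spans roughly one decade, consistent with the narrow peaks in Fig. 2.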
Furthermore, various astrophysical observations constrain Ω_PBH over a wide mass window [1,2]. Most of the valid constraints assume the PBH mass function is monochromatic. Although PBHs in our model have a mass distribution, it is narrow, as we see in Fig. 2. If we approximate our model as one with the monochromatic mass function M_PBH = ⟨M_PBH⟩ and the same abundance Ω_PBH, the astrophysical constraints on Ω_PBH can be roughly applied to our model. Ω_PBH in Eq. (17) depends on f_a, which determines the DW formation point t_1 and also the DW tension σ. Another parameter that significantly affects Ω_PBH is γ [contained in f(r_1), via Eqs. (6), (7)]: Ω_PBH ∝ γ. In Fig. 3, we plot Ω_PBH/Ω_DM, the present fraction of PBHs in DM, as a function of ⟨M_PBH⟩ (or f_a on the second x-axis, in one-to-one correspondence with ⟨M_PBH⟩) for different γ, with various observational constraints. We see that for f_a ∼ 10^9 GeV, PBHs are in the sublunar-mass window ⟨M_PBH⟩ ∼ 10^{20}-10^{22} g, one of the few allowed windows^3. For the typical value γ = 0.1, PBHs could account for up to ∼ 1% of DM in this mass window. If closed DWs are formed more efficiently, PBHs could contribute more to DM. We can in turn constrain the QCD axion parameter space using the constraints on Ω_PBH. Fig. 3 shows that f_a ≳ 10^{10} GeV is almost excluded, although an extremely small γ ≲ 10^{-3} is still plausible, resulting in Ω_PBH ≲ 10^{-3} Ω_DM. For f_a ≲ 10^8 GeV, the PBH abundance is very tiny (f_a ≲ 10^8 GeV is actually excluded by independent observations of supernova cooling [73]). Our model prefers f_a ∼ 10^9 GeV, corresponding to m_a,0 ∼ meV (see a similar result in Ref. [27], depending however on a totally different mechanism). Additionally, the PBH formation mechanism suggested in this work can also be applied to axion-like particles (ALPs), for which m_a and f_a are not linked. In the ALP case, PBH formation could be even more efficient due to the larger DW sizes, since the ALP mass could be lower than 10^{-12} eV [74].
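The correspondence f_a ∼ 10^9 GeV ↔ m_a,0 ∼ meV follows from the standard QCD axion mass relation, which we assume here in the common approximate form m_a,0 ≈ 5.7 µeV × (10^12 GeV/f_a) (the coefficient is an assumption on our part, not quoted in the text):

```python
def m_a0_eV(fa_GeV):
    """Zero-temperature QCD axion mass in eV, assuming the standard relation
    m_a,0 ~ 5.7e-6 eV * (1e12 GeV / f_a)."""
    return 5.7e-6 * (1e12 / fa_GeV)
```

This maps the classical window 10^8-10^12 GeV in f_a onto roughly 10^{-6}-10^{-1} eV in m_a,0, matching the range quoted in the text.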
Conclusions and discussions

We have studied PBH formation from the collapse of closed QCD axion DWs naturally arising when the axion mass effectively turns on. The PBH mass distribution can be obtained from the size distribution of closed DWs predicted by percolation theory. Our model prefers an axion mass at the meV scale (several experiments can detect axions in this mass range; see Ref. [75] for a review). The resulting PBHs are in the sublunar-mass window 10^{20}-10^{22} g, one of the few allowed windows constrained by observations. The PBH abundance in our model could vary a lot, and it could reach ∼ 1% of DM, where the formation efficiency γ of closed DWs plays a key role. Sublunar-mass PBHs have other significant implications. Ref. [70] suggests that their interactions with neutron stars could solve the long-standing puzzle of r-process nucleosynthesis, which might get indirect support from the aLIGO, aVirgo and KAGRA experiments [76][77][78] in the near future. In Fig. 3, the r-process line is denoted as the dashed line; the region above/below it is the parameter space that fully/partially explains the r-process observations [70]. Ref. [79] discussed the possibility of detecting gravitational waves generated by sublunar-mass PBH binaries. Ref. [80] proposed detecting sublunar-mass PBHs through the diffractive microlensing of quasars at long wavelengths, with sublunar-mass PBHs as lenses, which could also probe the PBH mass distribution. These experiments might support or exclude our proposal of PBH formation.

Acknowledgments

The work was initiated at the conference IPA 2018 (Interplay between Particle and Astroparticle Physics) in Cincinnati, USA. I thank the IPA organizers for this excellent conference. I also thank Ariel Zhitnitsky for useful comments on the work. This work was supported in part by the National Science and Engineering Research Council of Canada and the Four Year Doctoral Fellowship (4YF) of UBC.

Appendix A.
Survival of the closed axion DWs in the pre-collapse evolution

As we discussed in the main text, closed axion DWs are formed at T_1 and start to collapse at T_2 = T_1/r_1, when their sizes are surpassed by the Hubble horizon. The minimum r_1 required to collapse into PBHs is about 4 to 14 for different f_a, as we see in Fig. 1 in the main text. The pre-collapse evolution refers to the evolution of closed axion DWs from T_1 to T_2. During this period, in addition to closed DWs, walls bounded by strings (which we call string-wall objects) are also copiously present in the system (post-inflationary scenario), and their intercommuting with closed DWs might destroy the closed DWs [22]. In this section, we are going to study how string-wall objects affect closed DWs and demonstrate that closed DWs will survive these effects. The string-wall objects are formed at T_1 as strings become boundaries of walls. They are like pancakes or large walls with holes [53]. T_1 can be obtained from Eqs. (2) and (3): T_1 ≃ 1 GeV · (10^{12} GeV/f_a)^{1/6}. (A.1) Another critical time is the time when the domain wall tension dominates over that of strings. We denote this time as t_w, defined by [28,53,81] t_w ≃ µ(t_w)/σ(t_w), (A.2) where σ ≃ 8 f_a^2 m_a is the wall tension and µ ≃ π f_a^2 ln(f_a/m_a) is the energy per unit length of strings [53]. Solving Eq. (A.2), we get [53] T_w ≃ 600 MeV · (…), (A.3) which is below T_1. After T_w, the dynamics of string-wall objects is dominated by walls, whereas before T_w it is dominated by strings [53]. Thus, the evolutions of the string-wall objects are totally different before and after T_w, so we should discuss their effects on closed walls separately. Before T_w. In this stage, we have t < µ(t)/σ(t), and strings dominate the dynamics of string-wall objects. The evolution of strings in this stage is not qualitatively different from that before T_1, when walls had not been formed yet [81].
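The two critical temperatures can be evaluated directly. Eq. (A.1) gives T_1; the f_a-dependent factor of Eq. (A.3) is lost in this copy of the text, so, as an assumption, we approximate it by the relation T_w ≈ 0.6 T_1 that the appendix itself uses later:

```python
def T1_GeV(fa):
    """Wall-formation temperature, Eq. (A.1): ~ 1 GeV * (1e12 GeV / fa)**(1/6)."""
    return (1e12 / fa) ** (1.0 / 6.0)

def Tw_GeV(fa):
    """Wall-domination temperature; the truncated Eq. (A.3) is approximated
    here by T_w ~ 0.6 * T1, the relation used later in this appendix."""
    return 0.6 * T1_GeV(fa)
```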
The main source of strings is closed loops (or wiggles on long strings) with typical size ∼ t [53]. These strings move relativistically and are likely to hit closed walls, which will create holes on the walls [22]. However, the holes that are formed in this stage (before T_w) will shrink and disappear [81]. This is because the force of tension in a string, ∼ µ(t)/t, is greater than the wall tension σ(t) for t < µ(t)/σ(t) [81]. We thus conclude that although the relativistically moving strings may create holes on walls, these holes will disappear by themselves, as the tension in a string loop can easily overcome the wall tension in this stage. On the other hand, at T_1, closed walls with string holes on them could also be formed initially with strings present. This is one of the reasons why γ ≲ 13% compared to the case without strings. But as we discussed above, these holes tend to disappear by themselves in the initial stage, and thus the holey walls initially formed at T_1 may become closed, which actually brings γ closer to 13%. This is another consequence of t < µ(t)/σ(t). After T_w. The wall tension becomes greater than that of strings. In this stage, if strings hit closed walls and create holes on them, these string holes will inevitably grow, pulled by the walls, which may significantly decrease the rate of closed walls collapsing into PBHs. However, compared with the first stage, the crucial difference is that the motion of a string after T_w is greatly constrained by the wall to which it is originally attached, since the walls dominate the dynamics of the string-wall objects. Also, the string-wall objects will quickly decay into axions [53]. As we will see below, string-wall objects cannot reach the nearest closed walls before these string-wall objects totally decay. In the first stage (before T_w), the strings move at relativistic speeds [53]. If a string and a wall collide, the intercommuting probability is very high (close to 1) [22,81,82].
Thus, large closed walls will eat the incoming string-wall objects quickly and efficiently in the first stage (the holes created will disappear, as discussed above). With the surrounding regions cleared up, the typical distance between a closed wall surface and the neighbouring string-wall object is the Hubble scale ∼ t, saturating the requirement of causality^4. This equilibrium will be kept until T_w, when the dynamics of string-wall objects is greatly altered. Now, at T_w, for string-wall objects more energy is stored in walls than in strings, and thus the bulk motion of string-wall objects is determined by the walls. We should check what will happen to the system. The simulation result for the wall speed is v ∼ 0.4c [83]. At T_w, the distance between a string-wall object and its nearest closed wall surface is ∼ t_w. Then, the time needed for the string-wall object to hit the closed wall can be estimated accordingly; the corresponding temperature T_hit should be compared with the temperature at which the string-wall objects totally decay. Soon after T_w, string-wall objects will decay into axions, as the strings, pulled by the wall tension, quickly unzip the attached walls [53]. Recent simulations show that string-wall objects totally decay at T_decay ≃ T_1/3 [29]^5. The crucial point for us is that T_hit ≲ T_decay, (A.6) ^4 This is also commonly assumed in many related studies of topological defects where the interactions are efficient; see e.g. Refs. [24,83]. This is also consistent with the numerical simulations of string-wall objects, where the wall area parameter A ≲ 1 [28], implying that on average there is one or less horizon-size string-wall object per horizon. ^5 It is T_decay ≃ T_1/4 obtained in Ref. [30]. However, the exact value of T_decay is not essential for us. As we will see below, in the realistic case that m_a(t) increases rapidly with time, the wall speed is much lower, which finally leads to Eq. (A.7).
which implies that string-wall objects cannot reach the nearest closed walls before these string-wall objects totally decay into free axions. In other words, closed domain walls will not be destroyed by the string-wall objects after T_w. One more comment is that the wall speed v ∼ 0.4c obtained in Ref. [83] is relatively high, because they did not consider that the axion mass m_a(t) increases drastically with time. With the time-dependent m_a(t) taken into consideration, the bulk speed is expected to be lower (even non-relativistic). This can be explained as follows. The speed v is related to the ratio of kinetic energy to rest energy, E_kin/E_rest [83,84], where E_kin ∼ (1/2) (∂_t φ)^2 and E_rest contains (1/2)(∇φ)^2 together with the m_a^2(t) potential term. With m_a(t) ∝ T^{-β} increasing rapidly, the ratio becomes much lower, and so does the wall speed v. We can see this picture more intuitively in Fig. 2 of Ref. [28], where the simulations show that the string-wall objects are constrained 'locally' to decay with almost no bulk motion (close to zero)^6. Thus, Eq. (A.6) is quite conservative, and actually we should have T_hit ≪ T_decay. (A.7) We conclude this section by noting that closed walls will survive the pre-collapse evolution. Therefore, the γ formed at T_1 remains unaffected and becomes important in calculating the PBH abundance.

Appendix B. Numerical details of the collapse of closed axion DWs

In this section, we are going to show the details of numerically solving the collapse of closed axion DWs, including how we get the expression of S_max shown in Eq. (12) and also the relation between r_1,min and f_a plotted in Fig. 1 in the main text. For the convenience of numerical calculations, we define r̄ = R/m_a^{-1}(t_2) and t̄ = t/m_a^{-1}(t_2) as dimensionless variables; then the EoM Eq. (9) and the initial conditions (Eq.
(10) and ∂_t φ(t = t_2, R) = 0) can be written as ∂^2φ/∂t̄^2 + (3/(2t̄)) ∂φ/∂t̄ − (1/a^2(t̄)) [∂^2φ/∂r̄^2 + (2/r̄) ∂φ/∂r̄] + (m_a^2(t)/m_a^2(t_2)) sin φ = 0, (B.1) φ(t̄_2, r̄) = 4[tan^{-1}(e^{(r̄−r̄_2)}) + tan^{-1}(e^{(−r̄−r̄_2)})], (B.2) ∂φ(t̄, r̄)/∂t̄ |_{t̄=t̄_2} = 0, (B.3) where r̄_2 = R_2/m_a^{-1}(t_2) and t̄_2 = t_2/m_a^{-1}(t_2) are respectively the rescaled initial radius and the rescaled initial time at the starting point of the collapse of closed DWs, consistent with the definitions of r̄ and t̄. Note that r̄_2 = t̄_2 since R_2 = t_2. As we mentioned in the main text, the initial scale factor is set to 1, a(t_2) = 1. In the radiation-dominated era, we have a(t) = (t/t_2)^{1/2} = (t̄/t̄_2)^{1/2}. (B.4) If PBHs are formed before the QCD transition T_c, according to Eq. (2) the axion mass that enters Eq. (B.1) is m_a(t)/m_a(t_2) = (t/t_2)^{β/2} = (t̄/t̄_2)^{β/2}. (B.5) Later, we will discuss the effect of the QCD transition on the collapse of closed axion DWs. As we mentioned in the main text, β ≃ 4. One of the most recent calculations of the axion mass is given by Ref. [35], based on the lattice QCD method, which shows that the exact value is β = 3.925^7. E(t, R) is defined as the energy contained within a sphere of radius R at time t during the collapse of a closed DW, which can be calculated as E(t̄, r̄)/f_a^2 = m_a^{-1}(t_2) · ∫_0^{r̄} dr̄′ · 4π r̄′^2 · a^3(t̄) · [(1/2)(∂φ/∂t̄)^2 + (1/(2a^2(t̄)))(∂φ/∂r̄′)^2 + (m_a^2(t)/m_a^2(t_2))(1 − cos φ)]. (B.6) We add the prefactor 1/f_a^2 on the LHS because φ is redefined as a dimensionless variable, φ → φ/f_a, as we mentioned in the main text. Now, the quantity S(t, R) related to the criterion of PBH formation can be expressed as S(t̄, r̄) = 2E(t̄, r̄)/R = [2E(t̄, r̄)/r̄] · [m_a(t_2)/a(t̄)]. (B.7) The maximum value of S(t̄, r̄) during the collapse is S_max = max_{(t̄,r̄)} S(t̄, r̄). (B.8) We see that S_max/f_a^2 is a function of r̄_2. We then study the collapse of closed axion DWs by numerically solving Eqs. (B.1)-(B.5), from which we obtain the evolution of S(t̄, r̄) (based on Eq. (B.7)) and further S_max.
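A stripped-down sketch of this numerical procedure is given below: an explicit finite-difference evolution of the spherically symmetric sine-Gordon equation from the kink-antikink profile (B.2), simplified (as an assumption on our part) to constant axion mass and no expansion, i.e. the constant-m_a regime; the full calculation also includes the 3/(2t̄) friction term, the 1/a^2 factor (B.4) and the growing mass (B.5):

```python
import math

def collapse(r2bar=10.0, nr=300, dr=0.1, dt=0.05, t_end=12.0):
    """Evolve the spherical sine-Gordon EoM (constant-mass, no-expansion
    simplification of Eq. (B.1)) from the kink-antikink profile (B.2) at rest.
    Returns (grid, initial profile, final profile)."""
    r = [(i + 1) * dr for i in range(nr)]
    phi = [4.0 * (math.atan(math.exp(x - r2bar)) +
                  math.atan(math.exp(-x - r2bar))) for x in r]
    phi0 = phi[:]
    vel = [0.0] * nr
    for _ in range(int(t_end / dt)):
        new = phi[:]
        for i in range(1, nr - 1):
            lap = ((phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dr ** 2
                   + (phi[i + 1] - phi[i - 1]) / (dr * r[i]))   # Laplacian + 2/r term
            vel[i] += dt * (lap - math.sin(phi[i]))
            new[i] = phi[i] + dt * vel[i]
        new[0] = new[1]          # regularity at the origin
        phi = new                # outer edge kept fixed at the 2*pi vacuum
    return r, phi0, phi

def wall_radius(r, phi):
    """Radius where the field first crosses pi: the wall location."""
    for x, p in zip(r, phi):
        if p > math.pi:
            return x
    return 0.0
```

Even in this simplified setting the wall, initially at rest at r̄ = 10, contracts to a small fraction of its initial radius within a time of order the initial radius, illustrating the near-light-speed collapse found in the appendix.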
We perform the numerical calculations for different values of the initial radius r̄_2, and finally we obtain the relation between S_max/f_a^2 and r̄_2, which is plotted in Fig. B.4. We see that S_max/f_a^2 depends linearly on r̄_2 in the log-log scale, consistent with Ref. [26], which however did the numerical calculations for a constant m_a. By fitting the numerical results in Fig. B.4, we get S_max/f_a^2 = k_1 · (r̄_2)^{k_2}, (B.9) where k_1 = 3106.28 and k_2 = 2.7626. In Fig. B.5, we also plot the relation between t_max and r̄_2, where t_max is the time when S(t̄, r̄) reaches its maximum value S_max. ^7 Ref. [35] does not give the value of β directly, but the Supplementary Information of that paper provides the related data. By fitting the data provided, we get β = 3.925. The numerical results show that t_max/t_2 ≈ 3.1. (B.10) We see that the collapse is a very fast process, with the scale factor a(t) only enlarged by (t_max/t_2)^{1/2} ≈ 1.76 times from t_2 to t_max. Similar to Ref. [26], we also observed that S_max is reached when the wall collapses to a radius close to zero. So the speed of collapse can be estimated as (t_max/t_2)^{1/2} t_2/(t_max − t_2) ≈ 0.84, close to the speed of light. Substituting Eq. (B.9) into the criterion Eq. (11), and using Eqs. (3) and (8), the criterion of PBH formation can be expressed in terms of r_1: r_1^2 ≳ [m_a(t_1)/m_a(t_2)] · [m_P^2/(k_1 f_a^2)]^{1/k_2}. (B.11) Taking the equality in Eq. (B.11), we obtain the lower limit on the size of closed axion DWs at the formation point t_1 which could finally collapse into PBHs, denoted r_1,min. However, Eq. (B.9) is only applicable when the axion mass relation Eq. (B.5) holds, which assumes that S_max is reached before the QCD transition, i.e. t_max < t_c. Using Eqs. (8) and (B.10), this condition (t_max < t_c) becomes a constraint on the size of closed DWs at the formation point: r_1 < 0.57 T_1/T_c. (B.12) The interpretation of this relation is straightforward.
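The fit behind Eq. (B.9) is a least-squares straight line in log-log space. The illustration below uses SYNTHETIC data generated from the quoted fit itself (k_1 = 3106.28, k_2 = 2.7626), purely to show the procedure; the paper fits its actual PDE results at r̄_2 = 10, 20, ..., 50:

```python
import math

K1, K2 = 3106.28, 2.7626
r2s = [10.0, 20.0, 30.0, 40.0, 50.0]
smax = [K1 * x ** K2 for x in r2s]            # stand-in for S_max / f_a^2

xs = [math.log(x) for x in r2s]
ys = [math.log(y) for y in smax]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
k2_fit = slope                                 # exponent k2
k1_fit = math.exp(ybar - slope * xbar)         # prefactor k1
```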
The larger a closed DW is at t_1, the later it will collapse, according to Eq. (8), so a sufficiently large closed DW will collapse after T_c ≃ 150 MeV. If Eq. (B.12) is satisfied, we can substitute the axion mass relation Eq. (B.5) into Eq. (B.11) to get r_1,min ≃ [m_P^2/(k_1 f_a^2)]^{(1/k_2)·(1/(β+2))}, for t_max < t_c. (B.13) We see that r_1,min is determined solely by f_a. The relation between r_1,min and f_a is plotted in Fig. B.6, denoted as line 1. For the case t_2 > t_c, i.e. closed axion DWs start to collapse after the QCD transition, the axion mass that enters the EoM is a constant, according to Eq. (2). t_2 > t_c corresponds to the condition r_1 > T_1/T_c. Ref. [26] numerically solves the collapse of closed axion DWs with m_a constant, in which S_max has the same form as Eq. (B.9) but with k_1 ≈ 21.9 and k_2 ≈ 2.7^8. Then, from Eq. (B.11) we can derive r_1,min in this case: r_1,min ≃ [m_a(t_1)/m_a,0]^{1/2} · [m_P^2/(21.9 f_a^2)]^{(1/2.7)·(1/2)}, for t_2 > t_c. (B.14) We also plot r_1,min in this case as a function of f_a in Fig. B.6, denoted as the dashed line. In Fig. B.6, we also plot T_1/T_c and 0.57(T_1/T_c) in comparison with Eqs. (B.13) and (B.14). Region I (between line 1 and line 2) is the parameter space where the condition Eq. (B.12) is satisfied, so the criterion Eq. (B.13) is applicable here, and the closed DWs with parameters in this region will finally collapse into PBHs. Region III (beyond line 3) is the parameter space where r_1 > T_1/T_c (i.e. t_2 > t_c), so we should use the criterion Eq. (B.14) here. We see that region III is well above the criterion Eq. (B.14), so the closed DWs with parameters in this region will finally collapse into PBHs. Region II (between line 2 and line 3), where 0.57(T_1/T_c) < r_1 < T_1/T_c, is more subtle. The collapse of closed DWs with parameters in this region will pass through the QCD transition, i.e. experience the 'knee' of the axion mass expression Eq. (2).
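The piecewise criterion just described, Eqs. (B.13) and (B.14), can be evaluated directly. A rough sketch (our own, with assumed representative constants m_P = 1.22e19 GeV, T_c = 150 MeV, β = 3.925, m_a(t_1)/m_a,0 = (T_c/T_1)^β):

```python
M_PL, TC, BETA = 1.22e19, 0.150, 3.925   # Planck mass (GeV), T_c (GeV), mass exponent

def T1(fa):
    """Wall-formation temperature in GeV, Eq. (A.1)."""
    return (1e12 / fa) ** (1.0 / 6.0)

def r1_min(fa):
    """Minimum wall size at T1 that collapses to a PBH: Eq. (B.13) if the
    collapse finishes before the QCD transition, else Eq. (B.14)."""
    r_before = (M_PL ** 2 / (3106.28 * fa ** 2)) ** (1.0 / (2.7626 * (BETA + 2.0)))
    if r_before < 0.57 * T1(fa) / TC:     # condition (B.12): t_max < t_c
        return r_before                    # Eq. (B.13)
    mass_ratio = (TC / T1(fa)) ** BETA     # m_a(t1)/m_a,0
    return mass_ratio ** 0.5 * (M_PL ** 2 / (21.9 * fa ** 2)) ** (1.0 / (2 * 2.7))  # Eq. (B.14)
```

For f_a between 10^8 and 10^12 GeV this reproduces the r_1,min ≈ 4-14 range quoted in Appendix A, with the (B.14) branch only activating at the largest f_a.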
Since region II satisfies the criterion of PBH formation well from the perspective of both the changing axion mass (Eq. (B.13)) and the constant axion mass (Eq. (B.14)), we should expect the closed DWs with parameters in this region to collapse into PBHs^9. To conclude, regions I, II, and III are all parameter spaces (the shaded region) where closed axion DWs can collapse into PBHs. Thus, the criterion Eq. (B.13), denoted as line 1 in Fig. B.6, is indeed the lowest limit of r_1 for PBH formation (the tiny difference in the range f_a ≳ 10^{11} GeV can be ignored, as we discussed in footnote 9), which is also plotted in Fig. 1 in the main text. Note that we cannot use Eq. (B.14) (dashed line) as the final criterion although it is lower than line 1, because the parameter space around the dashed line satisfies the condition Eq. (B.12) and thus should be checked by the criterion Eq. (B.13) rather than Eq. (B.14).

Figure 1: Relation between r_1,min and f_a.

Figure 2: To compare PBH mass distributions for different f_a, we have rescaled the distribution Eq. (16) as ψ(M_PBH) ≡ (⟨M_PBH⟩/Ω_PBH) · (dΩ_PBH/dM_PBH), which is normalized as ∫ dM_PBH ψ(M_PBH) = ⟨M_PBH⟩. The black dot and the dashed line for each f_a are respectively the minimum PBH mass M_PBH,min (corresponding to r_1,min) and the average mass ⟨M_PBH⟩, Eq. (18).

Figure 3: Ω_PBH/Ω_DM as a function of ⟨M_PBH⟩ for various γ, denoted as black lines.

T_hit ≃ 0.26 T_1, where we also used T_w ≃ 0.6 T_1 [Eqs. (A.1) and (A.3)].

Figure B.4: Relation between S_max/f_a^2 and r̄_2. We solve numerically for initial radii r̄_2 = 10, 20, ..., 50 respectively, and the numerical results (r̄_2, S_max/f_a^2) are plotted as red points. The black line is the fitting result Eq. (B.9).

Figure B.5: Relation between t_max/t_2 and r̄_2. The blue points are numerical results and the dashed line is t_max/t_2 = 3.1.

Figure B.6: Parameter space (r_1, f_a) for closed axion DWs. Line 1 is r_1,min in Eq.
(B.13); the dashed line is r_1,min in Eq. (B.14). Line 2 and line 3 are respectively the values of 0.57(T_1/T_c) and T_1/T_c as functions of f_a. ^3 Like many other discussions (e.g. Refs. [68,70]), Fig. 3 does not include the constraint from observations of neutron stars [71], which depends on the controversial assumption of PBHs as DM existing in globular clusters. Many observations disfavor DM existing in such regions; see e.g. Ref. [72]. ^6 The bulk motion should not be confused with the string motion pulled by the walls. After T_w, due to the wall tension, a string is accelerated to relativistic speed in the direction of the wall to which it is originally attached ('unzip') [53]. So the string motion is constrained locally by the position of walls in the string-wall objects (see e.g. Fig. 2 of Ref. [28]). However, the bulk speed of the string-wall objects is low, as we have discussed. ^8 Although Ref. [26] does not incorporate the effect of the universe's expansion into the EoM, the results of that paper can still be applied here for constant axion mass. This is because the universe's expansion plays only a minor role, as we see in Eq. (B.10), where the scale factor is only enlarged by 1.76 times during the collapse, which is a very fast process. ^9 One may notice that in Fig. B.6, the lower three lines (line 1, line 2 and the dashed line) intersect with one another at f_a ≳ 10^{11} GeV and are thus not in good order, which might slightly affect r_1,min in the range f_a ≳ 10^{11} GeV.
M. Sasaki, T. Suyama, T. Tanaka, S. Yokoyama, Primordial black holes - perspectives in gravitational wave astronomy, Classical and Quantum Gravity 35 (2018) 063001.
B. Carr, F. Kühnel, M. Sandstad, Primordial black holes as dark matter, Physical Review D 94 (2016) 083504.
A. D. Dolgov, Massive Primordial Black Holes in Contemporary and Young Universe (old predictions and new data), Int. J. Mod. Phys. A33 (2018) 1844029.
B. Carr, J. Silk, Primordial black holes as generators of cosmic structures, Monthly Notices of the Royal Astronomical Society 478 (2018) 3756-3775.
S. W. Hawking, Black holes from cosmic strings, Physics Letters B 231 (1989) 237-239.
A. Polnarev, R. Zembowicz, Formation of primordial black holes by cosmic strings, Physical Review D 43 (1991) 1106.
J. Garriga, A. Vilenkin, Black holes from nucleating strings, Physical Review D 47 (1993) 3265.
A. Vilenkin, Cosmological density fluctuations produced by vacuum strings, Physical Review Letters 46 (1981) 1169.
J. Fort, T. Vachaspati, Do global string loops collapse to form black holes?, Physics Letters B 311 (1993) 41-46.
^9 (continued) However, we may safely ignore the tiny difference, since the three lines are very close to each other in this range of f_a. Also, as we discussed in the main text, the parameter space f_a ≳ 10^{11} GeV is less interesting, since it is almost excluded by observational constraints on Ω_PBH. The most interesting part is f_a ∼ 10^9 GeV, which results in sublunar-mass PBHs, and r_1,min can be well determined for f_a ≲ 10^{11} GeV, as we see in Fig. B.6.
J. Garriga, M. Sakellariadou, Effects of friction on cosmic strings, Physical Review D 48 (1993) 2502.
S. G. Rubin, A. S. Sakharov, M. Y. Khlopov, The formation of primary galactic nuclei during phase transitions in the early universe, Journal of Experimental and Theoretical Physics 92 (2001) 921-929.
M. Y. Khlopov, S. G. Rubin, A. S. Sakharov, Primordial structure of massive black hole clusters, Astroparticle Physics 23 (2005) 265-277.
J. Garriga, A. Vilenkin, J. Zhang, Black holes and the multiverse, JCAP 1602 (2016) 064.
H. Deng, J. Garriga, A. Vilenkin, Primordial black hole and wormhole formation by domain walls, JCAP 1704 (2017) 050.
R. D. Peccei, H. R. Quinn, Constraints imposed by CP conservation in the presence of pseudoparticles, Physical Review D 16 (1977) 1791.
S. Weinberg, A new light boson?, Physical Review Letters 40 (1978) 223.
F. Wilczek, Problem of strong P and T invariance in the presence of instantons, Physical Review Letters 40 (1978) 279.
J. E. Kim, Weak-interaction singlet and strong CP invariance, Physical Review Letters 43 (1979) 103.
M. A. Shifman, A. Vainshtein, V. I. Zakharov, Can confinement ensure natural CP invariance of strong interactions?, Nuclear Physics B 166 (1980) 493-506.
M. Dine, W. Fischler, M. Srednicki, A simple solution to the strong CP problem with a harmless axion, Physics Letters B 104 (1981) 199-202.
A. R. Zhitnitsky, On Possible Suppression of the Axion Hadron Interactions (In Russian), Sov. J. Nucl. Phys. 31 (1980) 260 [Yad. Fiz. 31, 497 (1980)].
A. Vilenkin, A. E. Everett, Cosmic strings and domain walls in models with goldstone and pseudo-goldstone bosons, Physical Review Letters 48 (1982) 1867.
P. Sikivie, Axions, domain walls, and the early universe, Physical Review Letters 48 (1982) 1156.
P. Sikivie, Axion cosmology, Springer
Sikivie, Axion cosmology, in: Axions, Springer, 2008, pp. 19-50. Axion cosmology. D J Marsh, Physics Reports. 643D. J. Marsh, Axion cosmology, Physics Reports 643 (2016) 1-79. T Vachaspati, arXiv:1706.03868Lunar mass black holes from qcd axion cosmology. arXiv preprintT. Vachaspati, Lunar mass black holes from qcd axion cosmology, arXiv preprint arXiv:1706.03868 (2017). Primordial black holes from the qcd axion. F Ferrer, E Masso, G Panico, O Pujolas, F Rompineve, Physical review letters. 122101301F. Ferrer, E. Masso, G. Panico, O. Pujolas, F. Rompineve, Primordial black holes from the qcd axion, Physical review letters 122 (2019) 101301. Production of dark matter axions from collapse of string-wall systems. T Hiramatsu, M Kawasaki, K Saikawa, T Sekiguchi, Physical Review D. 85105020T. Hiramatsu, M. Kawasaki, K. Saikawa, T. Sekiguchi, Production of dark matter axions from collapse of string-wall systems, Physical Review D 85 (2012) 105020. Axion dark matter: strings and their cores. L Fleury, G D Moore, JCAP. 16014L. Fleury, G. D. Moore, Axion dark matter: strings and their cores, JCAP 1601 (2016) 004. The dark-matter axion mass. V B Klaer, G D Moore, JCAP. 171149V. B. Klaer, G. D. Moore, The dark-matter axion mass, JCAP 1711 (2017) 049. Axions from Strings: the Attractive Solution. M Gorghetto, E Hardy, G Villadoro, JHEP. 07151M. Gorghetto, E. Hardy, G. Villadoro, Axions from Strings: the Attractive Solution, JHEP 07 (2018) 151. Long-term dynamics of cosmological axion strings. M Kawasaki, T Sekiguchi, M Yamaguchi, J Yokoyama, PTEP. 2018M. Kawasaki, T. Sekiguchi, M. Yamaguchi, J. Yokoyama, Long-term dynamics of cosmological axion strings, PTEP 2018 (2018) 091E01. . B Ya, I Zeldovich, Yu, L B Kobzarev, Okun, Cosmological Consequences of the Spontaneous Breakdown of Discrete Symmetry. 671Sov. Phys.Ya. B. Zeldovich, I. Yu. Kobzarev, L. B. Okun, Cosmological Conse- quences of the Spontaneous Breakdown of Discrete Symmetry, Zh. Eksp. Teor. Fiz. 67 (1974) 3-11. [Sov. 
Phys. JETP40,1(1974)]. The strong cp problem and axions. R D Peccei, SpringerR. D. Peccei, The strong cp problem and axions, in: Axions, Springer, 2008, pp. 3-17. Calculation of the axion mass based on hightemperature lattice quantum chromodynamics. S Borsanyi, Nature. 539S. Borsanyi, et al., Calculation of the axion mass based on high- temperature lattice quantum chromodynamics, Nature 539 (2016) 69-71. Axion cosmology revisited. O Wantz, E Shellard, Physical Review D. 82123508O. Wantz, E. Shellard, Axion cosmology revisited, Physical Review D 82 (2010) 123508. Topological Susceptibility and QCD Axion Mass: QED and NNLO corrections. M Gorghetto, G Villadoro, JHEP. 0333M. Gorghetto, G. Villadoro, Topological Susceptibility and QCD Axion Mass: QED and NNLO corrections, JHEP 03 (2019) 033. Topology of Cosmic Domains and Strings. T W B Kibble, J. Phys. 9T. W. B. Kibble, Topology of Cosmic Domains and Strings, J. Phys. A9 (1976) 1387-1398. Cosmological Experiments in Superfluid Helium?. W H Zurek, Nature. 317W. H. Zurek, Cosmological Experiments in Superfluid Helium?, Nature 317 (1985) 505-508. Axion field and the quark nugget's formation at the QCD phase transition. X Liang, A Zhitnitsky, Phys. Rev. 9483502X. Liang, A. Zhitnitsky, Axion field and the quark nugget's formation at the QCD phase transition, Phys. Rev. D94 (2016) 083502. Kinks and domain walls: An introduction to classical and quantum solitons. T Vachaspati, Cambridge University PressT. Vachaspati, Kinks and domain walls: An introduction to classical and quantum solitons, Cambridge University Press, 2006. A Vilenkin, E P S Shellard, Cosmic Strings and Other Topological Defects. Cambridge University PressA. Vilenkin, E. P. S. Shellard, Cosmic Strings and Other Topological De- fects, Cambridge University Press, 2000. Domain walls in qcd. M M Forbes, A R Zhitnitsky, Journal of High Energy Physics. 200113M. M. Forbes, A. R. Zhitnitsky, Domain walls in qcd, Journal of High Energy Physics 2001 (2001) 013. 
Axion quark nugget dark matter model: Size distribution and survival pattern. S Ge, K Lawson, A Zhitnitsky, Phys. Rev. 99116017S. Ge, K. Lawson, A. Zhitnitsky, Axion quark nugget dark matter model: Size distribution and survival pattern, Phys. Rev. D99 (2019) 116017. Formation and evolution of cosmic strings. T Vachaspati, A Vilenkin, Physical Review D. 302036T. Vachaspati, A. Vilenkin, Formation and evolution of cosmic strings, Physical Review D 30 (1984) 2036. Calculation of cosmological baryon asymmetry in grand unified gauge models. J A Harvey, E W Kolb, D B Reiss, S Wolfram, Nuclear Physics B. 201J. A. Harvey, E. W. Kolb, D. B. Reiss, S. Wolfram, Calculation of cos- mological baryon asymmetry in grand unified gauge models, Nuclear Physics B 201 (1982) 16-100. Scaling theory of percolation clusters. D Stauffer, Physics Reports. 54D. Stauffer, Scaling theory of percolation clusters, Physics Reports 54 (1979) 1 -74. Percolation, statistical topography, and transport in random media. M B Isichenko, Reviews of modern physics. 64961M. B. Isichenko, Percolation, statistical topography, and transport in ran- dom media, Reviews of modern physics 64 (1992) 961. Large clusters in supercritical percolation. P Grinchuk, Physical Review E. 6616124P. Grinchuk, Large clusters in supercritical percolation, Physical Review E 66 (2002) 016124. Introduction to percolation theory: revised second edition. D Stauffer, A Aharony, CRC pressD. Stauffer, A. Aharony, Introduction to percolation theory: revised sec- ond edition, CRC press, 2014. Cluster size distribution above the percolation threshold. T Lubensky, A Mckane, Journal of Physics A: Mathematical and General. 14157T. Lubensky, A. McKane, Cluster size distribution above the percolation threshold, Journal of Physics A: Mathematical and General 14 (1981) L157. Use of percolation clusters in nucleation theory. K Bauchspiess, D Stauffer, Journal of Aerosol Science. 9K. Bauchspiess, D. 
Stauffer, Use of percolation clusters in nucleation theory, Journal of Aerosol Science 9 (1978) 567 -577. Studies of the motion and decay of axion walls bounded by strings. S Chang, C Hagmann, P Sikivie, Phys. Rev. 5923505S. Chang, C. Hagmann, P. Sikivie, Studies of the motion and decay of axion walls bounded by strings, Phys. Rev. D59 (1998) 023505. Nonbaryonic' dark matter as baryonic color superconductor. A R Zhitnitsky, JCAP. 031010A. R. Zhitnitsky, 'Nonbaryonic' dark matter as baryonic color supercon- ductor, JCAP 0310 (2003) 010. Cosmological CP odd axion field as the coherent Berry's phase of the Universe. S Ge, X Liang, A Zhitnitsky, Phys. Rev. 9663514S. Ge, X. Liang, A. Zhitnitsky, Cosmological CP odd axion field as the coherent Berry's phase of the Universe, Phys. Rev. D96 (2017) 063514. Cosmological axion and a quark nugget dark matter model. S Ge, X Liang, A Zhitnitsky, Phys. Rev. 9743008S. Ge, X. Liang, A. Zhitnitsky, Cosmological axion and a quark nugget dark matter model, Phys. Rev. D97 (2018) 043008. Solar Extreme UV radiation and quark nugget dark matter model. A Zhitnitsky, JCAP. 171050A. Zhitnitsky, Solar Extreme UV radiation and quark nugget dark matter model, JCAP 1710 (2017) 050. The 21 cm absorption line and the axion quark nugget dark matter model. K Lawson, A R Zhitnitsky, Phys. Dark Univ. 100295100295Phys. Dark Univ.K. Lawson, A. R. Zhitnitsky, The 21 cm absorption line and the axion quark nugget dark matter model, Phys. Dark Univ. (2018) 100295. [Phys. Dark Univ.100295,2019(2018)]. Solar corona heating by axion quark nugget dark matter. N Raza, L Van Waerbeke, A Zhitnitsky, Phys. Rev. 98103527N. Raza, L. van Waerbeke, A. Zhitnitsky, Solar corona heating by axion quark nugget dark matter, Phys. Rev. D98 (2018) 103527. New mechanism producing axions in the AQN model and how the CAST can discover them. H Fischer, X Liang, Y Semertzidis, A Zhitnitsky, K Zioutas, Phys. Rev. 9843013H. Fischer, X. Liang, Y. Semertzidis, A. Zhitnitsky, K. 
Zioutas, New mechanism producing axions in the AQN model and how the CAST can discover them, Phys. Rev. D98 (2018) 043013. Fast Radio Bursts and the Axion Quark Nugget Dark Matter Model. L Van Waerbeke, A Zhitnitsky, Phys. Rev. 9943535L. van Waerbeke, A. Zhitnitsky, Fast Radio Bursts and the Axion Quark Nugget Dark Matter Model, Phys. Rev. D99 (2019) 043535. Gravitationally bound axions and how one can discover them. X Liang, A Zhitnitsky, Phys. Rev. 9923015X. Liang, A. Zhitnitsky, Gravitationally bound axions and how one can discover them, Phys. Rev. D99 (2019) 023015. Primordial Lithium Puzzle and the Axion Quark Nugget Dark Matter Model. V V Flambaum, A R Zhitnitsky, Phys. Rev. 9923517V. V. Flambaum, A. R. Zhitnitsky, Primordial Lithium Puzzle and the Axion Quark Nugget Dark Matter Model, Phys. Rev. D99 (2019) 023517. Gravitationally trapped axions on the Earth. K Lawson, X Liang, A Mead, M S R Siddiqui, L Van Waerbeke, A Zhitnitsky, Phys. Rev. 10043531K. Lawson, X. Liang, A. Mead, M. S. R. Siddiqui, L. Van Waerbeke, A. Zhitnitsky, Gravitationally trapped axions on the Earth, Phys. Rev. D100 (2019) 043531. Experimental searches for the axion and axion-like particles. P W Graham, I G Irastorza, S K Lamoreaux, A Lindner, K A Van Bibber, Annual Review of Nuclear and Particle Science. 65P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, K. A. van Bibber, Experimental searches for the axion and axion-like particles, An- nual Review of Nuclear and Particle Science 65 (2015) 485-514. New constraints on primordial black holes abundance from femtolensing of gamma-ray bursts. A Barnacka, J.-F Glicenstein, R Moderski, Physical Review D. 8643001A. Barnacka, J.-F. Glicenstein, R. Moderski, New constraints on pri- mordial black holes abundance from femtolensing of gamma-ray bursts, Physical Review D 86 (2012) 043001. Dark matter triggers of supernovae. P W Graham, S Rajendran, J Varela, Physical Review D. 9263007P. W. Graham, S. Rajendran, J. 
Varela, Dark matter triggers of super- novae, Physical Review D 92 (2015) 063007. Microlensing constraints on primordial black holes with subaru/hsc andromeda observations. H Niikura, M Takada, N Yasuda, R H Lupton, T Sumi, S More, T Kurita, S Sugiyama, A More, M Oguri, Nature Astronomy. 1H. Niikura, M. Takada, N. Yasuda, R. H. Lupton, T. Sumi, S. More, T. Ku- rita, S. Sugiyama, A. More, M. Oguri, et al., Microlensing constraints on primordial black holes with subaru/hsc andromeda observations, Nature Astronomy (2019) 1. Experimental limits on primordial black hole dark matter from the first 2 yr of kepler data. K Griest, A M Cieplak, M J Lehner, The Astrophysical Journal. 786158K. Griest, A. M. Cieplak, M. J. Lehner, Experimental limits on primordial black hole dark matter from the first 2 yr of kepler data, The Astrophysical Journal 786 (2014) 158. Primordial black holes and rprocess nucleosynthesis. G M Fuller, A Kusenko, V Takhistov, Physical review letters. 11961101G. M. Fuller, A. Kusenko, V. Takhistov, Primordial black holes and r- process nucleosynthesis, Physical review letters 119 (2017) 061101. Constraints on primordial black holes as dark matter candidates from capture by neutron stars. F Capela, M Pshirkov, P Tinyakov, Phys. Rev. 87123524F. Capela, M. Pshirkov, P. Tinyakov, Constraints on primordial black holes as dark matter candidates from capture by neutron stars, Phys. Rev. D87 (2013) 123524. Testing newtonian gravity with aaomega: mass-to-light profiles of four globular clusters. R R Lane, L L Kiss, G F Lewis, R A Ibata, A Siebert, T R Bedding, P Székely, Monthly Notices of the Royal Astronomical Society. 400R. R. Lane, L. L. Kiss, G. F. Lewis, R. A. Ibata, A. Siebert, T. R. Bedding, P. Székely, Testing newtonian gravity with aaomega: mass-to-light pro- files of four globular clusters, Monthly Notices of the Royal Astronomical Society 400 (2009) 917-923. 
Supernova 1987a constraints on sub-gev dark sectors, millicharged particles, the qcd axion, and an axionlike particle. J H Chang, R Essig, S D Mcdermott, Journal of High Energy Physics. 201851J. H. Chang, R. Essig, S. D. McDermott, Supernova 1987a constraints on sub-gev dark sectors, millicharged particles, the qcd axion, and an axion- like particle, Journal of High Energy Physics 2018 (2018) 51. A Ringwald, Proceedings, 49th Rencontres de Moriond on Electroweak Interactions and Unified Theories. 49th Rencontres de Moriond on Electroweak Interactions and Unified TheoriesLa Thuile, ItalyAxions and Axion-Like ParticlesA. Ringwald, Axions and Axion-Like Particles, in: Proceedings, 49th Rencontres de Moriond on Electroweak Interactions and Unified Theo- ries: La Thuile, Italy, March 15-22, 2014, pp. 223-230. New experimental approaches in the search for axion-like particles. I G Irastorza, J Redondo, Prog. Part. Nucl. Phys. 102I. G. Irastorza, J. Redondo, New experimental approaches in the search for axion-like particles, Prog. Part. Nucl. Phys. 102 (2018) 89-159. Advanced LIGO Constraints on Neutron Star Mergers and R-Process Sites. B Ct, K Belczynski, C L Fryer, C Ritter, A Paul, B Wehmeyer, B W O&apos;shea, Astrophys. J. 836230B. Ct, K. Belczynski, C. L. Fryer, C. Ritter, A. Paul, B. Wehmeyer, B. W. O'Shea, Advanced LIGO Constraints on Neutron Star Mergers and R- Process Sites, Astrophys. J. 836 (2017) 230. Advanced Virgo: a second-generation interferometric gravitational wave detector. F Acernese, Class. Quant. Grav. 3224001F. Acernese, et al., Advanced Virgo: a second-generation interferometric gravitational wave detector, Class. Quant. Grav. 32 (2015) 024001. Interferometer design of the KAGRA gravitational wave detector. Y Aso, Y Michimura, K Somiya, M Ando, O Miyakawa, T Sekiguchi, D Tatsumi, H Yamamoto, Phys. Rev. 8843007Y. Aso, Y. Michimura, K. Somiya, M. Ando, O. Miyakawa, T. Sekiguchi, D. Tatsumi, H. 
Yamamoto, Interferometer design of the KAGRA gravita- tional wave detector, Phys. Rev. D88 (2013) 043007. Gravitational waves from sub-lunar-mass primordial black-hole binaries: A new probe of extradimensions. K T Inoue, T Tanaka, Phys. Rev. Lett. 9121101K. T. Inoue, T. Tanaka, Gravitational waves from sub-lunar-mass primor- dial black-hole binaries: A new probe of extradimensions, Phys. Rev. Lett. 91 (2003) 021101. Primordial black hole detection through diffractive microlensing. T Naderi, A Mehrabi, S Rahvar, Phys. Rev. 97103507T. Naderi, A. Mehrabi, S. Rahvar, Primordial black hole detection through diffractive microlensing, Phys. Rev. D97 (2018) 103507. Cosmic Strings and Domain Walls. A Vilenkin, Phys. Rept. 121A. Vilenkin, Cosmic Strings and Domain Walls, Phys. Rept. 121 (1985) 263-315. Axionic domain walls and cosmology, in: Liege International Astrophysical Colloquia. E P S Shellard, Liege International Astrophysical Colloquia. 26E. P. S. Shellard, Axionic domain walls and cosmology, in: Liege Inter- national Astrophysical Colloquia, volume 26 of Liege International As- trophysical Colloquia, pp. 173-179. The evolution of networks of domain walls and cosmic strings. B S Ryden, W H Press, D N Spergel, Astrophys. J. 357B. S. Ryden, W. H. Press, D. N. Spergel, The evolution of networks of domain walls and cosmic strings, Astrophys. J 357 (1990) 293-300. Dynamical evolution of domain walls in an expanding universe. W H Press, B S Ryden, D N Spergel, Astrophys. J. 347W. H. Press, B. S. Ryden, D. N. Spergel, Dynamical evolution of domain walls in an expanding universe, Astrophys. J 347 (1989) 590-604.
[]
QUANTITATIVE RESULTS FOR THE FLEMING-VIOT PARTICLE SYSTEM AND QUASI-STATIONARY DISTRIBUTIONS IN DISCRETE SPACE

Bertrand Cloez, Marie-Noémie Thai

arXiv:1312.2444 — doi: 10.1016/j.spa.2015.09.016

Abstract. We show, for a class of discrete Fleming-Viot (or Moran) type particle systems, that the convergence to equilibrium is exponential for a suitable Wasserstein coupling distance. The approach provides an explicit quantitative estimate on the rate of convergence. As a consequence, we show that the conditioned process converges exponentially fast to a unique quasi-stationary distribution. Moreover, by estimating the two-particle correlations, we prove that the Fleming-Viot process converges, uniformly in time, to the conditioned process with an explicit rate of convergence. We illustrate our results on the examples of the complete graph and of N particles jumping on two points.
28 Jul 2014

AMS 2000 Mathematics Subject Classification: 60K35, 60B10, 37A25.

Keywords: Fleming-Viot process, quasi-stationary distributions, coupling, Wasserstein distance, chaos propagation, commutation relation.

[...] is about the quasi-stationary distribution of the process which is killed at some rate; see for instance [10, 24]. Instead of conditioning on non-killing, it is possible to start N copies of the Markov chain and, instead of being killed, let one chain jump to the state of another one chosen at random. The resulting process is a version of the Moran model that we will call Fleming-Viot. While the convergence of the large-population limit of the Moran model to the quasi-stationary distribution has already been shown under some assumptions [13, 17, 30], the present paper is concerned with deriving bounds for the rate of convergence. Our first main result, Theorem 1.1, establishes the exponential ergodicity of the particle system with an explicit rate. This seems to be a novelty.
As a consequence, we prove that the correlations between particles vanish uniformly in time; see Theorem 1.3 and Theorem 2.5. This is also a new result, even if [17] gives a similar bound depending heavily on time. As applications, we also give new proofs of some more classical but important results: a rate of convergence as N tends to infinity (Theorem 1.2), which can be compared to the results of [13, 19, 30]; a quantitative convergence of the conditioned semigroup (Corollary 1.4), comparable to the results of [14, 23]; and a bound uniform in time as N tends to infinity (Corollary 1.5), which seems to be new in discrete space but was already proved for diffusion processes in [27], with an approach based on martingale inequalities and the spectral theory associated with the Schrödinger equation.

Let us now be more precise and introduce our model. Let $(Q_{i,j})_{i,j\in F^*}$ be the transition rate matrix of an irreducible and positive recurrent continuous-time Markov process on a discrete and countable state space $F^*$. Set $F = F^* \cup \{0\}$, where $0 \notin F^*$, and let $p_0 : F^* \to \mathbb{R}_+$ be a non-null function. The generator of the Markov process $(X_t)_{t\geq 0}$, with transition rates $Q$ and death rate $p_0$, applied to a bounded function $f : F \to \mathbb{R}$, is given by
$$Gf(i) = p_0(i)\,(f(0) - f(i)) + \sum_{j\in F^*} Q_{i,j}\,(f(j) - f(i)),$$
for every $i \in F^*$, and $Gf(0) = 0$. If this process does not start from 0, it moves according to the transition rates $Q$ until it jumps to 0 with rate $p_0$; the state 0 is absorbing. Consider the process $(X_t)_{t\geq 0}$ generated by $G$ with initial law $\mu$ and denote by $\mu T_t$ its law at time $t$ conditioned on non-absorption up to time $t$; that is, for every non-negative function $f$ on $F^*$,
$$\mu T_t f = \frac{\mu P_t f}{\mu P_t 1_{\{0\}^c}} = \frac{\sum_{y\in F^*} P_t f(y)\,\mu(y)}{\sum_{y\in F^*} P_t 1_{\{0\}^c}(y)\,\mu(y)},$$
where $(P_t)_{t\geq 0}$ is the semigroup generated by $G$ and we use the convention $f(0) = 0$. For every $x \in F^*$, $k \in F^*$ and non-negative function $f$ on $F^*$, we also set
$$T_t f(x) = \delta_x T_t f \quad \text{and} \quad \mu T_t(k) = \mu T_t 1_{\{k\}}, \qquad \forall t \geq 0.$$
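As an illustration (not part of the paper), the conditioned law $\mu T_t$ can be computed numerically on a small finite $F^*$ by evolving $\mu$ under the sub-Markovian generator obtained from $G$ by deleting the absorbing state, and then renormalizing by the survival probability $\mu P_t 1_{\{0\}^c}$. The following Python sketch uses simple Euler time-stepping; the 3-state rates are hypothetical.

```python
def conditioned_law(Q, p0, mu, t, n_steps=20000):
    """Approximate mu T_t: evolve mu under A (the restriction of G to F*)
    by Euler steps of mu' = mu A, then renormalize by the surviving mass."""
    n = len(Q)
    # A has off-diagonal entries Q[i][j]; the diagonal makes row i sum to -p0[i].
    A = [[Q[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        A[i][i] = -sum(A[i]) - p0[i]
    h = t / n_steps
    v = list(mu)
    for _ in range(n_steps):
        v = [v[i] + h * sum(v[k] * A[k][i] for k in range(n)) for i in range(n)]
    z = sum(v)                 # approximates mu P_t 1_{{0}^c}
    return [x / z for x in v]  # the conditioned (renormalized) law

# hypothetical 3-state rates, purely for illustration
Q = [[0.0, 1.0, 0.5],
     [0.3, 0.0, 1.0],
     [0.8, 0.2, 0.0]]
p0 = [0.2, 0.5, 0.1]
mu_t = conditioned_law(Q, p0, [1.0, 0.0, 0.0], t=2.0)
```

The final division by `z` is exactly the division by $\mu P_t 1_{\{0\}^c}$ in the definition above.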
A quasi-stationary distribution (QSD) for $G$ is a probability measure $\nu_{qs}$ on $F^*$ satisfying $\nu_{qs} T_t = \nu_{qs}$ for every $t \geq 0$. QSD are not well understood, nor easily amenable to simulation. To avoid these difficulties, Burdzy, Holyst, Ingerman, March [6] and Del Moral, Guionnet, Miclo [12, 13] introduced, independently of each other, a Fleming-Viot or Moran type particle system. This model consists of finitely many particles, say $N$, moving in the set $F^*$. Particles are neither created nor destroyed. It is convenient to think of the particles as indistinguishable and to consider the occupation number $\eta$, where, for $k \in F^*$, $\eta(k) = \eta^{(N)}(k)$ represents the number of particles at site $k$. Each particle follows independent dynamics with the same law as $(X_t)_{t\geq 0}$, except when one of them hits the state 0; at that moment, it jumps to the position of another particle chosen uniformly at random. The configuration $(\eta_t)_{t\geq 0}$ is a Markov process with state space $E = E^{(N)}$ defined by
$$E = \Big\{ \eta : F^* \to \mathbb{N} \ \Big|\ \sum_{i\in F^*} \eta(i) = N \Big\}.$$
Applying its generator to a bounded function $f$ gives
$$Lf(\eta) = L^{(N)}f(\eta) = \sum_{i\in F^*} \eta(i) \sum_{j\in F^*} \big(f(T_{i\to j}\eta) - f(\eta)\big)\Big( Q_{i,j} + p_0(i)\,\frac{\eta(j)}{N-1} \Big), \qquad (1)$$
for every $\eta \in E$, where, if $\eta(i) \neq 0$, the configuration $T_{i\to j}\eta$ is defined by
$$T_{i\to j}\eta(i) = \eta(i) - 1, \qquad T_{i\to j}\eta(j) = \eta(j) + 1, \qquad T_{i\to j}\eta(k) = \eta(k) \ \text{ for } k \notin \{i,j\}.$$
For $\eta \in E$, the associated empirical distribution $m(\eta)$ of the particle system is given by
$$m(\eta) = \frac{1}{N} \sum_{k\in F^*} \eta(k)\,\delta_{\{k\}}.$$
For $\varphi : F^* \to \mathbb{R}$ and $k \in F^*$, we also set $m(\eta)(\varphi) = \sum_{j\in F^*} \varphi(j)\, m(\eta)(\{j\})$ and $m(\eta)(k) = m(\eta)(\{k\})$.

The aim of this work is to quantify (if they hold) the following limits:
$$\begin{array}{ccc}
m(\eta_t^{(N)}) & \xrightarrow[t\to+\infty]{(a)} & m(\eta_\infty^{(N)}) \\
(b)\ \big\downarrow & & \big\downarrow\ (c) \\
m(\eta_0)\,T_t & \xrightarrow[t\to+\infty]{(d)} & \nu_{qs}
\end{array}$$
where all limits are in distribution and the limits (b), (c) are taken as $N$ tends to infinity. More precisely, Theorem 1.1 gives a bound for the limit (a), Theorem 1.2 for the limit (b), Corollary 1.5 for the limit (c) and finally Corollary 1.4 for the limit (d).
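To make the dynamics concrete, here is a minimal Gillespie-type simulation of the particle system (a sketch, not code from the paper), in the complete-graph setting studied below: each particle moves at rate $Q_{i,j} = 1/K$ to each other site and, at rate $p$, is killed and instantly jumps onto another particle chosen uniformly at random. All parameters (`N`, `K`, `p`, `t_max`, `seed`) are illustrative choices.

```python
import random

def simulate_fv(N=20, K=3, p=0.5, t_max=5.0, seed=0):
    """Gillespie simulation of the Fleming-Viot particle system for the
    complete-graph walk Q_{i,j} = 1/K with constant killing rate p."""
    rng = random.Random(seed)
    pos = [rng.randrange(K) for _ in range(N)]      # particle positions
    rate_per_particle = (K - 1) / K + p             # total walk rate + killing rate
    total_rate = N * rate_per_particle
    t = 0.0
    while True:
        t += rng.expovariate(total_rate)            # time to the next event
        if t > t_max:
            break
        i = rng.randrange(N)                        # all particles carry the same rate
        if rng.random() < p / rate_per_particle:
            # killing: jump onto another particle chosen uniformly at random
            j = rng.randrange(N - 1)
            if j >= i:
                j += 1
            pos[i] = pos[j]
        else:
            # walk step: move to a uniformly chosen different site
            s = rng.randrange(K - 1)
            if s >= pos[i]:
                s += 1
            pos[i] = s
    return [pos.count(k) for k in range(K)]         # occupation numbers eta(k)

eta = simulate_fv()
```

By construction the total number of particles is conserved, i.e. `sum(eta) == N`, matching the definition of the state space $E$.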
To illustrate our main results, we develop in detail the study of two examples. These examples are very simple when one is interested in the study of $(T_t)_{t\geq 0}$ (QSD, rate of convergence, ...), but they raise important problems (and even some open questions) for the particle system (invariant distribution, rate of convergence, ...). The first example concerns a random walk on the complete graph with sites $\{1, \dots, K\}$ and constant killing rate, namely
$$Q_{i,j} = \frac{1}{K} \quad \forall i,j \in \{1,\dots,K\},\ i \neq j, \qquad p_0(i) = p > 0.$$
The quasi-stationary distribution is trivially the uniform distribution. However, the associated particle system does not behave as independent identically distributed copies of uniformly distributed particles, and its behavior is less trivial. One interesting point of the complete graph approach is that it reduces the difficulties of the Fleming-Viot process to the interaction. Due to its simple geometry, several explicit formulas are obtained, such as the invariant distribution, the correlations and the spectral gap. This seems to be new in the context of Fleming-Viot particle systems. The second example is the case where $F^*$ contains only two elements. The study of $(T_t)_{t\geq 0}$ is classically reduced to the study of a $2\times 2$ matrix. The study of the particle system, for its part, reduces to the study of a birth-death process with quadratic rates. We are not able to find, even in the literature, a closed formula for its spectral gap. However, we give a lower bound not depending on the number of particles. The proofs are based on our main general theorem (a coupling-type argument) and on a generalisation of [26] (an argument based on Hardy-type inequalities). For this example, the only trivial limit to quantify is the limit (d).

Long time behavior. To bound the limit (a), we introduce the parameter $\lambda$ defined by
$$\lambda = \inf_{i,i'\in F^*} \Big( Q_{i,i'} + Q_{i',i} + \sum_{j\neq i,i'} Q_{i,j} \wedge Q_{i',j} \Big).$$
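For a finite rate matrix, $\lambda$ is straightforward to evaluate numerically. The helper below (an illustration, not from the paper) computes the infimum over all ordered pairs of distinct states; for the complete graph with $Q_{i,j} = 1/K$ it recovers $\lambda = 2/K + (K-2)/K = 1$, independently of $K$.

```python
def coupling_rate(Q):
    """lambda = inf over pairs i != i' of
    Q[i][i'] + Q[i'][i] + sum_{j != i, i'} min(Q[i][j], Q[i'][j])."""
    n = len(Q)
    best = float("inf")
    for i in range(n):
        for ip in range(n):
            if i == ip:
                continue
            s = Q[i][ip] + Q[ip][i] + sum(
                min(Q[i][j], Q[ip][j]) for j in range(n) if j not in (i, ip))
            best = min(best, s)
    return best

K = 4
Q_complete = [[0.0 if i == j else 1.0 / K for j in range(K)] for i in range(K)]
lam = coupling_rate(Q_complete)  # equals 1.0 for the complete graph
```

Since $p_0$ is constant in the complete-graph example, $\rho = \lambda$ there, which is consistent with the optimality discussion after Theorem 1.1.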
This parameter controls the ergodicity of a Markov chain with transition rates $Q$ and without killing. Note that $\lambda$ is slightly larger than the ergodic coefficient $\alpha$ defined in [17] by
$$\alpha = \sum_{j\in F^*} \inf_{i\neq j} Q_{i,j}.$$
In particular, if there exist $j \in F^*$ and $c > 0$ such that $Q_{i,j} > c$ for every $i \neq j$, then $\lambda \geq c$. Before stating our results, let us describe the different distances that we use. We endow $E$ with the distance $d_1$ defined, for all $\eta, \eta' \in E$, by
$$d_1(\eta, \eta') = \frac{1}{2} \sum_{j\in F} |\eta(j) - \eta'(j)|,$$
which is the total variation distance between $m(\eta)$ and $m(\eta')$ up to a factor $N$:
$$d_1(\eta, \eta') = N\, d_{TV}\big(m(\eta), m(\eta')\big).$$
Indeed, recall that, for any two probability measures $\mu$ and $\mu'$, the total variation distance is given by
$$d_{TV}(\mu, \mu') = \frac{1}{2} \sup_{\|f\|_\infty \leq 1} \Big| \int f\, d\mu - \int f\, d\mu' \Big| = \inf_{X\sim\mu,\, X'\sim\mu'} \mathbb{P}\big(X \neq X'\big),$$
where the infimum runs over all couples of random variables with marginal laws $\mu$ and $\mu'$. Now, if $\mu$ and $\mu'$ are two probability measures on $E$, the $d_1$-Wasserstein distance between these two laws is defined by
$$W_{d_1}(\mu, \mu') = \inf_{\eta\sim\mu,\, \eta'\sim\mu'} \mathbb{E}\big[ d_1(\eta, \eta') \big],$$
where the infimum again runs over all couples of random variables with marginal laws $\mu$ and $\mu'$. The law of a random variable $X$ is denoted by $\mathcal{L}(X)$ and, throughout the paper, we assume that $\sup(p_0) < \infty$. Our first main result is:

Theorem 1.1 (Wasserstein exponential ergodicity). Let $\rho = \lambda - (\sup(p_0) - \inf(p_0))$. Then for any processes $(\eta_t)_{t\geq 0}$ and $(\eta'_t)_{t\geq 0}$ generated by (1), and for any $t \geq 0$, we have
$$W_{d_1}\big(\mathcal{L}(\eta_t), \mathcal{L}(\eta'_t)\big) \leq e^{-\rho t}\, W_{d_1}\big(\mathcal{L}(\eta_0), \mathcal{L}(\eta'_0)\big).$$
In particular, if $\rho > 0$ then there exists a unique invariant distribution $\nu_N$ satisfying, for every $t \geq 0$,
$$W_{d_1}\big(\mathcal{L}(\eta_t), \nu_N\big) \leq e^{-\rho t}\, W_{d_1}\big(\mathcal{L}(\eta_0), \nu_N\big).$$
To our knowledge, this is the first theorem establishing an exponential convergence for the Fleming-Viot particle system with an explicit rate.
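The identity $d_1(\eta, \eta') = N\, d_{TV}(m(\eta), m(\eta'))$ is easy to check directly. The snippet below (with illustrative configurations, not from the paper) verifies it for two occupation vectors of $N = 10$ particles on three sites.

```python
def d1(eta, etap):
    # d_1(eta, eta') = (1/2) * sum_j |eta(j) - eta'(j)|
    return 0.5 * sum(abs(a - b) for a, b in zip(eta, etap))

def d_tv(mu, nu):
    # total variation distance between two probability vectors
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

N = 10
eta = [4, 3, 3]                  # occupation numbers, sum = N
etap = [7, 2, 1]
m_eta = [x / N for x in eta]     # empirical distributions m(eta), m(eta')
m_etap = [x / N for x in etap]
lhs = d1(eta, etap)              # d_1 distance between configurations
rhs = N * d_tv(m_eta, m_etap)    # N times the TV distance of the empiricals
```

Here both sides equal $3$: the configurations differ by three particle moves.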
Note nevertheless that in [31] it is shown that the particle system is exponentially ergodic when the underlying dynamics follows a certain stochastic differential equation. That proof is based on Foster-Lyapunov techniques [25, 22] and, contrary to ours, the dependence on $N$ of the rates and bounds is unknown, so it gives less information. When the death rate $p_0$ is constant, our bound is optimal in terms of contraction; see for instance Section 3, where the example of a random walk on the complete graph is developed. When the death rate is not constant, this bound is not optimal: for instance, if the state space is finite, we can have $\rho < 0$ even though the process converges exponentially fast, since it is then an irreducible Markov process on a finite state space. Nevertheless, finding a general optimal bound is a difficult problem; see for instance Section 4, where we study the case where $F^*$ contains only two elements. Even though in this case the study seems easy, we are not able to give a closed formula for the spectral gap (although we give a lower bound in the general case). Also, note that the previous inequality is a contraction; this gives some information for small times and is more than a convergence result. Finally, the previous convergence is stronger than convergence in total variation distance, as can be checked with Corollary 2.3.

Propagation of chaos. In general, two tagged particles in a large population of interacting ones behave in an almost independent way under some assumptions; see [29]. In our case, two particles are almost independent when $N$ is large, and this gives the convergence of $(m(\eta_t))_{t\geq 0}$ to $(T_t)_{t\geq 0}$. To prove this result, we will assume:

Assumption (A) (boundedness assumption).
$$Q_1 = \sup_{i\in F^*} \sum_{j\in F^*,\, j\neq i} Q_{i,j} < +\infty \qquad \text{and} \qquad p = \sup_{i\in F^*} p_0(i) < +\infty.$$
Under this assumption, the particle system converges to the conditioned semigroup.
Moreover, when the state space is finite, this convergence is quantified in terms of total variation distance. To express this convergence, we set E η [f (X)] = E[f (X) | η 0 = η], for every bounded function f , every η ∈ E and every random variable X. Theorem 1.2 (Convergence to the conditioned process). Under Assumption (A) and for t ≥ 0, there exist B, C > 0 such that, for all η ∈ E and any probability measure µ, we have sup ϕ ∞≤1 E η [|m(η t )(ϕ) − µT t ϕ|] ≤ Ce Bt 1 √ N + d TV (m(η), µ) . All constants are explicit and detailed in the proof (in particular, they do not depend on N or t). The proof is based on an estimation of correlations and on a Gronwall-type argument. More precisely, our correlation estimate is given by: Theorem 1.3 (Covariance estimates). Let ρ be defined in Theorem 1.1. Under Assumption (A), we have for all k, l ∈ F * , η ∈ E and t ≥ 0 E η η t (k) N η t (l) N − E η η t (k) N E η η t (l) N ≤ 2(Q 1 + p) N − 1 1 − e −2ρt ρ , with the convention (1 − e −2ρt )ρ −1 = 2t when ρ = 0. This theorem gives a decay of the variances and the covariances of the marginals of η. Strictly speaking, it does not give any information on the correlation, but this slight abuse of language is used to be consistent with previous works [2,17]. The previous theorem is a consequence of Theorem 2.5, which gives bounds on the correlations of more general functionals of η. The proof of this result comes from a commutation relation between the carré du champ operator and the semigroup of η. This commutation-type relation gives a decay of the variance and thus, by the Cauchy-Schwarz inequality, of the correlations. The previous bound is uniform in time when ρ > 0 and it generalizes several previous works [2,17]. Indeed, as our proof differs completely from [2,17] (whose proofs are based on a comparison with the voter model), we are able to handle more complex functionals of η and our bounds are uniform in time.
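The convergence of Theorem 1.2 can be watched numerically. Below is a throwaway Gillespie-type simulation of the particle system of generator (1) (all helper names, sizes and rates are arbitrary choices of ours, a sketch rather than anything canonical), run on the complete-graph dynamics of Section 3, for which the conditioned law converges to the uniform distribution:

```python
import random

def moves(eta, Q, p0, N):
    """All jumps (i, j, rate) from configuration eta, cf. generator (1):
    a particle on i moves to j at rate eta(i)*(Q[i][j] + p0[i]*eta(j)/(N-1))."""
    out = []
    for i, ni in eta.items():
        if ni == 0:
            continue
        for j in eta:
            if j != i:
                r = ni * (Q[i][j] + p0[i] * eta[j] / (N - 1))
                if r > 0:
                    out.append((i, j, r))
    return out

def simulate(eta0, Q, p0, t_end, rng):
    """One path of the Fleming-Viot particle system up to time t_end."""
    eta, t, N = dict(eta0), 0.0, sum(eta0.values())
    while True:
        ms = moves(eta, Q, p0, N)
        total = sum(r for _, _, r in ms)
        t += rng.expovariate(total)
        if t > t_end:
            return eta
        u, acc = rng.random() * total, 0.0
        for i, j, r in ms:
            acc += r
            if u <= acc:
                eta[i] -= 1
                eta[j] += 1
                break

K, N, p, runs = 3, 30, 0.5, 200
Q = {i: {j: (0.0 if i == j else 1.0 / K) for j in range(1, K + 1)} for i in range(1, K + 1)}
p0 = {i: p for i in range(1, K + 1)}
rng = random.Random(0)
avg = {i: 0.0 for i in range(1, K + 1)}
for _ in range(runs):
    eta = simulate({1: N, 2: 0, 3: 0}, Q, p0, t_end=6.0, rng=rng)
    for i in avg:
        avg[i] += eta[i] / (N * runs)
print(avg)  # each empirical proportion should be close to 1/K
```

Starting all N = 30 particles on one site, the empirical proportions at time t = 6, averaged over 200 runs, come out close to 1/K, illustrating both the e −ρt relaxation and the O(1/ √ N ) bias of Theorem 1.2.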
In particular, taking the limit t → +∞ when ρ > 0, we obtain the decay of the correlations under the invariant distribution of (η t ) t≥0 . This seems to be new (in discrete or continuous state space). However, these two theorems cover a more general setting. This theorem makes it possible to extend the properties of the particle system to the conditioned process; see the next subsection. The proof of Theorem 1.2 differs from all these theorems; it seems simpler and is only based on a Gronwall argument and the correlation estimates. Finally, we can improve the previous bound in the special case of the complete graph random walk but, in general, we do not know how to improve it even when card(F * ) = 2; see Sections 3 and 4. As all constants are explicit, the previous theorem allows us to consider parameters depending on N and to understand how the particle system evolves when Q varies with the size of the population; see for instance Remark 3.11. Two main consequences. We summarize two important consequences of our main theorems. Firstly, as ρ, defined in Theorem 1.1, does not depend on N , we can take the limit as N → +∞ in Theorem 1.1. This gives an "easy-to-verify" criterion to prove the existence and uniqueness of a quasi-stationary distribution and the exponential convergence of the conditioned process to it. Corollary 1.4 (Convergence to the QSD). Suppose that ρ is positive and that Assumption (A) holds. For any probability measures µ, ν, we have ∀t ≥ 0, d TV (µT t , νT t ) ≤ e −ρt d TV (µ, ν) .(2) In particular, there exists a unique quasi-stationary distribution ν qs for (T t ) t≥0 and for any probability measure µ, we have ∀t ≥ 0, d TV (µT t , ν qs ) ≤ e −ρt . This corollary is closely related to several previous works [11]. Secondly, our main theorems give a rate of convergence of the empirical measure of the particle system to the conditioned distribution, uniformly in time. Corollary 1.5. Suppose that ρ is positive and that Assumption (A) holds. Then there exist K 0 > 0 and γ > 0 such that, for all η ∈ E, sup t≥0 sup ϕ ∞≤1 E η [|m(η t )(ϕ) − m(η)T t ϕ|] ≤ K 0 /N γ . All constants are explicit and given in (10).
In particular, if η is distributed according to the invariant measure ν N , then under the assumptions of the previous corollary, there exist K 0 > 0 and γ > 0 such that E |m(η)(ϕ) − ν qs (ϕ)| ≤ K 0 /N γ , for every ϕ satisfying ϕ ∞ ≤ 1. Namely, under its invariant distribution, the particle system converges to the QSD. This limiting result was proved, without rate of convergence, in [2, Theorem 2] when F is finite; here, by contrast, a rate of convergence is given. To our knowledge, it is the first bound of convergence for this limit. Whenever F * is finite, the conclusion of the previous corollary holds with a less explicit γ even when ρ ≤ 0; see Remark 2.8. Note also that the closely related article [27] gives a similar result when the underlying dynamics is diffusive instead of discrete. Its approach is completely different, based on martingale properties and on spectral properties associated with the Schrödinger equation. The remainder of the paper is organized as follows. Section 2 gives the proofs of our main theorems; Subsection 2.1 contains the proof of Theorem 1.1, Subsection 2.2 the proof of Theorem 1.2, and the last subsection the proofs of the corollaries. We conclude the paper with Sections 3 and 4, where we give the two examples mentioned above. The first one illustrates the sharpness of our results. The study of the second one reduces to a very simple process for which few properties are known; it illustrates the need for general theorems such as those introduced above. PROOF OF THE MAIN THEOREMS In this section, we prove Theorems 1.1 and 1.2 and the corollaries stated before. Let us recall that the generator of the Fleming-Viot process with N particles, applied to a bounded function f : E → R and η ∈ E, is given by Lf (η) = i∈F * η(i) j∈F * Q i,j + p 0 (i) η(j) N − 1 (f (T i→j η) − f (η)) .(3) Now let us give two remarks about the dynamics of the Fleming-Viot particle system. Remark 2.1 (Translation of the death rate).
Let (P t ) t≥0 and (P ′ t ) t≥0 be two semi-groups with the same transition rate Q but different death rates p 0 , p ′ 0 , and let (T t ) t≥0 , (T ′ t ) t≥0 be their corresponding conditioned semi-groups. Using the fact that P t 1 {0} c = E e − t 0 p 0 (Xs)ds and P ′ t 1 {0} c = E e − t 0 p ′ 0 (X ′ s )ds , for every t ≥ 0, it is easy to see that (T t ) t≥0 = (T ′ t ) t≥0 as soon as p 0 − p ′ 0 is constant. This invariance by translation is not preserved by the Fleming-Viot processes. The larger p 0 is, the more jumps occur and the larger the variance becomes. This is why our criterion for the existence of a QSD does not depend on inf(p 0 ) and why our propagation of chaos result depends on it. Remark 2.2 (Non-explosion). The particle dynamics guarantees the existence of the process (η t ) t≥0 under the condition that there is no explosion. In other words, our construction is global as long as the particles only jump finitely many times in any finite time interval. We naturally assume that the Markov process with transition rate Q is not explosive, but this is not enough for the existence of the particle system. Indeed, an example of an explosive Fleming-Viot particle system can be found in [5]. However, the assumption that p 0 is bounded is trivially sufficient to guarantee this non-explosion. 2.1. Proof of Theorem 1.1. Proof of Theorem 1.1. We build a coupling between two Fleming-Viot particle systems, (η t ) t≥0 and (η ′ t ) t≥0 , generated by (1), starting respectively from some random configurations η 0 , η ′ 0 in E. We will prove that they become closer and closer. Let us begin with a rough description of our coupling before making it precise. For every t ≥ 0, we set ξ(t) = ξ = (ξ 1 , . . . , ξ N ) ∈ (F * ) N and ξ ′ (t) = ξ ′ = (ξ ′ 1 , . . . , ξ ′ N ) the respective positions of the N particles of the two configurations η t and η ′ t . Then ∀i ∈ F * , η t (i) = card{1 ≤ k ≤ N | ξ k = i} and η ′ t (i) = card{1 ≤ k ≤ N | ξ ′ k = i}.
The distance d 1 (η, η ′ ) represents the number of particles which are not on the same site; namely, changing the indexation, d 1 (η, η ′ ) = card{1 ≤ k ≤ N | ξ k ≠ ξ ′ k }. We then couple our two processes in order to maximize the chance that two particles coalesce. At first, we ignore the interaction; we then have two systems of N particles evolving independently of each other. If two particles are on the same site, ξ k = ξ ′ k , then the Markov property entails that we can make them jump together. When two particles are not on the same site, we can choose our jump times in such a way that, with positive probability, one goes to the site of the other. These steps are represented by the jump rates A Q below. Nevertheless, the situation is trickier when we consider the interaction. Indeed, let us now disregard the underlying dynamics and only consider the interaction. If two particles are on the same site, ξ k = ξ ′ k , then they have to be killed and to jump over the other particles. If the empirical measures are the same, η = η ′ , then we can couple the two particles in such a way that they die at the same time (because they are on the same site) and jump to the same site (because the empirical measures are equal). If η ≠ η ′ , then we cannot do this, but we can maximize the probability of coalescence. Indeed, there are N − d 1 (η, η ′ ) particles which are on the same site, and hence a probability (N − d 1 (η, η ′ ))/(N − 1) of coalescing. If two particles are not on the same site, ξ k ≠ ξ ′ k , we can try to kill one before the other and move it to the same site. This is also not always possible. Before expressing the jump rates precisely, let us give some explanations. We call the particles represented by {ξ k } the first configuration and the particles represented by {ξ ′ k } the second configuration. We speak of a couple of particles when two particles come from different configurations.
There are η(i) = card{k | ξ k = i} particles on site i, and we can write η(i) = (η(i) − η ′ (i)) + + η(i) ∧ η ′ (i), where (·) + = max(0, ·). The part η(i) ∧ η ′ (i) represents the number of couples of particles on i, and (η(i) − η ′ (i)) + the remaining particles coming from the first configuration. Note that i∈F * (η(i) − η ′ (i)) + = d 1 (η, η ′ ) = N − i∈F * η(i) ∧ η ′ (i). Now, we describe our coupling in detail. It is Markovian, and we describe it by expressing its generator and its jump rates; for every bounded function f and η, η ′ ∈ E, its generator L is given by Lf (η, η ′ ) = i,i ′ ,j,j ′ ∈F * A(i, i ′ , j, j ′ )(f (T i→j η, T i ′ →j ′ η ′ ) − f (η, η ′ )), where we decompose the jump rate A into two parts, A = A Q + A p . The jump rate A Q , which depends only on the transition rate Q, corresponds to the jumps related to the underlying dynamics; namely, it is the dynamics when a particle does not die. A Markov process having only A Q as jump rate corresponds to a coupling of two systems of N particles evolving independently of each other. The jump rate A p corresponds to the redistribution dynamics and depends only on p 0 ; it does not depend on the underlying dynamics but only on the interaction. The construction of A Q is thus more classical, while the construction of A p is new and specific to this interaction. In what follows, we give the expressions of A p and A Q ; the points i, i ′ , j, j ′ are always pairwise distinct. • There are η(i) ∧ η ′ (i) couples of particles on site i ∈ F * . -For each couple, both particles can jump to the same site j ∈ F * , at the same time and through the underlying dynamics. This gives the following jump rate: A Q (i, i, j, j) = η(i) ∧ η ′ (i) Q i,j . -Both of them can die at the same time. With probability η(j)∧η ′ (j) N −1 , they can jump to the same site j; this gives A p (i, i, j, j) = p 0 (i) η(i) ∧ η ′ (i) η(j) ∧ η ′ (j) N − 1 .
With probability η(i)∧η ′ (i)−1 N −1 , both particles jump back to the site they came from, so this changes nothing. With probability 1 − k∈F * η(k) ∧ η ′ (k) − 1 N − 1 (η(j) − η ′ (j)) + k∈F * (η(k) − η ′ (k)) + (η ′ (j ′ ) − η(j ′ )) + k∈F * (η ′ (k) − η(k)) + ,(4) they can jump to two different sites j, j ′ . Indeed, with probability 1 − k∈F * η(k)∧η ′ (k)−1 N −1 , they can jump to different sites and, conditionally on this event, with probability (η(j)−η ′ (j)) + k∈F * (η(k)−η ′ (k)) + , the first particle jumps to site j and, with probability (η ′ (j ′ )−η(j ′ )) + k∈F * (η ′ (k)−η(k)) + , the second one jumps to site j ′ . Probability (4) is equal to (η(j) − η ′ (j)) + · (η ′ (j ′ ) − η(j ′ )) + (N − 1)d 1 (η, η ′ ) . In short, this gives the following jump rates: A p (i, i, j, j ′ ) = p 0 (i) η(i) ∧ η ′ (i) (η(j) − η ′ (j)) + · (η ′ (j ′ ) − η(j ′ )) + (N − 1)d 1 (η, η ′ ) . • For every site i ∈ F * there are (η(i) − η ′ (i)) + particles from the first configuration which are not in a couple. For each of these particles, we choose, uniformly at random, a particle of the second configuration (which is not coupled with another particle as in the first point). This particle, chosen at random, is on site i ′ ∈ F * with probability (η ′ (i ′ ) − η(i ′ )) + k (η ′ (k) − η(k)) + = (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) . -For one of these new couples of particles coming from sites i ≠ i ′ , both particles can jump at the same time to the same site j (different from i, i ′ ), through the underlying dynamics; this gives A Q (i, i ′ , j, j) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · Q i,j ∧ Q i ′ ,j .
Nevertheless, these two particles do not have the same jump rates (because they do not come from the same site), so it is possible that one jumps to another site while the other one does not jump (still through the underlying dynamics); this gives A Q (i, i ′ , j, i ′ ) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · (Q i,j − Q i ′ ,j ) + , and A Q (i, i ′ , i, j ′ ) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · (Q i ′ ,j ′ − Q i,j ′ ) + . Also, one of them can jump to the site of the other one: A Q (i, i ′ , i ′ , i ′ ) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) Q i,i ′ , and A Q (i, i ′ , i, i) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) Q i ′ ,i . -We now focus our attention on the redistribution dynamics. We would like both particles of a couple to die at the same time and jump to the same site j (where a couple of particles exists; that is, with probability η(j)∧η ′ (j) N −1 ). This gives: A p (i, i ′ , j, j) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · p 0 (i) ∧ p 0 (i ′ ) · η(j) ∧ η ′ (j) N − 1 . But, even if they die at the same time, they can jump to different sites, with rate A p (i, i ′ , j, j ′ ) = p 0 (i) ∧ p 0 (i ′ ) (η(i) − η ′ (i)) + (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · (η(j) − η ′ (j)) + (η ′ (j ′ ) − η(j ′ )) + (N − 1)d 1 (η, η ′ ) . However, it is not always possible to kill them at the same time. If they do not die together, the dying particle jumps uniformly to a particle of its own configuration; this gives A p (i, i ′ , j, i ′ ) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · (p 0 (i) − p 0 (i ′ )) + · η(j) N − 1 , and A p (i, i ′ , i, j ′ ) = (η(i) − η ′ (i)) + · (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) · (p 0 (i ′ ) − p 0 (i)) + · η(j ′ ) N − 1 .
We set, for every measurable function f , L Q f (η, η ′ ) = i,i ′ ,j,j ′ ∈F * A Q (i, i ′ , j, j ′ )(f (T i→j η, T i ′ →j ′ η ′ ) − f (η, η ′ )), and L p f (η, η ′ ) = i,i ′ ,j,j ′ ∈F * A p (i, i ′ , j, j ′ )(f (T i→j η, T i ′ →j ′ η ′ ) − f (η, η ′ )). Our coupling is now completely defined. It is somewhat lengthy, but not difficult, to verify that if a measurable function f on E × E does not depend on its second (resp. first) variable, that is, with a slight abuse of notation, ∀η, η ′ ∈ E, f (η, η ′ ) = f (η) (resp. f (η, η ′ ) = f (η ′ )), then Lf (η, η ′ ) = Lf (η) (resp. Lf (η, η ′ ) = Lf (η ′ )). This property ensures that the couple (η t , η ′ t ) t≥0 generated by L is indeed a coupling of processes generated by L (that is, of Fleming-Viot processes). Now, let us prove that the distance between η t and η ′ t decreases exponentially. We have L p d 1 (η, η ′ ) ≤ i∈F * p 0 (i) η(i) ∧ η ′ (i) d 1 (η, η ′ ) N − 1 − i,i ′ ∈F * p 0 (i) ∧ p 0 (i ′ ) (η(i) − η ′ (i)) + (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) j∈F * η(j) ∧ η ′ (j) N − 1 ≤ (sup(p 0 ) − inf(p 0 )) d 1 (η, η ′ ) N − 1 N − d 1 (η, η ′ ) ≤ (sup(p 0 ) − inf(p 0 ))d 1 (η, η ′ ). Now, L Q d 1 (η, η ′ ) ≤ − i,i ′ ∈F * ( Q i,i ′ + Q i ′ ,i + j≠i,i ′ Q i,j ∧ Q i ′ ,j ) (η(i) − η ′ (i)) + (η ′ (i ′ ) − η(i ′ )) + d 1 (η, η ′ ) ≤ −λd 1 (η, η ′ ). We deduce that Ld 1 (η, η ′ ) ≤ −ρd 1 (η, η ′ ). Now let (P t ) t≥0 be the semi-group associated with the generator L. Using the equality ∂ t P t f = P t Lf and Gronwall's lemma, we have, for every t ≥ 0, P t d 1 ≤ e −ρt d 1 ; namely, E[d 1 (η t , η ′ t )] ≤ e −ρt E[d 1 (η 0 , η ′ 0 )]. Taking the infimum over all couples (η 0 , η ′ 0 ), the claim follows. The existence and uniqueness of an invariant distribution come from classical arguments; see for instance [8, Theorem 5.23]. As it is easy to see that the distance W d 1 is larger than the total variation distance, we have the following consequence: Corollary 2.3 (Coalescent time estimate).
For all t ≥ 0, we have d TV (L(η t ), L(η ′ t )) ≤ e −ρt W d 1 (L(η 0 ), L(η ′ 0 )). In particular, if ρ > 0, the invariant distribution ν N satisfies d TV (L(η t ), ν N ) ≤ e −ρt W d 1 (L(η 0 ), ν N ). The proof is simple and given for the sake of completeness. Proof. Using Theorem 1.1, we find d TV (L(η t ), L(η ′ t )) = inf ηt∼L(ηt) η ′ t ∼L(η ′ t ) E 1 ηt ≠η ′ t ≤ inf ηt∼L(ηt) η ′ t ∼L(η ′ t ) E d 1 (η t , η ′ t ) = W d 1 (L(η t ), L(η ′ t )) ≤ e −ρt W d 1 (L(η 0 ), L(η ′ 0 )). Remark 2.4 (Generalization). As we will see at the end of the paper, in the case where F * contains only two elements, the coupling that we use is rather good but our estimate of the distance is (in general) too rough. There are several natural ways to modify the bound/criterion that we found. The first is to use another, more appropriate, distance. This technique is useful in other (Markovian) contexts [7,9,15]. Another way is to find a contraction after a certain time: these are the Foster-Lyapunov-type techniques [4,22,25]. They give more general criteria but are useless for small times, and the formulas obtained are less explicit. All of these techniques give different criteria that are not necessarily better. Finally, note that, throughout the paper, we can replace ρ by ρ ′ = inf i,i ′ ∈F * ( p 0 (i) ∧ p 0 (i ′ ) + Q i,i ′ + Q i ′ ,i + j≠i,i ′ Q i,j ∧ Q i ′ ,j ) − sup(p 0 ), and all conclusions hold. Indeed, one then has to bound Ld 1 directly instead of bounding L Q d 1 and L p d 1 separately. 2.2. Proofs of Theorems 1.2 and 1.3. The proof of Theorem 1.2 is done in two steps. First, we estimate the correlations between the numbers of particles over the sites, and then we estimate the distance in total variation via the Kolmogorov equation. Let us introduce some notation. For all bounded functions f, g, every η ∈ E and every random variable X, we set Cov η [f (X), g(X)] = E η [f (X)g(X)] − E η [f (X)]E η [g(X)], and Var η [f (X)] = Cov η [f (X), f (X)].
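The proofs of this subsection revolve around the carré du champ operator Γf = L(f 2 ) − 2f Lf , see (5) below. As a sanity check, this algebraic identity against the explicit quadratic form can be verified by brute force on a small configuration space (the helper names and the parameters below are illustrative choices of ours, not part of the paper):

```python
def configs(N, K):
    """All occupation vectors (eta(1), ..., eta(K)) with eta(1)+...+eta(K) = N."""
    if K == 1:
        return [(N,)]
    return [(n,) + rest for n in range(N + 1) for rest in configs(N - n, K - 1)]

def jumps(eta, Q, p0, N):
    """Moves (target, rate) of the Fleming-Viot generator: a particle on i
    jumps to j at rate eta(i) * (Q[i][j] + p0[i] * eta(j) / (N - 1))."""
    out = []
    for i in range(len(eta)):
        for j in range(len(eta)):
            if i != j and eta[i] > 0:
                r = eta[i] * (Q[i][j] + p0[i] * eta[j] / (N - 1))
                if r > 0:
                    tgt = list(eta)
                    tgt[i] -= 1
                    tgt[j] += 1
                    out.append((tuple(tgt), r))
    return out

def L(f, eta, Q, p0, N):
    return sum(r * (f(t) - f(eta)) for t, r in jumps(eta, Q, p0, N))

def Gamma(f, eta, Q, p0, N):
    """Explicit quadratic form: sum of rate * (f(T_{i->j} eta) - f(eta))^2."""
    return sum(r * (f(t) - f(eta)) ** 2 for t, r in jumps(eta, Q, p0, N))

N, K, p = 4, 3, 0.5
Q = [[0.0 if i == j else 1.0 / K for j in range(K)] for i in range(K)]
p0 = [p] * K
f = lambda eta: eta[0] ** 2  # an arbitrary test function
for eta in configs(N, K):
    lhs = L(lambda e: f(e) ** 2, eta, Q, p0, N) - 2 * f(eta) * L(f, eta, Q, p0, N)
    assert abs(lhs - Gamma(f, eta, Q, p0, N)) < 1e-9
print("Gamma f = L(f^2) - 2 f L f on all", len(configs(N, K)), "configurations")
```

The identity holds term by term: for each jump of rate r, r(f(Tη)² − f(η)²) − 2f(η)r(f(Tη) − f(η)) = r(f(Tη) − f(η))², which is exactly what the loop checks numerically.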
Let (S t ) t≥0 be the semigroup of (η t ) t≥0 defined by S t f (η) = E η [f (η t )], for every t ≥ 0, η ∈ E and bounded function f . If µ is a probability measure on E and t ≥ 0, then µS t is the measure defined by µS t f = E S t f (y)µ(dy). It represents the law of η t when η 0 is distributed according to µ. We also introduce the carré du champ operator Γ defined, for any bounded function f and η ∈ E, by Γf (η) = L(f 2 )(η) − 2f (η)Lf (η) (5) = i,j∈F * η(i) Q i,j + p 0 (i) η(j) N − 1 (f (T i→j η) − f (η)) 2 . We present now an improvement of Theorem 1.3. Theorem 2.5 (Correlations for Lipschitz functional) . Let g, h be two 1-Lipschitz mappings on (E, d 1 ); namely |g(η) − g(η ′ )| ≤ d 1 (η, η ′ ) and |h(η) − h(η ′ )| ≤ d 1 (η, η ′ ), for every η, η ′ ∈ E. Under Assumption (A) we have for all t ≥ 0 and η ∈ E, |Cov η (g(η t ), h(η t ))| ≤ 1 − e −2ρt 2ρ N Q 1 + p N 2 N − 1 , with the convention (1 − e −2ρt )ρ −1 = 2t when ρ = 0. In particular, if ρ > 0 then the previous bound is uniform. Proof. For any function g on E and t ≥ 0, we have Var η (g(η t )) = S t (g 2 )(η) − (S t g) 2 (η) = t 0 S s ΓS t−s g(η)ds. Indeed, setting, for any s ∈ [0, t] and η ∈ E, Ψ η (s) = S s (S t−s g) 2 (η) and ψ(s) = S t−s g, we get ∀s ≥ 0, Ψ ′ η (s) = S s Lψ 2 − 2ψLψ (η) = S s Γψ(s)(η), and so, Var η (g(η t )) = Ψ η (t) − Ψ η (0) = t 0 S s ΓS t−s g(η)ds. Now, if g is a 1-Lipschitz mapping with respect to d 1 then | S t−s g(T i→j η) − S t−s g(η) | ≤ E |g(η ′ t−s ) − g(η t−s )| ≤ E d 1 (η t−s , η ′ t−s ) , where η t−s , η ′ t−s evolve as Fleming-Viot particle systems with initial conditions η and T i→j η. Thus, using Theorem 1.1, we obtain | S t−s g(T i→j η) − S t−s g(η) | ≤ W d 1 (L(η t−s ), L(η ′ t−s )) (6) ≤ e −ρ(t−s) d 1 (T i→j η, η) ≤ e −ρ(t−s) 1 i =j . Hence, ΓS t−s g ∞ = sup η∈E | ΓS t−s g(η) |≤ e −2ρ(t−s) N Q 1 + p N 2 N − 1 . 
Finally, the Cauchy-Schwarz inequality and the first part of the proof give |Cov η (g(η t ), h(η t ))| ≤ Var η (g(η t )) 1/2 Var η (h(η t )) 1/2 ≤ 1 − e −2ρt 2ρ N Q 1 + p N 2 N − 1 . Proof of Theorem 1.3. Fix l ∈ F * and set ϕ l : η → η(l). The function ϕ l /2 is a 1-Lipschitz mapping with respect to d 1 , so we apply the previous theorem . Remark 2.6 (Generalization). Assume that there exist C > 0 and λ > 0 such that for any processes (η t ) t>0 and (η ′ t ) t>0 generated by (1), and for any t > 0, we have W d 1 (L(η t ), L(η ′ t )) ≤ Ce −λt W d 1 (L(η 0 ), L(η ′ 0 )),(7) then under the previous assumptions we have, for all t ≥ 0, Cov η (η t (k)/N, η t (l)/N ) ≤ 2C N 2 1 − e −2λt λ N Q 1 + p N 2 N − 1 . A bound like (7) is proved when the state space F * contains only two points. Proof of Theorem 1.2. The proof is based on a bias-variance type decomposition. The variance is bounded through Theorem 1.3 and the bias through Gronwall-type argument. More precisely, for t ≥ 0, we have sup ϕ ∞ ≤1 E η [|m(η t )(ϕ) − µT t ϕ|] ≤ sup ϕ ∞ ≤1 E η [|m(η t )(ϕ) − m(η t )(ϕ)|] + 2d TV (m(η t ), µT t ),(8) where m(η t ) is the empirical mean measure; namely m(η t )(k) = E[m(η t )(k)], for every k ∈ F * . Let ϕ be a function such that ϕ ∞ ≤ 1. Cauchy-Schwarz inequality gives E η [|m(η t )(ϕ) − m(η t )(ϕ)|] ≤ 2N −1 Var(g ϕ (η t )) 1/2 , where g ϕ : η → 1 2 k∈F * η(k)ϕ(k) = N 2 m(η)(ϕ) is a 1-Lipschitz function. So by Theorem 2.5 we have sup ϕ ∞≤1 E η [|m(η t )(ϕ) − m(η t )(ϕ)|] ≤ 2ρ −1 (1 − e −2ρt )(Q 1 + p)(N − 1) −1 . Now, to study the bias term in (8), let us introduce the following notations u k (t) = E η [m(η t )(k)] and v k (t) = µT t (k). It is well known that (µT t ) t≥0 is the unique measure solution to the (non-linear) Kolmogorov forward type equations: µT 0 = µ, and ∀t ≥ 0, ∂ t µT t (j) = i∈F * (Q i,j µT t (i) + p 0 (i) µT t (i) µT t (j)) .(9) Thus ∂ t v k (t) = i∈F * Q i,k v i (t) + i∈F * p 0 (i)v i (t)v k (t). 
Also, u k (t) = E η [m(η t )(k)] = S t f (η), where f : η → m(η)(k) and (S t ) t≥0 is the semi-group of (η t ) t≥0 , thus, using (1), the equality ∂ t S t f = LS t f and the convention that p 0 (i) + j∈F * Q i,j = 0 for every i ∈ F * , we find ∂ t u k (t) = i∈F * Q i,k u i (t) + i∈F * p 0 (i)u i (t)u k (t) − p 0 (k) N − 1 u k (t) + R k (t), where R k (t) = i∈F * p 0 (i) N N − 1 E η (m(η t )(i)m(η t )(k)) − E η (m(η t )(i))E η (m(η t )(k)) = E η i∈F * p 0 (i)m(η t )(i) m(η t )(k) − E η i∈F * p 0 (i)m(η t )(i) E η (m(η t )(k)) + (N − 1) −1 E η i∈F * p 0 (i)m(η t )(i) m(η t )(k) . For t ≥ 0, let us define ǫ(t) = k∈F * |u k (t) − v k (t)| = 2d TV (m(η t ), µT t ). Using triangular inequality, Fubini-Tonelli Theorem and Assumption (A), we have ǫ(t) = k∈F * u k (0) − v k (0) + t 0 ∂ s (u k (s) − v k (s))ds ≤ ǫ(0) + k∈F * t 0 i∈F * Q i,k (u i (s) − v i (s)) + k∈F * t 0 p 0 (k) N − 1 u k (s) + |R k (s)| ds + k∈F * t 0 i∈F * p 0 (i) [v i (s)(u k (s) − v k (s)) + u k (s)(u i (s) − v i (s))] ds ≤ ǫ(0) + t 0 (Q 1 + 2p) ǫ(s)ds + pt N − 1 + t 0 k∈F * |R k (s)|ds. However, by Cauchy-Schwarz inequality and Theorem 2.5 with the 1-Lipschitz function g : η → 1 2p i∈F * p 0 (i)η(i), we have k∈F * |R k (t)| ≤ k∈F * E η m(η t )(k) i∈F * p 0 (i)m(η t )(i) − E η i∈F * p 0 (i)m(η t )(i) + p(N − 1) −1 ≤ 2pN −1 Var η (g(η t )) 1 2 + p(N − 1) −1 ≤ p 2ρ −1 (1 − e −2ρt )(Q 1 + p)(N − 1) −1 + p(N − 1) −1 . If c t = ρ −1 (1 − e −2ρt ), B = Q 1 + 2p then Gronwall's lemma gives ε(t) ≤ ε(0)e Bt + t 0 e B(t−s) 2p √ 2B (N − 1) 1/2 √ c s + 2p N − 1 ds ≤ ε(0) + 2p (N − 1)B + 2p √ 2B (N − 1) 1/2 t 0 e −Bs √ c s ds e Bt ≤ ε(0) + A √ N e Bt , for some A > 0. Proof of the corollaries. In this subsection, we give the proofs of corollaries given in the introduction. Proof of Corollary 1.4. The proof is based on an approximation of the conditioned semigroups by two particle systems. Theorem 1.1 gives a contraction for these particle systems. 
We then use Theorem 1.2 and a discretization argument to prove that this implies a contraction for the conditioned semigroups. For each N , let η (N ) 0 and η̃ (N ) 0 be two N -particle configurations whose empirical measures m (N ) 0 = m(η (N ) 0 ) and m̃ (N ) 0 = m(η̃ (N ) 0 ) converge in total variation to µ and ν respectively, and let (η (N ) t ) t≥0 and (η̃ (N ) t ) t≥0 be the two corresponding Fleming-Viot particle systems, coupled as in the proof of Theorem 1.1. Now let us prove that we can take the limit N → +∞. Since F is countable and discrete, there exists an increasing sequence of finite sets (F * n ) n≥0 such that F * = ∪ n≥0 F * n and d TV (µT t , νT t ) = 1 2 k∈F * |µT t 1 {k} − νT t 1 {k} | = lim n→+∞ 1 2 k∈F * n |µT t 1 {k} − νT t 1 {k} |. Theorem 1.1, together with the identity d 1 = N d TV , then gives E 1 2 k∈F * n η (N ) t (k) N − η̃ (N ) t (k) N ≤ N −1 E d 1 (η (N ) t , η̃ (N ) t ) ≤ e −ρt d TV m (N ) 0 , m̃ (N ) 0 . Using Theorem 1.2 and taking the limit N → +∞, we find 1 2 k∈F * n |µT t 1 {k} − νT t 1 {k} | ≤ e −ρt d TV (µ, ν) . Indeed, as we work in a discrete space, convergence in distribution is equivalent to convergence in total variation distance: lim N →+∞ d TV (m (N ) 0 , µ) = lim N →+∞ d TV (m̃ (N ) 0 , ν) = 0. Furthermore, all the sequences in the expectations are increasing. Thus, taking the limit n → +∞, we obtain (2). Finally, the existence of a QSD can be proved as in the proof of [23, Theorem 1]. More precisely, let µ be any probability measure on F * . We have, for all s, t ≥ 0 such that s ≥ t, d TV (µT t , µT s ) = d TV (µT t , µT s−t+t ) = d TV (µT t , (µT s−t )T t ) ≤ e −ρt . Thus (µT t ) t≥0 is a Cauchy sequence for the total variation distance and therefore admits a limit ν qs . This measure is then proved to be a QSD by standard arguments; see for instance [24, Proposition 1]. Remark 2.7 (Weaker assumptions). Assumption (A) is not necessary, and even useless, in the previous corollary. Indeed, using [30, Theorem 1] and a similar approximation argument, it is enough that the particle system does not explode. However, we keep the proof above for the sake of completeness. We can now proceed to the proof of the second corollary. Proof of Corollary 1.5. The proof is based on an "interpolation" between the bounds obtained in Corollary 1.4 and Theorem 1.2.
Let us fix t > 0, u ∈ [0, 1] and ϕ a function such that ϕ ∞ ≤ 1. By the Markov property, we have E η [|m(η t )(ϕ) − m(η)T t ϕ|] ≤ E η |m(η t )(ϕ) − m(η tu )T t(1−u) ϕ| + E η |m(η tu )T t(1−u) ϕ − m(η)T t ϕ| ≤ sup ϕ ∞≤1 E η Ẽ ηtu |m(η̃ t(1−u) )(ϕ) − m(η tu )T t(1−u) ϕ| + E η d TV (m(η tu )T t(1−u) , m(η)T ut T t(1−u) ) , where (η̃ t ) t≥0 is a Markov process generated by (1) and where, for all η ∈ E, we denote by Ẽ η the conditional expectation of (η̃ t ) t≥0 given the event {η̃ 0 = η}. On the one hand, by Theorem 1.2, which is an estimate uniform in the initial condition, there exist B, C > 0 such that sup ϕ ∞≤1 Ẽ ηtu |m(η̃ t(1−u) )(ϕ) − m(η tu )T t(1−u) ϕ| ≤ Ce Bt(1−u) √ N − 1 . On the other hand, from Corollary 1.4, we have E η d TV (m(η tu )T t(1−u) , m(η)T ut T t(1−u) ) ≤ e −ρt(1−u) . Choosing u = 1 + 1 t(B + ρ) log BC ρ √ N − 1 , this gives sup ϕ ∞≤1 E η [|m(η t )(ϕ) − m(η)T t ϕ|] ≤ B + ρ B BC ρ √ N − 1 ρ B+ρ . (10) Remark 2.8 (Weaker assumptions). We can weaken the assumption ρ > 0 in the previous corollary. Indeed, it is enough to assume that there exist C > 0 and λ > 0 such that ∀t ≥ 0, d TV (µT t , νT t ) ≤ Ce −λt . Some sufficient conditions are given in [11,14,23]. We can also use a bound on the convergence of the Fleming-Viot particle system as in Theorem 1.1. In particular, when F * is finite, the particle system converges, uniformly in time, to the conditioned process; hence, if η is distributed according to the invariant distribution of the particle system (which exists since E is finite), then it converges in law towards the quasi-stationary distribution. COMPLETE GRAPH DYNAMICS Throughout this section, we study the example of a random walk on the complete graph. Let us fix K ∈ N * , p > 0 and N ∈ N * ; the dynamics of this example is as follows: we consider a model with N particles and K + 1 vertices 0, 1, . . . , K. The N particles move on the K vertices 1, . . . , K uniformly at random and jump to 0 with rate p.
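For this model the underlying dynamics is explicit: since the death rate is constant, conditioning on survival does not change the law of the walk, whose marginals solve dP j /dt = 1/K − P j (each site receives mass at rate 1/K and loses its own at rate 1, since Q i,j = 1/K for j ≠ i). A quick numerical check of the resulting exponential convergence to the uniform law (function names below are ours):

```python
import math

def exact_law(K, t, x0=1):
    """P(Z_t = j | Z_0 = x0) = 1/K + (1_{j = x0} - 1/K) e^{-t} on the complete graph."""
    return [1.0 / K + ((1.0 if j == x0 else 0.0) - 1.0 / K) * math.exp(-t)
            for j in range(1, K + 1)]

def euler_law(K, t, steps, x0=1):
    """Forward Euler scheme for the Kolmogorov equation dP_j/dt = 1/K - P_j."""
    dt = t / steps
    P = [1.0 if j == x0 else 0.0 for j in range(1, K + 1)]
    for _ in range(steps):
        P = [q + dt * (1.0 / K - q) for q in P]
    return P

K, t = 4, 2.0
exact, approx = exact_law(K, t), euler_law(K, t, 20000)
err = max(abs(a - b) for a, b in zip(exact, approx))
dtv = 0.5 * sum(abs(q - 1.0 / K) for q in exact)
print(err)                  # small Euler discretization error
print(dtv <= math.exp(-t))  # d_TV to the uniform Yaglom limit is at most e^{-t}
```

Here d TV works out to (K − 1)e −t /K, consistent with the bound d TV (µT t , π K ) ≤ e −t stated in the next subsection.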
When a particle reaches the node 0, it jumps instantaneously over another particle chosen uniformly at random. This particle system corresponds to the model previously described, with parameters Q i,j = 1 K , ∀i, j ∈ F * = {1, . . . , K}, i ≠ j, and p 0 (i) = p, ∀i ∈ F * . The generator of the associated Fleming-Viot process is then given by Lf (η) = K i=1 η(i) [ K j=1 (f (T i→j η) − f (η)) ( 1 K + p η(j) N − 1 ) ] ,(11) for every function f and η ∈ E. A process generated by (11) is an instance of the inclusion processes studied in [18,20,21]. It is thus related to models of heat conduction. One main point of [18,20] is a criterion ensuring the existence and reversibility of an invariant distribution for the inclusion processes. In particular, they give an explicit formula for the invariant distribution of a process generated by (11), and we give this expression in Subsection 3.3. They also study different scaling limits, which seem to be irrelevant for our problems. Another application of this example comes from population genetics. Indeed, this model can also be referred to as neutral evolution; see for instance [16,32]. More precisely, consider N individuals, each possessing one type in F * = {1, . . . , K} at time t. Each pair of individuals interacts at rate p. Upon an interaction event, one individual dies and the other one reproduces. In addition, every individual changes its type (mutates) at rate 1 and chooses uniformly at random a new type in F * . The measure m(η t ) gives the proportions of types. The kind of mutation we consider here is often referred to as parent-independent mutation or the house-of-cards model. Throughout this section, for any probability measure µ on E, we set, in a classical manner, E µ [·] = F * E x [·]µ(dx) and P µ = E µ [1 · ]; similarly, Cov µ and Var µ are defined with respect to E µ . 3.1. The associated killed process.
We define the process (X t ) t≥0 by setting X t = Z t if t < τ and X t = 0 if t ≥ τ, where τ is an exponential variable with mean 1/p and (Z t ) t≥0 is the classical complete graph random walk (i.e. without extinction) on {1, . . . , K}. We have, for any bounded function f , T t f (x) = E [f (X t ) | X 0 = x, X t ≠ 0] , t ≥ 0, x ∈ F * . The conditional distribution of X t is simply given by the distribution of Z t : P(X t = i | X t ≠ 0) = P(Z t = i). The study of (Z t ) t≥0 is trivial. Indeed, it converges exponentially fast to the uniform distribution π K on {1, . . . , K}. We deduce that for all t ≥ 0 and every initial distribution µ, d TV (µT t , π K ) = 1 2 K i=1 |P µ (X t = i | τ > t) − π K (i)| ≤ e −t . Thus, in this case, the conditional distribution of X converges exponentially fast to the Yaglom limit π K . 3.2. Correlations at fixed time. The special form of L, defined in (11), makes the calculation of the two-particle correlations at fixed time easy. Theorem 3.1 (Two-particle correlations). For all k, l ∈ {1, . . . , K}, k ≠ l, and any probability measure µ on E, we have for all t ≥ 0 Cov µ (η t (k), η t (l)) = E µ [η 0 (k)η 0 (l)] e − 2K(N−1+p) K(N−1) t + −N + 1 + 2pN K(N − 1 + 2p) (E µ [η 0 (k)] + E µ [η 0 (l)])e −t − E µ [η 0 (k)] E µ [η 0 (l)] e −2t + −N 2 (p + 1) + N K 2 (N − 1 + p) . Remark 3.2 (Limit t → +∞). By the previous theorem, we find, for any probability measure µ, lim t→+∞ Cov µ (η t (k), η t (l)) = −N 2 (p + 1) + N K 2 (N − 1 + p) = Cov(η(k), η(l)), where η is distributed according to the invariant distribution; it exists since the state space is finite, see the next subsection. Remark 3.3 (Limit N → +∞). If Cov µ (η 0 (k), η 0 (l)) ≠ 0 then for all k, l ∈ {1, . . . , K}, k ≠ l, and any probability measure µ, we have Cov µ η t (k) N , η t (l) N ∼ N e −2t Cov µ η 0 (k) N , η 0 (l) N , where u N ∼ N v N iff lim N →+∞ u N v N = 1. Proof of Theorem 3.1. For k, l ∈ {1, . . . , K}, let ψ k,l be the function η → η(k)η(l).
Applying the generator (11) to ψ k,l we obtain Lψ k,l (η) = − 2K(N − 1 + p) K(N − 1) η(k)η(l) + N − 1 K (η(k) + η(l)). So, for all t ≥ 0, Lψ k,l (η t ) = − 2K(N − 1 + p) K(N − 1) η t (k)η t (l) + N − 1 K (η t (k) + η t (l)). Using Kolmogorov's equation, we have ∂ t E µ (η t (k)η t (l)) = − 2K(N − 1 + p) K(N − 1) E µ (η t (k)η t (l)) + N − 1 K (E µ (η t (k)) + E µ (η t (l))). (12) Now if ϕ k (η) = η(k) then Lϕ k (η) = N K − η(k). We deduce that, for every t ≥ 0, ∂ t E µ (η t (k)) = N K − E µ (η t (k)) and E µ (η t (k)) = E µ (η 0 (k))e −t + N K . Solving equation (12) ends the proof. 3.3. Properties of the invariant measure. As (η t ) t≥0 is an irreducible Markov chain on a finite state space, it is straightforward that it admits a unique invariant measure. In fact, this invariant distribution is reversible and we know its expression. Theorem 3.4 (Invariant distribution). The process (η t ) t≥0 admits a unique invariant and reversible measure ν N , which is defined, for every η ∈ E, by ν N ({η}) = Z −1 K i=1 η(i)−1 j=0 N − 1 + Kpj j + 1 , where Z is a normalizing constant. This result was already proved in [18,Section 4] and [20, Theorem 2.1] but we give it for sake of completeness. Proof. A measure ν is reversible if and only if it satisfies the following balance equation ν({η})C(η, ξ) = ν({ξ})C(ξ, η)(13) where ξ = T i→j η and C(η, ξ) = L1 ξ (η) = η(i)(K −1 + pη(j)(N − 1) −1 ). Due to the geometry of the complete graph, it is natural to consider that ν has the following form ν({η}) = 1 Z K i=1 l(η(i)), where l : {0, . . . , N } → [0, 1] is a function and Z is a normalizing constant. From (13), we have l(η(i))l(η(j))η(i)(N − 1 + Kpη(j)) = l(η(i) − 1)l(η(j) + 1)(η(j) + 1)(N − 1 + Kp(η(i) − 1)), for all η ∈ E and i, j ∈ {1, . . . K}. Hence, l(n) l(n − 1) n N − 1 + Kp(n − 1) = l(m) l(m − 1) m N − 1 + Kp(m − 1) = u, for every m, n ∈ {1, . . . , N } and some u ∈ R. 
Finally, ν({η}) = Π K i=1 ( u^{η(i)} Π^{η(i)−1} j=0 (N − 1 + Kpj)/(j + 1) l(0) ) = l(0) K u N Π K i=1 Π^{η(i)−1} j=0 (N − 1 + Kpj)/(j + 1), and Z = 1/(l(0) K u N ). In particular, we have directly Corollary 3.5 (Invariant distribution when p = 1/K). If p = 1/K then the process (η t ) t≥0 admits a unique invariant and reversible measure ν N , which is defined, for every η ∈ E, by ν N ({η}) = Z −1 Π K i=1 ( N − 2 + η(i) choose N − 2 ), where Z is a normalizing constant given by Z = ( (K + 1)N − K − 1 choose KN − K − 1 ). Corollary 3.6 (Marginal laws when p = 1/K). If p = 1/K then for all i ∈ {1, . . . , K} we have P ν N (η(i) = x) = (1/Z) ( N − 2 + x choose N − 2 ) ( KN − K − x choose (K − 1)N − K ). Proof. Firstly, let us recall the Vandermonde binomial convolution type formula: let n, n 1 , . . . , n p be some non-negative integers satisfying Σ p i=1 n i = n; we have ( r − 1 choose n − 1 ) = Σ_{r 1 +···+r p =r} Π p j=1 ( r j − 1 choose n j − 1 ). The proof is based on the power series decomposition of z → (z/(1 − z)) n = Π p i=1 (z/(1 − z)) n i . Using this formula, we find P ν N (η(i) = x) = Σ_{x∈E 1} P ν N (η = (x 1 , . . . , x i−1 , x, x i+1 , . . . , x K )) = (1/Z) ( N − 2 + x choose N − 2 ) Σ_{x∈E 1} Π_{l=1}^{i−1} Π_{l=i+1}^{K} ( N − 2 + x l choose N − 2 ) = (1/Z) ( N − 2 + x choose N − 2 ) ( (K − 1)(N − 1) + N − x − 1 choose (K − 1)(N − 1) − 1 ), where E 1 = {x = (x 1 , . . . , x i−1 , x i+1 , . . . , x K ) | x 1 + · · · + x i−1 + x i+1 + · · · + x K = N − x}. We are now able to express the particle correlations under this invariant measure. Theorem 3.7 (Correlation estimates). For all i ≠ j ∈ {1, . . . , K}, we have |Cov ν N (η(i)/N, η(j)/N )| ∼ N (p + 1)/(K 2 N ). Proof. Let η be a random variable with law ν N . As η(1), . . . , η(K) are identically distributed and Σ K i=1 η(i) = N, we have Cov ν N (η(i)/N, η(j)/N ) = − Var ν N (η(i)/N )/(K − 1). Using the results of Section 3.4, we have L(η(i) 2 ) = (−2 − 2p/(N − 1)) η(i) 2 + (2N/K + 2pN/(N − 1) + (K − 2)/K) η(i) + N/K. Using the fact that ∫ L(η(i) 2 ) dν N = 0 and ∫ η(i) dν N = N/K, we deduce that ∫ η(i) 2 dν N = N [(2N + K − 2)(N − 1) + 2KN p + K(N − 1)]/(2K 2 (N − 1 + p)).
Finally, Var ν N (η(i)) = ∫ η(i) 2 dν N − (∫ η(i) dν N ) 2 = N (K − 1)(N p + N − 1)/(K 2 (N − 1 + p)), and thus, for i ≠ j, |Cov ν N (η(i)/N, η(j)/N )| ∼ N (p + 1)/(K 2 N ). Remark 3.8 (Proof through coalescence methods). Maybe we can use properties of Kingman's coalescent type process (which is a dual process) to recover some of our results (as for instance the previous correlation estimates). Indeed, after an interacting event, all individuals evolve independently, and it is enough to look at when the first mutation happens (backwards in time) on one of the genealogical tree branches. Nevertheless, we prefer to use another approach based on Markovian techniques. Remark 3.9 (Number of sites). Theorem 3.7 gives the rate of the decay of correlations with respect to the number of particles, but we also have a rate with respect to the number of sites K. For instance, when p = 1/K and if η is distributed under the invariant measure, then |Cov ν N (η(i)/N, η(j)/N )| ∼ K 1/(K(K − 1)N ). The previous theorem shows that the occupation numbers of two distinct sites become non-correlated when the number of particles increases. In fact, Theorem 3.7 leads to a propagation of chaos: Corollary 3.10 (Convergence to the QSD). We have E ν N [d TV (m(η), π K )] ≤ √(K(p + 1)/N ), where π K is the uniform measure on {1, . . . , K}. Proof. By the Cauchy-Schwarz inequality, we have E ν N [ |η(k)/N − 1/K| ] ≤ E ν N [ (η(k)/N − 1/K) 2 ] 1/2 = Var ν N (η(k)/N ) 1/2 ≤ √((K − 1)(p + 1)/(K 2 N )). Summing over {1, . . . , K} ends the proof. The previous bound is better than the bound obtained in Theorem 1.2 and its corollaries. This comes from the absence of a bias term. Indeed, ∀k ∈ F * , E ν N [m(η)(k)] = 1/K = π K (k). The bad term in Theorem 1.2 comes from, with the notations of its proof, the estimation of |u k (t) − v k (t)| and the Gronwall Lemma. Remark 3.11 (Parameters depending on N ). A nice application of explicit rates of convergence is to consider parameters depending on N .
For instance, we can now consider that p = p N depends on N ; this changes neither the conditioned semi-group nor the QSD, but it changes the dynamics of our interacting-particle system. The last corollary gives that if lim N →∞ p N /N = 0 then the empirical measure converges to the uniform measure. 3.4. Long time behavior and spectral analysis of the generator. In this subsection, we point out the optimality of Theorem 1.1 in this special case. It gives: Corollary 3.12 (Wasserstein contraction). For any processes (η t ) t>0 and (η ′ t ) t>0 generated by (11), and for any t ≥ 0, we have W d 1 (L(η t ), L(η ′ t )) ≤ e −t W d 1 (L(η 0 ), L(η ′ 0 )). In particular, when η ′ 0 follows the invariant distribution ν N associated to (11), we get for every t ≥ 0 W d 1 (L(η t ), ν N ) ≤ e −t W d 1 (L(η 0 ), ν N ). In particular, if λ 1 is the smallest positive eigenvalue of −L, defined at (11), then we have 1 = ρ ≤ λ 1 . Indeed, on the one hand, let us recall that, as the invariant measure is reversible, λ 1 is the largest constant such that lim t→+∞ e 2λt ‖R t f − ν N (f )‖ 2 L 2 (ν N ) = 0, (14) for every λ < λ 1 and f ∈ L 2 (ν N ), where (R t ) t≥0 is the semi-group generated by L; see for instance [3, 28]. On the other hand, if λ < 1 then, by Theorem 1.1, we have e 2λt ‖R t f − ν N (f )‖ 2 L 2 (ν N ) = e 2λt ∫ E ((δ η R t )f − (ν N R t )f ) 2 ν N (dη) ≤ 2e 2λt ‖f ‖ 2 ∞ ∫ E W d 1 (δ η R t , ν N R t ) 2 ν N (dη) ≤ 2e 2(λ−1)t ‖f ‖ 2 ∞ ∫ E W d 1 (δ η , ν N ) 2 ν N (dη), and then (14) holds. Now, the constant functions are trivially eigenvectors of L associated with the eigenvalue 0, and if, for k ∈ {1, . . . , K} and l ≥ 1, we set ϕ (l) k : η → η(k) l , then the function ϕ (1) k satisfies Lϕ (1) k = N/K − ϕ (1) k . In particular, ϕ (1) k − N/K is an eigenvector and 1 is an eigenvalue of −L. This gives λ 1 ≤ 1, and finally λ 1 = 1 is the smallest positive eigenvalue of −L.
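The identification λ 1 = 1 can be checked numerically for small N and K. The sketch below (a sanity check of ours, not part of the original argument; parameter values are arbitrary) builds the generator (11) as a matrix on the finite state space E and computes the spectrum of −L.

```python
import numpy as np
from itertools import product

def states(N, K):
    # occupation vectors eta = (eta(1), ..., eta(K)) with eta(1) + ... + eta(K) = N
    return [s for s in product(range(N + 1), repeat=K) if sum(s) == N]

def generator(N, K, p):
    # matrix of the generator (11): from eta, a particle moves from site i to
    # site j at rate eta(i) * (1/K + p * eta(j) / (N - 1))
    S = states(N, K)
    idx = {s: n for n, s in enumerate(S)}
    L = np.zeros((len(S), len(S)))
    for s in S:
        for i in range(K):
            if s[i] == 0:
                continue
            for j in range(K):
                if j == i:
                    continue
                t = list(s)
                t[i] -= 1
                t[j] += 1
                r = s[i] * (1.0 / K + p * s[j] / (N - 1.0))
                L[idx[s], idx[tuple(t)]] += r
                L[idx[s], idx[s]] -= r
    return L

def spectrum(N, K, p):
    # eigenvalues of -L, sorted increasingly
    return np.sort(np.linalg.eigvals(-generator(N, K, p)).real)

def lamb(l, N, p):
    # candidate eigenvalues of Lemma 3.14 below
    return l + l * (l - 1) * p / (N - 1.0)
```

For N = 3, K = 2, p = 0.5 one finds the spectrum (0, 1, 2.5, 4.5): the smallest positive eigenvalue is 1, as proved above, and every eigenvalue is of the form given in Lemma 3.14 below.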
By the reversibility, we have a Poincaré (or spectral gap) inequality: ∀t ≥ 0, ‖R t f − ν N (f )‖ 2 L 2 (ν N ) ≤ e −2t ‖f − ν N (f )‖ 2 L 2 (ν N ) . Remark 3.13 (Complete graph random walk). If (a i ) 1≤i≤K is a sequence such that Σ K i=1 a i = 0, then the function Σ K i=1 a i ϕ (1) i is an eigenvector of L associated with the eigenvalue −1. However, if L is now the generator of the classical complete graph random walk, then La = −a, and a is again an eigenvector with the same eigenvalue. Let us finally give the following result on the spectrum of L: Lemma 3.14 (Spectrum of −L). The spectrum of −L is included in { Σ K i=1 λ l i | l 1 , . . . , l K ∈ {0, . . . , N } }, where, for all l ∈ {0, . . . , N }, λ l = l + l(l − 1)p/(N − 1). Proof. For every k ∈ {1, . . . , K} and l ∈ {0, . . . , N }, we have Lϕ (l) k (η) = −λ l ϕ (l) k (η) + Q l−1 (η), where Q l−1 is a polynomial whose degree is at most l − 1. A straightforward recurrence then shows that there exists a polynomial function ψ (l) k , whose degree is l, satisfying Lψ (l) k = −λ l ψ (l) k ; whenever ψ (l) k ≠ 0, it is an eigenvector of L. Indeed, it is possible to have ψ (l) k = 0, since the polynomial functions are not linearly independent (F is finite). More generally, for all l 1 , . . . , l K ∈ {1, . . . , N }, there exists a polynomial Q with K variables, whose degree with respect to the i-th variable is strictly less than l i , such that the function φ : η → Π K i=1 η(k i ) l i + Q(η) satisfies Lφ = −λφ, where λ = Σ K i=1 λ l i . Again, provided that φ ≠ 0, φ is an eigenvector and λ an eigenvalue of −L. Finally, as the state space is finite, using multivariate Lagrange polynomials, we can prove that every function is polynomial, and thus we capture all the eigenvalues. Remark 3.15 (Cardinal of E). As card(F * ) = K, we have card(E) = ( N + K − 1 choose K − 1 ) = (N + K − 1)!/(N !(K − 1)!). In particular, the number of eigenvalues is finite and less than card(E). Remark 3.16 (Marginals).
For each k, the random process (η t (k)) t≥0 , which is a marginal of a process generated by (11), is a Markov process on N N = {0, . . . , N } generated by Gf (x) = (N − x) (1/K + px/(N − 1)) (f (x + 1) − f (x)) + x ((K − 1)/K + p(N − x)/(N − 1)) (f (x − 1) − f (x)), for every function f on N N and x ∈ N N . We can express the spectrum of this generator. Indeed, let ϕ l : x → x l , for every l ≥ 0. The family (ϕ l ) 0≤l≤N is linearly independent, as can be checked with a Vandermonde determinant. This family generates the L 2 -space associated to the invariant measure, since this space has dimension N + 1. Now, similarly to the proof of the previous lemma, we can prove the existence of N + 1 polynomials, which are eigenvectors and linearly independent, whose eigenvalues are λ 0 , λ 1 , . . . , λ N . 4. THE TWO POINT SPACE We consider a Markov chain defined on the states {0, 1, 2}, where 0 is the absorbing state. Its infinitesimal generator G is defined by
G = ( 0 , 0 , 0 ; p 0 (1) , −a − p 0 (1) , a ; p 0 (2) , b , −b − p 0 (2) ),
where a, b > 0, p 0 (1), p 0 (2) ≥ 0 and p 0 (1) + p 0 (2) > 0. The generator of the Fleming-Viot process with N particles applied to bounded functions f : E → R reads Lf (η) = η(1) (a + p 0 (1) η(2)/(N − 1)) (f (T 1→2 η) − f (η)) + η(2) (b + p 0 (2) η(1)/(N − 1)) (f (T 2→1 η) − f (η)). (15) 4.1. The associated killed process. The long time behavior of the conditioned process is related to the eigenvalues and eigenvectors of the matrix M = ( −a − p 0 (1) , a ; b , −b − p 0 (2) ); see [24, Section 3.1]. Its eigenvalues are given by λ + = [−(a + b + p 0 (1) + p 0 (2)) + √((a − b + p 0 (1) − p 0 (2)) 2 + 4ab)]/2, λ − = [−(a + b + p 0 (1) + p 0 (2)) − √((a − b + p 0 (1) − p 0 (2)) 2 + 4ab)]/2, and the corresponding eigenvectors are respectively given by v + and v − below. From these properties, we deduce the following. Lemma 4.1 (Convergence to the QSD).
Set v + = ((−A + √(A 2 + 4ab))/2, a) and v − = ((−A − √(A 2 + 4ab))/2, a), where A = a − b + p 0 (1) − p 0 (2), and set ν = v + /(v + (1) + v + (2)). There exists a constant C > 0 such that, for every initial distribution µ, ∀t ≥ 0, d TV (µT t , ν) ≤ Ce −(λ + −λ − )t . Proof. See [24, Theorem 7] and [24, Remark 3]. Note that λ + − λ − = √((a + b) 2 + 2(a − b)(p 0 (1) − p 0 (2)) + (p 0 (1) − p 0 (2)) 2 ) > a + b − (sup(p 0 ) − inf(p 0 )) when sup(p 0 ) > inf(p 0 ). 4.2. Explicit formula of the invariant distribution. Firstly, note that, as η(1) + η(2) = N for all η ∈ E, each marginal of (η t ) t≥0 is a Markov process: Lemma 4.2 (Markovian marginals). The random process (η t (1)) t≥0 , which is a marginal of a process generated by (15), is a Markov process generated by G defined by Gf (n) = b n (f (n + 1) − f (n)) + d n (f (n − 1) − f (n)), (16) for any function f and n ∈ N N = {0, . . . , N }, where b n = (N − n) (b + p 0 (2) n/(N − 1)) and d n = n (a + p 0 (1) (N − n)/(N − 1)). Proof. For every η ∈ E, we have η = (η(1), N − η(1)); thus the Markov property and the generator are easily deduced from the properties of (η t ) t≥0 . From this result and the already known results on birth and death processes [7, 8], we deduce that (η t (1)) t≥0 admits an invariant and reversible distribution π given by π(n) = u 0 Π n k=1 b k−1 /d k and u 0 −1 = 1 + Σ N k=1 (b 0 · · · b k−1 )/(d 1 · · · d k ), for every n ∈ N N . This gives π(n) = u 0 ( N choose n ) Π n k=1 (b(N − 1) + (k − 1)p 0 (2))/(a(N − 1) + (N − k)p 0 (1)), and u 0 −1 = 1 + Σ N n=1 ( N choose n ) Π n k=1 (b(N − 1) + (k − 1)p 0 (2))/(a(N − 1) + (N − k)p 0 (1)). Similarly, as η t (2) = N − η t (1), the process (η t (2)) t≥0 is a Markov process whose invariant distribution is also easily computable. The invariant law of (η t ) t≥0 is then given by ν N ((r 1 , r 2 )) = π({r 1 }), ∀(r 1 , r 2 ) ∈ E. Note that if p 0 is not constant then we cannot find a basis of orthogonal polynomials in the L 2 space associated to ν N .
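The product formula for π can be sanity-checked numerically (this check and its parameter values are ours, not part of the text): build the rates b n and d n of (16), form π by the cumulative products above, and verify stationarity πG = 0 together with detailed balance.

```python
import numpy as np

def rates(N, a, b, p1, p2):
    # birth and death rates of the marginal generator (16):
    # b_n = (N - n)(b + p0(2) n/(N-1)),  d_n = n(a + p0(1)(N-n)/(N-1))
    birth = [(N - n) * (b + p2 * n / (N - 1.0)) for n in range(N + 1)]
    death = [n * (a + p1 * (N - n) / (N - 1.0)) for n in range(N + 1)]
    return birth, death

def stationary(N, a, b, p1, p2):
    # pi(n) proportional to prod_{k=1}^n b_{k-1}/d_k, then normalized
    birth, death = rates(N, a, b, p1, p2)
    w = [1.0]
    for n in range(1, N + 1):
        w.append(w[-1] * birth[n - 1] / death[n])
    w = np.array(w)
    return w / w.sum()

def rate_matrix(N, a, b, p1, p2):
    # tridiagonal generator matrix of the birth-death chain
    birth, death = rates(N, a, b, p1, p2)
    G = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            G[n, n + 1] = birth[n]
        if n > 0:
            G[n, n - 1] = death[n]
        G[n, n] = -G[n].sum()
    return G
```

For any admissible parameters, `stationary` sums to 1 and annihilates the generator on the left, which is exactly the reversibility discussed above.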
It is then very difficult to express the spectral gap or the decay rate of the correlations without using our main results. 4.3. Rate of convergence. Applying Theorem 1.1 in this special case, we find: Corollary 4.3 (Wasserstein contraction). For any processes (η t ) t>0 and (η ′ t ) t>0 generated by (15), and for any t ≥ 0, we have W d 1 (L(η t ), L(η ′ t )) ≤ e −ρt W d 1 (L(η 0 ), L(η ′ 0 )), where ρ = a + b − (sup(p 0 ) − inf(p 0 )). In particular, when η ′ 0 follows the invariant distribution ν N of (15), we get for every t > 0 W d 1 (L(η t ), ν N ) ≤ e −ρt W d 1 (L(η 0 ), ν N ). This result is not optimal. Nevertheless, the error does not come from our choice of coupling but from how we estimate the distance. Indeed, this coupling induces a coupling between two processes generated by G defined by (16). More precisely, let L = L Q + L p be the generator of our coupling introduced in the proof of Theorem 1.1 in this special case. We set G = G Q + G p , where, for any n, n ′ ∈ N N and f on E × E, L Q f ((n, N − n), (n ′ , N − n ′ )) = G Q ϕ f (n, n ′ ), L p f ((n, N − n), (n ′ , N − n ′ )) = G p ϕ f (n, n ′ ), and ϕ f (n, n ′ ) = f ((n, N − n), (n ′ , N − n ′ )). It satisfies, for any function f and any two elements n ′ > n of N N , G Q f (n, n ′ ) = na (f (n − 1, n ′ − 1) − f (n, n ′ )) + (N − n ′ )b (f (n + 1, n ′ + 1) − f (n, n ′ )) + (n ′ − n)b (f (n + 1, n ′ ) − f (n, n ′ )) + (n ′ − n)a (f (n, n ′ − 1) − f (n, n ′ )), and G p f (n, n ′ ) = p 0 (1) (n(N − n ′ )/(N − 1)) (f (n − 1, n ′ − 1) − f (n, n ′ )) + p 0 (2) (n(N − n ′ )/(N − 1)) (f (n + 1, n ′ + 1) − f (n, n ′ )) + p 0 (1) (n(n ′ − n)/(N − 1)) (f (n − 1, n ′ ) − f (n, n ′ )) + p 0 (2) ((N − n ′ )(n ′ − n)/(N − 1)) (f (n, n ′ + 1) − f (n, n ′ )) + p 0 (2) (n(n ′ − n)/(N − 1)) (f (n + 1, n ′ ) − f (n, n ′ )) + p 0 (1) ((N − n ′ )(n ′ − n)/(N − 1)) (f (n, n ′ − 1) − f (n, n ′ )).
Now, for any sequence of positive numbers (u k ) k∈{0,...,N −1} , we introduce the distance δ u defined by δ u (n, n ′ ) = Σ n ′ −1 k=n u k , for every n, n ′ ∈ N N such that n ′ > n. For all n ∈ N N \{N }, we have Gδ u (n, n + 1) ≤ −λ u δ u (n, n + 1), where λ u = min k∈{0,...,N −1} ( d k+1 − d k u k−1 /u k + b k − b k+1 u k+1 /u k ), and thus, by linearity, Gδ u (n, n ′ ) ≤ −λ u δ u (n, n ′ ), for every n, n ′ ∈ N N . This implies that, for any processes (X t ) t≥0 and (X ′ t ) t≥0 generated by G , and for any t ≥ 0, W δu (L(X t ), L(X ′ t )) ≤ e −λut W δu (L(X 0 ), L(X ′ 0 )). Note that, for every n, n ′ ∈ N N , we have min(u) d 1 ((n, N − n), (n ′ , N − n ′ )) ≤ δ u (n, n ′ ) ≤ max(u) d 1 ((n, N − n), (n ′ , N − n ′ )), and then, for any processes (η t ) t≥0 and (η ′ t ) t≥0 generated by (15), and for any t ≥ 0, we have W d 1 (L(η t ), L(η ′ t )) ≤ (max(u)/min(u)) e −λut W d 1 (L(η 0 ), L(η ′ 0 )). Finally, using [8, Theorem 9.25], there exists a positive sequence v such that λ v = max u λ u > 0 is the spectral gap of the birth and death process (η t (1)) t≥0 . These parameters depend on N , and so we should write the previous inequality as W d 1 (L(η t ), L(η ′ t )) ≤ C(N )e −λ N t W d 1 (L(η 0 ), L(η ′ 0 )), (17) where C(N ) and λ N are two constants depending on N . In conclusion, the coupling introduced in Theorem 1.1 gives the optimal rate of convergence, but we are not able to give a precise expression of λ N and C(N ). Nevertheless, in the section that follows, we will prove that, whatever the value of the parameters, the spectral gap is always bounded from below by a positive constant not depending on N . 4.4. A lower bound for the spectral gap. In this subsection, we study the evolution of (λ N ) N ≥0 . Calculating λ N for small values of N (it is an eigenvalue of a small matrix) and various parameters shows that, in general, this sequence is not monotone and seems to converge to λ + − λ − .
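This empirical observation is easy to reproduce (the script below and its parameter choices are ours, not part of the text): build the birth-death generator (16) for a given N, take λ N as the smallest positive eigenvalue of −G, and compare it with λ + − λ − = √(A² + 4ab).

```python
import numpy as np

def spectral_gap(N, a, b, p1, p2):
    # lambda_N: smallest positive eigenvalue of -G, G the generator (16)
    G = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        bn = (N - n) * (b + p2 * n / (N - 1.0))
        dn = n * (a + p1 * (N - n) / (N - 1.0))
        if n < N:
            G[n, n + 1] = bn
        if n > 0:
            G[n, n - 1] = dn
        G[n, n] = -(bn + dn)
    ev = np.sort(np.linalg.eigvals(-G).real)
    return ev[1]

def qsd_gap(a, b, p1, p2):
    # lambda_+ - lambda_- = sqrt(A^2 + 4ab) with A = a - b + p0(1) - p0(2)
    A = a - b + p1 - p2
    return np.sqrt(A * A + 4.0 * a * b)

if __name__ == "__main__":
    # a regime with rho <= 0, where the contraction estimate is uninformative
    for N in (5, 10, 20, 40):
        print(N, spectral_gap(N, 0.2, 0.2, 3.0, 0.0))
    print("lambda_+ - lambda_- =", qsd_gap(0.2, 0.2, 3.0, 0.0))
```

Even when ρ ≤ 0, the computed gaps stay bounded away from 0, in line with Theorem 4.4 below.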
We are not able to prove this, but as it is trivial that λ N > 0 for all N ≥ 0, we can hope that it is bounded from below. The aim of this section is to prove this fact. Firstly, using arguments similar to those of Subsection 3.4, we have λ N ≥ ρ for every N ≥ 0. This result gives no information in the case ρ ≤ 0. However, we can use Hardy's inequalities [1, Chapter 6] and mimic some arguments of [26] to obtain: Theorem 4.4 (A lower bound for the spectral gap). If ρ ≤ 0 then there exists c > 0 such that ∀N ≥ 0, λ N > c. The rest of this subsection aims to prove this result. We recall that π = π N is the invariant distribution defined in Subsection 4.2, and the jump rates b and d also depend on N . Hardy's inequalities are mainly based on the estimation of the quantities B N,+ and B N,− , defined for every i ∈ N N by B N,+ (i) = sup x>i ( Σ x y=i+1 1/(π(y) d y ) ) ( Σ N y=x π(y) ), (18) and B N,− (i) defined symmetrically (exchanging the roles of the birth and death rates and of the two sides of i). More precisely, [26, Proposition 3] shows that if one wants to get a "good" lower bound of the spectral gap, one only needs to guess an "adequate choice" of i and to apply the estimate λ N ≥ 1/(4 max{B N,+ (i), B N,− (i)}). So, we have to find an upper bound for these two quantities. Before giving it, let us prove that the invariant distribution π is unimodal. Indeed, it will help us to choose an appropriate i. Lemma 4.5 (Unimodality of π). The sequence (π(i + 1)/π(i)) i≥0 is decreasing. Proof of Lemma 4.5. For all i ∈ {1, . . . , N }, we set g(i) = π(i + 1)/π(i) = (N − i)(b(N − 1) + ip 0 (2)) / ((i + 1)((a + p 0 (1))(N − 1) − ip 0 (1))). It follows that g(i + 1) − g(i) = Λ N (i) / [(i + 1)((a + p 0 (1))(N − 1) − ip 0 (1)) (i + 2)((a + p 0 (1))(N − 1) − (i + 1)p 0 (1))], where Λ N (i) = (N − i − 1)(b(N − 1) + (i + 1)p 0 (2))(i + 1)((a + p 0 (1))(N − 1) − ip 0 (1)) − (N − i)(b(N − 1) + ip 0 (2))(i + 2)((a + p 0 (1))(N − 1) − (i + 1)p 0 (1)) = − [b(N − 1) − p 0 (2)] [(N + 1)(a(N − 1) − p 0 (1)) + p 0 (1)(N − i)(N − i − 1)] − p 0 (2) (i 2 + 3i + 2)(a(N − 1) − p 0 (1)) ≤ 0. We deduce the result. Proof of Theorem 4.4. Without loss of generality, we assume that p 0 (1) ≥ p 0 (2), and we recall that ρ ≤ 0. We would like to know where π reaches its maximum i * , since it will be a good candidate to estimate B N,+ (i * ) and B N,− (i * ). From the previous lemma, to find it, we look at when π(i + 1)/π(i) is close to one. We have, for all i ∈ {1, . . .
, N }, π(i + 1) π(i) = b i d i+1 = 1 + (p 0 (1) − p 0 (2))(i − i 1 )(i − i 2 ) (i + 1) ((a + p 0 (1))(N − 1) − ip 0 (1)) ,(19) where i 1 and i 2 are the two real numbers given by i 1 = N (a + b + p 0 (1) − p 0 (2)) − (a + b + 2p 0 (1)) − √ ∆ 2(p 0 (1) − p 0 (2)) and i 2 = N (a + b + p 0 (1) − p 0 (2)) − (a + b + 2p 0 (1)) + √ ∆ 2(p 0 (1) − p 0 (2)) , where ∆ = [N (a + b + p 0 (1) − p 0 (2)) − (a + b + 2p 0 (1))] 2 − 4(N − 1)(bN − a − p 0 (1))(p 0 (1) − p 0 (2)). In particular, 1 ≤ i 1 ≤ N ≤ i 2 . Furthermore, if ⌊.⌋ denotes the integer part then π(⌊i 1 ⌋ + 2) π(⌊i 1 ⌋ + 1) ≤ 1 ≤ π(⌊i 1 ⌋ + 1) π(⌊i 1 ⌋) . Let us define m N = ⌊i 1 ⌋ + 1 and l N = 2(⌊ √ N ⌋ + 1). Using a telescopic product, we have π(m N + l N ) π(m N ) = π(m N + l N − ⌊ √ N ⌋ − 1) π(m N ) ⌊ √ N ⌋+1 j=1 π(m N + l N − j + 1) π(m N + l N − j) , Using Lemma 4.5 and the previous calculus, we have that the sequences (π(i)) i≥m N and (π(i + 1)/π(i)) i≥0 are decreasing and then π(m N + l N ) π(m N ) ≤ π(m N + l N − ⌊ √ N ⌋) π(m N + l N − ⌊ √ N ⌋ − 1) ⌊ √ N ⌋+1 . Now using (19) and some equivalents, there exists a constant δ 1 > 0 (not depending on N ) such that π(m N + l N − ⌊ √ N ⌋) π(m N + l N − ⌊ √ N ⌋ − 1) ≤ 1 − δ 1 √ N . Using the fact that 1 − x ≤ e −x for all x ≥ 0, we finally obtain π(m N + l N )/π(m N ) ≤ e −δ 1 . Similar arguments entail the existence of δ 2 > 0 (also not depending on N ) such that π(m N − l N )/π(m N ) ≤ e −δ 2 . In conclusion, using Lemma 4.5, we have shown that for all i ≥ m N and j ≤ m N , the following inequalities holds: π(i + l N ) ≤ e −δ 1 π(i) and π(j − l N ) ≤ e −δ 2 π(j). We are now armed to evaluate B N,+ (m N ) defined in (18). Firstly, using the expressions of the death rate d and m N , there exist γ > 0 (not depending on N ) and N 0 ≥ 0 such that for all N ≥ N 0 and all i ≥ m N + 1, d i ≥ γN . 
Let us fix x ≥ m N + 1. Using that (π(i)) i≥m N is decreasing, we have Σ x y=m N +1 1/π(y) = Σ {i,k | m N +1 ≤ k−il N ≤ x} 1/π(k − il N ) ≤ Σ {i,k | m N +1 ≤ k−il N ≤ x} e −δ 1 i /π(k) ≤ (1/(1 − e −δ 1 )) Σ x k=x−l N +1 1/π(k) ≤ (l N /π(x)) (1/(1 − e −δ 1 )). Similarly, Σ N y=x π(y) = Σ {i,k | x ≤ k+il N ≤ N } π(k + il N ) ≤ l N π(x)/(1 − e −δ 1 ). Using these three estimates, we deduce that, for every N ≥ N 0 , B N,+ (m N ) ≤ (1/(γN )) (l N /(1 − e −δ 1 )) 2 ≤ (1/(γN )) (2(√N + 1)/(1 − e −δ 1 )) 2 ≤ 16/(γ(1 − e −δ 1 ) 2 ). The study of B N,− (m N ) is similar. 4.5. Correlations. Using Theorem 2.5, we have Corollary 4.6 (Correlations). If (η t ) t≥0 is a process generated by (15) then we have, for all t ≥ 0, Cov(η t (k)/N, η t (l)/N ) ≤ (2/N 2 ) ((1 − e −2ρt )/ρ) (N (a ∨ b) + sup(p 0 ) N 2 /(N − 1)). If ρ ≤ 0, the right-hand side of the previous inequality explodes as t tends to infinity, whereas these correlations are bounded by 1. Nevertheless, using Theorem 2.5, Remark 2.6 and Inequality (17), we can prove that there exist two constants C ′ (N ), depending on N , and K, which does not depend on N , such that Cov(η t (k)/N, η t (l)/N ) ≤ C ′ (N ) = KC(N )/(N λ N ), where C(N ) is defined in (17). Even if Theorem 4.4 gives an estimate of λ N , C(N ) is not (completely) explicit, and we do not know if the right-hand side of the previous expression tends to 0 as N tends to infinity. This example shows the difficulty of finding explicit and optimal rates of the convergence towards equilibrium and the decay of correlations; it also illustrates that our main results are extremely useful when sup(p 0 ) = inf(p 0 ).
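When ρ > 0, the bound of Corollary 4.6 can be sanity-checked at stationarity (our own check; the parameter values are arbitrary): since η(2) = N − η(1), the stationary covariance of the two occupation proportions is −Var(η(1))/N², and letting t → ∞ in the corollary gives the bound computed below.

```python
import numpy as np

def stationary_pi(N, a, b, p1, p2):
    # reversible law of the marginal chain (16), via pi(n)/pi(n-1) = b_{n-1}/d_n
    w = [1.0]
    for n in range(1, N + 1):
        bprev = (N - (n - 1)) * (b + p2 * (n - 1) / (N - 1.0))
        dn = n * (a + p1 * (N - n) / (N - 1.0))
        w.append(w[-1] * bprev / dn)
    w = np.array(w)
    return w / w.sum()

def stationary_cov_and_bound(N, a, b, p1, p2):
    # |Cov(eta(1)/N, eta(2)/N)| = Var(eta(1))/N^2 at stationarity; compare it
    # with the t -> infinity limit of the bound of Corollary 4.6 (rho > 0)
    pi = stationary_pi(N, a, b, p1, p2)
    n = np.arange(N + 1)
    var = float(pi @ n**2 - (pi @ n) ** 2)
    rho = a + b - (max(p1, p2) - min(p1, p2))
    bound = (2.0 / N**2) * (1.0 / rho) * (N * max(a, b) + max(p1, p2) * N**2 / (N - 1.0))
    return var / N**2, bound
```

For instance, with a = 1, b = 2 and p 0 = (0.5, 1.5) one has ρ = 2 > 0, and the stationary covariance is well below the bound.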
REFERENCES
[1] C. Ané, S. Blachère, D. Chafaï, P. Fougères, I. Gentil, F. Malrieu, C. Roberto, and G. Scheffer. Sur les inégalités de Sobolev logarithmiques, volume 10 of Panoramas et Synthèses. Société Mathématique de France, Paris, 2000. With a preface by Dominique Bakry and Michel Ledoux.
[2] A. Asselah, P. A. Ferrari, and P. Groisman. Quasistationary distributions and Fleming-Viot processes in finite spaces. J. Appl. Probab., 48(2):322-332, 2011.
[3] D. Bakry. L'hypercontractivité et son utilisation en théorie des semigroupes. In Lectures on probability theory (Saint-Flour, 1992), volume 1581 of Lecture Notes in Math., pages 1-114. Springer, Berlin, 1994.
[4] D. Bakry, P. Cattiaux, and A. Guillin. Rate of convergence for ergodic continuous Markov processes: Lyapunov versus Poincaré. J. Funct. Anal., 254(3):727-759, 2008.
[5] M. Bieniek, K. Burdzy, and S. Pal. Extinction of Fleming-Viot-type particle systems with strong drift. Electron. J. Probab., 17:no. 11, 15, 2012.
[6] K. Burdzy, R. Hołyst, and P. March. A Fleming-Viot particle representation of the Dirichlet Laplacian. Comm. Math. Phys., 214(3):679-703, 2000.
[7] D. Chafaï and A. Joulin. Intertwining and commutation relations for birth-death processes. Bernoulli, 19(5A):1855-1879, 2013.
[8] M.-F. Chen. From Markov chains to non-equilibrium particle systems. World Scientific Publishing Co. Inc., River Edge, NJ, second edition, 2004.
[9] B. Cloez and M. Hairer. Exponential ergodicity for Markov processes with random switching. ArXiv e-prints, Mar. 2013.
[10] P. Collet, S. Martínez, and J. San Martín. Quasi-stationary distributions. Markov chains, diffusions and dynamical systems. Probability and its Applications (New York). Springer, Heidelberg, 2013.
[11] J. N. Darroch and E. Seneta. On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J. Appl. Probability, 4:192-196, 1967.
[12] P. Del Moral and A. Guionnet. On the stability of measure valued processes with applications to filtering. C. R. Acad. Sci. Paris Sér. I Math., 329(5):429-434, 1999.
[13] P. Del Moral and L. Miclo. A Moran particle system approximation of Feynman-Kac formulae. Stochastic Process. Appl., 86(2):193-216, 2000.
[14] P. Del Moral and L. Miclo. On the stability of nonlinear Feynman-Kac semigroups. Ann. Fac. Sci. Toulouse Math. (6), 11(2):135-175, 2002.
[15] A. Eberle. Couplings, distances and contractivity for diffusion processes revisited. ArXiv e-prints, May 2013.
[16] A. Etheridge. Some mathematical models from population genetics, volume 2012 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011. Lectures from the 39th Probability Summer School held in Saint-Flour, 2009.
[17] P. A. Ferrari and N. Marić. Quasi stationary distributions and Fleming-Viot processes in countable spaces. Electron. J. Probab., 12:no. 24, 684-702, 2007.
[18] C. Giardinà, F. Redig, and K. Vafayi. Correlation inequalities for interacting particle systems with duality. J. Stat. Phys., 141(2):242-263, 2010.
[19] P. Groisman and M. Jonckheere. Simulation of quasi-stationary distributions on countable spaces. ArXiv e-prints, June 2012.
[20] S. Grosskinsky, F. Redig, and K. Vafayi. Condensation in the inclusion process and related models. J. Stat. Phys., 142(5):952-974, 2011.
[21] S. Grosskinsky, F. Redig, and K. Vafayi. Dynamics of condensation in the symmetric inclusion process. Electron. J. Probab., 18:no. 66, 23, 2013.
[22] M. Hairer and J. C. Mattingly. Yet another look at Harris' ergodic theorem for Markov chains. In Seminar on Stochastic Analysis, Random Fields and Applications VI, volume 63 of Progr. Probab., pages 109-117. Birkhäuser/Springer Basel AG, Basel, 2011.
[23] S. Martinez, J. San Martin, and D. Villemonais. Existence and uniqueness of a quasi-stationary distribution for Markov processes with fast return from infinity. ArXiv e-prints, Feb. 2012.
[24] S. Méléard and D. Villemonais. Quasi-stationary distributions and population processes. Probab. Surv., 9:340-410, 2012.
[25] S. P. Meyn and R. L. Tweedie. Stability of Markovian processes. III. Foster-Lyapunov criteria for continuous-time processes. Adv. in Appl. Probab., 25(3):518-548, 1993.
[26] L. Miclo. An example of application of discrete Hardy's inequalities. Markov Process. Related Fields, 5(3):319-330, 1999.
[27] M. Rousset. On the control of an interacting particle estimation of Schrödinger ground states. SIAM J. Math. Analysis, 38(3):824-844, 2006.
[28] L. Saloff-Coste. Lectures on finite Markov chains. In Lectures on probability theory and statistics (Saint-Flour, 1996), volume 1665 of Lecture Notes in Math., pages 301-413. Springer, Berlin, 1997.
[29] A.-S. Sznitman. Topics in propagation of chaos. In École d'Été de Probabilités de Saint-Flour XIX-1989, volume 1464 of Lecture Notes in Math., pages 165-251. Springer, Berlin, 1991.
[30] D. Villemonais. General approximation method for the distribution of Markov processes conditioned not to be killed. ArXiv e-prints, June 2011.
[31] D. Villemonais. Interacting particle systems and Yaglom limit approximation of diffusions with unbounded drift. Electron. J. Probab., 16:no. 61, 1663-1692, 2011.
[32] G. A. Watterson. Reversibility and the age of an allele. I. Moran's infinitely many neutral alleles model. Theoret. Population Biology, 10(3):239-253, 1976.
Bertrand CLOEZ, Université de Toulouse, Institut de Mathématiques de Toulouse, CNRS UMR5219, France. E-mail address: [email protected]
Marie-Noémie THAI, Laboratoire d'Analyse et de Mathématiques Appliquées, CNRS UMR8050, Université Paris-Est Marne-la-Vallée, France. E-mail address: [email protected]